| { |
| "paper_id": "P00-1015", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:14:02.121782Z" |
| }, |
| "title": "A Unified Statistical Model for the Identification of English BaseNP", |
| "authors": [ |
| { |
| "first": "Endong", |
| "middle": [], |
| "last": "Xun", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Microsoft Research China No", |
| "location": { |
| "addrLine": "49 Zhichun Road Haidian District 100080", |
| "country": "China" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Microsoft", |
| "location": { |
| "addrLine": "Research China No. 49 Zhichun Road Haidian District 100080", |
| "country": "China" |
| } |
| }, |
| "email": "mingzhou@microsoft.com" |
| }, |
| { |
| "first": "Changning", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Microsoft", |
| "location": { |
| "addrLine": "Research China No. 49 Zhichun Road Haidian District 100080", |
| "country": "China" |
| } |
| }, |
| "email": "cnhuang@microsoft.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents a novel statistical model for automatic identification of English baseNP. It uses two steps: the Nbest Part-Of-Speech (POS) tagging and baseNP identification given the N-best POS-sequences. Unlike the other approaches where the two steps are separated, we integrate them into a unified statistical framework. Our model also integrates lexical information. Finally, Viterbi algorithm is applied to make global search in the entire sentence, allowing us to obtain linear complexity for the entire process. Compared with other methods using the same testing set, our approach achieves 92.3% in precision and 93.2% in recall. The result is comparable with or better than the previously reported results.", |
| "pdf_parse": { |
| "paper_id": "P00-1015", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents a novel statistical model for automatic identification of English baseNP. It uses two steps: the Nbest Part-Of-Speech (POS) tagging and baseNP identification given the N-best POS-sequences. Unlike the other approaches where the two steps are separated, we integrate them into a unified statistical framework. Our model also integrates lexical information. Finally, Viterbi algorithm is applied to make global search in the entire sentence, allowing us to obtain linear complexity for the entire process. Compared with other methods using the same testing set, our approach achieves 92.3% in precision and 93.2% in recall. The result is comparable with or better than the previously reported results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Finding simple and non-recursive base Noun Phrase (baseNP) is an important subtask for many natural language processing applications, such as partial parsing, information retrieval and machine translation. A baseNP is a simple noun phrase that does not contain other noun phrase recursively, for example, the elements within [...] in the following example are baseNPs, where NNS, IN VBG etc are part-of-speech tags [as defined in M. Marcus 1993] .", |
| "cite_spans": [ |
| { |
| "start": 325, |
| "end": 330, |
| "text": "[...]", |
| "ref_id": null |
| }, |
| { |
| "start": 433, |
| "end": 445, |
| "text": "Marcus 1993]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "[ Measures/NNS] (Church 1988; Bourigault 1992; Voutilainen 1993; Justeson & Katz 1995) . Recently some researchers have made experiments with the same test corpus extracted from the 20 th section of the Penn Treebank Wall Street Journal (Penn Treebank). Ramshaw & Markus (1998) applied transformbased error-driven algorithm (Brill 1995) to learn a set of transformation rules, and using those rules to locally updates the bracket positions. Argamon, Dagan & Krymolowski (1998) introduced a memory-based sequences learning method, the training examples are stored and generalization is performed at application time by comparing subsequence of the new text to positive and negative evidence. Cardie & Pierce (1998 1999 devised error driven pruning approach trained on Penn Treebank. It extracts baseNP rules from the training corpus and prune some bad baseNP by incremental training, and then apply the pruned rules to identify baseNP through maximum length matching (or dynamic program algorithm). Most of the prior work treats POS tagging and baseNP identification as two separate procedures. However, uncertainty is involved in both steps. Using the result of the first step as if they are certain will lead to more errors in the second step. A better approach is to consider the two steps together such that the final output takes the uncertainty in both steps together. The approaches proposed by Ramshaw & Markus and Cardie&Pierce are deterministic and local, while Argamon, Dagan & Krymolowski consider the problem globally and assigned a score to each possible baseNP structures. However, they did not consider any lexical information. This paper presents a novel statistical approach to baseNP identification, which considers both steps together within a unified statistical framework. It also takes lexical information into account. In addition, in order to make the best choice for the entire sentence, Viterbi algorithm is applied. 
Our tests with the Penn Treebank showed that our integrated approach achieves 92.3% in precision and 93.2% in recall. The result is comparable or better that the current state of the art. In the following sections, we will describe the detail for the algorithm, parameter estimation and search algorithms in section 2. The experiment results are given in section 3. In section 4 we make further analysis and comparison. In the final section we give some conclusions.", |
| "cite_spans": [ |
| { |
| "start": 2, |
| "end": 15, |
| "text": "Measures/NNS]", |
| "ref_id": null |
| }, |
| { |
| "start": 16, |
| "end": 29, |
| "text": "(Church 1988;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 30, |
| "end": 46, |
| "text": "Bourigault 1992;", |
| "ref_id": null |
| }, |
| { |
| "start": 47, |
| "end": 64, |
| "text": "Voutilainen 1993;", |
| "ref_id": null |
| }, |
| { |
| "start": 65, |
| "end": 86, |
| "text": "Justeson & Katz 1995)", |
| "ref_id": null |
| }, |
| { |
| "start": 254, |
| "end": 277, |
| "text": "Ramshaw & Markus (1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 324, |
| "end": 336, |
| "text": "(Brill 1995)", |
| "ref_id": null |
| }, |
| { |
| "start": 441, |
| "end": 476, |
| "text": "Argamon, Dagan & Krymolowski (1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 691, |
| "end": 712, |
| "text": "Cardie & Pierce (1998", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 713, |
| "end": 717, |
| "text": "1999", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we will describe the two-pass statistical model, parameters training and Viterbi algorithm for the search of the best sequences of POS tagging and baseNP identification. Before describing our algorithm, we introduce some notations we will use", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical approach", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Let us express an input sentence E as a word sequence and a sequence of POS respectively as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "n n w w w w E 1 2 1 ... \u2212 = n n t t t t T 1 2 1 ... \u2212 =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Where n is the number of words in the sentence, i t is the POS tag of the word i w .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Given E, the result of the baseNP identification is assumed to be a sequence, in which some words are grouped into baseNP as follows", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "... ] ... [ ... 1 1 1 + + \u2212 j j i i i w w w w w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The corresponding tag sequence is as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(a) m j j i i j j i i i n n n t b t t t t t t B ... ... ... ... ] ... [ ... 2 1 1 , 1 1 1 1 = = = + \u2212 + + \u2212", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "In which j i b , corresponds to the tag sequence of a baseNP:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "] ... [ 1 j i i t t t + . j i b", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": ", may also be thought of as a baseNP rule. Therefore B is a sequence of both POS tags and baseNP rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Thus \u2208 \u2264 \u2264 i n n m , 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(POS tag set \u222a baseNP rules set), This is the first expression of a sentence with baseNP annotated. Sometime, we also use the following equivalent form: . F, E and I mean respectively that the word is the left boundary, right boundary of a baseNP, or at another position inside a baseNP. O means that the word is outside a baseNP. S marks a single word baseNP. This second expression is similar to that used in [Marcus 1995] . For example, the two expressions of the example given in Figure 1 are as follows: ", |
| "cite_spans": [ |
| { |
| "start": 411, |
| "end": 424, |
| "text": "[Marcus 1995]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 484, |
| "end": 492, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(b) n j j j j i i i i i i q q q", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The principle of our approach is as follows. The most probable baseNP sequence * B may be expressed generally as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": ")) | ( ( max arg * E B p B B =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We separate the whole procedure into two passes, i.e.:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ")) , | ( ) | ( ( max arg * E T B P E T P B B \u00d7 \u2248", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In order to reduce the search space and computational complexity, we only consider the N best POS tagging of E, i.e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": ")) | ( ( max arg ) ( ,..., 1 E T P best N T N T T T= = \u2212 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Therefore, we have:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": ")) , | ( ) | ( ( max arg ,..., , * 1 E T B P E T P B N T T T B \u00d7 \u2248 = (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Correspondingly, the algorithm is composed of two steps: determining the N-best POS tagging using Equation (2). And then determining the best baseNP sequence from those POS sequences using Equation (3). One can see that the two steps are integrated together, rather that separated as in the other approaches. Let us now examine the two steps more closely.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An 'integrated' two-pass procedure", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The goal of the algorithm in the 1 st pass is to search for the N-best POS-sequences within the search space (POS lattice). According to Bayes' Rule, we have ) (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": ") ( ) | ( ) | ( E P T P T E P E T P \u00d7 = Since ) (E P does not affect the maximizing procedure of ) | ( E T P , equation (2) becomes )) ( ) | ( ( max arg )) | ( ( max arg ) ( ,..., ,..., 1 1 T P T E P E T P best N T N N T T T T T T \u00d7 = = \u2212 = = (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We now assume that the words in E are independent. Thus", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\u220f = \u2248 n i i i t w P T E P 1 ) | ( ) | ( (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We then use a trigram model as an approximation of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": ") (T P , i.e.: \u220f = \u2212 \u2212 \u2248 n i i i i t t t P T P 1 1 2 ) , | ( ) ( (6) Finally we have )) | ( ( max arg ) ( ,..., 1 E T P best N T N T T T = = \u2212 )) , | ( ) | ( ( max arg 1 2 1 ,..., 1 \u2212 \u2212 = = \u00d7 = \u220f i i i n i i i T T T t t t P t w P N (7) In Viterbi algorithm of N best search, ) | ( i i t w P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "is called lexical generation (or output) probability, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": ") , | ( 1 2 \u2212 \u2212 i i i t t t P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "is called transition probability in Hidden Markov Model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |
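| { |
| "text": "As a worked illustration of equation (7), with probability values invented here for exposition rather than taken from a trained model: consider scoring the prefix \"stock was\" with the candidate tags NN VBD. Its contribution to the path probability is P(stock|NN) \u00d7 P(NN|\u03a6, \u03a6) \u00d7 P(was|VBD) \u00d7 P(VBD|\u03a6, NN), where \u03a6 is the pseudo variable used for positions before the start of the sentence. With the illustrative values P(stock|NN) = 0.001, P(NN|\u03a6, \u03a6) = 0.3, P(was|VBD) = 0.02 and P(VBD|\u03a6, NN) = 0.1, the partial path probability is 0.001 \u00d7 0.3 \u00d7 0.02 \u00d7 0.1 = 6 \u00d7 10^-7. The N-best Viterbi search keeps the N highest-scoring such partial products at each word position, instead of only the single best one.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the N best POS sequences", |
| "sec_num": "2.3" |
| }, |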
| { |
| "text": "As mentioned before, the goal of the 2 nd pass is to search the best baseNP-sequence given the Nbest POS-sequences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "Considering E ,T and B as random variables, according to Bayes' Rule, we have", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": ") | ( ) , | ( ) | ( ) , | ( T E P T B E P T B P E T B P \u00d7 = Since ) ( ) ( ) | ( ) | ( T P B P B T P T B P \u00d7 = we have, ) ( ) | ( ) ( ) | ( ) , | ( ) , | ( T P T E P B P B T P T B E P E T B P \u00d7 \u00d7 \u00d7 = (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "Because we search for the best baseNP sequence for each possible POS-sequence of the given sentence E, so ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "const T E P T P T E P = \u2229 = \u00d7 ) ( ) ( ) | ( ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "\u220f = = = n i j i j i b t t P B T P 1 , 1 ) | ,..., ( ) | ( . Therefore, equation (3) becomes )) , | ( ) | ( ( max arg ,..., , * 1 E T B P E T P B N T T T B \u00d7 = = )) ( ) , | ( ) | ( ( max arg ,..., , 1 B P T B E P E T P N T T T B \u00d7 \u00d7 = = (9)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "using the independence assumption, we have ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u220f = \u2248 n i i i i bm t w P T B E P 1 ) , | ( ) , | (", |
| "eq_num": "(10" |
| } |
| ], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "\u220f = \u2212 \u2212 \u2248 m i i i i n n n P B P 1 1 2 ) , | ( ) ( (11) Finally, we obtain ) ) , | ( ) , | ( ) | ( ( max arg , 1 1 2 1 ,.. , * 1 \u220f \u220f = \u2212 \u2212 = = \u00d7 \u00d7 = m i i i i n i i i i T T T B n n n P t bm w P E T P B N 12", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "To summarize, In the first step, Viterbi N-best searching algorithm is applied in the POS tagging procedure, It determines a path probability t f for each POS sequence calculated as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "\u220f = \u2212 \u2212 \u00d7 = n i i i i i i t t t t p t w p f , 1 1 2 ) , | ( ) | ( .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "In the second step, for each possible POS tagging result, Viterbi algorithm is applied again to search for the best baseNP sequence. Every baseNP sequence found in this pass is also asssociated with a path probability in our experiments). When we determine the best baseNP sequence for the given sentence E , we also determine the best POS sequence of E , which corresponds to the best baseNP of E . Now let us illustrate the whole process through an example: \"stock was down 9.1 points yesterday morning.\". In the first pass, one of the N-best POS tagging result of the sentence is: T = NN VBD RB CD NNS NN NN. For this POS sequence, the 2 nd pass will try to determine the baseNPs as shown in Figure 2 . The details of the path in the dash line are given in Figure 3 , Its probability calculated in the second pass is as follows ( \u03a6 is pseudo variable): Figure 2 : All possible brackets of \"stock was down 9.1 points yesterday morning\" Figure 3 : the transformed form of the path with dash line for the second pass processing", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 695, |
| "end": 703, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 760, |
| "end": 768, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 856, |
| "end": 864, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 938, |
| "end": 946, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": "\u220f \u220f = \u2212 \u2212 = \u00d7 = m i i i i n i i i i b n n n p bm t w p f , 1 1 2 1 ) , | ( ) , | (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
| { |
| "text": ") , | ( ) , | ( ) , | ( ) , | ( ) , | ( B CD NUMBER p O RB down p O VBD was p S NN stock p E T B P \u00d7 \u00d7 \u00d7 =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |
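| { |
| "text": "To see how the two passes interact, consider a hedged numeric illustration (all values invented for exposition): suppose the first pass retains N = 2 POS sequences with path probabilities f_t(T_1) = 6 \u00d7 10^-7 and f_t(T_2) = 4 \u00d7 10^-7, and that the second pass finds best baseNP path probabilities f_b(B_1|T_1) = 2 \u00d7 10^-5 and f_b(B_2|T_2) = 5 \u00d7 10^-5. The combined scores corresponding to equation (12), using f_t in place of P(T|E) (the two differ only by the constant factor P(E)), are 6 \u00d7 10^-7 \u00d7 2 \u00d7 10^-5 = 1.2 \u00d7 10^-11 and 4 \u00d7 10^-7 \u00d7 5 \u00d7 10^-5 = 2.0 \u00d7 10^-11, so B_2 (and with it T_2) is selected even though T_1 scored higher in POS tagging alone. This is precisely how baseNP information can influence the final POS choice in the unified framework.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Determining the baseNPs", |
| "sec_num": "2.3.1" |
| }, |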
| { |
| "text": "In this work, the training and testing data were derived from the 25 sections of Penn Treebank. We divided the whole Penn Treebank data into two sections, one for training and the other for testing.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "As required in our statistical model, we have to calculate the following four probabilities:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(1) ) , | ( 1 2 \u2212 \u2212 i i i t t t P ,", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": ") | ( i i t w P , (3) ) | ( 1 2 \u2212 \u2212 i i i n n n P and (4) ) , | ( i i i bm t w P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": ". The first and the third parameters are trigrams of T and B respectively. The second and the fourth are lexical generation probabilities. Probabilities", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "(1) and (2) can be calculated from POS tagged data with following formulae:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2211 \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 = j j i i i i i i i i t t t count t t t count t t t p ) ( ) ( ) , | ( 1 2 1 2 1 2 (13) ) ( ) ( ) | ( i i i i i t count t tag with w count t w p =", |
| "eq_num": "(14)" |
| } |
| ], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
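| { |
| "text": "As a small worked example of formulae (13) and (14), with invented counts: if the tag history (t_{i-2}, t_{i-1}) = (DT, JJ) is followed by NN 60 times, by NNS 30 times and by JJ 10 times in the training data, then p(NN|DT, JJ) = 60 / (60 + 30 + 10) = 0.6. Likewise, if the word \"deal\" occurs 40 times with the tag NN out of 10,000 NN tokens, then p(deal|NN) = 40 / 10000 = 0.004.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |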
| { |
| "text": "As each sentence in the training set has both POS tags and baseNP boundary tags, it can be converted to the two sequences as B (a) and Q (b) described in the last section. Using these sequences, parameters (3) and (4) can be calculated, The calculation formulas are similar with equations (13) and 14respectively. Before training trigram model (3), all possible baseNP rules should be extracted from the training corpus. For instance, the following three sequences are among the baseNP rules extracted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "There are more than 6,000 baseNP rules in the Penn Treebank. When training trigram model (3), we treat those baseNP rules in two ways. (1) Each baseNP rule is assigned a unique identifier (UID). This means that the algorithm considers the corresponding structure of each baseNP rule.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "(2) All of those rules are assigned to the same identifier (SID). In this case, those rules are grouped into the same class. Nevertheless, the identifiers of baseNP rules are still different from the identifiers assigned to POS tags. We used the approach of Katz (Katz.1987) for parameter smoothing, and build a trigram model to predict the probabilities of parameter (1) and (3). In the case that unknown words are encountered during baseNP identification, we calculate parameter (2) and (4) in the following way: ", |
| "cite_spans": [ |
| { |
| "start": 258, |
| "end": 274, |
| "text": "Katz (Katz.1987)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "2 )) , ( ( max ) , ( ) , | ( i j j i i i i i t bm", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The statistical parameter training", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We designed five experiments as shown in Table 1 . \"UID\" and \"SID\" mean respectively that an identifier is assigned to each baseNP rule or the same identifier is assigned to all the baseNP rules. \"+1\" and \"+4\" denote the number of beat POS sequences retained in the first step. And \"UID+R\" means the POS tagging result of the given sentence is totally correct for the 2nd step. This provides an ideal upper bound for the system. The reason why we choose N=4 for the N-best POS tagging can be explained in Figure 4 , which shows how the precision of POS tagging changes with the number N. Figure 7: POS tagging precision under different training sets Figure 5 -7 summarize the outcomes of our statistical model on various size of the training data, x-coordinate denotes the size of the training set, where \"1\" indicates that the training set is from section 0-8 th of Penn Treebank, \"2\" corresponds to the corpus that add additional three sections 9-11 th into \"1\" and so on. In this way the size of the training data becomes larger and larger. In those cases the testing data is always section 20 (which is excluded from the training data). From Figure 7 , we learned that the POS tagging and baseNP identification are influenced each other. We conducted two experiments to study whether the POS tagging process can make use of baseNP information. One is UID+4, in which the precision of POS tagging dropped slightly with respect to the standard POS tagging with Trigram Viterbi search. In the second experiment SID+4, the precision of POS tagging has increase slightly. This result shows that POS tagging can benefit from baseNP information. Whether or not the baseNP information can improve the precision of POS tagging in our approach is determined by the identifier assignment of the baseNP rules when training trigram model of", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 41, |
| "end": 49, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 506, |
| "end": 515, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 652, |
| "end": 660, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 1148, |
| "end": 1156, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment result", |
| "sec_num": "3" |
| }, |
| { |
| "text": ") , | ( 1 2 \u2212 \u2212 i i i n n n P", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment result", |
| "sec_num": "3" |
| }, |
| { |
| "text": ". In the future, we will further study optimal baseNP rules clustering to further improve the performances of both baseNP identification and POS tagging.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment result", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To our knowledge, three other approaches to baseNP identification have been evaluated using Penn Treebank-Ramshaw & Marcus's transformation-based chunker, Argamon et al.'s MBSL, and Cardie's Treebank_lex in Table 2 , we give a comparison of our method with other these three. In this experiment, we use the testing data prepared by Ramshaw (available at http://www.cs.biu.ac.il/~yuvalk/MBSL), the training data is selected from the 24 sections of Penn Treebank (excluding the section 20). We can see that our method achieves better result than the others . ", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 206, |
| "text": "Argamon et al.'s MBSL, and Cardie's Treebank_lex in", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 207, |
| "end": 214, |
| "text": "Table 2", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "with other approaches", |
| "sec_num": null |
| }, |
| { |
| "text": "\u220f = n i i i i bm t w P 1 ) , | (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "with other approaches", |
| "sec_num": null |
| }, |
| { |
| "text": "in the 2 nd pass of our model, the precision/recall ratios are reduced to 90.0/92.4% from 92.3/93.2%. Cardie's approach to Treebank rule pruning may be regarded as the special case of our statistical model, since the maximum-matching algorithm of baseNP rules is only a simplified processing version of our statistical model. Compared with this rule pruning method, all baseNP rules are kept in our model. Therefore in principle we have less likelihood of failing to recognize baseNP types As to the complexity of algorithm, our approach is determined by the Viterbi algorithm approach, or ) (n O , linear with the length.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "with other approaches", |
| "sec_num": null |
| }, |
| { |
| "text": "This paper presented a unified statistical model to identify baseNP in English text. Compared with other methods, our approach has following characteristics:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(1) baseNP identification is implemented in two related stages: N-best POS taggings are first determined, then baseNPs are identified given the N best POS-sequences. Unlike other approaches that use POS tagging as preprocessing, our approach is not dependant on perfect POS-tagging, Moreover, we can apply baseNP information to further increase the precision of POS tagging can be improved. These experiments triggered an interesting future research challenge: how to cluster certain baseNP rules into certain identifiers so as to improve the precision of both baseNP and POS tagging. This is one of our further research topics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(2) Our statistical model makes use of more lexical information than other approaches. Every word in the sentence is taken into account during baseNP identification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "(3) Viterbi algorithm is applied to make global search at the sentence level. Experiment with the same testing data used by the other methods showed that the precision is 92.3% and the recall is 93.2%. To our knowledge, these results are comparable with or better than all previously reported results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Man vs. machine: A case study in baseNP learning", |
| "authors": [], |
| "year": 1999, |
| "venue": "Proceedings of the 18 th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Brill and Grace Ngai. (1999) Man vs. machine: A case study in baseNP learning. In Proceedings of the 18 th International Conference on Computational Linguistics, pp.65-72. ACL'99", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A memory-based approach to learning shallow language patterns", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Argamon", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Krymolowski", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 17 th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "67--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Argamon, I. Dagan, and Y. Krymolowski (1998) A memory-based approach to learning shallow language patterns. In Proceedings of the 17 th International Conference on Computational Linguistics, pp.67-73. COLING-ACL'98", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Error-driven pruning of treebank grammas for baseNP identification", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pierce", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36 th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "218--224", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cardie and D. Pierce (1998) Error-driven pruning of treebank grammas for baseNP identification. In Proceedings of the 36 th International Conference on Computational Linguistics, pp.218-224. COLING-ACL'98", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Text chunking using transformation-based learning", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Lance", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "P" |
| ], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Natural Language Processing Using Very large Corpora. Kluwer. Originally appeared in The second workshop on very large corpora WVLC'95", |
| "volume": "", |
| "issue": "", |
| "pages": "82--94", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lance A. Ramshaw and Michael P. Marcus ( In Press). Text chunking using transformation-based learning. In Natural Language Processing Using Very large Corpora. Kluwer. Originally appeared in The second workshop on very large corpora WVLC'95, pp.82-94.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Error bounds for convolution codes and asymptotically optimum decoding algorithm", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "J" |
| ], |
| "last": "Viterbi", |
| "suffix": "" |
| } |
| ], |
| "year": 1967, |
| "venue": "IEEE Transactions on Information Theory IT", |
| "volume": "13", |
| "issue": "2", |
| "pages": "260--269", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Viterbi, A.J. (1967) Error bounds for convolution codes and asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory IT-13(2): pp.260-269, April, 1967", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Estimation of probabilities from sparse data for the language model component of speech recognize", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "M" |
| ], |
| "last": "Katz", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "IEEE Transactions on Acoustics, Speech and Signal Processing", |
| "volume": "35", |
| "issue": "", |
| "pages": "400--401", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S.M. Katz.(1987) Estimation of probabilities from sparse data for the language model component of speech recognize. IEEE Transactions on Acoustics, Speech and Signal Processing. Volume ASSP-35, pp.400-401, March 1987", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A stochastic parts program and noun phrase parser for unrestricted text", |
| "authors": [ |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Proceedings of the Second Conference on Applied Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "136--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Church, Kenneth. (1988) A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143. Association of Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Building a large annotated corpus of English: the Penn Treebank", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Marcinkiewicx", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Marcus, M. Marcinkiewicx, and B. Santorini (1993) Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2): 313-330", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "An example sentence with baseNP brackets A number of researchers have dealt with the problem of baseNP identification", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "(a) B= [NNS] IN [VBG NN] VBD RBR IN [DT JJ NNS] (b) Q=(NNS S) (IN O) (VBG F) (NN E) (VBD O) (RBR O) (IN O) (DT F) (JJ I) (NNS E) (. O)", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "The integrated probability of a baseNP sequence is determined by", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF8": { |
| "num": null, |
| "text": "The comparison of our statistical method with three other approaches", |
| "html": null, |
| "content": "<table><tr><td/><td>Transforamtion-Based</td><td>Treebank_Lex</td><td>MBSL</td><td>Unified Statistical</td></tr><tr><td>Unifying POS & baseNP</td><td>NO</td><td>NO</td><td>NO</td><td>YES</td></tr><tr><td>Lexical Information</td><td>YES</td><td>YES</td><td>NO</td><td>YES</td></tr><tr><td>Global Searching</td><td>NO</td><td>NO</td><td>YES</td><td>YES</td></tr><tr><td>Context</td><td>YES</td><td>NO</td><td>YES</td><td>YES</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF9": { |
| "num": null, |
| "text": "The comparison of some characteristics of our statistical method with three other approaches", |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF10": { |
| "num": null, |
| "text": "", |
| "html": null, |
| "content": "<table><tr><td>summarizes some interesting aspects of</td><td>recognition. If we neglect the probability of</td></tr><tr><td>our approach and the three other methods. Our</td><td/></tr><tr><td>statistical model unifies baseNP identification</td><td/></tr><tr><td>and POS tagging through tracing N-best</td><td/></tr><tr><td>sequences of POS tagging in the pass of baseNP</td><td/></tr><tr><td>recognition, while other methods use POS</td><td/></tr><tr><td>tagging as a pre-processing procedure. From</td><td/></tr><tr><td>Table 1, if we reviewed 4 best output of POS</td><td/></tr><tr><td>tagging, rather that only one, the F-measure of</td><td/></tr><tr><td>baseNP identification is improved from 93.02 %</td><td/></tr><tr><td>to 93.07%. After considering baseNP</td><td/></tr><tr><td>information, the error ratio of POS tagging is</td><td/></tr><tr><td>reduced by 2.4% (comparing SID+4 with</td><td/></tr><tr><td>SID+1).</td><td/></tr><tr><td>The transformation-based method (R&M 95)</td><td/></tr><tr><td>identifies baseNP within a local windows of</td><td/></tr><tr><td>sentence by matching transformation rules.</td><td/></tr><tr><td>Similarly to MBSL, the 2 nd pass of our algorithm</td><td/></tr><tr><td>traces all possible baseNP brackets, and makes</td><td/></tr><tr><td>global decision through Viterbi searching. On</td><td/></tr><tr><td>the other hand, unlike MSBL we take lexical</td><td/></tr><tr><td>information into account. The experiments show</td><td/></tr><tr><td>that lexical information is very helpful to</td><td/></tr><tr><td>improve both precision and recall of baseNP</td><td/></tr></table>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |