| { |
| "paper_id": "P05-1010", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:38:42.862006Z" |
| }, |
| "title": "Probabilistic CFG with latent annotations", |
| "authors": [ |
| { |
| "first": "Takuya", |
| "middle": [], |
| "last": "Matsuzaki", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "matuzaki@is.s.u-tokyo.ac.jp" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "yusuke@is.s.u-tokyo.ac.jp" |
| }, |
| { |
| "first": "Jun", |
| "middle": [ |
| "'" |
| ], |
| "last": "Ichi Tsujii", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "JST(Japan Science and Technology Agency", |
| "location": { |
| "addrLine": "Honcho 4-1-8, Kawaguchi-shi", |
| "postCode": "332-0012", |
| "settlement": "Saitama" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F\u00a5 , sentences \u00a6 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.", |
| "pdf_parse": { |
| "paper_id": "P05-1010", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper defines a generative probabilistic model of parse trees, which we call PCFG-LA. This model is an extension of PCFG in which non-terminal symbols are augmented with latent variables. Finegrained CFG rules are automatically induced from a parsed corpus by training a PCFG-LA model using an EM-algorithm. Because exact parsing with a PCFG-LA is NP-hard, several approximations are described and empirically compared. In experiments using the Penn WSJ corpus, our automatically trained model gave a performance of 86.6% (F\u00a5 , sentences \u00a6 40 words), which is comparable to that of an unlexicalized PCFG parser created using extensive manual feature selection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Variants of PCFGs form the basis of several broadcoverage and high-precision parsers (Collins, 1999; Charniak, 1999; Klein and Manning, 2003) . In those parsers, the strong conditional independence assumption made in vanilla treebank PCFGs is weakened by annotating non-terminal symbols with many 'features' (Goodman, 1997; Johnson, 1998) . Examples of such features are head words of constituents, labels of ancestor and sibling nodes, and subcategorization frames of lexical heads. Effective features and their good combinations are normally explored using trial-and-error. This paper defines a generative model of parse trees that we call PCFG with latent annotations (PCFG-LA). This model is an extension of PCFG models in which non-terminal symbols are annotated with latent variables. The latent variables work just like the features attached to non-terminal symbols. A fine-grained PCFG is automatically induced from parsed corpora by training a PCFG-LA model using an EM-algorithm, which replaces the manual feature selection used in previous research.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 100, |
| "text": "(Collins, 1999;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 101, |
| "end": 116, |
| "text": "Charniak, 1999;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 117, |
| "end": 141, |
| "text": "Klein and Manning, 2003)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 308, |
| "end": 323, |
| "text": "(Goodman, 1997;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 324, |
| "end": 338, |
| "text": "Johnson, 1998)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The main focus of this paper is to examine the effectiveness of the automatically trained models in parsing. Because exact inference with a PCFG-LA, i.e., selection of the most probable parse, is NP-hard, we are forced to use some approximation of it. We empirically compared three different approximation methods. One of the three methods gives a performance of 86.6% (F\u00a5 , sentences \u00a6 40 words) on the standard test set of the Penn WSJ corpus. Utsuro et al. (1996) proposed a method that automatically selects a proper level of generalization of non-terminal symbols of a PCFG, but they did not report the results of parsing with the obtained PCFG. Henderson's parsing model (Henderson, 2003) has a similar motivation as ours in that a derivation history of a parse tree is compactly represented by induced hidden variables (hidden layer activation of a neural network), although the details of his approach is quite different from ours.", |
| "cite_spans": [ |
| { |
| "start": 446, |
| "end": 466, |
| "text": "Utsuro et al. (1996)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 677, |
| "end": 694, |
| "text": "(Henderson, 2003)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "PCFG-LA is a generative probabilistic model of parse trees. In this model, an observed parse tree is considered as an incomplete data, and the corre- sponding complete data is a tree with latent annotations. Each non-terminal node in the complete data is labeled with a complete symbol of the form D E 9F G B , where D is the non-terminal symbol of the corresponding node in the observed tree and F is a latent annotation symbol, which is an element of a fixed set H . A complete/incomplete tree pair of the sentence, \"the cat grinned,\" is shown in Figure 2 . The complete parse tree, 8 @ 9A C B (left), is generated through a process just like the one in ordinary PCFGs, but the non-terminal symbols in the CFG rules are annotated with latent symbols,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 549, |
| "end": 557, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A P I R Q F \u00a5 T S F V U S X W X W X W \u1ef2 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Thus, the probability of the complete tree (8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "@ 9A C B ) is a Q b 8 E 9A c B Y I e d 4 Q g f 6 9F h \u00a5 i B Y ! p r q Q g f 9F s \u00a5 i B u t w v a 9F U B x a 9F y u B Y p q Q v a 9F U u B u t w 8 E 9F V u B v 9F u B Y p q Q c 8 @ 9F u B u t V Y ! p q Q v 9F V B u t w Y p q Q g x a 9F y B u t d x e 9F V f g B Y ! p r q Q g x e 9F f u B u t i h j \u00a2 k g l m l h T n Y g S where d 4 Q g f 9F \u00a5 B Y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "denotes the probability of an occurrence of the symbol f 6 9F \u00a5 B at a root node and q Q j Y denotes the probability of a CFG rule j . The probability of the observed tree a Q b 8 Y is obtained by summing a Q b 8 @ 9A C B Y for all the assignments to latent annotation symbols,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "a Q b 8 Y I p o q i r \u00a2 s o q \" t r ) s u X u X u o q 7 r \u00a2 s a Q b 8 @ 9A C B Y g W", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Using dynamic programming, the theoretical bound of the time complexity of the summation in Eq. 1 is reduced to be proportional to the number of non-terminal nodes in a parse tree. However, the calculation at node l still has a cost that exponentially grows with the number of l 's daughters because we must sum up the probabilities of vH w v y x { z \u00a5 combinations of latent annotation symbols for a node with n daughters. We thus took a kind of transformation/detransformation approach, in which a tree is binarized before parameter estimation and restored to its original form after parsing. The details of the binarization are explained in Section 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Using syntactically annotated corpora as training data, we can estimate the parameters of a PCFG-LA model using an EM algorithm. The algorithm is a special variant of the inside-outside algorithm of Pereira and Schabes (1992) . Several recent work also use similar estimation algorithm as ours, i.e, inside-outside re-estimation on parse trees (Chiang and Bikel, 2002; Shen, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 199, |
| "end": 225, |
| "text": "Pereira and Schabes (1992)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 344, |
| "end": 368, |
| "text": "(Chiang and Bikel, 2002;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 369, |
| "end": 380, |
| "text": "Shen, 2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The rest of this section precisely defines PCFG-LA models and briefly explains the estimation algorithm. The derivation of the estimation algorithm is largely omitted; see Pereira and Schabes (1992) for details.", |
| "cite_spans": [ |
| { |
| "start": 172, |
| "end": 198, |
| "text": "Pereira and Schabes (1992)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probabilistic model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We define a PCFG-LA | as a tuple ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "| I } v 1 t S v S H S t E S d S i q ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "I i Q D @ 9F B m t Y v Q D r t Y E F H Q D E 9F G B u t 9 B 9 \u00a2 B Y v Q D t Y E F S i S c H W", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We assume that non-terminal nodes in a parse tree 8 are indexed by integers k I S X W X W X W S i , starting from the root node. A complete tree is denoted by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "8 @ 9A C B , where A I Q F \u00a5 T S X W X W X W S F \u00a1 Y d H \u00a1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "is a vector of latent annotation symbols and F m \u00a2 is the latent annotation symbol attached to the k -th non-terminal node.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We do not assume any structured parametrizations in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "q and d ; that is, each q Q j Y Q j \u00a3 9H B Y and d 4 Q D E 9F G B Y Q D E 9F G B \u1e7d { 9H B Y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "is itself a parameter to be tuned. Therefore, an annotation symbol, say, F , generally does not express any commonalities among the complete non-terminals annotated by F , such as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "D E 9F G B S t 9F G B S T 0 . The probability of a complete parse tree 8 E 9A c B is defined as a Q b 8 @ 9A C B Y I e d 4 Q D \u00a5 9F \u00a5 B Y \u00a4 \u00a5 r \u00a2 \u00a6 \u00a7 \u00a9\u00aa \u00ab q Q j Y g S (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "D \u00a5 9F \u00a5 B", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "is the label of the root node of 8 @ 9A C B and @ \u00ac h \u00ae denotes the multiset of annotated CFG rules used in the generation of 8 @ 9A C B . We have the probability of an observable tree 8 by marginalizing out the latent annotation symbols in 8 @ 9A C B :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "a Q b 8 Y I o\u00b0r ) s \u00a9 \u00b1 d 4 Q D \u00a5 9F s \u00a5 i B Y \u00a4 \u00a5 r \u00a2 \u00a6 \u00a7 \u00a9\u00aa G \u00ab q Q j Y g S (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where is the number of non-terminal nodes in 8 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model definition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The sum in Eq. 3 can be calculated using a dynamic programming algorithm analogous to the forward algorithm for HMMs. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "' \u00b6 , then \u00b3 \u00a2 \u00ac Q F Y I q Q v \u00a2 9F G B \u2022 t \u00a9 \u00b6 Y . \u00b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Otherwise, let\u00b8and \u00b9 be the two daughter nodes of k . Then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00b3 \u00a2 \u00ac Q F Y I o q { \u00ba T \u00bbq \u00bc r \u00a2 s q Q v \u00a2 9F G B u t v \u00b6 9F \u00b6 u B v @ \u00bd 9F \u00bd T B Y p \u00b3 \u00b6 \u00ac Q F \u00b6 Y \u00b3 \u00bd \u00ac Q F \u00bd Y g W Using backward probabilities, a Q b 8 Y is calculated as a Q b 8 Y I \u00bf \u00be q r \u00a2 s d 4 Q v \u00a5 9F \u00a5 B Y \u00b3 \u00a5 \u00ac Q F \u00a5 u Y . We define forward probabilities \u00c0 \u00a2 \u00ac Q F Y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": ", which are used in the estimation described below, as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "If node k is the root node (i.e., k = 1), then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00c0 \u00a2 \u00ac Q F Y I e d 4 Q v \u00a2 9F B Y . \u00b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "If node k has a right sibling \u00b9 , let\u00b8be the mother node of k . Then", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u00c0 \u00a2 \u00ac Q F Y I o q { \u00ba \u00bbq \u00bc r \u00a2 s q Q v \u00b6 9F \u00c1 \u00b6 u B u t w v \u00a2 9F B v 1 \u00bd \u00c1 9F V \u00bd B Y p \u00c0 \u00b6 \u00ac Q F \u00b6 Y \u00b3 \u00bd \u00ac Q F \u00bd Y g W \u00b5", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "If node k has a left sibling, \u00c0 \u00a2 \u00ac Q F Y is defined analogously.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Forward-backward probability", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We now derive the EM algorithm for PCFG-LA, which estimates the parameters . Using the Lagrange multiplier method and re-arranging the results using the backward and forward probabilities, we obtain the update formulas in Figure 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 222, |
| "end": 230, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Estimation", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "\u00c2 I Q q 4 S d Y . Let \u00c3 I X 8 \u00a5 T S 8 \u2022 U S X W X W X W", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimation", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In theory, we can use PCFG-LAs to parse a given sentence by selecting the most probable parse:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "8 \u2022 \u00d1 \u00d3 \u00d2 \u00d4 $ \u00d5 \u00cf e \u00d6 \u00a2 \u00d7 \u00ce # \u00d8 \u00d6 \u00c9 \u00d9 \u00ac r \u00a2 \u00da \u00db \u00dd \u00dc m \u00de a Q b 8 E v Y I e \u00d6 \u00a2 \u00d7 \u00ce # \u00d8 \u00d6 \u00c9 \u00d9 \u00ac r \u00a2 \u00da \u00db \u00df \u00dc \u2022 \u00de a Q b 8 Y g S", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where \u00e0 Q Y denotes the set of possible parses for under the observable grammar . While the optimization problem in Eq. 4 can be efficiently solved", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u00e1 \u00cf \u00e2 \u00e4 \u00e3 \u00c9 ae \u00e5 \u00e7 \u00e8 T \u00df \u00e9 &\u00ea g \u00df \u00eb \u00ec \u00ed \u00ee \u00ef \u00a9\u00f0 \u00ab h \u00f1 \u00a7 t \u00f2 g \u00f3 g \u00f4 \u00e2 \u00a7 \u00c4 \u00eb ae \u00ee \u00f5 \u00f1 \u00f6\u00ba ae \u00f7\u00bc \u00f7\u00f8 y \u00f9 \u00f3 Covered \u00f6 \u00a7 \u00f2 \u00f7\u00ef \u00fa 4 \u00fb \u00fc \u00f9 \u00fd \u00ba \u00a7 \u00f2 \u00e2 # \u00eb \u00e1 \u00e2 \u00e4 \u00e3 \u00c9 ae \u00e5 \u00e7 \u00fe\u00e8 X \u00df \u00e9 &\u00ea g \u00df \u00eb \u00ff \u00bc \u00a7 \u00f2 \u00e2 \u00e8 \u00c9 \u00eb \u00ff \u00f8 \u00a7 \u00f2 \u00e2 \u00ea \u00eb \u00e1 \u00cf \u00e2 \u00e4 \u00e3 \u00c9 ae \u00e5 \u00a1 \u00eb \u00c1 \u00ec \u00ed \u00ee \u00ef \u00a9\u00f0 \u00ab h \u00f1 \u00a7 \u00f2 \u00f3 g \u00f4 \u00e2 \u00a7 \u00c4 \u00eb ae \u00ee \u00f1 \u00ba \u00f3 Covered \u00f6 \u00a7 { \u00f2 \u00f7\u00ef \u00fa \u00a3 \u00a2\u00f9 \u00fd \u00ba \u00a7 \u00f2 \u00e2 # \u00eb \u00e1 \u00e2 \u00e4 \u00e3 \u00c9 ae \u00e5 \u00a1 \u00eb \u00a4 \u00cf b \u00e2 \u00e4 \u00e3 \u00eb G \u00ec \u00a6 \u00a5 \u00a7 \u00a5\u00ee \u00f1 \u00a7 \u00f2 \u00f3 Root \u00f6 \u00f4 \u00f7\u00ef \u00f9 \u00e2 \u00a7 \u00c4 \u00eb ae \u00ee \u00a4 \u00e2 \u00e4 \u00e3 \u00c9 \u00df \u00eb \u00ff \u00a7 t \u00f2 \u00e2 # \u00eb \u00ed \u00ef \u00a9\u00f0 \u00ab \u00ec \u00f1 \u00a7 \u00f2 \u00f3 u \u00f4 \u00e2 \u00a7 \u00c4 \u00eb \u00ee \u00f1 \u00ba \u00f3 Labeled \u00f6 \u00a7 \u00f2 \u00f7\u00ef \u00f9 \u00fd \u00ba \u00a7 \u00f2 \u00e2 # \u00eb \u00c1 \u00ff \u00ba \u00a7 \u00f2 \u00e2 # \u00eb Covered \u00e2 \u00a7 \u00c4 \u00a9 \u00e3 \u00e5 i \u00e7 ' \u00e9 \u00eb G \u00ec \u00e2 \u00a9 \u00a9 \u00eb \u00a5 u \u00c4 \u00ba \u00e5 \u00c4 \u00bc \u00c4 \u00f8 ! 
% \u00a7 \u00f2 # \" \u00e2 \u00c4 \u00ba \u00a9 \u00c4 \u00bc \u00a9 \u00c4 \u00f8 \u00eb G \u00ec \u00e2 \u00e4 \u00e3 \u00a9 \u00e7 \u00a9 \u00e9 \u00eb $ Covered \u00e2 \u00a7 \u00c4 \u00a9 \u00e3 \u00e5 % \u00eb G \u00ec \u00a5 { \u00c4 \u00ba \u00e5 \u00a1 % \u00a7 \u00f2 \" \u00c4 \u00ba \u00ec \u00e3 $ Labeled \u00e2 \u00a7 \u00c4 \u00a9 \u00e3 \u00eb G \u00ec \u00a5 u \u00c4 \u00ba \u00ec \u00e3 $ Root \u00e2 \u00a7 \u00a9 \u00e3 \u00eb G \u00ec \u00a7 \u00c4 \u00a7 & \u00a5 the root of \u00a7 \u00c4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "is labeled with \u00e3 $ Figure 2 : Parameter update formulas.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 20, |
| "end": 28, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "for PCFGs using dynamic programming algorithms, the sum-of-products form of a Q b 8 Y in PCFG-LA models (see Eq. 2 and Eq. 3) makes it difficult to apply such techniques to solve Eq. 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Actually, the optimization problem in Eq. 4 is NPhard for general PCFG-LA models. Although we omit the details, we can prove the NP-hardness by observing that a stochastic tree substitution grammar (STSG) can be represented by a PCFG-LA model in a similar way to one described by Goodman (1996a) , and then using the NP-hardness of STSG parsing (Sima\u00e1n, 2002) .", |
| "cite_spans": [ |
| { |
| "start": 280, |
| "end": 295, |
| "text": "Goodman (1996a)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 345, |
| "end": 359, |
| "text": "(Sima\u00e1n, 2002)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The difficulty of the exact optimization in Eq. 4 forces us to use some approximations of it. The rest of this section describes three different approximations, which are empirically compared in the next section. The first method simply limits the number of candidate parse trees compared in Eq. 4; we first create N-best parses using a PCFG and then, within the N-best parses, select the one with the highest probability in terms of the PCFG-LA. The other two methods are a little more complicated, and we explain them in separate subsections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing with PCFG-LA", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The second approximation method selects the best complete tree", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approximation by Viterbi complete trees", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "8 \u00c5 9A \u00c5B , that is, 8 \u00c5 9A \u00c5B \u2022 I \u00d6 \u00a2 \u00d7 \u00ce # \u00d8 \u00d6 \u00c9 \u00d9 \u00ac r ) \u00da \u00db \u00dd \u00dc m \u00de \u00bb \u00ae r ) s ( ' \u00aa ' a Q b 8 @ 9A C B Y g W (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approximation by Viterbi complete trees", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We call 8 \u00c5 \u00d3 9A C \u00c5 \u00dd B a Viterbi complete tree. Such a tree can be obtained in ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approximation by Viterbi complete trees", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": ") Q t v v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approximation by Viterbi complete trees", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the third method, we approximate the true distribution , is renormalized so that the total mass for the subset sums to 1. Figure 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 125, |
| "end": 133, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Viterbi parse in approximate distribution", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "a Q b 8 E v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Viterbi parse in approximate distribution", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u00e3 \u00e7 \u00e9 % h \" C ( \u00e3 \u00e9 \u00e7 % h \" C s ( D \u00ec \u00e2 \u00e4 \u00e3 \u00a9 # G H \u00a9 I \u00eb \u00a9 E\" \u00ec \u00e2 \u00e7 \u00a9 # G H \u00a9 P \u00eb \u00a9 E( \u00ec \u00e2 \u00e7 \u00a9 P Q \u00a9 I \u00eb , E2 h \u00ec \u00e2 \u00e9 \u00a9 # G R \u00a9 # G \u00eb \u00a9 E5 h \u00ec \u00e2 % \u00a9 S P T \u00a9 P \u00eb \u00a9 E7 \u00ec \u00e2 C \u00a9 I U \u00a9 I \u00eb V \u00e2 E\u00eb \u00ec \u00e2 E\" \u00a9 E7 t \u00eb \u00a9 \u00e2 E( \u00a9 E2\u00eb $ V \u00e2 E\" i \u00eb \u00ec \u00e2 E2 \u00a9 E5 i \u00eb $ \u00a9 V \u00e2 E( t \u00eb G \u00ec \u00e2 E5 \u00a9 E7\u00eb $ V \u00e2 E2 t \u00eb G \u00ec W $ \u00a9 V \u00e2 E5 \u00eb \u00ec h \" F $ \u00a9 V \u00e2 E7\u00eb G \u00ec s ( 6 $", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Viterbi parse in approximate distribution", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We require that each tree 8 w \u00e0 Q Y has a unique representation as a set of connected chart items in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Viterbi parse in approximate distribution", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": ". A packed representation satisfying the uniqueness condition is created using the CKY algorithm with the observable grammar , for instance. The approximate distribution, 4 is generated as a root node. We define", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00c7 Q b 8 E v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00c7 c Q b 8 E v Y as \u00c7 Q b 8 E v Y I \u1ef2 \u00a5 Q k \u00a5 u Y \u00a1 \u00a4 \u00bd T a \u00a5 Y Q k $ \u00bd t b X # \u00bd Y g S", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": "where the set of connected items Frey et al., 2000) : ", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 51, |
| "text": "Frey et al., 2000)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": "T k \u00a5 T S X W X W X W S k g \u00a1 d c", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": "e \u00a1 f Q a v \u00cd v\u00c7 Y I o \u00ac r ) \u00da \u00db \u00dd \u00dc m \u00de a Q b 8 v Y \u00cb \u00cd \u00cc ) \u00ce a Q b 8 E v Y \u00c7 Q b 8 E v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "4", |
| "sec_num": null |
| }, |
| { |
| "text": ", as shown in Figure 4 . a in and a out in Figure 4 are similar to ordinary inside/outside probabilities. We define a in as follows:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 14, |
| "end": 22, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 43, |
| "end": 51, |
| "text": "Figure 4", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Y", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00b5 If k I Q D S \u00b9 S \u00b9 Y 4", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Y", |
| "sec_num": null |
| }, |
| { |
| "text": "is a pre-terminal node above \u00fe \u00bd , then a Several parsing algorithms that also use insideoutside calculation on packed chart have been proposed (Goodman, 1996b; Sima\u00e1n, 2003; Clark and Curran, 2004) . Those algorithms optimize some evaluation metric of parse trees other than the posterior probability a Q b 8 E v Y , e.g., (expected) labeled constituent recall or (expected) recall rate of dependency relations contained in a parse. It is in contrast with our approach where (approximated) posterior probability is optimized.", |
| "cite_spans": [ |
| { |
| "start": 144, |
| "end": 160, |
| "text": "(Goodman, 1996b;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 161, |
| "end": 174, |
| "text": "Sima\u00e1n, 2003;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 175, |
| "end": 198, |
| "text": "Clark and Curran, 2004)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Y", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "in Q k g 9F B Y I q Q D E 9F G B s t \u00bd Y . \u00b5 Otherwise, a in Q k g 9F B Y I o \u00b6 \u00bd r h g { \u00db \u00a2 \u00de o i \u00bbp r \u00a2 s q Q D E 9F G B u t \u00b6 9 B & \u00bd 9 \u00a2 B Y p a in Q \u00cd V 9 B Y a in Q \u00d3 \u00b9", |
| "eq_num": "m" |
| } |
| ], |
| "section": "Y", |
| "sec_num": null |
| }, |
| { |
| "text": "We conducted four sets of experiments. In the first set of experiments, the degree of dependency of trained models on initialization was examined because EM-style algorithms yield different results with different initial values of parameters. In the second set of experiments, we examined the relationship between model types and their parsing performances. In the third set of experiments, we compared the three parsing methods described in the previous section. Finally, we show the result of a parsing experiment using the standard test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We used sections 2 through 20 of the Penn WSJ corpus as training data and section 21 as heldout data. The heldout data was used for early stopping; i.e., the estimation was stopped when the rate", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "r If E D", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "is not a pre-terminal node, for each", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "s \u00ec E\"E( V \u00e2 E $ \u00eb , let \u00e3 \u00a9 \u00e7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": ", and \u00e9 be non-terminal symbols of E \u00a9 E\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": ", and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "N_2. Then, t(r) = sum_{x,y,z} f_out(N, x) beta(A[x] -> B[y] C[z]) f_in(N_1, y) f_in(N_2, z) / sum_{x} f_out(N, x) f_in(N, x). - If N", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "is a pre-terminal node above word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "w, then t(<N -> w>) = 1. - If N", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "is a root node, let A be the non-terminal symbol of of increase in the likelihood of the heldout data became lower than a certain threshold. Section 22 was used as test data in all parsing experiments except the final one, in which section 23 was used. We stripped off all function tags and eliminated empty nodes in the training and heldout data, but no other pre-processing, such as comma raising or base-NP marking (Collins, 1999), was done except for binarization.", |
| "cite_spans": [ |
| { |
| "start": 422, |
| "end": 437, |
| "text": "(Collins, 1999)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
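The early-stopping rule described here (stop EM when the per-iteration gain in heldout log-likelihood drops below a threshold) can be sketched as follows. This is a minimal illustration, not the authors' code; `em_step`, `heldout_ll`, and the threshold value are hypothetical stand-ins.

```python
def train_with_early_stopping(em_step, heldout_ll, threshold=0.5, max_iters=100):
    """Run EM updates until the gain in heldout log-likelihood per iteration
    falls below `threshold` (the early-stopping rule described in the text)."""
    prev = heldout_ll()
    for i in range(1, max_iters + 1):
        em_step()                    # one EM update on the training data
        curr = heldout_ll()          # heldout log-likelihood after the update
        if curr - prev < threshold:  # rate of increase dropped below threshold
            return i                 # stop early after i iterations
        prev = curr
    return max_iters

# Hypothetical heldout log-likelihoods with diminishing gains:
lls = [-100.0, -90.0, -85.0, -84.9]
state = {"step": 0}
def fake_em_step():
    state["step"] += 1
def fake_heldout_ll():
    return lls[state["step"]]

n = train_with_early_stopping(fake_em_step, fake_heldout_ll, threshold=0.5)
print(n)  # prints 3: the third iteration's gain (0.1) is below the threshold
```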
| { |
| "text": "N. Then t(N) = sum_{x} pi(A[x]) f_in(N, x).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
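The inside/outside quantities defined in this section can be sketched on a one-rule tree. This is a hedged illustration with random made-up parameters (H, `beta`, `pi`, and the emission vectors are all hypothetical), not the paper's implementation; it checks that sum_x f_out(N, x) f_in(N, x) is the same at every node, and that the rule posterior t(r) is 1 for the tree's single binary rule.

```python
import random

H = 2  # |H|: number of latent annotation symbols per non-terminal
random.seed(0)

def rand_dist(n):
    """A random probability distribution over n outcomes."""
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

# One binary rule S -> NP VP and two pre-terminals (NP, VP) above one word each.
# beta[x][y][z] = beta(S[x] -> NP[y] VP[z]); each row sums to 1 over (y, z).
flat = [rand_dist(H * H) for _ in range(H)]
beta = [[[flat[x][y * H + z] for z in range(H)] for y in range(H)] for x in range(H)]
emit_np = rand_dist(H)   # beta(NP[y] -> word1), hypothetical values
emit_vp = rand_dist(H)   # beta(VP[z] -> word2)
pi = rand_dist(H)        # pi(S[x]) at the root

# Inside probabilities: emissions at pre-terminals, then the binary recursion.
f_in_np, f_in_vp = emit_np, emit_vp
f_in_s = [sum(beta[x][y][z] * f_in_np[y] * f_in_vp[z]
              for y in range(H) for z in range(H)) for x in range(H)]

# Outside probabilities: pi at the root, then pushed down to a daughter.
f_out_s = pi
f_out_np = [sum(f_out_s[x] * beta[x][y][z] * f_in_vp[z]
                for x in range(H) for z in range(H)) for y in range(H)]

# Z = sum_x f_out(N, x) f_in(N, x) is identical at every node of the tree.
Z = sum(f_out_s[x] * f_in_s[x] for x in range(H))
Z_np = sum(f_out_np[y] * f_in_np[y] for y in range(H))

# Posterior weight of the single rule r = <S -> NP VP>.
t_r = sum(f_out_s[x] * beta[x][y][z] * f_in_np[y] * f_in_vp[z]
          for x in range(H) for y in range(H) for z in range(H)) / Z
print(round(t_r, 6))  # prints 1.0: the only rule in the tree has full posterior
```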
| { |
| "text": "To see the degree of dependency of trained models on initialization, four instances of the same model were trained with different initial values of parameters. 3 The model used in this experiment was created by CENTER-PARENT binarization, with |H| set to 16. Table 1 lists the training/heldout-data log-likelihood per sentence (LL) for the four instances and their parsing performances on the test set (section 22). The parsing performances were obtained using the approximate distribution method in Section 3.2. Different initial values affected the results of training to some extent (Table 1). 3 The initial value for an annotated rule probability, beta(A[x] -> B[y] C[z])", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 162, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 611, |
| "end": 612, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 265, |
| "end": 272, |
| "text": "Table 1", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 599, |
| "end": 608, |
| "text": "(Table 1)", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dependency on initial values", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": ", was created by randomly multiplying the maximum likelihood estimate of the corresponding PCFG rule probability, P(A -> B C)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency on initial values", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": ", as follows: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency on initial values", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "beta(A[x] -> B[y] C[z]) = alpha * P(A -> B C), where alpha is a random multiplier", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency on initial values", |
| "sec_num": "4.1" |
| }, |
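The randomized initialization in footnote 3 can be sketched as follows. The exact perturbation and normalization used by the authors are not recoverable from the extraction, so a uniform multiplier and per-annotation renormalization are assumed; the rule set and probabilities below are made up.

```python
import random
random.seed(1)

def init_annotated_probs(pcfg_rules, H, noise=0.5):
    """pcfg_rules: ML probabilities of the binary rules for one parent A,
    e.g. {('B', 'C'): 0.7, ('D', 'E'): 0.3}.  Returns annotated probabilities
    beta[(rhs, x, y, z)], each ML estimate scaled by a random multiplier and
    then renormalized so that the values for each parent annotation x sum to 1.
    (Sketch only: the paper's exact noise distribution is an assumption here.)"""
    raw = {(rhs, x, y, z): (1.0 + random.uniform(-noise, noise)) * p
           for rhs, p in pcfg_rules.items()
           for x in range(H) for y in range(H) for z in range(H)}
    beta = {}
    for x in range(H):
        total = sum(v for (rhs, xx, y, z), v in raw.items() if xx == x)
        for key, v in raw.items():
            if key[1] == x:
                beta[key] = v / total
    return beta

beta = init_annotated_probs({('B', 'C'): 0.7, ('D', 'E'): 0.3}, H=2)
sums = [sum(v for (rhs, x, y, z), v in beta.items() if x == 0),
        sum(v for (rhs, x, y, z), v in beta.items() if x == 1)]
print([round(s, 6) for s in sums])  # each parent annotation sums to 1.0
```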
| { |
| "text": "We compared four types of binarization. The original subtree form is depicted in Figure 5 and the four binarization schemes in Figure 6 . In the first two methods, called CENTER-PARENT and CENTER-HEAD, the head-finding rules of Collins (1999) were used. We obtained an observable grammar for each model by reading off grammar rules from the binarized training trees. For each binarization method, PCFG-LA models with different numbers of latent annotation symbols,", |
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 225, |
| "text": "Collins (1999)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 73, |
| "end": 81, |
| "text": "Figure 5", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 111, |
| "end": 119, |
| "text": "Figure 6", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model types and parsing performance", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "|H| = 1, 2, 4, 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model types and parsing performance", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ", and 16, were trained. The relationships between the number of parameters in the models and their parsing performances are shown in Figure 7 . Note that models created using different binarization methods have different numbers of parameters for the same |H|. The parsing performances were measured using F_1 scores of the parse trees that were obtained by re-ranking the 1000-best parses produced by a PCFG.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 135, |
| "end": 143, |
| "text": "Figure 7", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model types and parsing performance", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We can see that parsing performance improves as the model size increases. We can also see that models of roughly the same size yield similar performances regardless of the binarization scheme used, except for the models created using LEFT binarization with small numbers of parameters (|H| = 1 and 2). Taking into account the dependency on initial values at the level shown in the previous experiment, we cannot say that any single model is superior to the others when the model sizes are large enough.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model types and parsing performance", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The results shown in Figure 7 suggest that we could further improve parsing performance by increasing the model size. However, both the memory size and the training time grow more than linearly in |H|, and the training time for the largest (", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 21, |
| "end": 29, |
| "text": "Figure 7", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model types and parsing performance", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "|H| = 16", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model types and parsing performance", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ") models was about 15 hours for the models created using CENTER-PARENT, CENTER-HEAD, and LEFT, and about 20 hours for the model created using RIGHT. To deal with larger (e.g., |H| = 32 or 64) models, we therefore need a model search that reduces the number of parameters while maintaining the model's performance, and an approximation during training that reduces the training time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model types and parsing performance", |
| "sec_num": "4.2" |
| }, |
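The claim that memory and training cost grow more than linearly in |H| follows from each binary rule A -> B C expanding into |H|^3 annotated rules A[x] -> B[y] C[z]. A quick illustration (the grammar-size numbers below are made-up round figures, not the paper's grammar):

```python
def num_parameters(n_binary_rules, n_emissions, H):
    """Illustrative parameter count for a PCFG-LA model: |H|**3 annotated
    parameters per binary rule A[x] -> B[y] C[z], and |H| per pre-terminal
    emission A[x] -> w.  Rule counts are hypothetical, for scaling only."""
    return n_binary_rules * H ** 3 + n_emissions * H

sizes = {h: num_parameters(1000, 5000, h) for h in (1, 2, 4, 8, 16)}
print(sizes[16] / sizes[1])  # prints 696.0: far more than the 16x of linear growth
```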
| { |
| "text": "The relationships between the average parse time and parsing performance for the three parsing methods described in Section 3 are shown in Figure 8 . A model created using CENTER-PARENT with |H| = 16 was used throughout this experiment. The data points were generated by varying configurable parameters of each method, which control the number of candidate parses. To create the candidate parses, we first parsed input sentences using a PCFG 4 , using beam thresholding with beam width", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 141, |
| "end": 149, |
| "text": "Figure 8", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison of parsing methods", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "x . The data points on a line in the figure were created by varying x with the other parameters fixed. The first method re-ranked the N-best parses enumerated from the chart after the PCFG parsing. The two lines for the first method in the figure correspond to N = 100 and N = 300. In the second and third methods, we removed all the dominance relations among chart items that did not contribute to any parse whose PCFG score was higher than theta * p_max, where p_max is the PCFG score of the best parse in the chart. The parses remaining in the chart were the candidate parses for the second and third methods. The different lines for the second and third methods correspond to different values of theta. The third method outperforms the other two methods unless the parse time is very limited (i.e., around 1 4 The PCFG used in creating the candidate parses is roughly the same as the one that Klein and Manning (2003) call a 'markovised PCFG with vertical order = 2 and horizontal order = 1' and was extracted from sections 02-20. The PCFG itself gave a performance of 79.6/78.5 LP/LR on the development set. This PCFG was also used in the experiment in Section 4.4.", |
| "cite_spans": [ |
| { |
| "start": 896, |
| "end": 920, |
| "text": "Klein and Manning (2003)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparison of parsing methods", |
| "sec_num": "4.3" |
| } |
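The threshold pruning used by the second and third methods (keep only parses scoring at least theta * p_max under the PCFG) can be sketched over a flat candidate list. This is a simplified stand-in for removing dominance relations from a packed chart; the parse names and scores are hypothetical.

```python
def prune_candidates(scored_parses, theta):
    """Keep candidates whose PCFG score is at least theta * (best score).
    `scored_parses` is a flat list of (parse, score) pairs -- a simplified
    stand-in for pruning items from a packed parse forest."""
    p_max = max(score for _, score in scored_parses)
    return [(p, s) for p, s in scored_parses if s >= theta * p_max]

candidates = [('t1', 1.0), ('t2', 0.4), ('t3', 0.02)]
kept = prune_candidates(candidates, theta=0.1)
print([p for p, _ in kept])  # prints ['t1', 't2']; smaller theta keeps more parses
```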
| ], |
| "back_matter": [ |
| { |
| "text": " Klein and Manning (2003) 85.7 86.9 1.10 60.3; Collins (1999) 88.5 88.7 0.92 66.7; Charniak (1999) 90.1 90.1 0.74 70.1. For sentences of <= 100 words (LR, LP, CB, 0 CB): This paper 86.0 86.1 1.39 58.3; Klein and Manning (2003) 85.1 86.3 1.31 57.2; Collins (1999) 88.1 88.3 1.06 64.0; Charniak (1999) 89.6 89.5 0.88 67.6. Table 2 : Comparison with other parsers. sec is required), as shown in the figure. The superiority of the third method over the first method seems to stem from the difference in the number of candidate parses from which the outputs are selected. 5 The superiority of the third method over the second method is a natural consequence of the consistent use of P(T | w) both in the estimation (as the objective function) and in the parsing (as the score of a parse).", |
| "cite_spans": [ |
| { |
| "start": 1, |
| "end": 25, |
| "text": "Klein and Manning (2003)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 46, |
| "end": 60, |
| "text": "Collins (1999)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 81, |
| "end": 96, |
| "text": "Charniak (1999)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 174, |
| "end": 198, |
| "text": "Klein and Manning (2003)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 219, |
| "end": 233, |
| "text": "Collins (1999)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 254, |
| "end": 269, |
| "text": "Charniak (1999)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 535, |
| "end": 536, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 290, |
| "end": 297, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| }, |
| { |
| "text": "Parsing performance on section 23 of the WSJ corpus using a PCFG-LA model is shown in Table 2 . We used the instance, among the four compared in the second set of experiments, that gave the best results on the development set. Several previously reported results on the same test set are also listed in Table 2. Our result is lower than those of the state-of-the-art lexicalized PCFG parsers (Collins, 1999; Charniak, 1999) , but comparable to that of the unlexicalized PCFG parser of Klein and Manning (2003) . Klein and Manning's PCFG is annotated with many linguistically motivated features that they found through extensive manual feature selection. In contrast, our method induces all parameters automatically, except that manually written head-rules are used in binarization. Thus, our method can extract a considerable amount of hidden regularity from parsed corpora. However, our result is worse than those of the lexicalized parsers despite the fact that our model has access to the words in the sentences. This suggests that certain types of information used in those lexicalized 5 Actually, the number of parses contained in the packed forest is more than 1 million for over half of the test sentences, while the number of parses for which the first method can compute the exact probability in a comparable time (around 4 sec) is only about 300. parsers are hard to learn with our approach.", |
| "cite_spans": [ |
| { |
| "start": 370, |
| "end": 385, |
| "text": "(Collins, 1999;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 386, |
| "end": 401, |
| "text": "Charniak, 1999)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 455, |
| "end": 479, |
| "text": "Klein and Manning (2003)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 482, |
| "end": 491, |
| "text": "Klein and", |
| "ref_id": null |
| }, |
| { |
| "start": 1041, |
| "end": 1042, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 86, |
| "end": 93, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 290, |
| "end": 298, |
| "text": "Table 2.", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparison with related work", |
| "sec_num": "4.4" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A maximum-entropy-inspired parser", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak. 1999. A maximum-entropy-inspired parser. Technical Report CS-99-12, Brown University.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Recovering latent information in treebanks", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "M" |
| ], |
| "last": "Bikel", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "183--189", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Chiang and Daniel M. Bikel. 2002. Recovering latent information in treebanks. In Proc. COLING, pages 183-189.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Parsing the WSJ using CCG and log-linear models", |
| "authors": [ |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "R" |
| ], |
| "last": "Curran", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stephen Clark and James R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proc. ACL, pages 104-111.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Head-Driven Statistical Models for Natural Language Parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univer- sity of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Sequentially fitting \"inclusive\" trees for inference in noisy-OR networks", |
| "authors": [ |
| { |
| "first": "Brendan", |
| "middle": [ |
| "J" |
| ], |
| "last": "Frey", |
| "suffix": "" |
| }, |
| { |
| "first": "Relu", |
| "middle": [], |
| "last": "Patrascu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| }, |
| { |
| "first": "Jodi", |
| "middle": [], |
| "last": "Moran", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "493--499", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brendan J. Frey, Relu Patrascu, Tommi Jaakkola, and Jodi Moran. 2000. Sequentially fitting \"inclusive\" trees for inference in noisy-OR networks. In Proc. NIPS, pages 493-499.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Efficient algorithms for parsing the DOP model", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proc. EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "143--152", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joshua Goodman. 1996a. Efficient algorithms for pars- ing the DOP model. In Proc. EMNLP, pages 143-152.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Parsing algorithms and metrics", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proc. ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "177--183", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joshua Goodman. 1996b. Parsing algorithms and metrics. In Proc. ACL, pages 177-183.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Probabilistic feature grammars", |
| "authors": [ |
| { |
| "first": "Joshua", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. IWPT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joshua Goodman. 1997. Probabilistic feature grammars. In Proc. IWPT.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Inducing history representations for broad coverage statistical parsing", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "103--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Henderson. 2003. Inducing history representa- tions for broad coverage statistical parsing. In Proc. HLT-NAACL, pages 103-110.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "PCFG models of linguistic tree representations", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Computational Linguistics", |
| "volume": "24", |
| "issue": "4", |
| "pages": "613--632", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson. 1998. PCFG models of linguis- tic tree representations. Computational Linguistics, 24(4):613-632.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Accurate unlexicalized parsing", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "423--430", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proc. ACL, pages 423-430.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Insideoutside reestimation from partially bracketed corpora", |
| "authors": [ |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Schabes", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proc. ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "128--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fernando Pereira and Yves Schabes. 1992. Inside- outside reestimation from partially bracketed corpora. In Proc. ACL, pages 128-135.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Nondeterministic LTAG derivation tree extraction", |
| "authors": [ |
| { |
| "first": "Libin", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. TAG+7", |
| "volume": "", |
| "issue": "", |
| "pages": "199--203", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Libin Shen. 2004. Nondeterministic LTAG derivation tree extraction. In Proc. TAG+7, pages 199-203.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Computational complexity of probabilistic disambiguation", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Khalil Sima\u00e1n", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Grammars", |
| "volume": "5", |
| "issue": "2", |
| "pages": "125--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Khalil Sima\u00e1n. 2002. Computational complexity of probabilistic disambiguation. Grammars, 5(2):125- 151.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "On maximizing metrics for syntactic disambiguation", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Khalil Sima\u00e1n", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. IWPT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Khalil Sima\u00e1n. 2003. On maximizing metrics for syn- tactic disambiguation. In Proc. IWPT.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Generalization/specialization of context free grammars based-on entropy of non-terminals", |
| "authors": [ |
| { |
| "first": "Takehito", |
| "middle": [], |
| "last": "Utsuro", |
| "suffix": "" |
| }, |
| { |
| "first": "Syuuji", |
| "middle": [], |
| "last": "Kodama", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proc. JSAI", |
| "volume": "", |
| "issue": "", |
| "pages": "327--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Takehito Utsuro, Syuuji Kodama, and Yuji Matsumoto. 1996. Generalization/specialization of context free grammars based-on entropy of non-terminals. In Proc. JSAI (in Japanese), pages 327-330.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Figure 1: Tree with latent annotations T[X] (complete data) and observed tree T (incomplete data).", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Optimal parameters of approximate distribution Q.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "Original subtree.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "text": "Four types of binarization (H: head daughter).", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "text": "Model size vs. parsing performance.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF5": { |
| "text": "Comparison of parsing methods.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF11": { |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "Dependency on initial values.", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |