| { |
| "paper_id": "P05-1024", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:38:10.722126Z" |
| }, |
| "title": "Boosting-based parse reranking with subtree features", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "NTT Communication Science Laboratories", |
| "location": { |
| "addrLine": "2-4 Hikaridai, Seika-cho", |
| "settlement": "Soraku", |
| "region": "Kyoto", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "NTT Communication Science Laboratories", |
| "location": { |
| "addrLine": "2-4 Hikaridai, Seika-cho", |
| "settlement": "Soraku", |
| "region": "Kyoto", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "NTT Communication Science Laboratories", |
| "location": { |
| "addrLine": "2-4 Hikaridai, Seika-cho", |
| "settlement": "Soraku", |
| "region": "Kyoto", |
| "country": "Japan" |
| } |
| }, |
| "email": "isozaki@cslab.kecl.ntt.co.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper introduces a new application of boosting for parse reranking. Several parsers have been proposed that utilize the all-subtrees representation (e.g., tree kernel and data oriented parsing). This paper argues that such an all-subtrees representation is extremely redundant and a comparable accuracy can be achieved using just a small set of subtrees. We show how the boosting algorithm can be applied to the all-subtrees representation and how it selects a small and relevant feature set efficiently. Two experiments on parse reranking show that our method achieves comparable or even better performance than kernel methods and also improves the testing efficiency.", |
| "pdf_parse": { |
| "paper_id": "P05-1024", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper introduces a new application of boosting for parse reranking. Several parsers have been proposed that utilize the all-subtrees representation (e.g., tree kernel and data oriented parsing). This paper argues that such an all-subtrees representation is extremely redundant and a comparable accuracy can be achieved using just a small set of subtrees. We show how the boosting algorithm can be applied to the all-subtrees representation and how it selects a small and relevant feature set efficiently. Two experiments on parse reranking show that our method achieves comparable or even better performance than kernel methods and also improves the testing efficiency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Recent work on statistical natural language parsing and tagging has explored discriminative techniques. One of the novel discriminative approaches is reranking, where discriminative machine learning algorithms are used to rerank the n-best outputs of generative or conditional parsers. The discriminative reranking methods allow us to incorporate various kinds of features to distinguish the correct parse tree from all other candidates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "With such feature design flexibility, it is nontrivial to employ an appropriate feature set that has a good discriminative ability for parse reranking. In early studies, feature sets were given heuristically by simply preparing task-dependent feature templates (Collins, 2000; Collins, 2002) . These ad-hoc solutions might provide us with reasonable levels of per- * Currently, Google Japan Inc., taku@google.com formance. However, they are highly task dependent and require careful design to create the optimal feature set for each task. Kernel methods offer an elegant solution to these problems. They can work on a potentially huge or even infinite number of features without a loss of generalization. The best known kernel for modeling a tree is the tree kernel (Collins and Duffy, 2002) , which argues that a feature vector is implicitly composed of the counts of subtrees. Although kernel methods are general and can cover almost all useful features, the set of subtrees that is used is extremely redundant. The main question addressed in this paper concerns whether it is possible to achieve a comparable or even better accuracy using just a small and non-redundant set of subtrees.", |
| "cite_spans": [ |
| { |
| "start": 261, |
| "end": 276, |
| "text": "(Collins, 2000;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 277, |
| "end": 291, |
| "text": "Collins, 2002)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 766, |
| "end": 791, |
| "text": "(Collins and Duffy, 2002)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we present a new application of boosting for parse reranking. While tree kernel implicitly uses the all-subtrees representation, our boosting algorithm uses it explicitly. Although this set-up makes the feature space large, the l 1 -norm regularization achived by boosting automatically selects a small and relevant feature set. Such a small feature set is useful in practice, as it is interpretable and makes the parsing (reranking) time faster. We also incorporate a variant of the branch-and-bound technique to achieve efficient feature selection in each boosting iteration.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We describe the general setting of parse reranking.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Training data T is a set of input/output pairs, e.g.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "T = { x 1 , y 1 , . . . , x L , y L },", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where x i is an input sentence, and y i is a correct parse associated with the sentence x i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Let Y(x) be a function that returns a set of candi-date parse trees for a particular sentence x.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 We assume that Y(x i ) contains the correct parse tree y i , i.e., y i \u2208 Y(x i ) *", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Let \u03a6(y) \u2208 R d be a feature function that maps the given parse tree y into R d space. w \u2208 R d is a parameter vector of the model. The output pars\u00ea y of this model on input sentence x is given as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "y = argmax y\u2208Y(x) w \u2022 \u03a6(y).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "There are two questions as regards this formulation. One is how to set the parameters w, and the other is how to design the feature function \u03a6(y). We briefly describe the well-known solutions to these two problems in the next subsections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "General setting of parse reranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We usually adopt a general loss function Loss(w), and set the parameters w that minimize the loss, i.e.,\u0175 = argmin w\u2208R d Loss(w) . Generally, the loss function has the following form:", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 128, |
| "text": "Loss(w)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter estimation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Loss(w) = L i=1 L(w, \u03a6(y i ), x i ), where L(w, \u03a6(y i ), x i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter estimation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "is an arbitrary loss function. We can design a variety of parameter estimation methods by changing the loss function. The following three loss functions, LogLoss, HingeLoss, and BoostLoss, have been widely used in parse reranking tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter estimation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "LogLoss = \u2212 log \u0163 X y\u2208Y(x i ) exp \u015f w \u2022 [\u03a6(yi) \u2212 \u03a6(y)] \u0165 \u0171 HingeLoss = X y\u2208Y(x i ) max(0, 1 \u2212 w \u2022 [\u03a6(yi) \u2212 \u03a6(y)]) BoostLos = X y\u2208Y(x i ) exp \u015f \u2212 w \u2022 [\u03a6(yi) \u2212 \u03a6(y)] \u0165", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter estimation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "LogLoss is based on the standard maximum likelihood optimization, and is used with maximum entropy models. HingeLoss captures the errors only when w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter estimation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 [\u03a6(y i ) \u2212 \u03a6(y)]) < 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter estimation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "This loss is closely related to the maximum margin strategy in SVMs (Vapnik, 1998) . BoostLoss is analogous to the boosting algorithm and is used in (Collins, 2000; Collins, 2002) .", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 82, |
| "text": "(Vapnik, 1998)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 149, |
| "end": 164, |
| "text": "(Collins, 2000;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 165, |
| "end": 179, |
| "text": "Collins, 2002)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parameter estimation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "It is non-trivial to define an appropriate feature function \u03a6(y) that has a good ability to distinguish the correct parse y i from all other candidates In early studies, the feature functions were given heuristically by simply preparing feature templates (Collins, 2000; Collins, 2002) . However, such heuristic selections are task dependent and would not cover all useful features that contribute to overall accuracy.", |
| "cite_spans": [ |
| { |
| "start": 255, |
| "end": 270, |
| "text": "(Collins, 2000;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 271, |
| "end": 285, |
| "text": "Collins, 2002)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition of feature function", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "When we select the special family of loss functions, the problem can be reduced to a dual form that depends only on the inner products of two instances \u03a6(y 1 ) \u2022 \u03a6(y 2 ). This property is important as we can use a kernel trick and we do not need to provide an explicit feature function. For example, tree kernel (Collins and Duffy, 2002) , one of the convolution kernels, implicitly maps the instance represented in a tree into all-subtrees space. Even though the feature space is large, inner products under this feature space can be calculated efficiently using dynamic programming. Tree kernel is more general than feature templates since it can use the all-subtrees representation without loss of efficiency.", |
| "cite_spans": [ |
| { |
| "start": 312, |
| "end": 337, |
| "text": "(Collins and Duffy, 2002)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition of feature function", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "A simple question related to kernel-based parse reranking asks whether all subtrees are really needed to construct the final parameters w. Suppose we have two large trees t and t , where t is simply generated by attaching a single node to t. In most cases, these two trees yield an almost equivalent discriminative ability, since they are very similar and highly correlated with each other. Even when we exploit all subtrees, most of them are extremely redundant.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost with subtree features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The motivation of this paper is based on the above observation. We think that only a small set of subtrees is needed to express the final parameters. A compact, non-redundant, and highly relevant feature set is useful in practice, as it is interpretable and increases the parsing (reranking) speed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost with subtree features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To realize this goal, we propose a new boostingbased reranking algorithm based on the all-subtrees representation. First, we describe the architecture of our reranking method. Second, we show a connection between boosting and SVMs, and describe how the algorithm realizes the sparse feature representa-", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost with subtree features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u00a1 \u00a2 \u00a1 \u00a2 \u00a3 \u00a1 \u00a2 \u00a2 \u00a1 \u00a4 \u00a4 \u00a6 \u00a5 \u00a7\u00a8 \u00a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost with subtree features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Figure 1: Labeled ordered tree and subtree relation tion described above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost with subtree features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let us introduce a labeled ordered tree (or simply 'tree'), its definition and notations, first.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A labeled ordered tree is a tree where each node is associated with a label and is ordered among its siblings, that is, there is a first child, second child, third child, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 1 Labeled ordered tree (Tree)", |
| "sec_num": null |
| }, |
| { |
| "text": "Let t and u be labeled ordered trees. We say that t matches u, or t is a subtree of u (t \u2286 u), if there is a one-to-one function \u03c8 from nodes in t to u, satisfying the conditions: (1) \u03c8 preserves the parent-daughter relation, (2) \u03c8 preserves the sibling relation, (3) \u03c8 preserves the labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definition 2 Subtree", |
| "sec_num": null |
| }, |
| { |
| "text": "We denote the number of nodes in t as |t|. Figure 1 shows an example of a labeled ordered tree and its subtree and non-subtree.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 51, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Definition 2 Subtree", |
| "sec_num": null |
| }, |
| { |
| "text": "We first assume that a parse tree y is represented in a labeled ordered tree. Note that the outputs of partof-speech tagging, shallow parsing, and dependency analysis can be modeled as labeled ordered trees. The feature set F consists of all subtrees seen in the training data, i.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature space given by subtrees", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "F = \u222a i,y\u2208Y(x i ) {t | t \u2286 y}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature space given by subtrees", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The feature mapping \u03a6(y) is then given by letting the existence of a tree t be a single dimension, i.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature space given by subtrees", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u03a6(y) = {I(t 1 \u2286 y), . . . , I(t m \u2286 y)} \u2208 {0, 1} m ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature space given by subtrees", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where I(\u2022) is the indicator function, m = |F|, and {t 1 , . . . , t m } \u2208 F. The feature space is essentially the same as that of tree kernel \u2020 \u2020 Strictly speaking, tree kernel uses the cardinality of each subtree", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature space given by subtrees", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The parameter estimation method we adopt is a variant of the RankBoost algorithm introduced in (Freund et al., 2003) . Collins et al. used RankBoost to parse reranking tasks (Collins, 2000; Collins, 2002) . The algorithm proceeds for K iterations and tries to minimize the BoostLoss for given training data \u2021 . At each iteration, a single feature (hypothesis) is chosen, and its weight is updated.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 116, |
| "text": "(Freund et al., 2003)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 119, |
| "end": 138, |
| "text": "Collins et al. used", |
| "ref_id": null |
| }, |
| { |
| "start": 174, |
| "end": 189, |
| "text": "(Collins, 2000;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 190, |
| "end": 204, |
| "text": "Collins, 2002)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Suppose we have current parameters:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "w = {w 1 , w 2 , . . . , w m } \u2208 R m .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "New parameters w * k,\u03b4 \u2208 R m are then given by selecting a single feature k and updating the weight through an increment \u03b4:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "w * k,\u03b4 = {w 1 , w 2 , . . . , w k + \u03b4, . . . , w m }.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "After the update, the new loss is given:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Loss(w * k,\u03b4 ) = X i, y\u2208Y(x i ) exp \u015f \u2212 w * k,\u03b4 \u2022 [\u03a6(yi) \u2212 \u03a6(y)] \u0165 .", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The RankBoost algorithm iteratively selects the optimal pair k ,\u03b4 that minimizes the loss, i.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "k ,\u03b4 = argmin k,\u03b4 Loss(w * k,\u03b4 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "By setting the differential of (1) at 0, the following optimal solutions are obtained:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "k = argmax k=1,...,m \u0155 \u0155 \u0155 \u0155 q W + k \u2212 q W \u2212 k \u0155 \u0155 \u0155 \u0155 , and \u03b4 = 1 2 log W + k W \u2212 k ,", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "W b k = i,y\u2208Y(x i ) D(y i , y) \u2022 I[I(t k \u2286 y i ) \u2212 I(t k \u2286 y) = b], b \u2208 {+1, \u22121}, and D(y i , y) = exp ( \u2212 w \u2022 [\u03a6(y i ) \u2212 \u03a6(y)]).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Following (Freund et al., 2003; Collins, 2000) , we introduce smoothing to prevent the case when either W + k or W \u2212 k is 0 \u00a7 :", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 31, |
| "text": "(Freund et al., 2003;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 32, |
| "end": 46, |
| "text": "Collins, 2000)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u03b4 = 1 2 log W + k + Z W \u2212 k + Z , where Z = X i,y\u2208Y(x i ) D(yi, y) and \u2208 R + .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The function Y(x) is usually performed by a probabilistic history-based parser, which can output not only a parse tree but the log probability of the \u2021 In our experiments, optimal settings for K were selected by using development data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u00a7 For simplicity, we fix at 0.001 in all our experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "tree. We incorporate the log probability into the reranking by using it as a feature:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u03a6(y) = {L(y), I(t 1 \u2286 y), .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": ". . , I(t m \u2286 y)}, and", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "w = {w 0 , w 1 , w 2 , . . . , w m },", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "where L(y) is the log probability of a tree y under the base parser and w 0 is the parameter of L(y).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Note that the update algorithm (2) does not allow us to calculate the parameter w 0 , since (2) is restricted to binary features. To prevent this problem, we use the approximation technique introduced in (Freund et al., 2003) .", |
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 225, |
| "text": "(Freund et al., 2003)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RankBoost algorithm", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Recent studies (Schapire et al., 1997; R\u00e4tsch, 2001) have shown that both boosting and SVMs (Vapnik, 1998) work according to similar strategies: constructing optimal parameters w that maximize the smallest margin between positive and negative examples. The critical difference is the definition of margin or the way they regularize the vector w.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 38, |
| "text": "(Schapire et al., 1997;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 39, |
| "end": 52, |
| "text": "R\u00e4tsch, 2001)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 92, |
| "end": 106, |
| "text": "(Vapnik, 1998)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sparse feature representation", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "( R\u00e4tsch, 2001) shows that the iterative feature selection performed in boosting asymptotically realizes an l 1 -norm ||w|| 1 regularization. In contrast, it is well known that SVMs are reformulated as an l 2norm ||w|| 2 regularized algorithm. The relationship between two regularizations has been studied in the machine learning community. (Perkins et al., 2003) reported that l 1 -norm should be chosen for a problem where most given features are irrelevant. On the other hand, l 2 -norm should be chosen when most given features are relevant. An advantage of the l 1 -norm regularizer is that it often leads to sparse solutions where most w k are exactly 0. The features assigned zero weight are thought to be irrelevant features as regards classifications.", |
| "cite_spans": [ |
| { |
| "start": 2, |
| "end": 15, |
| "text": "R\u00e4tsch, 2001)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 341, |
| "end": 363, |
| "text": "(Perkins et al., 2003)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sparse feature representation", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The l 1 -norm regularization is useful for our setting, since most features (subtrees) are redundant and irrelevant, and these redundant features are automatically eliminated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sparse feature representation", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In each boosting iteration, we have to solve the following optimization problem:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Computation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "k = argmax k=1,...,m gain(t k ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Computation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Computation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "gain(t k ) = W + k \u2212 W \u2212 k .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Computation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "It is non-trivial to find the optimal tree tk that maximizes gain(t k ), since the number of subtrees is exponential to its size. In fact, the problem is known to be NP-hard (Yang, 2004) . However, in real applications, the problem is manageable, since the maximum number of subtrees is usually bounded by a constant. To solve the problem efficiently, we now adopt a variant of the branch-and-bound algorithm, similar to that described in (Kudo and Matsumoto, 2004) ", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 186, |
| "text": "(Yang, 2004)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 439, |
| "end": 465, |
| "text": "(Kudo and Matsumoto, 2004)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Computation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Abe and Zaki independently proposed an efficient method, rightmost-extension, for enumerating all subtrees from a given tree (Abe et al., 2002; Zaki, 2002) . First, the algorithm starts with a set of trees consisting of single nodes, and then expands a given tree of size (n\u22121) by attaching a new node to it to obtain trees of size n. However, it would be inefficient to expand nodes at arbitrary positions of the tree, as duplicated enumeration is inevitable. The algorithm, rightmost extension, avoids such duplicated enumerations by restricting the position of attachment. Here we give the definition of rightmost extension to describe this restriction in detail.", |
| "cite_spans": [ |
| { |
| "start": 125, |
| "end": 143, |
| "text": "(Abe et al., 2002;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 144, |
| "end": 155, |
| "text": "Zaki, 2002)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Enumeration of Trees", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Definition 3 Rightmost Extension (Abe et al., 2002; Zaki, 2002) Let t and t be labeled ordered trees. We say t is a rightmost extension of t, if and only if t and t satisfy the following three conditions:", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 51, |
| "text": "(Abe et al., 2002;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 52, |
| "end": 63, |
| "text": "Zaki, 2002)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Enumeration of Trees", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(1) t is created by adding a single node to t, (i.e., t \u2282 t and |t| + 1 = |t |).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Enumeration of Trees", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(2) A node is added to a node existing on the unique path from the root to the rightmost leaf (rightmostpath) in t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Enumeration of Trees", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(3) A node is added as the rightmost sibling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Efficient Enumeration of Trees", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Consider Figure 2 , which illustrates example tree t with labels drawn from the set L = {a, b, c}. For the sake of convenience, each node in this figure has its original number (depth-first enumeration). The rightmost-path of the tree t is (a(c(b))), and it occurs at positions 1, 4 and 6 respectively. The set of rightmost extended trees is then enumerated by simply adding a single node to a node on the rightmost path. Since there are three nodes on the rightmost path and the size of the label set is 3 (= |L|), a to- Figure 2 : Rightmost extension tal of 9 trees are enumerated from the original tree t. By repeating the rightmost-extension process recursively, we can create a search space in which all trees drawn from the set L are enumerated.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 17, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 522, |
| "end": 530, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Efficient Enumeration of Trees", |
| "sec_num": "4.1" |
| }, |
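The enumeration described above can be sketched in a few lines of Python. This is our own illustration, not the authors' code: the mutable `[label, children]` tree encoding and the helper names are assumptions made for the example.

```python
import copy

# A node is a mutable pair [label, children]; labels come from the set L.
L = ["a", "b", "c"]

def rightmost_path(tree):
    """Nodes on the unique path from the root to the rightmost leaf."""
    path = [tree]
    node = tree
    while node[1]:              # while the current node has children
        node = node[1][-1]      # descend into the rightmost child
        path.append(node)
    return path

def rightmost_extensions(tree):
    """All trees obtained by attaching one new node, with any label in L,
    as the rightmost child of a node on the rightmost-path."""
    out = []
    for depth in range(len(rightmost_path(tree))):
        for label in L:
            t = copy.deepcopy(tree)
            rightmost_path(t)[depth][1].append([label, []])
            out.append(t)
    return out

# The tree t = (a(b)(c(b))) of Figure 2 has 3 nodes on its rightmost-path,
# so 3 x |L| = 9 extensions are enumerated.
t = ["a", [["b", []], ["c", [["b", []]]]]]
print(len(rightmost_extensions(t)))  # 9
```

Repeating `rightmost_extensions` on each output reproduces the canonical search space in which every tree over L is generated exactly once.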
| { |
| "text": "Rightmost extension defines a canonical search space in which we can enumerate all subtrees from a given set of trees. Here we consider an upper bound of the gain that allows subspace pruning in this canonical search space. The following observation provides a convenient way of computing an upper bound of gain(t'_k) for any super-tree t'_k of t_k.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pruning", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For any t'_k \u2287 t_k, the gain of t'_k is bounded by \u00b5(t_k):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Observation 1 Upper bound of the gain(t k )", |
| "sec_num": null |
| }, |
| { |
| "text": "gain(t'_k) = |\u221aW'+_k \u2212 \u221aW'\u2212_k| \u2264 max(\u221aW'+_k, \u221aW'\u2212_k) \u2264 max(\u221aW+_k, \u221aW\u2212_k) = \u00b5(t_k), since t'_k \u2287 t_k \u21d2 W'^b_k \u2264 W^b_k, b \u2208 {+1, \u22121}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Observation 1 Upper bound of the gain(t k )", |
| "sec_num": null |
| }, |
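Observation 1 can be checked numerically: extending a subtree can only shrink the weights W+ and W\u2212 it covers, so the gain of any super-tree never exceeds \u00b5. The random sampling below is our own sanity check, not part of the paper.

```python
import math
import random

# For any W'+ <= W+ and W'- <= W-,
# |sqrt(W'+) - sqrt(W'-)| <= max(sqrt(W'+), sqrt(W'-)) <= max(sqrt(W+), sqrt(W-)).
random.seed(0)
for _ in range(1000):
    w_pos, w_neg = random.random(), random.random()
    # weights of a hypothetical super-tree: each is at most the original
    wp_sup = random.uniform(0.0, w_pos)
    wn_sup = random.uniform(0.0, w_neg)
    gain_sup = abs(math.sqrt(wp_sup) - math.sqrt(wn_sup))
    mu = max(math.sqrt(w_pos), math.sqrt(w_neg))
    assert gain_sup <= mu + 1e-12
print("bound holds on 1000 random draws")
```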
| { |
| "text": "We can efficiently prune the search space spanned by the rightmost extension using the upper bound of gain, \u00b5(t). During the traversal of the subtree lattice built by the recursive process of rightmost extension, we maintain \u03c4, the maximum of all previously calculated gains. If \u00b5(t) < \u03c4, the gain of any super-tree t' \u2287 t is no greater than \u03c4, and therefore we can safely prune the search space spanned from the subtree t. In contrast, if \u00b5(t) \u2265 \u03c4, we cannot prune this space, since there might be a super-tree t' \u2287 t such that gain(t') \u2265 \u03c4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Observation 1 Upper bound of the gain(t k )", |
| "sec_num": null |
| }, |
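The pruning rule can be sketched as a depth-first branch-and-bound search, using the paper's gain(t) = |\u221aW+ \u2212 \u221aW\u2212| and \u00b5(t) = max(\u221aW+, \u221aW\u2212). The toy lattice, node names, and weights below are invented for illustration only.

```python
import math

def gain(w_pos, w_neg):
    # gain(t) = |sqrt(W+) - sqrt(W-)|
    return abs(math.sqrt(w_pos) - math.sqrt(w_neg))

def mu(w_pos, w_neg):
    # Upper bound on gain(t') for every super-tree t' of t.
    return max(math.sqrt(w_pos), math.sqrt(w_neg))

def search(node, weights, children, tau=0.0):
    """Return the best gain reachable from `node`, pruning any branch
    whose upper bound mu cannot beat the best gain tau found so far."""
    w_pos, w_neg = weights[node]
    tau = max(tau, gain(w_pos, w_neg))
    if mu(w_pos, w_neg) < tau:
        return tau               # safe to prune: no super-tree can exceed tau
    for child in children.get(node, []):
        tau = search(child, weights, children, tau)
    return tau

# "t2" has mu = sqrt(0.1), below the best gain found at "t1",
# so the branch rooted at "t2" (including "t3") is pruned unvisited.
weights = {"t": (0.9, 0.4), "t1": (0.5, 0.1), "t2": (0.1, 0.1), "t3": (0.05, 0.0)}
children = {"t": ["t1", "t2"], "t2": ["t3"]}
print(round(search("t", weights, children), 4))  # 0.3909
```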
| { |
| "text": "In real applications, we also employ the following practical methods to reduce the training costs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ad-hoc techniques", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Larger trees are usually less effective for discrimination. Thus, we set a size threshold s and use only subtrees whose size is no greater than s. This constraint is easily realized by controlling the rightmost extension according to the size of the trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 Size constraint", |
| "sec_num": null |
| }, |
| { |
| "text": "The frequency-based cut-off has been widely used in feature selection. We employ a frequency threshold f, and use only subtrees seen in at least one parse for at least f different sentences. Note that a similar branch-and-bound technique can also be applied to this cut-off: when we find that the frequency of a tree t is less than f, we can safely prune the space spanned from t, as the frequency of any super-tree t' \u2287 t is also less than f. \u2022 Pseudo iterations: After every 5 or 10 iterations of boosting, we alternately perform 100 or 300 pseudo iterations, in which the optimal feature (subtree) is selected from a cache that maintains the features explored in previous iterations. The idea is based on our observation that a feature in the cache tends to be reused as the number of boosting iterations increases. Pseudo iterations converge very quickly, and they help the branch-and-bound algorithm find new features that are not in the cache.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 Frequency constraint", |
| "sec_num": null |
| }, |
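The frequency cut-off exploits the same anti-monotonicity as the gain bound: a subtree's sentence frequency can only shrink under rightmost extension. A minimal sketch, with toy subtree names, lattice, and frequencies that are stand-ins rather than real data:

```python
# freq(t) counts sentences with at least one parse containing subtree t;
# since freq only shrinks when t is extended, freq(t) < f prunes the
# whole subspace rooted at t.

def frequent_subtrees(seed, extend, freq, f):
    """Enumerate subtrees reachable from `seed` whose frequency is >= f."""
    out, stack = [], [seed]
    while stack:
        t = stack.pop()
        if freq(t) < f:
            continue            # no super-tree of t can be frequent either
        out.append(t)
        stack.extend(extend(t))
    return out

# Toy lattice with subtrees named by strings.
children = {"A": ["AB", "AC"], "AB": ["ABB"], "AC": [], "ABB": []}
freqs = {"A": 10, "AB": 4, "AC": 7, "ABB": 3}
result = frequent_subtrees("A", lambda t: children[t], lambda t: freqs[t], f=5)
print(sorted(result))  # ['A', 'AC']
```

Here "AB" (frequency 4) falls below the threshold, so its extension "ABB" is never visited.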
| { |
| "text": "In our experiments, we used the same data set as that used in (Collins, 2000). Sections 2-21 of the Penn Treebank were used as training data, and section 23 was used as test data. The training data contains about 40,000 sentences, each of which has an average of 27 distinct parses. Of the 40,000 training sentences, the first 36,000 were used to run the RankBoost algorithm; the remaining 4,000 were used as development data. Model2 of (Collins, 1999) was used to parse both the training and test data.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 74, |
| "text": "(Collins, 2000)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 459, |
| "end": 474, |
| "text": "(Collins, 1999)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parsing Wall Street Journal Text", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To capture the lexical information of the parse trees, we did not use a standard CFG tree but a lexicalized-CFG tree, where each non-terminal node has an extra lexical node labeled with the head word of the constituent (Figure 3: Lexicalized CFG tree for WSJ parsing; the head word, e.g., (saw), is put as the leftmost constituent). The size parameter s and frequency parameter f were experimentally set at 6 and 10, respectively. As the data set is very large, it is difficult to run experiments with less restricted parameters. Table 1 lists results on the test data for Model2 of (Collins, 1999), for several previous studies, and for our best model. We achieve recall/precision of 89.3%/89.6% for sentences with \u2264 100 words and 89.9%/90.1% for sentences with \u2264 40 words. The method shows a 1.2% absolute improvement in average precision and recall (from 88.2% to 89.4% for sentences \u2264 100 words), a 10.1% relative reduction in error. (Collins, 2000) achieved 89.6%/89.9% recall and precision for the same data set (sentences \u2264 100 words) using boosting and manually constructed features. (Charniak, 2000) extends a PCFG and achieves performance similar to (Collins, 2000). The tree kernel method of (Collins and Duffy, 2002) uses the all-subtrees representation and achieves 88.6%/88.9% recall and precision, which are slightly worse than the results obtained with our model. (Bod, 2001) also uses the all-subtrees representation with a very different parameter estimation method, and achieves 90.06%/90.08% recall and precision for sentences of \u2264 40 words.", |
| "cite_spans": [ |
| { |
| "start": 579, |
| "end": 594, |
| "text": "(Collins, 1999)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 938, |
| "end": 953, |
| "text": "(Collins, 2000)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1092, |
| "end": 1108, |
| "text": "(Charniak, 2000)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1158, |
| "end": 1173, |
| "text": "(Collins, 2000)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1202, |
| "end": 1227, |
| "text": "(Collins and Duffy, 2002)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1379, |
| "end": 1389, |
| "text": "(Bod, 2001", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 219, |
| "end": 227, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 526, |
| "end": 533, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Parsing Wall Street Journal Text", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We used the same data set as the CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000) . Sections 15-18 of the Penn Treebank were used as training data, and section 20 was used as test data.", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 91, |
| "text": "(Tjong Kim Sang and Buchholz, 2000)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Shallow Parsing", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "As a baseline model, we used a shallow parser based on Conditional Random Fields (CRFs), very similar to that described in (Sha and Pereira, 2003). CRFs have shown remarkable results in a number of tagging and chunking tasks in NLP. n-best outputs were obtained by a combination of forward", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 146, |
| "text": "(Sha and Pereira, 2003)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Shallow Parsing", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Viterbi search and backward A* search. Note that this search algorithm yields optimal n-best results in terms of the CRFs score. Each sentence has at most 20 distinct parses. The log probability from the CRFs shallow parser was incorporated into the reranking. Following (Collins, 2000), the training set was split into 5 portions; the CRFs shallow parser was trained on 4/5 of the data and then used to decode the remaining 1/5. The outputs of the base parser, which consist of base phrases, were converted into right-branching trees by assuming that two adjacent base phrases are in a parent-child relationship. Figure 4 shows an example of such a tree for the shallow parsing task. We also add two virtual nodes, the left/right boundaries, to capture local transitions. The size parameter s and frequency parameter f were experimentally set at 6 and 5, respectively. Table 2 lists results on the test data for the baseline CRFs parser, for several previous studies, and for our best model. Our model achieves a 94.12 F-measure, and outperforms both the baseline CRFs parser and the SVMs parser (Kudo and Matsumoto, 2001). (Zhang et al., 2002) reported a higher F-measure with a generalized winnow using additional linguistic features. The accuracy of our model is very similar to that of (Zhang et al., 2002) without such additional features. Table 3 shows the results of our best model per chunk type.", |
| "cite_spans": [ |
| { |
| "start": 271, |
| "end": 286, |
| "text": "(Collins, 2000)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1078, |
| "end": 1104, |
| "text": "(Kudo and Matsumoto, 2001)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1107, |
| "end": 1127, |
| "text": "(Zhang et al., 2002)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1273, |
| "end": 1293, |
| "text": "(Zhang et al., 2002)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 616, |
| "end": 624, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 861, |
| "end": 868, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 1334, |
| "end": 1341, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Shallow Parsing", |
| "sec_num": "5.2" |
| }, |
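The 5-fold scheme described above (train on 4/5, decode the held-out 1/5) can be sketched as follows. `train_crf` and `decode_nbest` are hypothetical stand-ins supplied by the caller, not the paper's actual code; the point is that every training sentence is decoded by a model that never saw it, so the reranker trains on realistic n-best lists.

```python
def jackknife_nbest(sentences, train_crf, decode_nbest, k=5):
    """Split the data into k folds; decode each fold with a model
    trained on the other k-1 folds."""
    folds = [sentences[i::k] for i in range(k)]
    outputs = []
    for i in range(k):
        held_out = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        model = train_crf(train)                  # trained on 4/5 of the data
        outputs.extend(decode_nbest(model, held_out))
    return outputs

# Tiny demonstration: the "model" just remembers its training set, and the
# "decoder" reports whether a sentence leaked into its own training fold.
sents = list(range(10))
res = jackknife_nbest(
    sents,
    train_crf=lambda tr: set(tr),
    decode_nbest=lambda m, held: [(s, s in m) for s in held],
)
print(all(not leaked for _, leaked in res))  # True
```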
| { |
| "text": "The numbers of active (non-zero) features selected by boosting are around 8,000 and 3,000 in WSJ parsing and shallow parsing, respectively. Although almost all subtrees are used as feature candidates, boosting selects a small and highly relevant subset of features. If we explicitly enumerated the subtrees used by the tree kernel, the number of active features might amount to millions or more. Note that the accuracies under such sparse feature spaces are still comparable to those obtained with the tree kernel. This result supports our first intuition that we do not always need all the subtrees to construct the parameters. The tree (SBAR(IN(for))(NP(VP(TO)))) has a large positive weight, while the tree (SBAR(IN(for))(NP(O))) has a negative weight. The improvement on subordinate phrases is considerable: we achieve a 19% relative error reduction for subordinate phrases (from 87.68 to 90.02 in F-measure).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpretability and Efficiency", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The testing speed of our model is much higher than that of other models. The speeds of reranking for WSJ parsing and shallow parsing are 0.055 sec./sent. and 0.042 sec./sent. respectively, which are fast enough for real applications \u00b6 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interpretability and Efficiency", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The tree kernel uses the all-subtrees representation not explicitly but implicitly, by reducing the problem to the calculation of the inner products of two trees. This implicit calculation yields a practical computation in training. In testing, however, kernel methods require a large number of kernel evaluations, which are too expensive for real applications. Moreover, the tree kernel needs to incorporate a decay factor to downweight the contribution of larger subtrees. It is non-trivial to set the optimal decay factor, as the accuracies are sensitive to its selection. Similar to our model, data oriented parsing (DOP) methods (Bod, 1998) deal with the all-subtrees representation explicitly. Since the exact computation of scores for DOP is NP-complete, several approximations are employed to perform efficient parsing. The critical difference between our model and DOP is that our model leads to an extremely sparse solution and automatically eliminates redundant subtrees. Within the DOP framework, (Bod, 2001) also employs constraints (e.g., on the depth of subtrees) to", |
| "cite_spans": [ |
| { |
| "start": 632, |
| "end": 643, |
| "text": "(Bod, 1998)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1006, |
| "end": 1017, |
| "text": "(Bod, 2001)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relationship to previous work", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "select relevant subtrees and achieves the best results for WSJ parsing. However, these techniques are not based on the regularization framework focused on in this paper and do not always eliminate all of the redundant subtrees. Even using the methods of (Bod, 2001), millions of subtrees are still exploited, which leads to inefficiency on real problems.", |
| "cite_spans": [ |
| { |
| "start": 580, |
| "end": 591, |
| "text": "(Bod, 2001)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relationship to previous work", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "In this paper, we presented a new application of boosting for parse reranking, in which all subtrees are potentially used as distinct features. Although this set-up greatly increases the feature space, the l1-norm regularization performed by boosting selects a compact and relevant feature set. Our model achieved comparable or even better accuracy than kernel methods, even with an extremely small number of features (subtrees).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "* In the real setting, we cannot assume this condition. In this case, we select the parse tree \u0177 that is most similar to y_i and take \u0177 as the correct parse tree y_i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00b6 We ran these tests on a Linux PC with a Pentium 4 3.2 GHz processor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Optimized substructure discovery for semi-structured data", |
| "authors": [ |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Abe", |
| "suffix": "" |
| }, |
| { |
| "first": "Shinji", |
| "middle": [], |
| "last": "Kawasoe", |
| "suffix": "" |
| }, |
| { |
| "first": "Tatsuya", |
| "middle": [], |
| "last": "Asai", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of PKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "1--14", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kenji Abe, Shinji Kawasoe, Tatsuya Asai, Hiroki Arimura, and Setsuo Arikawa. 2002. Optimized substructure discovery for semi-structured data. In Proc. of PKDD, pages 1-14.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Beyond Grammar: An Experience Based Theory of Language", |
| "authors": [ |
| { |
| "first": "Rens", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rens Bod. 1998. Beyond Grammar: An Experience Based The- ory of Language. CSLI Publications/Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "What is the minimal set of fragments that achieves maximal parse accuracy?", |
| "authors": [ |
| { |
| "first": "Rens", |
| "middle": [], |
| "last": "Bod", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "66--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rens Bod. 2001. What is the minimal set of fragments that achieves maximal parse accuracy? In Proc. of ACL, pages 66-73.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A maximum-entropy-inspired parser", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "132--139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proc. of NAACL, pages 132-139.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Nigel", |
| "middle": [], |
| "last": "Duffy", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Nigel Duffy. 2002. New ranking algo- rithms for parsing and tagging: Kernels over discrete struc- tures, and the voted perceptron. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Head-Driven Statistical Models for Natural Language Parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Discriminative reranking for natural language parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "175--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proc. of ICML, pages 175-182.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Ranking algorithms for named-entity extraction: Boosting and the voted perceptron", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "489--496", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2002. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron. In Proc. of ACL, pages 489-496.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An efficient boosting algorithm for combining preferences", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Raj", |
| "middle": [ |
| "D" |
| ], |
| "last": "Iyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "4", |
| "issue": "", |
| "pages": "933--969", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Freund, Raj D. Iyer, Robert E. Schapire, and Yoram Singer. 2003. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933- 969.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Chunking with support vector machines", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "192--199", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proc. of NAACL, pages 192-199.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A boosting algorithm for classification of semi-structured text", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "301--308", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo and Yuji Matsumoto. 2004. A boosting algo- rithm for classification of semi-structured text. In Proc. of EMNLP, pages 301-308.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Grafting: Fast, incremental feature selection by gradient descent in function space", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Perkins", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Lacker", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Thiler", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "1333--1356", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Perkins, Kevin Lacker, and James Thiler. 2003. Graft- ing: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3:1333-1356.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Robust Boosting via Convex Optimization", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gunnar", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "R\u00e4tsch", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gunnar. R\u00e4tsch. 2001. Robust Boosting via Convex Optimiza- tion. Ph.D. thesis, Department of Computer Science, Uni- versity of Potsdam.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Boosting the margin: a new explanation for the effectiveness of voting methods", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "E" |
| ], |
| "last": "Schapire", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Freund", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Bartlett", |
| "suffix": "" |
| }, |
| { |
| "first": "Wee Sun", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "322--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. 1997. Boosting the margin: a new explanation for the effectiveness of voting methods. In Proc. of ICML, pages 322-330.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Shallow parsing with conditional random fields", |
| "authors": [ |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Sha", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "213--220", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proc. of HLT-NAACL, pages 213-220.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Introduction to the CoNLL-2000 Shared Task: Chunking", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "F" |
| ], |
| "last": "Tjong", |
| "suffix": "" |
| }, |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Buchholz", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of CoNLL-2000 and LLL-2000", |
| "volume": "", |
| "issue": "", |
| "pages": "127--132", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduc- tion to the CoNLL-2000 Shared Task: Chunking. In Proc. of CoNLL-2000 and LLL-2000, pages 127-132.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Statistical Learning Theory", |
| "authors": [ |
| { |
| "first": "Vladimir", |
| "middle": [ |
| "N" |
| ], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vladimir N. Vapnik. 1998. Statistical Learning Theory. Wiley- Interscience.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "The complexity of mining maximal frequent itemsets and maximal frequent patterns", |
| "authors": [ |
| { |
| "first": "Guizhen", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guizhen Yang. 2004. The complexity of mining maximal fre- quent itemsets and maximal frequent patterns. In Proc. of SIGKDD.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Efficiently mining frequent trees in a forest", |
| "authors": [ |
| { |
| "first": "Mohammed", |
| "middle": [], |
| "last": "Zaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "71--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammed Zaki. 2002. Efficiently mining frequent trees in a forest. In Proc. of SIGKDD, pages 71-80.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Text chunking based on a generalization of winnow", |
| "authors": [ |
| { |
| "first": "Tong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Fred", |
| "middle": [], |
| "last": "Damerau", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "2", |
| "issue": "", |
| "pages": "615--637", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tong Zhang, Fred Damerau, and David Johnson. 2002. Text chunking based on a generalization of winnow. Journal of Machine Learning Research, 2:615-637.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Figure 3 shows an example of the lexicalized-CFG tree used in our experiments.", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "LR/LP = labeled recall/precision. CBs is the average number of crossing brackets per sentence. 0 CBs and 2 CBs are the percentages of sentences with 0 or \u2264 2 crossing brackets, respectively. CO99 = Model 2 of (Collins, 1999). CH00 = (Charniak, 2000). CO00 = (Collins, 2000). CO02 = (Collins and Duffy, 2002).", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Tree representation for shallow parsing: base phrases represented as a right-branching tree with two virtual nodes.", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "1.4500 (SBAR(IN(for))(NP(VP(TO)))) 0.6177 (VP(SBAR(NP(VBD))) 0.6173 (SBAR(NP(VP(\")))) 0.5644 (VP(SBAR(NP(VP(JJ))))) .. .. -0.9034 (SBAR(IN(for))(NP(O))) -0.9181 (SBAR(NP(O))) -1.0695 (ADVP(NP(SBAR(NP(VP))))) -1.1699 (SBAR(NP(NN)(NP)))", |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "\u2264 40 Words (2245 sentences): MODEL LR LP CBs 0 CBs 2 CBs; CO99 88.5% 88.7% 0.92 66.7% 87.1%; CH00 90.1% 90.1% 0.74 70.1% 89.6%; CO00 90.1% 90.4% 0.74 70.3% 89.6%; CO02 89.1% 89.4% 0.85 69.3% 88.2%; Boosting 89.9% 90.1% 0.77 70.5% 89.4%", |
| "content": "<table><tr><td>MODEL</td><td colspan=\"2\">\u2264 40 Words (2245 sentences)</td></tr><tr><td/><td colspan=\"2\">LR LP CBs 0 CBs 2 \u2264 100 Words (2416 sentences)</td></tr><tr><td/><td>LR</td><td>LP CBs 0 CBs 2 CBs</td></tr><tr><td colspan=\"3\">CO99 88.1% 88.3% 1.06 64.0% 85.1%</td></tr><tr><td colspan=\"3\">CH00 89.6% 89.5% 0.88 67.6% 87.7%</td></tr><tr><td colspan=\"3\">CO00 89.6% 89.9% 0.87 68.3% 87.7%</td></tr><tr><td colspan=\"3\">CO02 88.6% 88.9% 0.99 66.5% 86.3%</td></tr><tr><td colspan=\"3\">Boosting 89.3% 89.6% 0.90 67.9% 87.5%</td></tr><tr><td colspan=\"3\">Table 1: Results for section 23 of the WSJ Treebank</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "Results of shallow parsing. F \u03b2=1 is the harmonic mean of precision and recall.", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "The sparse feature representations are useful in practice, as they allow us to analyze which kinds of features are relevant. Table 4 shows examples of active features along with their weights w k . In the shallow parsing tasks, subordinate phrases (SBAR) are difficult to analyze without seeing long dependencies. Subordinate phrases usually precede a sentence (NP and VP). However, Markov-based shallow parsers, such as MEMM or CRFs, cannot capture such a long dependency. Our model automatically selects useful subtrees to obtain an improvement on subordinate phrases. It is interesting that the", |
| "content": "<table><tr><td/><td colspan=\"2\">Precision Recall</td><td>F \u03b2=1</td></tr><tr><td>ADJP</td><td colspan=\"3\">80.35% 73.41% 76.72</td></tr><tr><td>ADVP</td><td colspan=\"3\">83.88% 82.33% 83.10</td></tr><tr><td colspan=\"4\">CONJP 42.86% 66.67% 52.17</td></tr><tr><td>INTJ</td><td colspan=\"3\">50.00% 50.00% 50.00</td></tr><tr><td>LST</td><td>0.00%</td><td>0.00%</td><td>0.00</td></tr><tr><td>NP</td><td colspan=\"3\">94.45% 94.36% 94.41</td></tr><tr><td>PP</td><td colspan=\"3\">97.24% 98.07% 97.65</td></tr><tr><td>PRT</td><td colspan=\"3\">76.92% 75.47% 76.19</td></tr><tr><td>SBAR</td><td colspan=\"3\">90.70% 89.35% 90.02</td></tr><tr><td>VP</td><td colspan=\"3\">93.95% 94.72% 94.33</td></tr><tr><td colspan=\"4\">Overall 94.11% 94.13% 94.12</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Results of shallow parsing per chunk type", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF4": { |
| "text": "Examples of active features (subtrees)", |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |