{
"paper_id": "P12-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:27:41.159366Z"
},
"title": "Utilizing Dependency Language Models for Graph-based Dependency Parsing Models",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human Language Technology",
"location": {
"country": "Singapore"
}
},
"email": "wechen@i2r.a-star.edu.sg"
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human Language Technology",
"location": {
"country": "Singapore"
}
},
"email": "mzhang@i2r.a-star.edu.sg"
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Human Language Technology",
"location": {
"country": "Singapore"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most previous graph-based parsing models increase decoding complexity when they use high-order features, due to exact-inference decoding. In this paper, we present an approach to enriching high-order feature representations for graph-based dependency parsing models using a dependency language model and beam search. The dependency language model is built on a large amount of additional auto-parsed data that is processed by a baseline parser. Based on the dependency language model, we define a set of features for the parsing model. Finally, the features are efficiently integrated into the parsing model during decoding using beam search. Our approach has two advantages. First, we utilize rich high-order features defined over a view of large scope and an additional large raw corpus. Second, our approach does not increase the decoding complexity. We evaluate the proposed approach on English and Chinese data. The experimental results show that our new parser achieves the best accuracy on the Chinese data and accuracy comparable to the best known systems on the English data.",
"pdf_parse": {
"paper_id": "P12-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "Most previous graph-based parsing models increase decoding complexity when they use high-order features, due to exact-inference decoding. In this paper, we present an approach to enriching high-order feature representations for graph-based dependency parsing models using a dependency language model and beam search. The dependency language model is built on a large amount of additional auto-parsed data that is processed by a baseline parser. Based on the dependency language model, we define a set of features for the parsing model. Finally, the features are efficiently integrated into the parsing model during decoding using beam search. Our approach has two advantages. First, we utilize rich high-order features defined over a view of large scope and an additional large raw corpus. Second, our approach does not increase the decoding complexity. We evaluate the proposed approach on English and Chinese data. The experimental results show that our new parser achieves the best accuracy on the Chinese data and accuracy comparable to the best known systems on the English data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In recent years, many data-driven models have been proposed for dependency parsing. Among them, graph-based dependency parsing models have achieved state-of-the-art performance for a wide range of languages, as shown in recent CoNLL shared tasks (Buchholz and Marsi, 2006) . In the graph-based models, dependency parsing is treated as a structured prediction problem in which the graphs are usually represented as factored structures. The information of the factored structures decides the features that the models can utilize. Several previous studies exploit high-order features that lead to significant improvements. McDonald et al. (2005) and Covington (2001) develop models that represent first-order features over a single arc in graphs. By extending the first-order model, McDonald and Pereira (2006) and Carreras (2007) exploit second-order features over two adjacent arcs in second-order models. Koo and Collins (2010) further propose a third-order model that uses third-order features. These models utilize higher-order feature representations and achieve better performance than the first-order models. However, this achievement comes at the cost of higher decoding complexity, which increases from O(n 2 ) to O(n 4 ), where n is the length of the input sentence. Thus, it is very hard to develop higher-order models further in this way.",
"cite_spans": [
{
"start": 260,
"end": 286,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF0"
},
{
"start": 649,
"end": 671,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF17"
},
{
"start": 676,
"end": 692,
"text": "Covington (2001)",
"ref_id": "BIBREF5"
},
{
"start": 808,
"end": 835,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF16"
},
{
"start": 840,
"end": 855,
"text": "Carreras (2007)",
"ref_id": "BIBREF1"
},
{
"start": 933,
"end": 955,
"text": "Koo and Collins (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "How to enrich high-order feature representations without increasing the decoding complexity of graph-based models becomes a very challenging problem in the dependency parsing task. In this paper, we solve this issue by enriching the feature representations for a graph-based model using a dependency language model (DLM) (Shen et al., 2008) . The N-gram DLM has the ability to predict the next child based on the N-1 immediate previous children and their head (Shen et al., 2008) . The basic idea is that we use the DLM to evaluate whether a valid dependency tree is well-formed from a view of large scope. The parsing model searches for the final dependency trees by considering the original scores and the DLM scores.",
"cite_spans": [
{
"start": 322,
"end": 341,
"text": "(Shen et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 461,
"end": 480,
"text": "(Shen et al., 2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our approach, the DLM is built on a large amount of auto-parsed data, which is processed by an original first-order parser (McDonald et al., 2005) . We represent the features based on the DLM. The DLM-based features can capture the N-gram information of the parent-children structures for the parsing model. Then, they are integrated directly into the decoding algorithm using beam search. Our new parsing model can utilize rich high-order feature representations without increasing the decoding complexity.",
"cite_spans": [
{
"start": 126,
"end": 149,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To demonstrate the effectiveness of the proposed approach, we conduct experiments on English and Chinese data. The results indicate that the approach greatly improves the accuracy. In summary, we make the following contributions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We utilize the dependency language model to enhance the graph-based parsing model. The DLM-based features are integrated directly into the beam-search decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 The new parsing model uses the rich high-order features defined over a view of large scope and an additional large raw corpus, but without increasing the decoding complexity. \u2022 Our parser achieves the best accuracy on the Chinese data and accuracy comparable to the best known systems on the English data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Language models play a very important role in statistical machine translation (SMT). The standard N-gram language model predicts the next word based on the N \u2212 1 immediate previous words. However, the traditional N-gram language model cannot capture long-distance word relations. To overcome this problem, Shen et al. (2008) proposed a dependency language model (DLM) to exploit long-distance word relations for SMT. The N-gram DLM predicts the next child of a head based on the N \u2212 1 immediate previous children and the head itself. In this paper, we define a DLM, which is similar to that of Shen et al. (2008) , to score entire dependency trees.",
"cite_spans": [
{
"start": 312,
"end": 330,
"text": "Shen et al. (2008)",
"ref_id": "BIBREF22"
},
{
"start": 602,
"end": 620,
"text": "Shen et al. (2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "An input sentence is denoted by x = (x 0 , x 1 , ..., x i , ..., x n ), where x 0 = ROOT and does not depend on any other token in x and each token x i refers to a word. Let y be a dependency tree for x and H(y) be a set that includes the words that have at least one dependent. For each x h \u2208 H(y), we have a dependency structure",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "D h = (x Lk , ...x L1 , x h , x R1 ...x Rm ), where x Lk , ...x L1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "are the children on the left side from the farthest to the nearest and x R1 ...x Rm are the children on the right side from the nearest to the farthest. Probability P (D h ) is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (D h ) = P L (D h ) \u00d7 P R (D h )",
"eq_num": "(1)"
}
],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "Here, P L and P R are the left- and right-side generative probabilities, respectively. Suppose we use an N-gram dependency language model. P L is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "P L (D h ) \u2248 P Lc (x L1 |x h ) \u00d7 P Lc (x L2 |x L1 , x h ) \u00d7 ... \u00d7 P Lc (x Lk |x L(k\u22121) , ..., x L(k\u2212N +1) , x h ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "where the approximation is based on an (N\u22121)th-order Markov assumption. The right side probability is similar. For a dependency tree, we calculate the probability as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y) = \u220f x h \u2208H(y) P (D h )",
"eq_num": "(3)"
}
],
"section": "Dependency language model",
"sec_num": "2"
},
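The generative story in Equations 1-3 can be made concrete with a small sketch. Everything below (words, probability values, the names `side_prob` and `P_Dh`) is a hypothetical toy illustration, not taken from the paper; real probabilities are estimated from auto-parsed data as in Section 5.2.

```python
# Toy child-generation probabilities for a 3-gram DLM (N = 3).
# A history is the up-to-(N-1) immediate previous children on the
# same side plus the head itself.
P_Lc = {
    ("a", ("saw",)): 0.5,        # first left child: conditioned on the head only
    ("boy", ("a", "saw")): 0.4,  # next left child: previous child + head
}
P_Rc = {
    ("dog", ("saw",)): 0.3,
}

def side_prob(children, head, table, n=3):
    """P_L or P_R for one side of D_h (Equation 2); children nearest-first."""
    prob, prev = 1.0, []
    for child in children:
        hist = tuple(prev[-(n - 1):]) + (head,)
        prob *= table[(child, hist)]
        prev.append(child)
    return prob

def P_Dh(left_near_first, head, right_near_first):
    """Equation 1: P(D_h) = P_L(D_h) * P_R(D_h)."""
    return (side_prob(left_near_first, head, P_Lc) *
            side_prob(right_near_first, head, P_Rc))

# Equation 3: P(y) is the product of P(D_h) over all heads with dependents;
# this toy tree has a single such head, "saw".
p_y = P_Dh(["a", "boy"], "saw", ["dog"])
print(p_y)  # 0.5 * 0.4 * 0.3 = 0.06 (up to float rounding)
```

Note that the children are generated nearest-first, matching the conditioning order P_Lc(x_L1 | x_h), P_Lc(x_L2 | x_L1, x_h), and so on.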
{
"text": "In this paper, we use a linear model to calculate the scores for the parsing models (defined in Section 3.1). Accordingly, we reformulate Equation 3. We define f DLM as a high-dimensional feature representation based on arbitrary features of P Lc , P Rc and x. The DLM score of tree y is then computed as the inner product of f DLM with a corresponding weight vector w DLM .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "score DLM (y) = f DLM \u2022 w DLM (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency language model",
"sec_num": "2"
},
{
"text": "In this section, we propose a parsing model which includes the dependency language model by extending the model of McDonald et al. (2005) .",
"cite_spans": [
{
"start": 115,
"end": 137,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parsing with dependency language model",
"sec_num": "3"
},
{
"text": "The graph-based parsing model aims to search for the maximum spanning tree (MST) in a graph (McDonald et al., 2005) . We write (x i , x j ) \u2208 y if there is a dependency in tree y from word x i to word x j (x i is the head and x j is the dependent). A graph, denoted by G x , consists of a set of nodes",
"cite_spans": [
{
"start": 92,
"end": 116,
"text": "(McDonald et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "V x = {x 0 , x 1 , ..., x i , ..., x n } and a set of arcs (edges) E x = {(x i , x j ) | i \u2260 j, x i \u2208 V x , x j \u2208 (V x \u2212 x 0 )},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "where the nodes in V x are the words in x. Let T (G x ) be the set of all the subgraphs of G x that are valid dependency trees for sentence x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "The formulation defines the score of a dependency tree y \u2208 T (G x ) to be the sum of the edge scores,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(x, y) = \u2211 g\u2208y score(w, x, g)",
"eq_num": "(5)"
}
],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "where g is a spanning subgraph of y. g can be a single dependency or adjacent dependencies. Then y is represented as a set of factors. The model scores each factor using a weight vector w that contains the weights for the features to be learned during training using the Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2003; McDonald and Pereira, 2006) . The scoring function is",
"cite_spans": [
{
"start": 311,
"end": 337,
"text": "(Crammer and Singer, 2003;",
"ref_id": "BIBREF7"
},
{
"start": 338,
"end": 365,
"text": "McDonald and Pereira, 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "score(w, x, g) = f(x, g) \u2022 w (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "where f(x, g) is a high-dimensional feature representation which is based on arbitrary features of g and x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "The parsing model finds a maximum spanning tree (MST), which is the highest scoring tree in T (G x ). The task of the decoding algorithm for a given sentence x is to find y * ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
{
"text": "y * = arg max y\u2208T (Gx) s(x, y) = arg max y\u2208T (Gx) \u2211 g\u2208y score(w, x, g)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph-based parsing model",
"sec_num": "3.1"
},
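As a minimal illustration of the arc-factored score in Equation 5 and the argmax above, the sketch below scores a few candidate trees of a toy 3-word sentence by summing precomputed arc scores and picks the best by brute force. The arc scores are invented, and a real decoder would use Eisner's algorithm or MST decoding rather than enumeration.

```python
# Hypothetical arc scores score(w, x, (head, dep)) = f(x, g)·w, precomputed
# here as a plain table for a toy sentence x = (ROOT, x1, x2).
arc_score = {
    (0, 1): 5.0, (0, 2): 1.0,
    (1, 2): 4.0, (2, 1): 2.0,
}

def tree_score(arcs):
    """Equation 5: the score of a tree is the sum of its edge scores."""
    return sum(arc_score[a] for a in arcs)

# Brute-force argmax over the valid dependency trees of this toy graph.
candidates = [
    [(0, 1), (1, 2)],   # ROOT -> x1 -> x2
    [(0, 1), (0, 2)],   # ROOT heads both words
    [(0, 2), (2, 1)],   # ROOT -> x2 -> x1
]
best = max(candidates, key=tree_score)
print(best, tree_score(best))  # [(0, 1), (1, 2)] 9.0
```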
{
"text": "In our approach, we consider the scores of the DLM when searching for the maximum spanning tree. Then for a given sentence x, we find y * DLM ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Add DLM scores",
"sec_num": "3.2"
},
{
"text": "y * DLM = arg max y\u2208T (Gx) ( \u2211 g\u2208y score(w, x, g) + score DLM (y))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Add DLM scores",
"sec_num": "3.2"
},
{
"text": "After adding the DLM scores, the new parsing model can capture richer information. Figure 1 illustrates the changes. In the original first-order parsing model, we only utilize the information of a single arc (x h , x L(k\u22121) ) for x L(k\u22121) as shown in Figure 1-(a). If we use a 3-gram DLM, we can utilize the additional information of the two previous children (nearer to x h than x L(k\u22121) ): x L(k\u22122) and x L(k\u22123) as shown in Figure 1-(b).",
"cite_spans": [],
"ref_spans": [
{
"start": 83,
"end": 91,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 251,
"end": 259,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 426,
"end": 434,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Add DLM scores",
"sec_num": "3.2"
},
{
"text": "We define DLM-based features for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DLM-based feature templates",
"sec_num": "3.3"
},
{
"text": "D h = (x Lk , ...x L1 , x h , x R1 ...x Rm ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DLM-based feature templates",
"sec_num": "3.3"
},
{
"text": "For each child x ch on the left side, we have P Lc (x ch |HIS), where HIS refers to the N \u2212 1 immediate previous children and head x h . Similarly, we have P Rc (x ch |HIS) for each child on the right side. Let P u (x ch |HIS) (P u (ch) in short) be one of the above probabilities. We use the map function \u03a6(P u (ch)) to obtain the predefined discrete value (defined in Section 5.3). The feature templates are outlined in Table 1 , where TYPE refers to one of the types: P L or P R , h pos refers to the part-of-speech tag of x h , h word refers to the lexical form of x h , ch pos refers to the part-of-speech tag of x ch , and ch word refers to the lexical form of x ch .",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 435,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "DLM-based feature templates",
"sec_num": "3.3"
},
{
"text": "In this section, we turn to the problem of adding the DLM to the decoding algorithm. We propose two ways: (1) Rescoring, in which we rescore the K-best list with the DLM-based features; (2) Intersect, in which we add the DLM-based features in the decoding algorithm directly. Table 1 : DLM-based feature templates: < \u03a6(P u (ch)), TYPE >, < \u03a6(P u (ch)), TYPE, h pos >, < \u03a6(P u (ch)), TYPE, h word >, < \u03a6(P u (ch)), TYPE, ch pos >, < \u03a6(P u (ch)), TYPE, ch word >, < \u03a6(P u (ch)), TYPE, h pos, ch pos >, < \u03a6(P u (ch)), TYPE, h word, ch word >",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 425,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4"
},
{
"text": "We add the DLM-based features into the decoding procedure by using the rescoring technique of (Shen et al., 2008) . We can use an original parser to produce the K-best list. This method has the potential to be very fast. However, because the performance of this method is restricted by the K-best list, we may have to set K to a high number in order to find the best parsing tree (with DLM) or a tree acceptably close to the best (Shen et al., 2008) .",
"cite_spans": [
{
"start": 99,
"end": 118,
"text": "(Shen et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 435,
"end": 454,
"text": "(Shen et al., 2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Rescoring",
"sec_num": "4.1"
},
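The Rescoring strategy amounts to re-ranking a K-best list by the sum of the baseline score and the DLM score. The sketch below is a toy stand-in: the tree names, scores, and the `dlm_score` lookup are all invented for illustration.

```python
# A K-best list from the baseline parser: (parse, baseline score), best-first.
kbest = [
    ("tree_1", 10.2),
    ("tree_2", 10.0),
    ("tree_3", 9.7),
]

def dlm_score(tree):
    # Stand-in for f_DLM · w_DLM; a real system derives this from the
    # DLM-based features of the tree.
    return {"tree_1": 0.1, "tree_2": 0.6, "tree_3": 0.2}[tree]

# Re-rank by combined score; the baseline's best tree need not win.
rescored = max(kbest, key=lambda t: t[1] + dlm_score(t[0]))
print(rescored[0])  # tree_2: 10.0 + 0.6 beats 10.2 + 0.1
```

This also shows the method's limitation mentioned above: if the desired tree is not in the K-best list, no amount of rescoring can recover it.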
{
"text": "In this method, we add the DLM-based features into the decoding algorithm directly. The DLM-based features are generated online during decoding.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intersect",
"sec_num": "4.2"
},
{
"text": "For our parser, we use the decoding algorithm of McDonald et al. (2005) . The algorithm is an extension of the parsing algorithm of (Eisner, 1996) , which is a modified version of the CKY chart parsing algorithm. Here, we describe how to add the DLM-based features in the first-order algorithm. The second-order and higher-order algorithms can be extended in a similar way.",
"cite_spans": [
{
"start": 49,
"end": 71,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF17"
},
{
"start": 131,
"end": 145,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intersect",
"sec_num": "4.2"
},
{
"text": "The parsing algorithm independently parses the left and right dependents of a word and combines them later. There are two types of chart items (McDonald and Pereira, 2006) : 1) a complete item in which the words are unable to accept more dependents in a certain direction; and 2) an incomplete item in which the words can accept more dependents in a certain direction. In the algorithm, we create both types of chart items with two directions for all the word pairs in a given sentence. The direction of a dependency is from the head to the dependent. The right (left) direction indicates the dependent is on the right (left) side of the head. Larger chart items are created from pairs of smaller ones in a bottom-up style. In the following figures, complete items are represented by triangles and incomplete items are represented by trapezoids. Figure 2 illustrates the cubic parsing actions of the algorithm (Eisner, 1996) in the right direction, where s, r, and t refer to the start and end indices of the chart items. In Figure 2-(a) , all the items on the left side are complete and the algorithm creates the incomplete item (trapezoid on the right side) of st. This action builds a dependency relation from s to t. In Figure 2-(b) , the item of sr is incomplete and the item of rt is complete. Then the algorithm creates the complete item of st. In this action, all the children of r are generated. In Figure 2 , the longer vertical edge in a triangle or a trapezoid corresponds to the subroot of the structure (spanning chart). For example, s is the subroot of the span st in Figure 2-(a) . For the left direction case, the actions are similar (Eisner, 1996) . Then, we add the DLM-based features into the parsing actions. Because the parsing algorithm works in a bottom-up style, the nearer children of a head are generated earlier than the farther ones. Thus, we calculate the left or right side probability for a new child when a new dependency relation is built. 
For Figure 2-(a) , we add the features of P Rc (x t |HIS). Figure 3 shows the structure, where c Rs refers to the current children (nearer than We use beam search to choose the one having the overall best score as the final parse, where K spans are built at each step (Zhang and Clark, 2008) . At each step, we perform the parsing actions in the current beam and then choose the best K resulting spans for the next step. The time complexity of the new decoding algorithm is O(Kn 3 ), while the original one is O(n 3 ), where n is the length of the input sentence. With the rich feature set in Table 1 , the running time of Intersect is longer than that of Rescoring. But Intersect considers more combinations of spans with the DLM-based features than Rescoring, which is only given a K-best list.",
"cite_spans": [
{
"start": 143,
"end": 172,
"text": "(McDonald and Pereira, 2006)",
"ref_id": null
},
{
"start": 911,
"end": 925,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 1654,
"end": 1668,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF8"
},
{
"start": 2249,
"end": 2272,
"text": "(Zhang and Clark, 2008)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 847,
"end": 855,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1026,
"end": 1039,
"text": "Figure 2-(a)",
"ref_id": "FIGREF1"
},
{
"start": 1226,
"end": 1238,
"text": "Figure 2-(b)",
"ref_id": "FIGREF1"
},
{
"start": 1410,
"end": 1418,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1585,
"end": 1597,
"text": "Figure 2-(a)",
"ref_id": "FIGREF1"
},
{
"start": 1985,
"end": 1997,
"text": "Figure 2-(a)",
"ref_id": "FIGREF1"
},
{
"start": 2040,
"end": 2048,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 2573,
"end": 2580,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Intersect",
"sec_num": "4.2"
},
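The beam pruning used at each step of Intersect amounts to keeping only the K highest-scoring spans before the next round of parsing actions. A toy sketch, with invented span names and scores:

```python
import heapq

def prune(spans, k):
    """Keep the K highest-scoring (score, span) items of one step."""
    return heapq.nlargest(k, spans)

# Results of applying the parsing actions to the current beam (toy values).
step_results = [(3.2, "span_a"), (1.1, "span_b"), (4.0, "span_c"), (2.5, "span_d")]
beam = prune(step_results, k=2)
print(beam)  # [(4.0, 'span_c'), (3.2, 'span_a')]
```

Repeating this at each of the O(n^3) chart-building steps gives the O(Kn^3) complexity stated above.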
{
"text": "We implement our parsers based on the MSTParser 1 , a freely available implementation of the graph-based model proposed by (McDonald and Pereira, 2006) . We train a first-order parser on the training data (described in Section 6.1) with the features defined in McDonald et al. (2005) . We call this first-order parser Baseline parser.",
"cite_spans": [
{
"start": 123,
"end": 151,
"text": "(McDonald and Pereira, 2006)",
"ref_id": "BIBREF16"
},
{
"start": 261,
"end": 283,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline parser",
"sec_num": "5.1"
},
{
"text": "We use a large amount of unannotated data to build the dependency language model. We first perform word segmentation (if needed) and part-of-speech tagging. After that, we obtain the word-segmented sentences with the part-of-speech tags. Then the sentences are parsed by the Baseline parser. Finally, we obtain the auto-parsed data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Build dependency language models",
"sec_num": "5.2"
},
{
"text": "Given the dependency trees, we estimate the probability distribution by relative frequency:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Build dependency language models",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P u (x ch |HIS) = count(x ch , HIS) x ch count(x ch , HIS)",
"eq_num": "(7)"
}
],
"section": "Build dependency language models",
"sec_num": "5.2"
},
{
"text": "No smoothing is performed because we use the mapping function for the feature representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Build dependency language models",
"sec_num": "5.2"
},
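The relative-frequency estimate of Equation 7 can be sketched as counting (history, child) events collected from the auto-parsed trees. The events below are invented toy data; in the real system they are harvested from the BLLIP or Gigaword parses.

```python
from collections import Counter

# Toy (history, child) events, gathered while generating children
# nearest-first; a history is previous same-side children plus the head.
events = [
    (("saw",), "a"), (("saw",), "a"), (("saw",), "the"),
    (("a", "saw"), "boy"),
]

count = Counter(events)                      # count(x_ch, HIS)
hist_total = Counter(h for h, _ in events)   # sum over x_ch of count(x_ch, HIS)

def P_u(child, hist):
    """Equation 7: relative frequency of the child given the history."""
    return count[(hist, child)] / hist_total[hist]

print(P_u("a", ("saw",)))  # 2/3
```

Unseen (history, child) pairs simply get probability 0 here, which is why no smoothing is needed: the mapping function of Section 5.3 assigns them their own bucket.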
{
"text": "We can define different mapping functions for the feature representations. Here, we use a simple one. First, the probabilities are sorted in decreasing order. Let N o(P u (ch)) be the position number of P u (ch) in the sorted list. The mapping function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping function",
"sec_num": "5.3"
},
{
"text": "1 http://mstparser.sourceforge.net \u03a6(P u (ch)) = P H if No(Pu(ch)) \u2264 TOP10 P M if TOP10 < No(Pu(ch)) \u2264 TOP30 P L if TOP30 < No(Pu(ch)) P O",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping function",
"sec_num": "5.3"
},
{
"text": "if P u (ch) = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping function",
"sec_num": "5.3"
},
{
"text": "where TOP10 and TOP30 refer to the position numbers of the top 10% and top 30%, respectively. The numbers, 10% and 30%, are tuned on the development sets in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping function",
"sec_num": "5.3"
},
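A sketch of the mapping function above, assuming the tuned 10%/30% thresholds. The bucket strings PH/PM/PL/PO stand in for the paper's P H, P M, P L, and P O, and the probability list is invented; `build_phi` is my naming choice.

```python
def build_phi(probs):
    """Bucket probabilities by their rank in the sorted list (Section 5.3)."""
    ranked = sorted(set(probs), reverse=True)
    top10 = max(1, int(len(ranked) * 0.10))
    top30 = max(1, int(len(ranked) * 0.30))

    def phi(p):
        if p == 0:
            return "PO"                    # unseen event: zero probability
        no = ranked.index(p) + 1           # position number No(P_u(ch))
        if no <= top10:
            return "PH"
        if no <= top30:
            return "PM"
        return "PL"

    return phi

phi = build_phi([0.5, 0.3, 0.1, 0.05, 0.02, 0.01,
                 0.005, 0.002, 0.001, 0.0005])
print(phi(0.5), phi(0.1), phi(0.001), phi(0))  # PH PM PL PO
```

Because probabilities are replaced by these coarse buckets, zero-probability events are handled explicitly and no smoothing of the counts is required.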
{
"text": "We conducted experiments on English and Chinese data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "For English, we used the Penn Treebank (Marcus et al., 1993) in our experiments. We created a standard data split: sections 2-21 for training, section 22 for development, and section 23 for testing. Tool \"Penn2Malt\" 2 was used to convert the data into dependency structures using a standard set of head rules (Yamada and Matsumoto, 2003) . Following the work of (Koo et al., 2008) , we used the MXPOST (Ratnaparkhi, 1996) tagger trained on training data to provide part-of-speech tags for the development and the test set, and used 10-way jackknifing to generate part-of-speech tags for the training set. For the unannotated data, we used the BLLIP corpus (Charniak et al., 2000) that contains about 43 million words of WSJ text. 3 We used the MXPOST tagger trained on training data to assign part-of-speech tags and used the Baseline parser to process the sentences of the BLLIP corpus. For Chinese, we used the Chinese Treebank (CTB) version 4.0 4 in the experiments. We also used the \"Penn2Malt\" tool to convert the data and created a data split: files 1-270 and files 400-931 for training, files 271-300 for testing, and files 301-325 for development. We used gold standard segmentation and part-of-speech tags in the CTB. The data partition and part-of-speech settings were chosen to match previous work (Chen et al., 2008; Yu et al., 2008; Chen et al., 2009) . For the unannotated data, we used the XIN CMN portion of Chinese Gigaword 5 Version 2.0 (LDC2009T14) (Huang, 2009) , which has approximately 311 million words whose segmentation and POS tags are given. We discarded the annotations due to the differences in annotation policy between CTB and this corpus. We used the MMA system (Kruengkrai et al., 2009) trained on the training data to perform word segmentation and POS tagging and used the Baseline parser to parse all the sentences in the data.",
"cite_spans": [
{
"start": 39,
"end": 60,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF13"
},
{
"start": 309,
"end": 337,
"text": "(Yamada and Matsumoto, 2003)",
"ref_id": "BIBREF25"
},
{
"start": 362,
"end": 380,
"text": "(Koo et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 403,
"end": 422,
"text": "(Ratnaparkhi, 1996)",
"ref_id": "BIBREF20"
},
{
"start": 657,
"end": 680,
"text": "(Charniak et al., 2000)",
"ref_id": "BIBREF2"
},
{
"start": 731,
"end": 732,
"text": "3",
"ref_id": null
},
{
"start": 1310,
"end": 1329,
"text": "(Chen et al., 2008;",
"ref_id": "BIBREF3"
},
{
"start": 1330,
"end": 1346,
"text": "Yu et al., 2008;",
"ref_id": "BIBREF26"
},
{
"start": 1347,
"end": 1365,
"text": "Chen et al., 2009)",
"ref_id": "BIBREF4"
},
{
"start": 1469,
"end": 1482,
"text": "(Huang, 2009)",
"ref_id": "BIBREF9"
},
{
"start": 1695,
"end": 1720,
"text": "(Kruengkrai et al., 2009)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data sets",
"sec_num": "6.1"
},
{
"text": "The previous studies have defined four types of features: (FT1) the first-order features defined in McDonald et al. (2005) , (FT2SB) the second-order parent-siblings features defined in McDonald and Pereira (2006) , (FT2GC) the second-order parent-child-grandchild features defined in Carreras (2007) , and (FT3) the third-order features defined in (Koo and Collins, 2010) .",
"cite_spans": [
{
"start": 100,
"end": 122,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF17"
},
{
"start": 186,
"end": 213,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF16"
},
{
"start": 284,
"end": 299,
"text": "Carreras (2007)",
"ref_id": "BIBREF1"
},
{
"start": 348,
"end": 371,
"text": "(Koo and Collins, 2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features for basic and enhanced parsers",
"sec_num": "6.2"
},
{
"text": "We used the first- and second-order parsers of the MSTParser as the basic parsers. Then we enhanced them with other higher-order features using beam search. Table 2 shows the feature settings of the systems, where MST1/2 refers to the basic first-/second-order parser and MSTB1/2 refers to the enhanced first-/second-order parser. MSTB1 and MSTB2 used the same feature setting, but used different order models. This resulted in the difference of using FT2SB (beam search in MSTB1 vs exact inference in MSTB2). We used these four parsers as the Baselines in the experiments. We measured the parser quality by the unlabeled attachment score (UAS), i.e., the percentage of tokens (excluding all punctuation tokens) with the correct HEAD. In the following experiments, we used \"Inter\" to refer to the parser with Intersect, and \"Rescore\" to refer to the parser with Rescoring.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Features for basic and enhanced parsers",
"sec_num": "6.2"
},
{
"text": "MST1 (FT1) MSTB1 (FT1)+(FT2SB+FT2GC+FT3) MST2 (FT1+FT2SB) MSTB2 (FT1+FT2SB)+(FT2GC+FT3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Features",
"sec_num": null
},
{
"text": "Since the setting of K (for beam search) affects our parsers, we studied its influence on the development set for English. We added the DLM-based features to MST1. Figure 4 shows the UAS curves on the development set, where K is beam size for Intersect and K-best for Rescoring, the X-axis represents K, and the Y-axis represents the UAS scores. The parsing performance generally increased as the K increased. The parser with Intersect always outperformed the one with Rescoring. Table 3 shows the parsing times of Intersect on the development set for English. By comparing the curves of Figure 4 , we can see that, while using larger K reduced the parsing speed, it improved the performance of our parsers. In the rest of the experiments, we set K=8 in order to obtain the high accuracy with reasonable speed and used Intersect to add the DLM-based features. Then, we studied the effect of adding different Ngram DLMs to MST1. Table 4 shows the results. From the table, we found that the parsing performance roughly increased as the N increased. When N=3 and N=4, the parsers obtained the same scores for English. For Chinese, the parser obtained the best score when N=4. Note that the size of the Chinese unannotated data was larger than that of English. In the rest of the experiments, we used 3-gram for English and 4-gram for Chinese.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 172,
"text": "Figure 4",
"ref_id": null
},
{
"start": 480,
"end": 487,
"text": "Table 3",
"ref_id": "TABREF1"
},
{
"start": 588,
"end": 596,
"text": "Figure 4",
"ref_id": null
},
{
"start": 928,
"end": 935,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Development experiments",
"sec_num": "6.3"
},
{
"text": "We evaluated the systems on the testing data for English. The results are shown in Table 5 , where -DLM refers to adding the DLM-based features to the Baselines. The parsers using the DLM-based features consistently outperformed the Baselines. For the basic models (MST1/2), we obtained absolute improvements of 0.94 and 0.63 points respectively. For the enhanced models (MSTB1/2), we found that there were 0.63 and 0.66 points improvements respectively. The improvements were significant in McNemar's Test (p < 10 \u22125 ) (Nivre et al., 2004) . ",
"cite_spans": [
{
"start": 520,
"end": 540,
"text": "(Nivre et al., 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Main results on English data",
"sec_num": "6.4"
},
{
"text": "The results are shown in Table 6 , where the abbreviations used are the same as those in Table 5 . As in the English experiments, the parsers using the DLMbased features consistently outperformed the Baselines. For the basic models (MST1/2), we obtained absolute improvements of 4.28 and 3.51 points respectively. For the enhanced models (MSTB1/2), we got 3.00 and 2.93 points improvements respectively. We obtained large improvements on the Chinese data. The reasons may be that we use the very large amount of data and 4-gram DLM that captures high-order information. The improvements were significant in McNemar's Test (p < 10 \u22127 ). Table 7 shows the performance of the graph-based systems that were compared, where McDonald06 refers to the second-order parser of McDonald and Pereira (2006) , Koo08-standard refers to the second-order parser with the features defined in Koo et al. (2008) , Koo10-model1 refers to the third-order parser with model1 of Koo and Collins (2010) , Koo08-dep2c refers to the second-order parser with cluster-based features of (Koo et al., 2008) , Suzuki09 refers to the parser of Suzuki et al. (2009) , Chen09-ord2s refers to the second-order parser with subtree-based features of Chen et al. (2009) , and Zhou11 refers to the second-order parser with web-derived selectional preference features of Zhou et al. (2011) .",
"cite_spans": [
{
"start": 767,
"end": 794,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF16"
},
{
"start": 875,
"end": 892,
"text": "Koo et al. (2008)",
"ref_id": "BIBREF11"
},
{
"start": 956,
"end": 978,
"text": "Koo and Collins (2010)",
"ref_id": "BIBREF10"
},
{
"start": 1058,
"end": 1076,
"text": "(Koo et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 1112,
"end": 1132,
"text": "Suzuki et al. (2009)",
"ref_id": "BIBREF24"
},
{
"start": 1213,
"end": 1231,
"text": "Chen et al. (2009)",
"ref_id": "BIBREF4"
},
{
"start": 1331,
"end": 1349,
"text": "Zhou et al. (2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 89,
"end": 96,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 636,
"end": 643,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Main results on Chinese data",
"sec_num": "6.5"
},
{
"text": "The results showed that our MSTB-DLM2 obtained the comparable accuracy with the previous state-of-the-art systems. Koo10-model1 (Koo and Collins, 2010) used the third-order features and achieved the best reported result among the supervised parsers. Suzuki2009 (Suzuki et al., 2009) reported the best reported result by combining a Semisupervised Structured Conditional Model (Suzuki and Isozaki, 2008) with the method of (Koo et al., 2008) . However, their decoding complexities were higher than ours and we believe that the performance of our parser can be further enhanced by integrating their methods with our parser. Table 8 shows the comparative results, where Chen08 refers to the parser of (Chen et al., 2008) , Yu08 refers to the parser of (Yu et al., 2008) , Zhao09 refers to the parser of (Zhao et al., 2009) , and Chen09-ord2s refers to the second-order parser with subtree-based features of Chen et al. (2009) . The results showed that our score for this data was the best reported so far and significantly higher than the previous scores. ",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(Koo and Collins, 2010)",
"ref_id": "BIBREF10"
},
{
"start": 261,
"end": 282,
"text": "(Suzuki et al., 2009)",
"ref_id": "BIBREF24"
},
{
"start": 376,
"end": 402,
"text": "(Suzuki and Isozaki, 2008)",
"ref_id": "BIBREF23"
},
{
"start": 422,
"end": 440,
"text": "(Koo et al., 2008)",
"ref_id": "BIBREF11"
},
{
"start": 698,
"end": 717,
"text": "(Chen et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 749,
"end": 766,
"text": "(Yu et al., 2008)",
"ref_id": "BIBREF26"
},
{
"start": 800,
"end": 819,
"text": "(Zhao et al., 2009)",
"ref_id": "BIBREF28"
},
{
"start": 904,
"end": 922,
"text": "Chen et al. (2009)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 622,
"end": 629,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Compare with previous work on English",
"sec_num": "6.6"
},
{
"text": "Dependency parsers tend to perform worse on heads which have many children. Here, we studied the effect of DLM-based features for this structure. We calculated the number of children for each head and listed the accuracy changes for different numbers. We compared the MST-DLM1 and MST1 systems on the English data. The accuracy is the percentage of heads having all the correct children. Figure 5 shows the results for English, where the X-axis represents the number of children, the Yaxis represents the accuracies, OURS refers to MST-DLM1, and Baseline refers to MST1. For example, for heads having two children, Baseline obtained 89.04% accuracy while OURS obtained 89.32%. From the figure, we found that OURS achieved better performance consistently in all cases and when the larger the number of children became, the more significant the performance improvement was. ",
"cite_spans": [],
"ref_spans": [
{
"start": 388,
"end": 396,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
{
"text": "Several previous studies related to our work have been conducted. Koo et al. (2008) used a clustering algorithm to produce word clusters on a large amount of unannotated data and represented new features based on the clusters for dependency parsing models. Chen et al. (2009) proposed an approach that extracted partial tree structures from a large amount of data and used them as the additional features to improve dependency parsing. They approaches were still restricted in a small number of arcs in the graphs. Suzuki et al. (2009) presented a semisupervised learning approach. They extended a Semi-supervised Structured Conditional Model (SS-SCM) (Suzuki and Isozaki, 2008) to the dependency parsing problem and combined their method with the approach of Koo et al. (2008) . In future work, we may consider apply their methods on our parsers to improve further.",
"cite_spans": [
{
"start": 66,
"end": 83,
"text": "Koo et al. (2008)",
"ref_id": "BIBREF11"
},
{
"start": 257,
"end": 275,
"text": "Chen et al. (2009)",
"ref_id": "BIBREF4"
},
{
"start": 515,
"end": 535,
"text": "Suzuki et al. (2009)",
"ref_id": "BIBREF24"
},
{
"start": 652,
"end": 678,
"text": "(Suzuki and Isozaki, 2008)",
"ref_id": "BIBREF23"
},
{
"start": 760,
"end": 777,
"text": "Koo et al. (2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "Another group of methods are the cotraining/self-training techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "McClosky et al. 2006presented a self-training approach for phrase structure parsing. Sagae and Tsujii (2007) used the co-training technique to improve performance. First, two parsers were used to parse the sentences in unannotated data. Then they selected some sentences which have the same trees produced by those two parsers. They retrained a parser on newly parsed sentences and the original labeled data. We are able to use the output of our systems for co-training/self-training techniques.",
"cite_spans": [
{
"start": 85,
"end": 108,
"text": "Sagae and Tsujii (2007)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "8"
},
{
"text": "We have presented an approach to utilizing the dependency language model to improve graph-based dependency parsing. We represent new features based on the dependency language model and integrate them in the decoding algorithm directly using beam-search. Our approach enriches the feature representations but without increasing the decoding complexity. When tested on both English and Chinese data, our parsers provided very competitive performance compared with the best systems on the English data and achieved the best performance on the Chinese data in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "http://w3.msi.vxu.se/\u02dcnivre/research/Penn2Malt.html3 We ensured that the text used for extracting subtrees did not include the sentences of the Penn Treebank.4 http://www.cis.upenn.edu/\u02dcchinese/.5 We excluded the sentences of the CTB data from the Gigaword data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "CoNLL-X shared task on multilingual dependency parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of CoNLL-X. SIGNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Buchholz and E. Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proc. of CoNLL-X. SIGNLL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Experiments with a higher-order projective dependency parser",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "957--961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 957-961, Prague, Czech Republic, June. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BLLIP 1987-89 WSJ Corpus Release 1, LDC2000T43. Linguistic Data Consortium",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Don",
"middle": [],
"last": "Blaheta",
"suffix": ""
},
{
"first": "Niyu",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Hale",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 2000. BLLIP 1987- 89 WSJ Corpus Release 1, LDC2000T43. Linguistic Data Consortium.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Dependency parsing with short dependency relations in unlabeled data",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Yujie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenliang Chen, Daisuke Kawahara, Kiyotaka Uchimoto, Yujie Zhang, and Hitoshi Isahara. 2008. Dependency parsing with short dependency relations in unlabeled data. In Proceedings of IJCNLP 2008.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving dependency parsing with subtrees from auto-parsed data",
"authors": [
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP 2009",
"volume": "",
"issue": "",
"pages": "570--579",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenliang Chen, Jun'ichi Kazama, Kiyotaka Uchimoto, and Kentaro Torisawa. 2009. Improving dependency parsing with subtrees from auto-parsed data. In Pro- ceedings of EMNLP 2009, pages 570-579, Singapore, August.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A dundamental algorithm for dependency parsing",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Covington",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A. Covington. 2001. A dundamental algorithm for dependency parsing. In Proceedings of the 39th",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Ultraconservative online algorithms for multiclass problems",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "951--991",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer and Yoram Singer. 2003. Ultraconser- vative online algorithms for multiclass problems. J. Mach. Learn. Res., 3:951-991.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Three new probabilistic models for dependency parsing: An exploration",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of COLING1996",
"volume": "",
"issue": "",
"pages": "340--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 1996. Three new probabilistic models for de- pendency parsing: An exploration. In Proceedings of COLING1996, pages 340-345.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Tagged Chinese Gigaword Version 2.0, LDC2009T14. Linguistic Data Consortium",
"authors": [
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chu-Ren Huang. 2009. Tagged Chinese Gigaword Ver- sion 2.0, LDC2009T14. Linguistic Data Consortium.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Efficient thirdorder dependency parsers",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL 2010",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo and Michael Collins. 2010. Efficient third- order dependency parsers. In Proceedings of ACL 2010, pages 1-11, Uppsala, Sweden, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Simple semi-supervised dependency parsing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Koo, X. Carreras, and M. Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL-08: HLT, Columbus, Ohio, June.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "An error-driven word-character hybrid model for joint Chinese word segmentation and POS tagging",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Yiou",
"middle": [],
"last": "Jun'ichi Kazama",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP2009",
"volume": "",
"issue": "",
"pages": "513--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai, Kiyotaka Uchimoto, Jun'ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An error-driven word-character hybrid model for joint Chinese word segmentation and POS tagging. In Proceedings of ACL-IJCNLP2009, pages 513-521, Suntec, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Building a large annotated corpus of English: the Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguisticss",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated cor- pus of English: the Penn Treebank. Computational Linguisticss, 19(2):313-330.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Reranking and self-training for parser adaptation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Coling-ACL",
"volume": "",
"issue": "",
"pages": "337--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. McClosky, E. Charniak, and M. Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of Coling-ACL, pages 337-344.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Characterizing the errors of data-driven dependency parsing models",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "122--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and J. Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of EMNLP-CoNLL, pages 122-131.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EACL 2006",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald and Fernando Pereira. 2006. On- line learning of approximate dependency parsing algo- rithms. In Proceedings of EACL 2006, pages 81-88.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL 2005",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL 2005, pages 91-98. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Memory-based dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre, J. Hall, and J. Nilsson. 2004. Memory-based dependency parsing. In Proc. of CoNLL 2004, pages 49-56.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The CoNLL 2007 shared task on dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "915--932",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre, J. Hall, S. K\u00fcbler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007, pages 915-932.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A maximum entropy model for part-of-speech tagging",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of EMNLP 1996",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of EMNLP 1996, pages 133-142.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dependency parsing and domain adaptation with LR models and parser ensembles",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "1044--1050",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Sagae and J. Tsujii. 2007. Dependency parsing and domain adaptation with LR models and parser ensem- bles. In Proceedings of the CoNLL Shared Task Ses- sion of EMNLP-CoNLL 2007, pages 1044-1050.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A new string-to-dependency machine translation algorithm with a target dependency language model",
"authors": [
{
"first": "Libin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Jinxi",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "577--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algo- rithm with a target dependency language model. In Proceedings of ACL-08: HLT, pages 577-585, Colum- bus, Ohio, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "665--673",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Suzuki and Hideki Isozaki. 2008. Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In Proceedings of ACL-08: HLT, pages 665-673, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An empirical study of semi-supervised structured conditional models for dependency parsing",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2009,
"venue": "Singapore, August. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "551--560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An empirical study of semi-supervised structured conditional models for dependency parsing. In Proceedings of EMNLP2009, pages 551-560, Sin- gapore, August. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Statistical dependency analysis with support vector machines",
"authors": [
{
"first": "Hiroyasu",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IWPT 2003",
"volume": "",
"issue": "",
"pages": "195--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT 2003, pages 195-206.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Chinese dependency parsing with large scale automatically constructed case structures",
"authors": [
{
"first": "K",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of Coling",
"volume": "",
"issue": "",
"pages": "1049--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Yu, D. Kawahara, and S. Kurohashi. 2008. Chi- nese dependency parsing with large scale automati- cally constructed case structures. In Proceedings of Coling 2008, pages 1049-1056, Manchester, UK, Au- gust.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A tale of two parsers: Investigating and combining graph-based and transitionbased dependency parsing",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP 2008",
"volume": "",
"issue": "",
"pages": "562--571",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Zhang and S. Clark. 2008. A tale of two parsers: In- vestigating and combining graph-based and transition- based dependency parsing. In Proceedings of EMNLP 2008, pages 562-571, Honolulu, Hawaii, October.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cross language dependency parsing using a bilingual lexicon",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL-IJCNLP2009",
"volume": "",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Yan Song, Chunyu Kit, and Guodong Zhou. 2009. Cross language dependency parsing us- ing a bilingual lexicon. In Proceedings of ACL- IJCNLP2009, pages 55-63, Suntec, Singapore, Au- gust. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Exploiting web-derived selectional preference to improve statistical dependency parsing",
"authors": [
{
"first": "Guangyou",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Cai",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-HLT2011",
"volume": "",
"issue": "",
"pages": "1556--1565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guangyou Zhou, Jun Zhao, Kang Liu, and Li Cai. 2011. Exploiting web-derived selectional preference to im- prove statistical dependency parsing. In Proceedings of ACL-HLT2011, pages 1556-1565, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Adding the DLM scores to the parsing model"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Cubic parsing actions of Eisner"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "x t ) of x s . In the figure, HIS includes c Rs and x s ."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Add DLM-based features in cubic parsing"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Figure 4: The influence of K on the development data"
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Improvement relative to numbers of children"
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"text": "Baseline parsers",
"content": "<table/>"
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "The parsing times on the development set (seconds for all the sentences)",
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"text": "Effect of different N-gram DLMs",
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"text": "Main results for English",
"content": "<table/>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"text": "Main results for Chinese",
"content": "<table/>"
},
"TABREF9": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>: Relevant results for English. G denotes the su-</td></tr><tr><td>pervised graph-based parsers, S denotes the graph-based</td></tr><tr><td>parsers with semi-supervised methods, D denotes our</td></tr><tr><td>new parsers</td></tr><tr><td>6.7 Compare with previous work on Chinese</td></tr></table>"
}
}
}
}