{
"paper_id": "D07-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:20:14.839694Z"
},
"title": "Structured Prediction Models via the Matrix-Tree Theorem",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "MIT CSAIL",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "MIT CSAIL",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "MIT CSAIL",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": "carreras@csail.mit.edu"
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "MIT CSAIL",
"location": {
"postCode": "02139",
"settlement": "Cambridge",
"region": "MA",
"country": "USA"
}
},
"email": "mcollins@csail.mit.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper provides an algorithmic framework for learning statistical models involving directed spanning trees, or equivalently non-projective dependency structures. We show how partition functions and marginals for directed spanning trees can be computed by an adaptation of Kirchhoff's Matrix-Tree Theorem. To demonstrate an application of the method, we perform experiments which use the algorithm in training both log-linear and max-margin dependency parsers. The new training methods give improvements in accuracy over perceptron-trained models.",
"pdf_parse": {
"paper_id": "D07-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper provides an algorithmic framework for learning statistical models involving directed spanning trees, or equivalently non-projective dependency structures. We show how partition functions and marginals for directed spanning trees can be computed by an adaptation of Kirchhoff's Matrix-Tree Theorem. To demonstrate an application of the method, we perform experiments which use the algorithm in training both log-linear and max-margin dependency parsers. The new training methods give improvements in accuracy over perceptron-trained models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Learning with structured data typically involves searching or summing over a set with an exponential number of structured elements, for example the set of all parse trees for a given sentence. Methods for summing over such structures include the inside-outside algorithm for probabilistic contextfree grammars (Baker, 1979) , the forward-backward algorithm for hidden Markov models (Baum et al., 1970) , and the belief-propagation algorithm for graphical models (Pearl, 1988) . These algorithms compute marginal probabilities and partition functions, quantities which are central to many methods for the statistical modeling of complex structures (e.g., the EM algorithm (Baker, 1979; Baum et al., 1970) , contrastive estimation (Smith and Eisner, 2005) , training algorithms for CRFs (Lafferty et al., 2001) , and training algorithms for max-margin models (Bartlett et al., 2004; Taskar et al., 2004a) ).",
"cite_spans": [
{
"start": 310,
"end": 323,
"text": "(Baker, 1979)",
"ref_id": "BIBREF0"
},
{
"start": 382,
"end": 401,
"text": "(Baum et al., 1970)",
"ref_id": "BIBREF2"
},
{
"start": 462,
"end": 475,
"text": "(Pearl, 1988)",
"ref_id": "BIBREF26"
},
{
"start": 671,
"end": 684,
"text": "(Baker, 1979;",
"ref_id": "BIBREF0"
},
{
"start": 685,
"end": 703,
"text": "Baum et al., 1970)",
"ref_id": "BIBREF2"
},
{
"start": 729,
"end": 753,
"text": "(Smith and Eisner, 2005)",
"ref_id": "BIBREF29"
},
{
"start": 785,
"end": 808,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF17"
},
{
"start": 857,
"end": 880,
"text": "(Bartlett et al., 2004;",
"ref_id": "BIBREF1"
},
{
"start": 881,
"end": 902,
"text": "Taskar et al., 2004a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper describes inside-outside-style algorithms for the case of directed spanning trees. These structures are equivalent to non-projective dependency parses (McDonald et al., 2005b) , and more generally could be relevant to any task that involves learning a mapping from a graph to an underlying spanning tree. Unlike the case for projective dependency structures, partition functions and marginals for non-projective trees cannot be computed using dynamic-programming methods such as the insideoutside algorithm. In this paper we describe how these quantities can be computed by adapting a wellknown result in graph theory: Kirchhoff's Matrix-Tree Theorem (Tutte, 1984) . A na\u00efve application of the theorem yields O(n 4 ) and O(n 6 ) algorithms for computation of the partition function and marginals, respectively. However, our adaptation finds the partition function and marginals in O(n 3 ) time using simple matrix determinant and inversion operations.",
"cite_spans": [
{
"start": 162,
"end": 186,
"text": "(McDonald et al., 2005b)",
"ref_id": "BIBREF21"
},
{
"start": 662,
"end": 675,
"text": "(Tutte, 1984)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We demonstrate an application of the new inference algorithm to non-projective dependency parsing. Specifically, we show how to implement two popular supervised learning approaches for this task: globally-normalized log-linear models and max-margin models. Log-linear estimation critically depends on the calculation of partition functions and marginals, which can be computed by our algorithms. For max-margin models, Bartlett et al. (2004) have provided a simple training algorithm, based on exponentiated-gradient (EG) updates, that requires computation of marginals and can thus be implemented within our framework. Both of these methods explicitly minimize the loss incurred when parsing the entire training set. This contrasts with the online learning algorithms used in previous work with spanning-tree models (McDonald et al., 2005b) .",
"cite_spans": [
{
"start": 419,
"end": 441,
"text": "Bartlett et al. (2004)",
"ref_id": "BIBREF1"
},
{
"start": 817,
"end": 841,
"text": "(McDonald et al., 2005b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We applied the above two marginal-based training algorithms to six languages with varying degrees of non-projectivity, using datasets obtained from the CoNLL-X shared task (Buchholz and Marsi, 2006) . Our experimental framework compared three training approaches: log-linear models, max-margin models, and the averaged perceptron. Each of these was applied to both projective and non-projective parsing. Our results demonstrate that marginal-based training yields models which out-perform those trained using the averaged perceptron.",
"cite_spans": [
{
"start": 172,
"end": 198,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, the contributions of this paper are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We introduce algorithms for inside-outsidestyle calculations for directed spanning trees, or equivalently non-projective dependency structures. These algorithms should have wide applicability in learning problems involving spanning-tree structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. We illustrate the utility of these algorithms in log-linear training of dependency parsing models, and show improvements in accuracy when compared to averaged-perceptron training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3. We also train max-margin models for dependency parsing via an EG algorithm (Bartlett et al., 2004) . The experiments presented here constitute the first application of this algorithm to a large-scale problem. We again show improved performance over the perceptron.",
"cite_spans": [
{
"start": 78,
"end": 101,
"text": "(Bartlett et al., 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of our experiments is to give a rigorous comparative study of the marginal-based training algorithms and a highly-competitive baseline, the averaged perceptron, using the same feature sets for all approaches. We stress, however, that the purpose of this work is not to give competitive performance on the CoNLL data sets; this would require further engineering of the approach. Similar adaptations of the Matrix-Tree Theorem have been developed independently and simultaneously by Smith and Smith (2007) and McDonald and Satta (2007) ; see Section 5 for more discussion.",
"cite_spans": [
{
"start": 490,
"end": 512,
"text": "Smith and Smith (2007)",
"ref_id": "BIBREF30"
},
{
"start": 517,
"end": 542,
"text": "McDonald and Satta (2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Dependency parsing is the task of mapping a sentence x to a dependency structure y. Given a sentence x with n words, a dependency for that sentence is a tuple (h, m) where h \u2208 [0 . . . n] is the index of the head word in the sentence, and m \u2208 [1 . . . n] is the index of a modifier word. The value h = 0 is a special root-symbol that may only appear as the head of a dependency. We use D(x) to refer to all possible dependencies for a sentence x:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "D(x) = {(h, m) : h \u2208 [0 . . . n], m \u2208 [1 . . . n]}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "A dependency parse is a set of dependencies that forms a directed tree, with the sentence's rootsymbol as its root. We will consider both projective trees, where dependencies are not allowed to cross, and non-projective trees, where crossing dependencies are allowed. Dependency annotations for some languages, for example Czech, can exhibit a significant number of crossing dependencies. In addition, we consider both single-root and multi-root trees. In a single-root tree y, the root-symbol has exactly one child, while in a multi-root tree, the root-symbol has one or more children. This distinction is relevant as our training sets include both single-root corpora (in which all trees are single-root structures) and multi-root corpora (in which some trees are multiroot structures). The two distinctions described above are orthogonal, yielding four classes of dependency structures; see Figure 1 for examples of each kind of structure. We use T s p (x) to denote the set of all possible projective single-root dependency structures for a sentence x, and T s np (x) to denote the set of single-root non-projective structures for x. The sets T m p (x) and T m np (x) are defined analogously for multi-root structures. In contexts where any class of dependency structures may be used, we use the notation T (x) as a placeholder that may be defined as",
"cite_spans": [],
"ref_spans": [
{
"start": 894,
"end": 902,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "T s p (x), T s np (x), T m p (x) or T m np (x). Following McDonald et al. (2005a)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": ", we use a discriminative model for dependency parsing. Features in the model are defined through a function f (x, h, m) which maps a sentence x together with a dependency (h, m) to a feature vector in R d . A feature vector can be sensitive to any properties of the triple (x, h, m). Given a parameter vector w, the optimal dependency structure for a sentence x is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "y * (x; w) = argmax y\u2208T (x) (h,m)\u2208y w \u2022 f (x, h, m) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "where the set T (x) can be defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "T s p (x), T s np (x), T m p (x) or T m np (x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": ", depending on the type of parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "The parameters w will be learned from a training set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "{(x i , y i )} N i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "where each x i is a sentence and each y i is a dependency structure. Much of the previous work on learning w has focused on training local models (see Section 5). McDonald et al. (2005a; 2005b) trained global models using online algorithms such as the perceptron algorithm or MIRA. In this paper we consider training algorithms based on work in conditional random fields (CRFs) (Lafferty et al., 2001 ) and max-margin methods (Taskar et al., 2004a) .",
"cite_spans": [
{
"start": 163,
"end": 186,
"text": "McDonald et al. (2005a;",
"ref_id": "BIBREF20"
},
{
"start": 187,
"end": 193,
"text": "2005b)",
"ref_id": "BIBREF21"
},
{
"start": 378,
"end": 400,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF17"
},
{
"start": 426,
"end": 448,
"text": "(Taskar et al., 2004a)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discriminative Dependency Parsing",
"sec_num": "2.1"
},
{
"text": "This section highlights three inference problems which arise in training and decoding discriminative dependency parsers, and which are central to the approaches described in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Assume that we have a vector \u03b8 with values \u03b8 h,m \u2208 R for all (h, m) \u2208 D(x); these values correspond to weights on the different dependencies in D(x). Define a conditional distribution over all dependency structures y \u2208 T (x) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y | x; \u03b8) = exp (h,m)\u2208y \u03b8 h,m Z(x; \u03b8) (2) Z(x; \u03b8) = y\u2208T (x) exp \uf8f1 \uf8f2 \uf8f3 (h,m)\u2208y \u03b8 h,m \uf8fc \uf8fd \uf8fe",
"eq_num": "(3)"
}
],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
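The distribution in Eqs. 2 and 3 sums over exponentially many trees, so for very small sentences it can be evaluated by direct enumeration, which is useful as a reference when checking the efficient algorithms of Section 3. The sketch below is illustrative only; the helper names are ours, and a single-root non-projective structure is encoded as a head vector (heads[m-1] is the head of word m, 0 denoting the root-symbol).

```python
import itertools
import math

def single_root_trees(n):
    """Enumerate all single-root non-projective dependency structures
    for a sentence of n words, as head vectors."""
    for heads in itertools.product(range(n + 1), repeat=n):
        if any(heads[m - 1] == m for m in range(1, n + 1)):
            continue                      # no self-dependencies
        if sum(1 for h in heads if h == 0) != 1:
            continue                      # single-root constraint
        ok = True                         # every word must reach the root (no cycles)
        for m in range(1, n + 1):
            seen, cur = set(), m
            while cur != 0 and ok:
                if cur in seen:
                    ok = False
                seen.add(cur)
                cur = heads[cur - 1]
        if ok:
            yield heads

def brute_force_Z(theta):
    """Partition function of Eq. 3 by explicit summation;
    theta[h][m] is the score of dependency (h, m)."""
    n = len(theta) - 1
    return sum(math.exp(sum(theta[heads[m - 1]][m] for m in range(1, n + 1)))
               for heads in single_root_trees(n))
```

For n = 2 the enumeration yields exactly the two single-root trees {(0,1),(1,2)} and {(0,2),(2,1)}; with all scores zero, Z is simply the number of single-root trees.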
{
"text": "The function Z(x; \u03b8) is commonly referred to as the partition function. Given the distribution P (y | x; \u03b8), we can define the marginal probability of a dependency (h, m) as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "\u00b5 h,m (x; \u03b8) = y\u2208T (x) : (h,m)\u2208y P (y | x; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "The inference problems are then as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Problem 1: Decoding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Find argmax y\u2208T (x) (h,m)\u2208y \u03b8 h,m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Problem 2: Computation of the Partition Function: Calculate Z(x; \u03b8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Problem 3: Computation of the Marginals:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "For all (h, m) \u2208 D(x), calculate \u00b5 h,m (x; \u03b8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Note that all three problems require a maximization or summation over the set T (x), which is exponential in size. There is a clear motivation for being able to solve Problem 1: by setting \u03b8 h,m = w \u2022 f (x, h, m), the optimal dependency structure y * (x; w) (see Eq. 1) can be computed. In this paper the motivation for solving Problems 2 and 3 arises from training algorithms for discriminative models. As we will describe in Section 4, both log-linear and max-margin models can be trained via methods that make direct use of algorithms for Problems 2 and 3. In the case of projective dependency structures (i.e., T (x) defined as T s p (x) or T m p (x)), there are well-known algorithms for all three inference problems. Decoding can be carried out using Viterbistyle dynamic-programming algorithms, for example the O(n 3 ) algorithm of Eisner (1996) . Computation of the marginals and partition function can also be achieved in O(n 3 ) time, using a variant of the inside-outside algorithm (Baker, 1979) applied to the Eisner (1996) data structures (Paskin, 2001 ).",
"cite_spans": [
{
"start": 839,
"end": 852,
"text": "Eisner (1996)",
"ref_id": "BIBREF11"
},
{
"start": 993,
"end": 1006,
"text": "(Baker, 1979)",
"ref_id": "BIBREF0"
},
{
"start": 1022,
"end": 1035,
"text": "Eisner (1996)",
"ref_id": "BIBREF11"
},
{
"start": 1052,
"end": 1065,
"text": "(Paskin, 2001",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "In the non-projective case (i.e., et al. (2005b) describe how the CLE algorithm (Chu and Liu, 1965; Edmonds, 1967) can be used for decoding. However, it is not possible to compute the marginals and partition function using the inside-outside algorithm. We next describe a method for computing these quantities in O(n 3 ) time using matrix inverse and determinant operations.",
"cite_spans": [
{
"start": 34,
"end": 48,
"text": "et al. (2005b)",
"ref_id": null
},
{
"start": 80,
"end": 99,
"text": "(Chu and Liu, 1965;",
"ref_id": "BIBREF6"
},
{
"start": 100,
"end": 114,
"text": "Edmonds, 1967)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "T (x) defined as T s np (x) or T m np (x)), McDonald",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "3 Spanning-tree inference using the Matrix-Tree Theorem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "In this section we present algorithms for computing the partition function and marginals, as defined in Section 2.2, for non-projective parsing. We first reiterate the observation of McDonald et al. (2005a) that non-projective parses correspond to directed spanning trees on a complete directed graph of n nodes, where n is the length of the sentence. The above inference problems thus involve summation over the set of all directed spanning trees. Note that this set is exponentially large, and there is no obvious method for decomposing the sum into dynamicprogramming-like subproblems. This section describes how a variant of Kirchhoff's Matrix-Tree Theorem (Tutte, 1984) can be used to evaluate the partition function and marginals efficiently.",
"cite_spans": [
{
"start": 183,
"end": 206,
"text": "McDonald et al. (2005a)",
"ref_id": "BIBREF20"
},
{
"start": 661,
"end": 674,
"text": "(Tutte, 1984)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "In what follows, we consider the single-root setting (i.e., T (x) = T s np (x)), leaving the multi-root case (i.e., T (x) = T m np (x)) to Section 3.3. For a sentence x with n words, define a complete directed graph G on n nodes, where each node corresponds to a word in x, and each edge corresponds to a dependency between two words in x. Note that G does not include the root-symbol h = 0, nor does it account for any dependencies (0, m) headed by the root-symbol. We assign non-negative weights to the edges of this graph, yielding the following weighted adjacency matrix A(\u03b8) \u2208 R n\u00d7n , for h, m = 1 . . . n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "A h,m (\u03b8) = 0, if h = m exp {\u03b8 h,m } , otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "To account for the dependencies (0, m) headed by the root-symbol, we define a vector of root-selection scores r(\u03b8) \u2208 R n , for m = 1 . . . n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "r m (\u03b8) = exp {\u03b8 0,m }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
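In code, A(\u03b8) and r(\u03b8) amount to exponentiating a score table and zeroing the diagonal. A minimal sketch, assuming numpy is available; `theta` is a hypothetical (n+1) x (n+1) array whose row 0 holds the root-symbol scores \u03b8_{0,m}, and the function name is ours:

```python
import numpy as np

def edge_weights(theta):
    """Build A(theta) in R^{n x n} and r(theta) in R^n from a score
    table theta of shape (n+1, n+1), where theta[h, m] scores (h, m)."""
    A = np.exp(theta[1:, 1:])       # A[h-1, m-1] = exp(theta[h, m])
    np.fill_diagonal(A, 0.0)        # no self-dependencies: A[h, h] = 0
    r = np.exp(theta[0, 1:])        # r[m-1] = exp(theta[0, m])
    return A, r
```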
{
"text": "Let the weight of a dependency structure y \u2208 T s np (x) be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "\u03c8(y; \u03b8) = r root(y) (\u03b8) (h,m)\u2208y : h =0 A h,m (\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Here, root(y) = m : (0, m) \u2208 y is the child of the root-symbol; there is exactly one such child, since y \u2208 T s np (x). Eq. 2 and 3 can be rephrased as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y | x; \u03b8) = \u03c8(y; \u03b8) Z(x; \u03b8) (4) Z(x; \u03b8) = y\u2208T s np (x) \u03c8(y; \u03b8)",
"eq_num": "(5)"
}
],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "In the remainder of this section, we drop the notational dependence on x for brevity. The original Matrix-Tree Theorem addressed the problem of counting the number of undirected spanning trees in an undirected graph. For the models we study here, we require a sum of weighted and directed spanning trees. Tutte (1984) extended the Matrix-Tree Theorem to this case. We briefly summarize his method below.",
"cite_spans": [
{
"start": 305,
"end": 317,
"text": "Tutte (1984)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "First, define the Laplacian matrix L(\u03b8) \u2208 R n\u00d7n of G, for h, m = 1 . . . n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "L h,m (\u03b8) = n h =1 A h ,m (\u03b8) if h = m \u2212A h,m (\u03b8) otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Second, for a matrix X, let X (h,m) be the minor of X with respect to row h and column m; i.e., the determinant of the matrix formed by deleting row h and column m from X. Finally, define the weight of any directed spanning tree of G to be the product of the weights A h,m (\u03b8) for the edges in that tree. Theorem 1 (Tutte, 1984, p. 140) . Let L(\u03b8) be the Laplacian matrix of G. Then L (m,m) (\u03b8) is equal to the sum of the weights of all directed spanning trees of G which are rooted at m. Furthermore, the minors vary only in sign when traversing the columns of the Laplacian (Tutte, 1984, p. 150) :",
"cite_spans": [
{
"start": 315,
"end": 336,
"text": "(Tutte, 1984, p. 140)",
"ref_id": null
},
{
"start": 576,
"end": 597,
"text": "(Tutte, 1984, p. 150)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2200h, m: (\u22121) h+m L (h,m) (\u03b8) = L (m,m) (\u03b8)",
"eq_num": "(6)"
}
],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "3.1 Partition functions via matrix determinants",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "From Theorem 1, it directly follows that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "L (m,m) (\u03b8) = y\u2208U (m) (h,m)\u2208y : h =0 A h,m (\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "where U(m) = {y \u2208 T s np : root(y) = m}. A na\u00efve method for computing the partition function is therefore to evaluate",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Z(\u03b8) = n m=1 r m (\u03b8)L (m,m) (\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "The above would require calculating n determinants, resulting in O(n 4 ) complexity. However, as we show below Z(\u03b8) may be obtained in O(n 3 ) time using a single determinant evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "Define a new matrixL(\u03b8) to be L(\u03b8) with the first row replaced by the root-selection scores:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "L h,m (\u03b8) = r m (\u03b8) h = 1 L h,m (\u03b8) h > 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "This matrix allows direct computation of the partition function, as the following proposition shows. Proposition 1 The partition function in Eq. 5 is given by Z(\u03b8) = |L(\u03b8)|. Proof: Consider the row expansion of |L(\u03b8)| with respect to row 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "|L(\u03b8)| = n m=1 (\u22121) 1+mL 1,m (\u03b8)L (1,m) (\u03b8) = n m=1 (\u22121) 1+m r m (\u03b8)L (1,m) (\u03b8) = n m=1 r m (\u03b8)L (m,m) (\u03b8) = Z(\u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
{
"text": "The second line follows from the construction of L(\u03b8), and the third line follows from Eq. 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Three Inference Problems",
"sec_num": "2.2"
},
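Proposition 1 translates directly into code: build the Laplacian from the edge weights, overwrite its first row with the root-selection scores, and take one determinant. A minimal sketch, assuming numpy; the function name is ours, and indices are 0-based, so the paper's row h = 1 is array row 0:

```python
import numpy as np

def matrix_tree_Z(theta):
    """Z(theta) = |L-hat(theta)| in O(n^3); theta has shape (n+1, n+1)
    with row 0 holding the root-symbol scores."""
    A = np.exp(theta[1:, 1:])
    np.fill_diagonal(A, 0.0)          # no self-dependencies
    r = np.exp(theta[0, 1:])          # root-selection scores
    L = np.diag(A.sum(axis=0)) - A    # column sums on the diagonal, -A off it
    Lhat = L.copy()
    Lhat[0, :] = r                    # replace row h = 1 by r(theta)
    return np.linalg.det(Lhat)
```

As a sanity check, with all scores zero Z counts the single-root trees: 2 for a two-word sentence, 9 for a three-word sentence, matching explicit enumeration.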
{
"text": "The marginals we require are given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "\u00b5 h,m (\u03b8) = 1 Z(\u03b8) y\u2208T s np : (h,m)\u2208y \u03c8(y; \u03b8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "To calculate these marginals efficiently for all values of (h, m) we use a well-known identity relating the log partition-function to marginals",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "\u00b5 h,m (\u03b8) = \u2202 log Z(\u03b8) \u2202\u03b8 h,m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "Since the partition function in this case has a closedform expression (i.e., the determinant of a matrix constructed from \u03b8), the marginals can also obtained in closed form. Using the chain rule, the derivative of the log partition-function in Proposition 1 is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "\u00b5 h,m (\u03b8) = \u2202 log |L(\u03b8)| \u2202\u03b8 h,m = n h =1 n m =1 \u2202 log |L(\u03b8)| \u2202L h ,m (\u03b8) \u2202L h ,m (\u03b8) \u2202\u03b8 h,m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "To perform the derivative, we use the identity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "\u2202 log |X| \u2202X = X \u22121 T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "and the fact that \u2202L h ,m (\u03b8)/\u2202\u03b8 h,m is nonzero for only a few h , m . Specifically, when h = 0, the marginals are given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "\u00b5 0,m (\u03b8) = r m (\u03b8) L \u22121 (\u03b8) m,1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "and for h > 0, the marginals are given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "\u00b5 h,m (\u03b8) = (1 \u2212 \u03b4 1,m )A h,m (\u03b8) L \u22121 (\u03b8) m,m \u2212 (1 \u2212 \u03b4 h,1 )A h,m (\u03b8) L \u22121 (\u03b8) m,h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
{
"text": "where \u03b4 h,m is the Kronecker delta. Thus, the complexity of evaluating all the relevant marginals is dominated by the matrix inversion, and the total complexity is therefore O(n 3 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Marginals via matrix inversion",
"sec_num": "3.2"
},
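The two closed-form marginal expressions above need only one matrix inversion. A sketch, assuming numpy; the function name is ours, and since the paper's indices are 1-based while arrays are 0-based, the paper's [L-hat^{-1}]_{m,1} becomes inv[m-1, 0]:

```python
import numpy as np

def matrix_tree_marginals(theta):
    """Marginals mu[h, m] for single-root structures via one O(n^3)
    inversion; theta has shape (n+1, n+1), row 0 = root scores."""
    n = theta.shape[0] - 1
    A = np.exp(theta[1:, 1:])
    np.fill_diagonal(A, 0.0)
    r = np.exp(theta[0, 1:])
    L = np.diag(A.sum(axis=0)) - A
    Lhat = L.copy()
    Lhat[0, :] = r
    inv = np.linalg.inv(Lhat)
    mu = np.zeros((n + 1, n + 1))
    for m in range(1, n + 1):
        mu[0, m] = r[m - 1] * inv[m - 1, 0]           # root dependencies (0, m)
    for h in range(1, n + 1):
        for m in range(1, n + 1):
            if h == m:
                continue
            val = 0.0
            if m != 1:                                 # (1 - delta_{1,m}) term
                val += A[h - 1, m - 1] * inv[m - 1, m - 1]
            if h != 1:                                 # (1 - delta_{h,1}) term
                val -= A[h - 1, m - 1] * inv[m - 1, h - 1]
            mu[h, m] = val
    return mu
```

A useful check: since every word has exactly one head, the marginals for each modifier m must sum to one over all candidate heads h.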
{
"text": "In the case of multiple roots, we can still compute the partition function and marginals efficiently. In fact, the derivation of this case is simpler than for single-root structures. Create an extended graph G which augments G with a dummy root node that has edges pointing to all of the existing nodes, weighted by the appropriate root-selection scores. Note that there is a bijection between directed spanning trees of G rooted at the dummy root and multi-root structures y \u2208 T m np (x). Thus, Theorem 1 can be used to compute the partition function directly: construct a Laplacian matrix L(\u03b8) for G and compute the minor L (0,0) (\u03b8). Since this minor is also a determinant, the marginals can be obtained analogously to the single-root case. More concretely, this technique corresponds to defining the matrixL(\u03b8) a\u015d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Roots",
"sec_num": "3.3"
},
{
"text": "L(\u03b8) = L(\u03b8) + diag(r(\u03b8))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Roots",
"sec_num": "3.3"
},
{
"text": "where diag(v) is the diagonal matrix with the vector v on its diagonal.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiple Roots",
"sec_num": "3.3"
},
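A hypothetical NumPy sketch of this multi-root case, under our own naming and 0-based indexing: the determinant of L̂ = L + diag(r) gives the partition function, and differentiating log det(L̂) with respect to the edge and root scores gives the marginals (using d log det(M)/dM[i, j] = inv(M)[j, i]).

```python
import numpy as np

def multi_root_inference(A, r):
    """Multi-root partition function and marginals via the dummy-root
    construction: Lhat = L + diag(r) and Z = det(Lhat). The weight
    A[h, m] enters Lhat at (m, m) with +1 and at (h, m) with -1, and
    r[m] enters at (m, m), which yields the marginal formulas below."""
    A = A.copy()
    np.fill_diagonal(A, 0.0)
    L = np.diag(A.sum(axis=0)) - A        # Laplacian of the edge weights
    Lhat = L + np.diag(r)                 # root scores on the diagonal
    Z = np.linalg.det(Lhat)
    Linv = np.linalg.inv(Lhat)
    mu_root = r * np.diag(Linv)                       # mu_{0,m}
    mu_edge = A * (np.diag(Linv)[None, :] - Linv.T)   # mu_{h,m}
    return Z, mu_root, mu_edge
```

For small sentences the result can be verified against brute-force enumeration of all multi-root structures.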
{
"text": "The techniques above extend easily to the case where dependencies are labeled. For a model with L different labels, it suffices to define the edge and root scores as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeled Trees",
"sec_num": "3.4"
},
{
"text": "A h,m (\u03b8) = L =1 exp {\u03b8 h,m, } and r m (\u03b8) = L =1 exp {\u03b8 0,m, }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeled Trees",
"sec_num": "3.4"
},
{
"text": "The partition function over labeled trees is obtained by operating on these values as described previously, and the marginals are given by an application of the chain rule. Both inference problems are solvable in O(n 3 + Ln 2 ) time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Labeled Trees",
"sec_num": "3.4"
},
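The labeled reduction can be sketched as follows (the helper names are ours): collapse the label dimension into unlabeled weights A and r, run the unlabeled Matrix-Tree computation, and then split each unlabeled marginal across labels with the chain rule.

```python
import numpy as np

def collapse_labels(theta_edge, theta_root):
    """Fold the label dimension into unlabeled weights:
    A[h, m] = sum_l exp(theta_edge[h, m, l]) and
    r[m] = sum_l exp(theta_root[m, l]). Also return the per-edge
    label posteriors p(l | h, m) needed for the chain rule."""
    eA = np.exp(theta_edge)
    er = np.exp(theta_root)
    A, r = eA.sum(axis=2), er.sum(axis=1)
    return A, r, eA / A[:, :, None], er / r[:, None]

def labeled_marginals(mu_edge, label_post):
    """Chain rule: mu_{h,m,l} = mu_{h,m} * p(l | h, m)."""
    return mu_edge[:, :, None] * label_post
```

Collapsing the labels costs O(Ln^2) and the unlabeled inference O(n^3), matching the O(n^3 + Ln^2) total stated above.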
{
"text": "This section describes two methods for parameter estimation that rely explicitly on the computation of the partition function and marginals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Algorithms",
"sec_num": "4"
},
{
"text": "In conditional log-linear models (Johnson et al., 1999; Lafferty et al., 2001 ), a distribution over parse trees for a sentence x is defined as follows:",
"cite_spans": [
{
"start": 33,
"end": 55,
"text": "(Johnson et al., 1999;",
"ref_id": "BIBREF14"
},
{
"start": 56,
"end": 77,
"text": "Lafferty et al., 2001",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y | x; w) = exp (h,m)\u2208y w \u2022 f (x, h, m) Z(x; w)",
"eq_num": "(7)"
}
],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "where Z(x; w) is the partition function, a sum over",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "T s p (x), T s np (x), T m p (x) or T m np (x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": ". We train the model using the approach described by Sha and Pereira (2003) . Assume that we have a training set",
"cite_spans": [
{
"start": 53,
"end": 75,
"text": "Sha and Pereira (2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "{(x i , y i )} N i=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "The optimal parameters are taken to be w * = argmin w L(w) where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "L(w) = \u2212C N i=1 log P (y i | x i ; w) + 1 2 ||w|| 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "The parameter C > 0 is a constant dictating the level of regularization in the model. Since L(w) is a convex function, gradient descent methods can be used to search for the global minimum. Such methods typically involve repeated computation of the loss L(w) and gradient \u2202L(w) \u2202w , requiring efficient implementations of both functions. Note that the log-probability of a parse is",
"cite_spans": [
{
"start": 272,
"end": 277,
"text": "\u2202L(w)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "log P (y | x; w) = (h,m)\u2208y w \u2022 f (x, h, m) \u2212 log Z(x; w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "so that the main issue in calculating the loss function L(w) is the evaluation of the partition functions Z(x i ; w). The gradient of the loss is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "\u2202L(w) \u2202w = w \u2212 C N i=1 (h,m)\u2208y i f (x i , h, m) + C N i=1 (h,m)\u2208D(x i ) \u00b5 h,m (x i ; w)f (x i , h, m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "\u00b5 h,m (x; w) = y\u2208T (x) : (h,m)\u2208y P (y | x; w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
{
"text": "is the marginal probability of a dependency (h, m). Thus, the main issue in the evaluation of the gradient is the computation of the marginals \u00b5 h,m (x i ; w). Note that Eq. 7 forms a special case of the loglinear distribution defined in Eq. 2 in Section 2.2. If we set \u03b8 h,m = w \u2022 f (x, h, m) then we have P (y | x; w) = P (y | x; \u03b8), Z(x; w) = Z(x; \u03b8), and \u00b5 h,m (x; w) = \u00b5 h,m (x; \u03b8). Thus in the projective case the inside-outside algorithm can be used to calculate the partition function and marginals, thereby enabling training of a log-linear model; in the nonprojective case the algorithms in Section 3 can be used for this purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Log-Linear Estimation",
"sec_num": "4.1"
},
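To make the loss and gradient concrete, here is a toy sketch for a single training example. Brute-force enumeration of multi-root structures stands in for the inside-outside / Matrix-Tree routines of Section 3 (so it is only feasible for tiny sentences), and the feature-array layout and function name are our own assumptions.

```python
import itertools
import numpy as np

def loglinear_loss_and_grad(w, feats_root, feats_edge, gold, C=1.0):
    """Regularized negative log-likelihood L(w) and its gradient for
    one toy sentence of n words. feats_root[m] and feats_edge[h, m]
    are feature vectors f(x, 0, m+1) and f(x, h+1, m+1); gold[m] is
    the gold parent of word m+1 (0 denotes the root symbol)."""
    n, d = len(gold), w.shape[0]
    trees, scores = [], []
    # enumerate all multi-root structures (every word reaches the root)
    for p in itertools.product(*[[h for h in range(n + 1) if h != m + 1]
                                 for m in range(n)]):
        ok = True
        for m in range(1, n + 1):
            seen, cur = set(), m
            while cur != 0 and ok:
                if cur in seen:
                    ok = False
                seen.add(cur)
                cur = p[cur - 1]
        if not ok:
            continue
        s = sum(w @ (feats_root[m] if p[m] == 0 else feats_edge[p[m] - 1, m])
                for m in range(n))
        trees.append(p)
        scores.append(s)
    scores = np.array(scores)
    logZ = np.logaddexp.reduce(scores)
    probs = np.exp(scores - logZ)
    gold_score = sum(w @ (feats_root[m] if gold[m] == 0 else feats_edge[gold[m] - 1, m])
                     for m in range(n))
    loss = -C * (gold_score - logZ) + 0.5 * w @ w
    # gradient = w - C * (gold features) + C * (expected features)
    expected = np.zeros(d)
    for p, q in zip(trees, probs):
        for m in range(n):
            expected += q * (feats_root[m] if p[m] == 0 else feats_edge[p[m] - 1, m])
    gold_feats = sum(feats_root[m] if gold[m] == 0 else feats_edge[gold[m] - 1, m]
                     for m in range(n))
    grad = w - C * gold_feats + C * expected
    return loss, grad
```

The expected-feature term is exactly the marginal-weighted sum in the gradient formula above, so the gradient can be checked against finite differences of the loss.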
{
"text": "The second learning algorithm we consider is the large-margin approach for structured prediction (Taskar et al., 2004a; Taskar et al., 2004b) . Learning in this framework again involves minimization of a convex function L(w). Let the margin for parse tree y on the i'th training example be defined as",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "(Taskar et al., 2004a;",
"ref_id": "BIBREF31"
},
{
"start": 120,
"end": 141,
"text": "Taskar et al., 2004b)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "m i,y (w) = (h,m)\u2208y i w\u2022f (x i , h, m) \u2212 (h,m)\u2208y w\u2022f (x i , h, m)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "The loss function is then defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "L(w) = C N i=1 max y\u2208T (x i ) (E i,y \u2212 m i,y (w)) + 1 2 ||w|| 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "where E i,y is a measure of the loss-or number of errors-for parse y on the i'th training sentence. In this paper we take E i,y to be the number of incorrect dependencies in the parse tree y when compared to the gold-standard parse tree y i . The definition of L(w) makes use of the expression max y\u2208T (x i ) (E i,y \u2212 m i,y (w)) for the i'th training example, which is commonly referred to as the hinge loss. Note that E i,y i = 0, and also that m i,y i (w) = 0, so that the hinge loss is always nonnegative. In addition, the hinge loss is 0 if and only if m i,y (w) \u2265 E i,y for all y \u2208 T (x i ). Thus the hinge loss directly penalizes margins m i,y (w) which are less than their corresponding losses E i,y . Figure 2 shows an algorithm for minimizing L(w) that is based on the exponentiated-gradient algorithm for large-margin optimization described by Bartlett et al. (2004) . The algorithm maintains a set of weights \u03b8 i,h,m for i = 1 . . . N, (h, m) \u2208 D(x i ), which are updated example-by-example. The algorithm relies on the repeated computation of marginal values \u00b5 i,h,m , which are defined as follows: 1",
"cite_spans": [
{
"start": 854,
"end": 876,
"text": "Bartlett et al. (2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 709,
"end": 717,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "\u00b5 i,h,m = y\u2208T (x i ) : (h,m)\u2208y P (y | x i ) (8) P (y | x i ) = exp (h,m)\u2208y \u03b8 i,h,m y \u2208T (x i ) exp (h,m)\u2208y \u03b8 i,h,m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "A similar definition is used to derive marginal values \u00b5 i,h,m from the values \u03b8 i,h,m . Computation of the \u00b5 and \u00b5 values is again inference of the form described in Problem 3 in Section 2.2, and can be Algorithm: Repeat T passes over the training set, where each pass is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "w = C i (h,m)\u2208D(x i ) \u03b4 i,h,m f (xi, h, m) where \u03b4 i,h,m = (1 \u2212 l i,h,m \u2212 \u00b5 i,h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "Set obj = 0 For i = 1 . . . N \u2022 For all (h, m) \u2208 D(xi): \u03b8 i,h,m = \u03b8 i,h,m + \u03b7C (l i,h,m + w \u2022 f (xi, h, m))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "\u2022 For example i, calculate marginals \u00b5 i,h,m from \u03b8 i,h,m values, and marginals \u00b5 i,h,m from \u03b8 i,h,m values (see Eq. 8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "\u2022 Update the parameters:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "w = w + C (h,m)\u2208D(x i ) \u03b4 i,h,m f (xi, h, m) where \u03b4 i,h,m = \u00b5 i,h,m \u2212 \u00b5 i,h,m , \u2022 For all (h, m) \u2208 D(xi), set \u03b8 i,h,m = \u03b8 i,h,m \u2022 Set obj = obj + C (h,m)\u2208D(x i ) l i,h,m \u00b5 i,h,m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "Set obj = obj \u2212 ||w|| 2 2 . If obj has decreased compared to last iteration, set \u03b7 = \u03b7 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "Output: Parameter values w. The learning rate \u03b7 is halved each time the dual objective function (see (Bartlett et al., 2004) ) fails to increase. In our experiments we chose \u03b2 = 9, which was found to work well during development of the algorithm.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "(Bartlett et al., 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
{
"text": "achieved using the inside-outside algorithm for projective structures, and the algorithms described in Section 3 for non-projective structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Max-Margin Estimation",
"sec_num": "4.2"
},
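A minimal sketch of one inner-loop step of the EG algorithm of Figure 2, under our own assumptions about data layout: theta[h, m] scores parent h (0 for the root symbol) of word m+1, marginals are computed by brute-force enumeration in place of the Section 3 algorithms, and l[h, m] is taken to be 0 for gold edges and 1 otherwise.

```python
import itertools
import numpy as np

def enumerate_trees(n):
    """All multi-root structures for an n-word toy sentence, as parent
    vectors p with p[m] in {0..n} minus {m+1} (0 = root symbol)."""
    out = []
    for p in itertools.product(*[[h for h in range(n + 1) if h != m + 1]
                                 for m in range(n)]):
        ok = True
        for m in range(1, n + 1):
            seen, cur = set(), m
            while cur != 0 and ok:
                if cur in seen:
                    ok = False
                seen.add(cur)
                cur = p[cur - 1]
        if ok:
            out.append(p)
    return out

def marginals(theta, trees):
    """Eq. 8: mu[h, m] = sum over trees containing edge h -> m+1 of
    P(y), the Gibbs distribution defined by the theta values."""
    n = theta.shape[1]
    scores = np.array([sum(theta[p[m], m] for m in range(n)) for p in trees])
    probs = np.exp(scores - np.logaddexp.reduce(scores))
    mu = np.zeros_like(theta)
    for p, q in zip(trees, probs):
        for m in range(n):
            mu[p[m], m] += q
    return mu

def eg_step(theta_bar, loss_l, w, feats, trees, eta, C):
    """One inner-loop step of Figure 2 for a single training example:
    theta = theta_bar + eta*C*(l + w.f), then w is corrected by
    C * (mu_bar - mu) . f, where mu_bar / mu come from old / new scores."""
    mu_bar = marginals(theta_bar, trees)
    theta = theta_bar + eta * C * (loss_l + np.einsum('hmd,d->hm', feats, w))
    mu = marginals(theta, trees)
    w_new = w + C * np.einsum('hm,hmd->d', mu_bar - mu, feats)
    return theta, w_new
```

Because every word has exactly one parent in each tree, both sets of marginals sum to one per word, so the update direction mu_bar − mu sums to zero over each word's candidate parents.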
{
"text": "Global log-linear training has been used in the context of PCFG parsing (Johnson, 2001) . Riezler et al. (2004) explore a similar application of log-linear models to LFG parsing. Max-margin learning has been applied to PCFG parsing by Taskar et al. (2004b) . They show that this problem has a QP dual of polynomial size, where the dual variables correspond to marginal probabilities of CFG rules. A similar QP dual may be obtained for max-margin projective dependency parsing. However, for nonprojective parsing, the dual QP would require an exponential number of constraints on the dependency marginals (Chopra, 1989) . Nevertheless, alternative optimization methods like that of Tsochantaridis et al. (2004) , or the EG method presented here, can still be applied. The majority of previous work on dependency parsing has focused on local (i.e., classification of individual edges) discriminative training methods (Yamada and Matsumoto, 2003; Nivre et al., 2004; Y. Cheng, 2005 ). Non-local (i.e., classification of entire trees) training methods were used by McDonald et al. (2005a) , who employed online learning.",
"cite_spans": [
{
"start": 72,
"end": 87,
"text": "(Johnson, 2001)",
"ref_id": "BIBREF15"
},
{
"start": 90,
"end": 111,
"text": "Riezler et al. (2004)",
"ref_id": "BIBREF27"
},
{
"start": 235,
"end": 256,
"text": "Taskar et al. (2004b)",
"ref_id": "BIBREF32"
},
{
"start": 604,
"end": 618,
"text": "(Chopra, 1989)",
"ref_id": "BIBREF5"
},
{
"start": 681,
"end": 709,
"text": "Tsochantaridis et al. (2004)",
"ref_id": "BIBREF33"
},
{
"start": 915,
"end": 943,
"text": "(Yamada and Matsumoto, 2003;",
"ref_id": null
},
{
"start": 944,
"end": 963,
"text": "Nivre et al., 2004;",
"ref_id": "BIBREF23"
},
{
"start": 964,
"end": 978,
"text": "Y. Cheng, 2005",
"ref_id": null
},
{
"start": 1061,
"end": 1084,
"text": "McDonald et al. (2005a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Dependency parsing accuracy can be improved by allowing second-order features, which consider more than one dependency simultaneously. McDonald and Pereira (2006) define a second-order dependency parsing model in which interactions between adjacent siblings are allowed, and Carreras (2007) defines a second-order model that allows grandparent and sibling interactions. Both authors give polytime algorithms for exact projective parsing. By adapting the inside-outside algorithm to these models, partition functions and marginals can be computed for second-order projective structures, allowing log-linear and max-margin training to be applied via the framework developed in this paper. For higher-order non-projective parsing, however, computational complexity results McDonald and Satta, 2007) indicate that exact solutions to the three inference problems of Section 2.2 will be intractable. Exploration of approximate second-order non-projective inference is a natural avenue for future research.",
"cite_spans": [
{
"start": 275,
"end": 290,
"text": "Carreras (2007)",
"ref_id": "BIBREF4"
},
{
"start": 770,
"end": 795,
"text": "McDonald and Satta, 2007)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Two other groups of authors have independently and simultaneously proposed adaptations of the Matrix-Tree Theorem for structured inference on directed spanning trees (McDonald and Satta, 2007; Smith and Smith, 2007) . There are some algorithmic differences between these papers and ours. First, we define both multi-root and single-root algorithms, whereas the other papers only consider multi-root parsing. This distinction can be important as one often expects a dependency structure to have exactly one child attached to the root-symbol, as is the case in a single-root structure. Second, McDonald and Satta (2007) propose an O(n 5 ) algorithm for computing the marginals, as opposed to the O(n 3 ) matrix-inversion approach used by Smith and Smith (2007) and ourselves.",
"cite_spans": [
{
"start": 166,
"end": 192,
"text": "(McDonald and Satta, 2007;",
"ref_id": "BIBREF19"
},
{
"start": 193,
"end": 215,
"text": "Smith and Smith, 2007)",
"ref_id": "BIBREF30"
},
{
"start": 584,
"end": 617,
"text": "Second, McDonald and Satta (2007)",
"ref_id": null
},
{
"start": 736,
"end": 758,
"text": "Smith and Smith (2007)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In addition to the algorithmic differences, both groups of authors consider applications of the Matrix-Tree Theorem which we have not discussed. For example, both papers propose minimum-risk decoding, and McDonald and Satta (2007) discuss unsupervised learning and language modeling, while Smith and Smith (2007) define hiddenvariable models based on spanning trees.",
"cite_spans": [
{
"start": 205,
"end": 230,
"text": "McDonald and Satta (2007)",
"ref_id": "BIBREF19"
},
{
"start": 290,
"end": 312,
"text": "Smith and Smith (2007)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper we used EG training methods only for max-margin models (Bartlett et al., 2004) . However, Globerson et al. (2007) have recently shown how EG updates can be applied to efficient training of log-linear models.",
"cite_spans": [
{
"start": 69,
"end": 92,
"text": "(Bartlett et al., 2004)",
"ref_id": "BIBREF1"
},
{
"start": 104,
"end": 127,
"text": "Globerson et al. (2007)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this section, we present experimental results applying our inference algorithms for dependency parsing models. Our primary purpose is to establish comparisons along two relevant dimensions: projective training vs. non-projective training, and marginal-based training algorithms vs. the averaged perceptron. The feature representation and other relevant dimensions are kept fixed in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments on Dependency Parsing",
"sec_num": "6"
},
{
"text": "We used data from the CoNLL-X shared task on multilingual dependency parsing (Buchholz and Marsi, 2006) . In our experiments, we used a subset consisting of six languages; Table 1 gives details of the data sets used. 2 For each language we created a validation set that was a subset of the CoNLL-X 2 Our subset includes the two languages with the lowest accuracy in the CoNLL-X evaluations (Turkish and Arabic), the language with the highest accuracy (Japanese), the most nonprojective language (Dutch), a moderately non-projective language (Slovene), and a highly projective language (Spanish). All languages but Spanish have multi-root parses in their data. We are grateful to the providers of the treebanks that constituted the data of our experiments (Haji\u010d et al., 2004; van der Beek et al., 2002; Kawata and Bartels, 2000; D\u017eeroski et al., 2006; Civit and Mart\u00ed, 2002; Oflazer et al., 2003 The 2nd column (%cd) is the percentage of crossing dependencies in the training and validation sets. The last three columns report the size in tokens of the training, validation and test sets.",
"cite_spans": [
{
"start": 77,
"end": 103,
"text": "(Buchholz and Marsi, 2006)",
"ref_id": "BIBREF3"
},
{
"start": 755,
"end": 775,
"text": "(Haji\u010d et al., 2004;",
"ref_id": "BIBREF13"
},
{
"start": 776,
"end": 802,
"text": "van der Beek et al., 2002;",
"ref_id": "BIBREF35"
},
{
"start": 803,
"end": 828,
"text": "Kawata and Bartels, 2000;",
"ref_id": "BIBREF16"
},
{
"start": 829,
"end": 851,
"text": "D\u017eeroski et al., 2006;",
"ref_id": "BIBREF9"
},
{
"start": 852,
"end": 874,
"text": "Civit and Mart\u00ed, 2002;",
"ref_id": "BIBREF7"
},
{
"start": 875,
"end": 895,
"text": "Oflazer et al., 2003",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data Sets and Features",
"sec_num": "6.1"
},
{
"text": "training set for that language. The remainder of each training set was used to train the models for the different languages. The validation sets were used to tune the meta-parameters (e.g., the value of the regularization constant C) of the different training algorithms. We used the official test sets and evaluation script from the CoNLL-X task. All of the results that we report are for unlabeled dependency parsing. 3 The non-projective models were trained on the CoNLL-X data in its original form. Since the projective models assume that the dependencies in the data are non-crossing, we created a second training set for each language where non-projective dependency structures were automatically transformed into projective structures. All projective models were trained on these new training sets. 4 Our feature space is based on that of McDonald et al. (2005a) . 5",
"cite_spans": [
{
"start": 420,
"end": 421,
"text": "3",
"ref_id": null
},
{
"start": 806,
"end": 807,
"text": "4",
"ref_id": null
},
{
"start": 846,
"end": 869,
"text": "McDonald et al. (2005a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets and Features",
"sec_num": "6.1"
},
{
"text": "We performed experiments using three training algorithms: the averaged perceptron (Collins, 2002) , log-linear training (via conjugate gradient descent), and max-margin training (via the EG algorithm). Each of these algorithms was trained using projective and non-projective methods, yielding six training settings per language.",
"cite_spans": [
{
"start": 82,
"end": 97,
"text": "(Collins, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "The different training algorithms have various meta-parameters, which we optimized on the validation set for each language/training-setting combination. The 3 Our algorithms also support labeled parsing (see Section 3.4). Initial experiments with labeled models showed the same trend that we report here for unlabeled parsing, so for simplicity we conducted extensive experiments only for unlabeled parsing. 4 The transformations were performed by running the projective parser with score +1 on correct dependencies and -1 otherwise: the resulting trees are guaranteed to be projective and to have a minimum loss with respect to the correct tree. Note that only the training sets were transformed. Table 3 : Results for the three training algorithms on the different languages (P = perceptron, E = EG, L = log-linear models). AV is an average across the results for the different languages.",
"cite_spans": [
{
"start": 408,
"end": 409,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 698,
"end": 705,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "averaged perceptron has a single meta-parameter, namely the number of iterations over the training set. The log-linear models have two meta-parameters: the regularization constant C and the number of gradient steps T taken by the conjugate-gradient optimizer. The EG approach also has two metaparameters: the regularization constant C and the number of iterations, T . 6 For models trained using non-projective algorithms, both projective and nonprojective parsing was tested on the validation set, and the highest scoring of these two approaches was then used to decode test data sentences. Table 2 reports test results for the six training scenarios. These results show that for Dutch, which is the language in our data that has the highest number of crossing dependencies, non-projective training gives significant gains over projective training for all three training methods. For the other languages, non-projective training gives similar or even improved performance over projective training. Table 3 gives an additional set of results, which were calculated as follows. For each of the three training methods, we used the validation set results to choose between projective and non-projective training. This allows us to make a direct comparison of the three training algorithms. Table 3 shows the results of this comparison. 7 The results show that log-linear and max-margin models both give a higher average accuracy than the perceptron. For some languages (e.g., Japanese), the differences from the perceptron are small; however for other languages (e.g., Arabic, Dutch or Slovene) the improvements seen are quite substantial.",
"cite_spans": [
{
"start": 1333,
"end": 1334,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 592,
"end": 599,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 999,
"end": 1006,
"text": "Table 3",
"ref_id": null
},
{
"start": 1287,
"end": 1294,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6.2"
},
{
"text": "This paper describes inference algorithms for spanning-tree distributions, focusing on the fundamental problems of computing partition functions and marginals. Although we concentrate on loglinear and max-margin estimation, the inference algorithms we present can serve as black-boxes in many other statistical modeling techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Our experiments suggest that marginal-based training produces more accurate models than perceptron learning. Notably, this is the first large-scale application of the EG algorithm, and shows that it is a promising approach for structured learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "In line with McDonald et al. 2005b, we confirm that spanning-tree models are well-suited to dependency parsing, especially for highly non-projective languages such as Dutch. Moreover, spanning-tree models should be useful for a variety of other problems involving structured data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "Bartlett et al. (2004) write P (y | xi) as \u03b1i,y. The \u03b1i,y variables are dual variables that appear in the dual objective function, i.e., the convex dual of L(w). Analysis of the algorithm shows that as the \u03b8 i,h,m variables are updated, the dual variables converge to the optimal point of the dual objective, and the parameters w converge to the minimum of L(w).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We trained the perceptron for 100 iterations, and chose the iteration which led to the best score on the validation set. Note that in all of our experiments, the best perceptron results were actually obtained with 30 or fewer iterations. For the log-linear and EG algorithms we tested a number of values for C, and for each value of C ran 100 gradient steps or EG iterations, finally choosing the best combination of C and T found in validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We ran the sign test at the sentence level to measure the statistical significance of the results aggregated across the six languages. Out of 2,472 sentences total, log-linear models gave improved parses over the perceptron on 448 sentences, and worse parses on 343 sentences. The max-margin method gave improved/worse parses for 500/383 sentences. Both results are significant with p \u2264 0.001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous reviewers for their constructive comments. In addition, the authors gratefully acknowledge the following sources of support. Terry Koo was funded by a grant from the NSF (DMS-0434222) and a grant from NTT, Agmt. Dtd. 6/21/1998. Amir Globerson was supported by a fellowship from the Rothschild Foundation -Yad Hanadiv. Xavier Carreras was supported by the Catalan Ministry of Innovation, Universities and Enterprise, and a grant from NTT, Agmt. Dtd. 6/21/1998. Michael Collins was funded by NSF grants 0347631 and DMS-0434222.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Trainable grammars for speech recognition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Baker",
"suffix": ""
}
],
"year": 1979,
"venue": "97th meeting of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Baker. 1979. Trainable grammars for speech recognition. In 97th meeting of the Acoustical Society of America.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exponentiated gradient algorithms for large-margin structured classification",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bartlett",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcallester",
"suffix": ""
}
],
"year": 2004,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Bartlett, M. Collins, B. Taskar, and D. McAllester. 2004. Ex- ponentiated gradient algorithms for large-margin structured classification. In NIPS.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximization technique occurring in the statistical analysis of probabilistic functions of markov chains",
"authors": [
{
"first": "L",
"middle": [
"E"
],
"last": "Baum",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Petrie",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Soules",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 1970,
"venue": "Annals of Mathematical Statistics",
"volume": "41",
"issue": "",
"pages": "164--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.E. Baum, T. Petrie, G. Soules, and N. Weiss. 1970. A max- imization technique occurring in the statistical analysis of probabilistic functions of markov chains. Annals of Mathe- matical Statistics, 41:164-171.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "CoNLL-X shared task on multilingual dependency parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Buchholz",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Marsi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. CoNLL-X",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Buchholz and E. Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proc. CoNLL-X.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Experiments with a higher-order projective dependency parser",
"authors": [
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proc. EMNLP-CoNLL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the spanning tree polyhedron",
"authors": [
{
"first": "S",
"middle": [],
"last": "Chopra",
"suffix": ""
}
],
"year": 1989,
"venue": "Oper. Res. Lett",
"volume": "",
"issue": "",
"pages": "25--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Chopra. 1989. On the spanning tree polyhedron. Oper. Res. Lett., pages 25-29.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the shortest arborescence of a directed graph",
"authors": [
{
"first": "Y",
"middle": [
"J"
],
"last": "Chu",
"suffix": ""
},
{
"first": "T",
"middle": [
"H"
],
"last": "Liu",
"suffix": ""
}
],
"year": 1965,
"venue": "Science Sinica",
"volume": "14",
"issue": "",
"pages": "1396--1400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.J. Chu and T.H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396-1400.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Design principles for a Spanish treebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Civit",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mart\u00ed",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the First Workshop on Treebanks and Linguistic Theories (TLT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Civit and M a A. Mart\u00ed. 2002. Design principles for a Span- ish treebank. In Proc. of the First Workshop on Treebanks and Linguistic Theories (TLT).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Towards a Slovene dependency treebank",
"authors": [
{
"first": "S",
"middle": [],
"last": "D\u017eeroski",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Erjavec",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Ledinek",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Pajas",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "\u017dabokrtsky",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "\u017dele",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of the Fifth Intern. Conf. on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. D\u017eeroski, T. Erjavec, N. Ledinek, P. Pajas, Z. \u017dabokrtsky, and A. \u017dele. 2006. Towards a Slovene dependency treebank. In Proc. of the Fifth Intern. Conf. on Language Resources and Evaluation (LREC).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Optimum branchings",
"authors": [
{
"first": "J",
"middle": [],
"last": "Edmonds",
"suffix": ""
}
],
"year": 1967,
"venue": "Journal of Research of the National Bureau of Standards",
"volume": "71B",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71B:233-240.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Three new probabilistic models for dependency parsing: An exploration",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. COLING.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Exponentiated gradient algorithms for log-linear structured prediction",
"authors": [
{
"first": "A",
"middle": [],
"last": "Globerson",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Globerson, T. Koo, X. Carreras, and M. Collins. 2007. Exponentiated gradient algorithms for log-linear structured prediction. In Proc. ICML.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Prague Arabic dependency treebank: Development in data and tools",
"authors": [
{
"first": "J",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Smr\u017e",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Zem\u00e1nek",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "\u0160naidauf",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Be\u0161ka",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the NEMLAR Intern. Conf. on Arabic Language Resources and Tools",
"volume": "",
"issue": "",
"pages": "110--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Haji\u010d, O. Smr\u017e, P. Zem\u00e1nek, J. \u0160naidauf, and E. Be\u0161ka. 2004. Prague Arabic dependency treebank: Development in data and tools. In Proc. of the NEMLAR Intern. Conf. on Arabic Language Resources and Tools, pages 110-117.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Estimators for stochastic unification-based grammars",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Canon",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chi",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Riezler",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnson, S. Geman, S. Canon, Z. Chi, and S. Riezler. 1999. Estimators for stochastic unification-based grammars. In Proc. ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Joint and conditional estimation of tagging and parsing models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Johnson. 2001. Joint and conditional estimation of tagging and parsing models. In Proc. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Stylebook for the Japanese treebank in VERBMOBIL",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Kawata",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bartels",
"suffix": ""
}
],
"year": 2000,
"venue": "Seminar f\u00fcr Sprachwissenschaft",
"volume": "240",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Kawata and J. Bartels. 2000. Stylebook for the Japanese treebank in VERBMOBIL. Verbmobil-Report 240, Seminar f\u00fcr Sprachwissenschaft, Universit\u00e4t T\u00fcbingen.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proc. EACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On the complexity of nonprojective data-driven dependency parsing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and G. Satta. 2007. On the complexity of non-projective data-driven dependency parsing. In Proc. IWPT.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, K. Crammer, and F. Pereira. 2005a. Online large-margin training of dependency parsers. In Proc. ACL.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ribarov",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hajic",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proc. HLT-EMNLP.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multilingual dependency parsing with a two-stage discriminative parser",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lerman",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. CoNLL-X",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, K. Lerman, and F. Pereira. 2006. Multilingual dependency parsing with a two-stage discriminative parser. In Proc. CoNLL-X.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Memory-based dependency parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nilsson",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nivre, J. Hall, and J. Nilsson. 2004. Memory-based dependency parsing. In Proc. CoNLL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Building a Turkish treebank",
"authors": [
{
"first": "K",
"middle": [],
"last": "Oflazer",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Say",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
}
],
"year": 2003,
"venue": "Treebanks: Building and Using Parsed Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Oflazer, B. Say, D. Zeynep Hakkani-T\u00fcr, and G. T\u00fcr. 2003. Building a Turkish treebank. In A. Abeill\u00e9, editor, Treebanks: Building and Using Parsed Corpora, chapter 15. Kluwer Academic Publishers.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cubic-time parsing and learning algorithms for grammatical bigram models",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Paskin",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M.A. Paskin. 2001. Cubic-time parsing and learning algorithms for grammatical bigram models. Technical Report UCB/CSD-01-1148, University of California, Berkeley.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (2nd edition). Morgan Kaufmann Publishers.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Speed and accuracy in shallow and deep stochastic parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Maxwell",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Vasserman",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Crouch",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Riezler, R. Kaplan, T. King, J. Maxwell, A. Vasserman, and R. Crouch. 2004. Speed and accuracy in shallow and deep stochastic parsing. In Proc. HLT-NAACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Shallow parsing with conditional random fields",
"authors": [
{
"first": "F",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. HLT-NAACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Contrastive estimation: Training log-linear models on unlabeled data",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N.A. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proc. ACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Probabilistic models of nonprojective dependency trees",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.A. Smith and N.A. Smith. 2007. Probabilistic models of nonprojective dependency trees. In Proc. EMNLP-CoNLL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Max-margin Markov networks",
"authors": [
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2004,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Taskar, C. Guestrin, and D. Koller. 2004a. Max-margin Markov networks. In NIPS.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Max-margin parsing",
"authors": [
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004b. Max-margin parsing. In Proc. EMNLP.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Support vector machine learning for interdependent and structured output spaces",
"authors": [
{
"first": "I",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Hofmann",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proc. ICML.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Graph Theory",
"authors": [
{
"first": "W",
"middle": [],
"last": "Tutte",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Tutte. 1984. Graph Theory. Addison-Wesley.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "The Alpino dependency treebank",
"authors": [
{
"first": "L",
"middle": [],
"last": "Van Der Beek",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bouma",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Malouf",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Van Noord",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics in the Netherlands (CLIN)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. van der Beek, G. Bouma, R. Malouf, and G. van Noord. 2002. The Alpino dependency treebank. In Computational Linguistics in the Netherlands (CLIN).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Examples of the four types of dependency structures. We draw dependency arcs from head to modifier.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "Training examples {(x_i, y_i)}_{i=1}^N. Parameters: Regularization constant C, starting point \u03b2, number of passes over training set T. Data Structures: Real values \u03b8_{i,h,m} and l_{i,h,m} for i = 1 . . . N, (h, m) \u2208 D(x_i). Learning rate \u03b7. Initialization: Set learning rate \u03b7 = 1/C. Set \u03b8_{i,h,m} = \u03b2 for (h, m) \u2208 y_i, and \u03b8_{i,h,m} = 0 for (h, m) \u2209 y_i. Set l_{i,h,m} = 0 for (h, m) \u2208 y_i, and l_{i,h,m} = 1 for (h, m) \u2209 y_i. Calculate initial parameters as",
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"text": "The \u00b5_{i,h,m} values are calculated from the \u03b8_{i,h,m} values as described in Eq. 8.",
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"text": "The EG Algorithm for Max-Margin Estimation.",
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table/>",
"html": null,
"text": "Information for the languages in our experiments."
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"2\">Perceptron</td><td colspan=\"2\">Max-Margin</td><td colspan=\"2\">Log-Linear</td></tr><tr><td/><td>p</td><td>np</td><td>p</td><td>np</td><td>p</td><td>np</td></tr><tr><td>Ara</td><td>71.74</td><td>71.84</td><td>71.74</td><td>72.99</td><td>73.11</td><td>73.67</td></tr><tr><td>Dut</td><td>77.17</td><td>78.83</td><td>76.53</td><td>79.69</td><td>76.23</td><td>79.55</td></tr><tr><td>Jap</td><td>91.90</td><td>91.78</td><td>92.10</td><td>92.18</td><td>91.68</td><td>91.49</td></tr><tr><td>Slo</td><td>78.02</td><td>78.66</td><td>79.78</td><td>80.10</td><td>78.24</td><td>79.66</td></tr><tr><td>Spa</td><td>81.19</td><td>80.02</td><td>81.71</td><td>81.93</td><td>81.75</td><td>81.57</td></tr><tr><td>Tur</td><td>71.22</td><td>71.70</td><td>72.83</td><td>72.02</td><td>72.26</td><td>72.62</td></tr></table>",
"html": null,
"text": "It should be noted that McDonald et al. (2006) use a richer feature set that is incomparable to our features."
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>Ara</td><td>Dut</td><td>Jap</td><td>Slo</td><td>Spa</td><td>Tur</td><td>AV</td></tr><tr><td>P</td><td>71.74</td><td>78.83</td><td>91.78</td><td>78.66</td><td>81.19</td><td>71.70</td><td>79.05</td></tr><tr><td>E</td><td>72.99</td><td>79.69</td><td>92.18</td><td>80.10</td><td>81.93</td><td>72.02</td><td>79.82</td></tr><tr><td>L</td><td>73.67</td><td>79.55</td><td>91.49</td><td>79.66</td><td>81.57</td><td>72.26</td><td>79.71</td></tr></table>",
"html": null,
"text": "Test data results. The p and np columns show results with projective and non-projective training respectively."
}
}
}
}