{
"paper_id": "D12-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:23:40.377146Z"
},
"title": "Spectral Dependency Parsing with Latent Variables",
"authors": [
{
"first": "Paramveer",
"middle": [
"S"
],
"last": "Dhillon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "U.S.A"
}
},
"email": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Rodu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Columbia University",
"location": {
"settlement": "New York",
"region": "NY",
"country": "U.S.A"
}
},
"email": "mcollins@cs.columbia.edu"
},
{
"first": "Dean",
"middle": [
"P"
],
"last": "Foster",
"suffix": "",
"affiliation": {},
"email": "jrodu|foster@wharton.upenn.edu"
},
{
"first": "Lyle",
"middle": [
"H"
],
"last": "Ungar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"settlement": "Philadelphia",
"region": "PA",
"country": "U.S.A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recently there has been substantial interest in using spectral methods to learn generative sequence models like HMMs. Spectral methods are attractive as they provide globally consistent estimates of the model parameters and are very fast and scalable, unlike EM methods, which can get stuck in local minima. In this paper, we present a novel extension of this class of spectral methods to learn dependency tree structures. We propose a simple yet powerful latent variable generative model for dependency parsing, and a spectral learning method to efficiently estimate it. As a pilot experimental evaluation, we use the spectral tree probabilities estimated by our model to re-rank the outputs of a near state-of-the-art parser. Our approach gives us a moderate reduction in error of up to 4.6% over the baseline re-ranker.",
"pdf_parse": {
"paper_id": "D12-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "Recently there has been substantial interest in using spectral methods to learn generative sequence models like HMMs. Spectral methods are attractive as they provide globally consistent estimates of the model parameters and are very fast and scalable, unlike EM methods, which can get stuck in local minima. In this paper, we present a novel extension of this class of spectral methods to learn dependency tree structures. We propose a simple yet powerful latent variable generative model for dependency parsing, and a spectral learning method to efficiently estimate it. As a pilot experimental evaluation, we use the spectral tree probabilities estimated by our model to re-rank the outputs of a near state-of-the-art parser. Our approach gives us a moderate reduction in error of up to 4.6% over the baseline re-ranker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "For two decades, Markov models have been a workhorse of statistical pattern recognition, with applications ranging from speech to vision to language. Adding latent variables to these models gives us additional modeling power and has shown success in applications like POS tagging (Merialdo, 1994) , speech recognition (Rabiner, 1989) and object recognition (Quattoni et al., 2004) . However, this comes at the cost that the resulting parameter estimation problem becomes non-convex, and techniques like EM (Dempster et al., 1977) which are used to estimate the parameters can only find locally optimal solutions. Hsu et al. (2008) have shown that globally consistent estimates of the parameters of HMMs can be found by using spectral methods, in particular by singular value decomposition (SVD) of appropriately defined linear systems. They avoid the NP-hard global optimization problem for the HMM parameters (Terwijn, 2002) by putting restrictions on the smallest singular value of the HMM parameters. The main intuition behind the model is that, although the observed data (i.e. words) seem to live in a very high dimensional space, in reality they live in a very low dimensional space (size k \u223c 30 \u2212 50), and an appropriate eigendecomposition of the observed data will reveal the underlying low dimensional dynamics, thereby revealing the parameters of the model. Besides avoiding the NP-hard problem, spectral methods are very fast and scalable to train compared to EM methods.",
"cite_spans": [
{
"start": 279,
"end": 295,
"text": "(Merialdo, 1994)",
"ref_id": "BIBREF13"
},
{
"start": 317,
"end": 332,
"text": "(Rabiner, 1989)",
"ref_id": "BIBREF18"
},
{
"start": 356,
"end": 379,
"text": "(Quattoni et al., 2004)",
"ref_id": "BIBREF17"
},
{
"start": 504,
"end": 527,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF4"
},
{
"start": 614,
"end": 631,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
},
{
"start": 923,
"end": 938,
"text": "(Terwijn, 2002)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we generalize the approach of Hsu et al. (2008) to learn dependency tree structures with latent variables. 1 Petrov et al. (2006) and Musillo and Merlo (2008) have shown that learning PCFGs and dependency grammars respectively with latent variables can produce parsers with very good generalization performance. However, both these approaches rely on EM for parameter estimation and can benefit from using spectral methods.",
"cite_spans": [
{
"start": 44,
"end": 61,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
},
{
"start": 121,
"end": 122,
"text": "1",
"ref_id": null
},
{
"start": 123,
"end": 143,
"text": "Petrov et al. (2006)",
"ref_id": "BIBREF16"
},
{
"start": 148,
"end": 172,
"text": "Musillo and Merlo (2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "We propose a simple yet powerful latent variable generative model for dependency parsing which has one hidden node for each word in the sentence, like the one shown in Figure 1 , and we work out the details of the parameter estimation for the corresponding spectral learning model. At a very high level, the parameter estimation of our model involves collecting unigram, bigram and trigram counts sensitive to the underlying dependency structure of the given sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 186,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "Recently, Luque et al. (2012) have also proposed a spectral method for dependency parsing; however, they deal with horizontal Markovization and use hidden states to model sequential dependencies within a word's sequence of children. In contrast, in this paper we propose a spectral learning algorithm where latent states are not restricted to HMM-like distributions of modifier sequences for a particular head, but instead allow information to be propagated through the entire tree.",
"cite_spans": [
{
"start": 10,
"end": 29,
"text": "Luque et al. (2012)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "More recently, Cohen et al. (2012) have proposed a spectral method for learning PCFGs.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "Cohen et al. (2012)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "It is worth noting that recent work by Parikh et al. (2011) also extends Hsu et al. (2008) to latent variable dependency trees, as we do, but under the restrictive condition that model parameters are trained for a specified, albeit arbitrary, tree topology. 2 In other words, all training sentences and test sentences must have identical tree topologies. By doing this they allow for node-specific model parameters, but must retrain the model entirely when a different tree topology is encountered. Our model, on the other hand, allows the flexibility and efficiency of processing sentences with a variety of tree topologies from a single training run.",
"cite_spans": [
{
"start": 71,
"end": 88,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
},
{
"start": 254,
"end": 255,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "Most of the current state-of-the-art dependency parsers are discriminative parsers (Koo et al., 2008; McDonald, 2006) , due to the flexibility of the representations that can be used as features, leading to better accuracies, and the ease of reproducibility of results. However, unlike discriminative models, generative models can exploit unlabeled data. Also, as is common in statistical parsing, re-ranking the outputs of a parser leads to significant reductions in error (Collins and Koo, 2005) .",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "(Koo et al., 2008;",
"ref_id": "BIBREF9"
},
{
"start": 102,
"end": 117,
"text": "McDonald, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 467,
"end": 490,
"text": "(Collins and Koo, 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "Figure 1: A latent variable dependency parsing tree for the sentence \"Kilroy was here\", with one hidden node h 0 , h 1 , h 2 per word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "Since our spectral learning algorithm uses a generative model of words given a tree structure, it can score a tree structure, i.e., compute its probability of generation. Thus, it can be used to re-rank the n-best outputs of a given parser. The remainder of the paper is organized as follows. In the next section we introduce the notation and give a brief overview of the spectral algorithm for learning HMMs (Hsu et al., 2008) . In Section 3 we describe our proposed model for dependency parsing in detail and work out the theory behind it. Section 4 provides an experimental evaluation of our model on Penn Treebank data. We conclude with a brief summary and future avenues for research.",
"cite_spans": [
{
"start": 363,
"end": 381,
"text": "(Hsu et al., 2008;",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent work by",
"sec_num": null
},
{
"text": "In this section we describe the spectral algorithm for learning HMMs. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Algorithm For Learning HMMs",
"sec_num": "2"
},
{
"text": "The HMM that we consider in this section is a sequence of hidden states h \u2208 {1, . . . , k} that follow the Markov property:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "p(h t |h 1 , . . . , h t\u22121 ) = p(h t |h t\u22121 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "and a sequence of observations x \u2208 {1, . . . , n} such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "p(x t |x 1 , . . . , x t\u22121 , h 1 , . . . , h t ) = p(x t |h t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
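To make the HMM definition above concrete, the following small sketch (all numerical values are illustrative assumptions, not parameters from the paper) samples sequences from such an HMM and checks the first-observation marginal p(x 1) = \u2211 h O x1,h \u03c0 h empirically:

```python
import numpy as np

# Toy HMM: k = 2 hidden states, n = 3 observation symbols.
# All numerical values are illustrative assumptions.
k, n = 2, 3
pi = np.array([0.6, 0.4])                   # pi_i = p(h_1 = i)
T = np.array([[0.7, 0.2], [0.3, 0.8]])      # T[i, j] = p(h_{t+1} = i | h_t = j)
O = np.array([[0.5, 0.1],                   # O[x, h] = p(x_t = x | h_t = h)
              [0.3, 0.3],
              [0.2, 0.6]])

def sample_sequence(m, rng):
    h = rng.choice(k, p=pi)                 # h_1 ~ pi
    xs = []
    for _ in range(m):
        xs.append(rng.choice(n, p=O[:, h])) # x_t depends only on h_t
        h = rng.choice(k, p=T[:, h])        # h_{t+1} depends only on h_t
    return xs

rng = np.random.default_rng(0)
first = np.array([sample_sequence(3, rng)[0] for _ in range(50_000)])
empirical = np.bincount(first, minlength=n) / len(first)
assert np.allclose(empirical, O @ pi, atol=0.01)   # p(x_1) = O pi
```

The two conditional-independence properties above are exactly what the sampler encodes: the next hidden state is drawn from the current one alone, and each observation from its hidden state alone.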
{
"text": "As mentioned earlier, we use the model by which is conceptually similar to the one by Hsu et al. (2008) , but does further dimensionality reduction and thus has lower sample complexity. Also, critically, the fully reduced dimension model that we use generalizes much more cleanly to trees.",
"cite_spans": [
{
"start": 86,
"end": 103,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "The parameters of this HMM are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "\u2022 A vector \u03c0 of length k where \u03c0 i = p(h 1 = i):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "The probability of the start state in the sequence being i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "\u2022 A matrix T of size k \u00d7 k where T i,j = p(h t+1 = i|h t = j): The probability of transitioning to state i, given that the previous state was j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "\u2022 A matrix O of size n \u00d7 k where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "O i,j = p(x = i|h = j):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "The probability of state h emitting observation x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "Define \u03b4 j to be the vector of length n with a 1 in the j th entry and 0 everywhere else, and diag(v) to be the matrix with the entries of v on the diagonal and 0 everywhere else.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "The joint distribution of a sequence of observations x 1 , . . . , x m and a sequence of hidden states h 1 , . . . , h m is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "p(x 1 , . . . , x m , h 1 , . . . , h m ) = \u03c0 h 1 \u220f m j=2 T h j ,h j\u22121 \u220f m j=1 O x j ,h j",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "Now, we can write the marginal probability of a sequence of observations as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "p(x 1 , . . . , x m ) = \u2211 h 1 ,...,h m p(x 1 , . . . , x m , h 1 , . . . , h m )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "which can be expressed in matrix form 4 as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "p(x 1 , . . . , x m ) = 1 A xm A x m\u22121 \u00b7 \u00b7 \u00b7 A x 1 \u03c0 where A xm \u2261 T diag(O \u03b4 xm )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": ", and 1 is a kdimensional vector with every entry equal to 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
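The observation-operator identity above can be checked numerically. The sketch below (a toy HMM with assumed, illustrative parameter values) confirms that 1 A x m \u00b7\u00b7\u00b7 A x 1 \u03c0 agrees with the brute-force sum of the joint over all hidden state sequences:

```python
import numpy as np
from itertools import product

# Toy HMM with k = 2 hidden states and n = 3 observation symbols; all
# numerical values below are illustrative assumptions.
k, n = 2, 3
pi = np.array([0.6, 0.4])                  # pi_i = p(h_1 = i)
T = np.array([[0.7, 0.2],                  # T[i, j] = p(h_{t+1} = i | h_t = j)
              [0.3, 0.8]])
O = np.array([[0.5, 0.1],                  # O[x, h] = p(x_t = x | h_t = h)
              [0.3, 0.3],
              [0.2, 0.6]])

def A(x):
    # Observation operator A_x = T diag(O^T delta_x) = T diag(O[x, :]).
    return T @ np.diag(O[x])

def marginal_operator(xs):
    # p(x_1, ..., x_m) = 1^T A_{x_m} ... A_{x_1} pi
    v = pi.copy()
    for x in xs:                           # A_{x_1} is applied first
        v = A(x) @ v
    return v.sum()

def marginal_bruteforce(xs):
    # Sum the joint over all k^m hidden state sequences.
    total = 0.0
    for hs in product(range(k), repeat=len(xs)):
        p = pi[hs[0]] * O[xs[0], hs[0]]
        for t in range(1, len(xs)):
            p *= T[hs[t], hs[t - 1]] * O[xs[t], hs[t]]
        total += p
    return total

assert abs(marginal_operator([0, 2, 1, 2]) - marginal_bruteforce([0, 2, 1, 2])) < 1e-12
```

The operator form needs only m matrix-vector products, versus the k^m terms of the explicit sum.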
{
"text": "A is called an \"observation operator\"; it is effectively a third order tensor, and A xm , which is a matrix, gives the distribution vector over states at time m+1 as a function of the state distribution vector at the current time m and the current observation \u03b4 xm . Since A xm depends on the hidden state, it is not observable, and hence cannot be directly estimated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "However, Hsu et al. (2008) and showed that under certain conditions there exists a fully observable representation of the observable operator model.",
"cite_spans": [
{
"start": 9,
"end": 26,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Notation",
"sec_num": "2.1"
},
{
"text": "Before presenting the model, we need to address a few more points. First, let U be a \"representation matrix\" (eigenfeature dictionary) which maps each observation to a reduced dimension space (n \u2192 k) that satisfies the conditions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully observable representation",
"sec_num": "2.2"
},
{
"text": "\u2022 U O is invertible \u2022 |U ij | < 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully observable representation",
"sec_num": "2.2"
},
{
"text": "Hsu et al. (2008) discuss U in more detail; U can, for example, be obtained by an SVD of the bigram probability matrix (where P ij = p(x t+1 = i|x t = j)) or by doing CCA on neighboring n-grams (Dhillon et al., 2011) .",
"cite_spans": [
{
"start": 198,
"end": 220,
"text": "(Dhillon et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fully observable representation",
"sec_num": "2.2"
},
{
"text": "Letting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully observable representation",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y i = U \u03b4 x i , we have p(x 1 , . . . , x m ) = c \u221e C(y m )C(y m\u22121 ) . . . C(y 1 )c 1",
"eq_num": "(1)"
}
],
"section": "Fully observable representation",
"sec_num": "2.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully observable representation",
"sec_num": "2.2"
},
{
"text": "c 1 = \u00b5 , c \u221e = \u00b5 \u03a3 \u22121 , C(y) = K(y)\u03a3 \u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully observable representation",
"sec_num": "2.2"
},
{
"text": "and \u00b5, \u03a3 and K, described in more detail below, are quantities estimated from frequencies of unigrams, bigrams, and trigrams in the observed (training) data. Under the assumption that the data is generated by an HMM, the distribution p\u0302 obtained by substituting the estimated values \u0109 1 , \u0109 \u221e , and C(y) into equation (1) converges to p sufficiently fast as the amount of training data increases, giving us consistent parameter estimates. For details of the convergence proof, please see Hsu et al. (2008) .",
"cite_spans": [
{
"start": 478,
"end": 495,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fully observable representation",
"sec_num": "2.2"
},
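The observable representation can likewise be verified on a synthetic example. In the sketch below we take n = k and U = I (so U O is trivially invertible) and build the population versions of \u00b5, \u03a3, and K directly from the model parameters rather than from counts, then check that equation (1) recovers the true marginal. All numbers are illustrative assumptions:

```python
import numpy as np
from itertools import product

# Small HMM with k = n = 2, so that G = U^T O is square and invertible.
k = n = 2
pi = np.array([0.6, 0.4])                      # initial state distribution
T = np.array([[0.7, 0.2], [0.3, 0.8]])         # T[i, j] = p(h_{t+1} = i | h_t = j)
O = np.array([[0.6, 0.2], [0.4, 0.8]])         # O[x, h] = p(x | h)
U = np.eye(n)                                  # representation matrix; U^T O = O invertible
G = U.T @ O

# Population versions of the moments (instead of counts from a finite sample).
mu = G @ pi                                    # unigram moment
Sigma = G @ T @ np.diag(pi) @ G.T              # bigram moment
def K(y):                                      # trigram moment, contracted with y
    return G @ T @ np.diag(G.T @ y) @ T @ np.diag(pi) @ G.T

c1 = mu
c_inf = np.linalg.solve(Sigma.T, mu)           # c_inf^T = mu^T Sigma^{-1}
def C(y):
    return K(y) @ np.linalg.inv(Sigma)         # C(y) = K(y) Sigma^{-1}

def p_spectral(xs):
    # Equation (1): p = c_inf^T C(y_m) ... C(y_1) c_1 with y = U^T delta_x.
    v = c1.copy()
    for x in xs:
        v = C(U.T @ np.eye(n)[x]) @ v
    return float(c_inf @ v)

def p_true(xs):
    # Brute-force marginal of the underlying HMM.
    total = 0.0
    for hs in product(range(k), repeat=len(xs)):
        p = pi[hs[0]] * O[xs[0], hs[0]]
        for t in range(1, len(xs)):
            p *= T[hs[t], hs[t - 1]] * O[xs[t], hs[t]]
        total += p
    return total

assert abs(p_spectral([0, 1, 1, 0]) - p_true([0, 1, 1, 0])) < 1e-12
```

Every quantity on the spectral side (mu, Sigma, K) involves only observable moments, which is exactly what makes the representation estimable from data.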
{
"text": "In this section, we first describe a simple latent variable generative model for dependency parsing. We then define some extra notation and finally present the details of the corresponding spectral learning algorithm for dependency parsing, and prove that our learning algorithm provides a consistent estimate of the marginal probabilities. It is worth mentioning that an alternate way of approaching the spectral estimation of latent states for dependency parsing is by converting the dependency trees into linear sequences from root-to-leaf and doing a spectral estimation of latent states using Hsu et al. (2008) . However, this approach would not give us the correct probability distribution over trees, as the probability calculations for different paths through the trees are not independent. Thus, although one could calculate the probability of a path from the root to a leaf, one cannot generalize from this probability to say anything about the neighboring nodes or words. Put another way, when a parent has more than one descendant, one has to be careful to take into account that the hidden variables at each child node are all conditioned on the hidden variable of the parent.",
"cite_spans": [
{
"start": 600,
"end": 617,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Spectral Algorithm For Learning Dependency Trees",
"sec_num": "3"
},
{
"text": "In the standard setting, we are given training examples where each training example consists of a sequence of words x 1 , . . . , x m together with a dependency structure over those words, and we want to estimate the probability of the observed structure. These marginal probability estimates can then be used to build an actual generative dependency parser or, since the marginal probability is conditioned on the tree structure, to re-rank the outputs of a parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A latent variable generative model for dependency parsing",
"sec_num": "3.1"
},
{
"text": "As in the conventional HMM described in the previous section, we can define a simple latent variable first order dependency parsing model by introducing a hidden variable h i for each word x i . The joint probability of a sequence of observed nodes x 1 , . . . , x m together with hidden nodes h 1 , . . . , h m can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A latent variable generative model for dependency parsing",
"sec_num": "3.1"
},
{
"text": "p(x 1 , . . . , x m , h 1 , . . . , h m ) = \u03c0 h 1 \u220f m j=2 t d(j) (h j |h pa(j) ) \u220f m j=1 o(x j |h j ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A latent variable generative model for dependency parsing",
"sec_num": "3.1"
},
{
"text": "Figure 2: Dependency parsing tree with observed variables y 1 , y 2 , and y 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A latent variable generative model for dependency parsing",
"sec_num": "3.1"
},
{
"text": "where pa(j) is the parent of node j and d(j) \u2208 {L, R} indicates whether h j is a left or a right node of h pa(j) . For simplicity, the number of hidden and observed nodes in our tree is the same; however, they are not required to be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A latent variable generative model for dependency parsing",
"sec_num": "3.1"
},
{
"text": "As is the case with the conventional HMM, the parameters used to calculate this joint probability are unobservable, but it turns out that under suitable conditions a fully observable model is also possible for the dependency tree case with the parameterization as described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A latent variable generative model for dependency parsing",
"sec_num": "3.1"
},
{
"text": "We will define both the theoretical representations of our observable parameters, and the sampling versions of these parameters. Note that in all the cases, the estimated versions are unbiased estimates of the theoretical quantities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "Define T d and T u d where d \u2208 {L, R} to be the hidden state transition matrices from parent to left or right child, and from left or right child to parent (hence the u for 'up'), respectively. In other words (referring to Figure 2 )",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 231,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "T R = t(h 3 |h 1 ) T L = t(h 2 |h 1 ) T u R = t(h 1 |h 3 ) T u L = t(h 1 |h 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "Let U x(i) be the i th entry of vector U \u03b4 x and G = U O. Further, recall the notation diag(v), which is a matrix with elements of v on its diagonal, then:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "\u2022 Define the k-dimensional vector \u00b5 (unigram counts):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "\u00b5 = G\u03c0 , [\u03bc] i = \u2211 n u=1 c\u0303(u) U u(i) , where c\u0303(u) = c(u)/N 1 , c(u)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "is the count of observation u in the training sample, and N 1 = \u2211 u\u2208n c(u).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "\u2022 Define the k\u00d7k matrices \u03a3 L and \u03a3 R (left child-parent and right child-parent bigram counts):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "[\u03a3 L ] i,j = \u2211 n u=1 \u2211 n v=1 c\u0303 L (u, v) U u(j) U v(i) , \u03a3 L = GT u L diag(\u03c0)G ; [\u03a3 R ] i,j = \u2211 n u=1 \u2211 n v=1 c\u0303 R (u, v) U u(j) U v(i) , \u03a3 R = GT u R diag(\u03c0)G , where c\u0303 L (u, v) = c L (u, v)/N 2 L , c L (u, v) is the count of bigram (u, v) where u is the left child and v is the parent in the training sample, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "N 2 L = \u2211 (u,v)\u2208n\u00d7n c L (u, v). Define c\u0303 R (u, v) similarly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
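As a concrete illustration of these bigram moments, the following sketch builds the estimate of \u03a3 L from a toy list of (left child, parent) pairs and an arbitrary U (both are assumptions for illustration, not data from the paper), and checks that the double sum in the definition equals the compact matrix form U C U:

```python
import numpy as np

# Toy illustration of estimating the left child-parent bigram moment Sigma_L.
# The vocabulary size n = 4, dimension k = 2, the matrix U, and the training
# pairs are all arbitrary assumptions for illustration.
n, k = 4, 2
rng = np.random.default_rng(0)
U = rng.uniform(-1.0, 1.0, size=(n, k))            # representation matrix, |U_ij| < 1

pairs = [(0, 1), (2, 1), (3, 0), (0, 1), (1, 3)]   # (u = left child, v = parent)

# Normalized bigram counts c~_L(u, v) = c_L(u, v) / N_2L.
C = np.zeros((n, n))
for u, v in pairs:
    C[u, v] += 1.0
C /= len(pairs)

# [Sigma_L]_{i,j} = sum_{u,v} c~_L(u, v) U_{u,j} U_{v,i}, i.e. U^T C^T U.
Sigma_L = U.T @ C.T @ U

# The same quantity via an explicit sum over the definition.
Sigma_L_check = np.einsum('uv,uj,vi->ij', C, U, U)
assert np.allclose(Sigma_L, Sigma_L_check)
```

The trigram tensor K is estimated the same way, with one extra sum (and one extra factor of U) per index.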
{
"text": "\u2022 Define k \u00d7 k \u00d7 k tensor K (left child-parentright child trigram counts):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "K i,j,l = \u2211 n u=1 \u2211 n v=1 \u2211 n w=1 c\u0303(w, u, v) U w(i) U u(j) U v(l) , K(y) = GT L diag(G y)T u R diag(\u03c0)G , where c\u0303(w, u, v) = c(w, u, v)/N 3 , c(w, u, v) is the count of trigram (w, u, v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "where w is the left child, u is the parent and v is the right child in the training sample, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "N 3 = \u2211 (w,u,v)\u2208n\u00d7n\u00d7n c(w, u, v).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "\u2022 Define k \u00d7 k matrices \u2126 L and \u2126 R (skip-bigram counts (left child-right child) and (right child-left child)) 5 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "[\u03a9 L ] i,j = \u2211 n u=1 \u2211 n v=1 \u2211 n w=1 c\u0303(w, u, v) U w(i) U u(j) , \u03a9 L = GT L T u R diag(\u03c0)G ; [\u03a9 R ] i,j = \u2211 n u=1 \u2211 n v=1 \u2211 n w=1 c\u0303(w, u, v) U w(j) U u(i) , \u03a9 R = GT R T u L diag(\u03c0)G",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model parameters",
"sec_num": "3.2"
},
{
"text": "Using the above definitions, we can estimate the parameters of the model, namely \u00b5, \u03a3 L , \u03a3 R , \u2126 L , \u2126 R and K, from the training data and define observables useful for the dependency model as 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "c 1 = \u00b5 , c T \u221e = \u00b5 T \u03a3 \u22121 R , E L = \u2126 L \u03a3 \u22121 R , E R = \u2126 R \u03a3 \u22121 L , D(y) = E \u22121 L K(y)\u03a3 \u22121 R",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "As we will see, these quantities allow us to recursively compute the marginal probability of the dependency tree, p\u0302(x 1 , . . . , x m ), in a bottom-up manner using belief propagation. To see this, let hch(i) be the set of hidden children of hidden node i (in Figure 2 for instance, hch(1) = {2, 3}) and let och(i) be the set of observed children of hidden node i (in the same figure och(1) = {1}). Then compute the marginal probability p(x 1 , . . . , x m ) from Equation 2 as",
"cite_spans": [],
"ref_spans": [
{
"start": 262,
"end": 270,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r i (h) = \u220f j\u2208hch(i) \u03b1 j (h) \u220f j\u2208och(i) o(x j |h)",
"eq_num": "(3)"
}
],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "where \u03b1 i (h) is defined by summing over all the hidden random variables, i.e., \u03b1 i (h) = \u2211 h\u2032 p(h\u2032 |h) r i (h\u2032 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
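The bottom-up recursion of Equation 3 can be made concrete on the three-node tree of Figure 2. The sketch below (with illustrative, assumed parameter values) computes the quantities at the leaves first and then at the root, and checks the resulting marginal against the explicit sum over all hidden-variable assignments:

```python
import numpy as np
from itertools import product

# Latent-variable dependency model for the 3-node tree of Figure 2: hidden
# root h1 with left child h2 and right child h3, each hidden node emitting
# one word. All parameter values are illustrative assumptions.
k, n = 2, 3
pi = np.array([0.5, 0.5])                       # p(h_root)
TL = np.array([[0.6, 0.3], [0.4, 0.7]])         # TL[i, j] = t_L(h_child = i | h_parent = j)
TR = np.array([[0.2, 0.5], [0.8, 0.5]])         # TR[i, j] = t_R(h_child = i | h_parent = j)
O = np.array([[0.5, 0.1],                       # O[x, h] = o(x | h)
              [0.3, 0.3],
              [0.2, 0.6]])

def marginal_bp(x1, x2, x3):
    # Bottom-up belief propagation (Equation 3): leaves first, then the root.
    r2 = O[x2]                                  # r_2(h) = o(x2 | h): leaf, no children
    r3 = O[x3]
    alpha2 = TL.T @ r2                          # alpha_2(h) = sum_h' t_L(h'|h) r_2(h')
    alpha3 = TR.T @ r3
    r1 = alpha2 * alpha3 * O[x1]                # r_1(h) = alpha_2(h) alpha_3(h) o(x1 | h)
    return float(pi @ r1)                       # p = sum_h pi(h) r_1(h)

def marginal_bruteforce(x1, x2, x3):
    # Explicit sum over all k^3 hidden assignments of Equation 2.
    total = 0.0
    for h1, h2, h3 in product(range(k), repeat=3):
        total += (pi[h1] * TL[h2, h1] * TR[h3, h1]
                  * O[x1, h1] * O[x2, h2] * O[x3, h3])
    return total

assert abs(marginal_bp(0, 2, 1) - marginal_bruteforce(0, 2, 1)) < 1e-12
```

Because each child's message is conditioned on the parent's hidden state, the per-node work is a k x k matrix-vector product, and the overall cost is linear in the number of nodes.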
{
"text": "This can be written in a compact matrix form as \u2212 \u2192 r i = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u220f j\u2208hch(i) diag(T d j \u2212 \u2192 r j ) \u2022 \u220f j\u2208och(i) diag(O \u03b4 x j )",
"eq_num": "(4)"
}
],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "where \u2212 \u2192 r i is a vector of size k (the dimensionality of the hidden space) of values r i (h). Note that since in Equation 2 we condition on whether x j is the left or right child of its parent, we have separate transition matrices for left and right transitions from a given hidden node d j \u2208 {L, R}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "The recursive computation can be written in terms of observables as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "\u2212 \u2192 r i = c \u221e \u220f j\u2208hch(i) D(E d j \u2212 \u2192 r j ) \u2022 \u220f j\u2208och(i) D((U U ) \u22121 U \u03b4 x j )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "The final calculation for the marginal probability of a given sequence is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "p\u0302(x 1 , . . . , x m ) = \u2212 \u2192 r 1 c 1 (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "The spectral estimation procedure is described below in Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "Algorithm 1 Spectral dependency parsing (Computing marginal probability of a tree.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "1: Input: Training examples x (i) for i \u2208 {1, . . . , M }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "along with dependency structures where each sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "x (i) = x (i) 1 , . . . , x (i) mi . 2: Compute the spectral parameters \u03bc , \u03a3 R , \u03a3 L , \u03a9 R ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "\u03a9 L , and K. # Now, for a given sentence, we can recursively compute the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "3: for x (i) j for j \u2208 {m i , . . . , 1} do 4: Compute: \u2212 \u2192 r i = c \u221e \u220f j\u2208hch(i) D(E dj \u2212 \u2192 r j ) \u2022 \u220f j\u2208och(i) D((U U ) \u22121 U \u03b4 xj )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "5: end for 6: Finally compute",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "p\u0302(x 1 , . . . , x mi ) = \u2212 \u2192 r 1 c 1 #",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "The marginal probability of an entire tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter estimation",
"sec_num": "3.3"
},
{
"text": "Our main theoretical result states that the above scheme for spectral estimation provides a guaranteed consistent estimate of the marginal probabilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample complexity",
"sec_num": "3.4"
},
{
"text": "Theorem 3.1. Let the sequence $\\{x_1, \\ldots, x_m\\}$ be generated by a $k \\ge 2$ state HMM. Suppose we are given a $U$ with the property that $U^\\top O$ is invertible and $|U_{ij}| \\le 1$. Suppose we use equation (5) to estimate the probability based on $N$ independent triples. Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample complexity",
"sec_num": "3.4"
},
{
"text": "$N \\ge \\frac{C_m k^2}{\\epsilon^2} \\log \\frac{k}{\\delta}$ (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample complexity",
"sec_num": "3.4"
},
{
"text": "where C m is specified in the appendix, implies that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample complexity",
"sec_num": "3.4"
},
{
"text": "$1 - \\epsilon \\le \\frac{\\hat{p}(x_1, \\ldots, x_m)}{p(x_1, \\ldots, x_m)} \\le 1 + \\epsilon$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample complexity",
"sec_num": "3.4"
},
{
"text": "holds with probability at least 1 \u2212 \u03b4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample complexity",
"sec_num": "3.4"
},
{
"text": "Proof. A sketch of the proof, in the case without directional transition parameters, can be found in the appendix. The proof with directional transition parameters is almost identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sample complexity",
"sec_num": "3.4"
},
{
"text": "Since our algorithm can score any given tree structure by computing its marginal probability, a natural way to benchmark our parser is to generate n-best dependency trees using a standard parser and then use our algorithm to re-rank the candidate dependency trees, e.g., by using the log spectral probability computed by Algorithm 1 as a feature in a discriminative re-ranker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "4"
},
{
"text": "Our base parser was the discriminatively trained MSTParser (McDonald, 2006), which implements both first- and second-order parsers, is trained using MIRA (Crammer et al., 2006), and uses the standard baseline features described in McDonald (2006). We tested our methods on the English Penn Treebank (Marcus et al., 1993), using the standard splits: sections 2-21 for training, section 22 for development, and section 23 for testing. We used the PennConverter 7 tool to convert the Penn Treebank from constituent to dependency format. Following (McDonald, 2006; Koo et al., 2008), we used the POS tagger of Ratnaparkhi (1996), trained on the full training data, to provide POS tags for the development and test sets, and used 10-way jackknifing to generate tags for the training set. As is common practice, we stripped all punctuation from the sentences. We evaluated our approach on sentences of all lengths.",
"cite_spans": [
{
"start": 59,
"end": 75,
"text": "(McDonald, 2006)",
"ref_id": "BIBREF12"
},
{
"start": 157,
"end": 179,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF3"
},
{
"start": 236,
"end": 251,
"text": "McDonald (2006)",
"ref_id": "BIBREF12"
},
{
"start": 305,
"end": 326,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF11"
},
{
"start": 578,
"end": 594,
"text": "(McDonald, 2006;",
"ref_id": "BIBREF12"
},
{
"start": 595,
"end": 612,
"text": "Koo et al., 2008)",
"ref_id": "BIBREF9"
},
{
"start": 641,
"end": 659,
"text": "Ratnaparkhi (1996)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4.1"
},
{
"text": "For the spectral learning phase, we just need to collect word counts from the training data as described above, so there are no tunable parameters as such. However, we need access to an attribute dictionary $U$ which contains a $k$-dimensional representation for each word in the corpus. A possible way of generating $U$, suggested by Hsu et al. (2008), is to perform an SVD on the bigram matrix $P_{21}$ and use the left singular vectors as $U$. We instead used the eigenfeature dictionary proposed by Dhillon et al. (2011) (LR-MVL), which is obtained by performing CCA on neighboring words and has provably better sample complexity for rare words than the SVD alternative.",
"cite_spans": [
{
"start": 339,
"end": 356,
"text": "Hsu et al. (2008)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Details of spectral learning",
"sec_num": "4.2"
},
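{
"text": "The SVD route mentioned above can be sketched as follows. This is a toy sketch under stated assumptions: the function name bigram_U, the count normalization, and the toy corpus are our illustrative choices, not the exact construction of Hsu et al. (2008), and in our experiments we use the LR-MVL dictionary instead.

```python
import numpy as np

def bigram_U(corpus, V, k):
    # Estimate U as the top-k left singular vectors of the empirical bigram
    # matrix P_21, where P_21[i, j] ~ P(x_{t+1} = i, x_t = j).
    # corpus: list of sentences, each a list of word ids in [0, V).
    P21 = np.zeros((V, V))
    for sent in corpus:
        for x1, x2 in zip(sent, sent[1:]):
            P21[x2, x1] += 1.0
    P21 /= max(P21.sum(), 1.0)            # normalize counts to joint probabilities
    left, _, _ = np.linalg.svd(P21)       # columns of `left` are orthonormal
    return left[:, :k]                    # V x k attribute dictionary

# Toy usage: 5-word vocabulary projected down to k = 2 dimensions.
U = bigram_U([[0, 1, 2, 1], [3, 1, 4]], V=5, k=2)
```

The returned columns are orthonormal, so $(U^\\top U)^{-1}$ in Algorithm 1 is well conditioned for this choice of $U$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details of spectral learning",
"sec_num": "4.2"
},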
{
"text": "We induced the LR-MVL embeddings for words using the Reuters RCV1 corpus, which contains about 63 million tokens in 3.3 million sentences, and used their context-oblivious embeddings as our estimate of $U$. We experimented with different choices of $k$ (the size of the low-dimensional projection) on the development set and found $k = 10$ to work reasonably well and fast. Using $k = 10$, we were able to estimate our spectral learning parameters $\\mu$, $\\Sigma_{L,R}$, $\\Omega_{L,R}$, $K$ from the entire training data in under 2 minutes on a 64-bit Intel 2.4 GHz processor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Details of spectral learning",
"sec_num": "4.2"
},
{
"text": "We could not find any previous work describing features for discriminative re-ranking for dependency parsing. This is because, unlike constituency parsing, the base parsers for dependency parsing are themselves discriminative (e.g., the MST parser), which obviates the need for re-ranking: one could add a variety of features to the baseline parser itself. However, parse re-ranking is a good testbed for our spectral dependency parser, which can score a given tree. So, we came up with a baseline set of features to use in an averaged perceptron re-ranker (Collins, 2002) . [Table 1 caption: Accuracy is the number of words whose parent was correctly identified, and Complete is the number of sentences for which the entire dependency tree was correct. 3) Base. Features are the two re-ranking features described in Section 4.3. 4) logp is the spectral log probability feature.] Our baseline features comprised two main",
"cite_spans": [
{
"start": 559,
"end": 574,
"text": "(Collins, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Re-ranking the outputs of MST parser",
"sec_num": "4.3"
},
{
"text": "features which capture information that varies across the different n-best parses and, moreover, were not used as features by the baseline MST parser: POS-left-modifier $\\wedge$ POS-head $\\wedge$ POS-right-modifier and POS-left/right-modifier $\\wedge$ POS-head $\\wedge$ POS-grandparent 8 . In addition, we used the log of the spectral probability ($\\hat{p}(x_1, \\ldots, x_m)$, as calculated using Algorithm 1) as a feature. We used the MST parser trained on the entire training data to generate a list of n-best parses for the development and test sets. The n-best parses for the training set were generated by 3-fold cross-validation, where we train on two folds to get the parses for the third fold. In all our experiments we used n = 50. The results are shown in Table 1 . As can be seen, the best results give up to a 4.6% reduction in error over the re-ranker that uses just the baseline set of features.",
"cite_spans": [],
"ref_spans": [
{
"start": 725,
"end": 732,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Re-ranking the outputs of MST parser",
"sec_num": "4.3"
},
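{
"text": "As a concrete illustration of this setup, an averaged perceptron over per-candidate feature vectors can be sketched as below. The helper names and the feature encoding are illustrative assumptions; the actual feature set is the one described in this section.

```python
import numpy as np

def train_reranker(nbest_lists, n_epochs=5):
    # Averaged-perceptron re-ranker sketch. nbest_lists is a list of
    # (feature_matrix, gold_index) pairs, one per sentence; each row of
    # feature_matrix holds the features of one candidate parse (e.g. the
    # POS patterns plus the spectral log-probability).
    dim = nbest_lists[0][0].shape[1]
    w = np.zeros(dim)
    w_sum = np.zeros(dim)
    n_steps = 0
    for _ in range(n_epochs):
        for feats, gold in nbest_lists:
            pred = int(np.argmax(feats @ w))
            if pred != gold:                 # standard perceptron update
                w += feats[gold] - feats[pred]
            w_sum += w                       # accumulate for averaging
            n_steps += 1
    return w_sum / n_steps                   # averaged weight vector

def rerank(feats, w):
    # Return the index of the highest-scoring candidate parse.
    return int(np.argmax(feats @ w))
```

Averaging the weight vector over all updates, rather than keeping the final one, is what makes the perceptron re-ranker stable on held-out data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-ranking the outputs of MST parser",
"sec_num": "4.3"
},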
{
"text": "Spectral learning of structured latent variable models is in general a promising direction, as shown by the recent interest in this area. It allows us to circumvent the ubiquitous problem of getting stuck in local minima when estimating latent variable models via EM. In this paper we extended spectral learning ideas to learn a simple yet powerful dependency parser. As future work, we are working on building an end-to-end parser, which would involve coming up with a spectral version of the inside-outside algorithm for our setting. We are also working on extending it to learn more powerful grammars, e.g. split head-automata grammars (SHAG) (Eisner and Satta, 1999) .",
"cite_spans": [
{
"start": 661,
"end": 685,
"text": "(Eisner and Satta, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and Future Work",
"sec_num": "5"
},
{
"text": "In this paper we proposed a novel spectral method for dependency parsing. Unlike EM trained generative latent variable models, our method does not get stuck in local optima, it gives consistent parameter estimates, and it is extremely fast to train. We worked out the theory of a simple yet powerful generative model and showed how it can be learned using a spectral method. As a pilot experimental evaluation we showed the efficacy of our approach by using the spectral probabilities output by our model for re-ranking the outputs of MST parser. Our method reduced the error of the baseline re-ranker by up to a moderate 4.6%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This appendix offers a sketch of the proof of Theorem 3.1. The proof uses the following definitions, which are slightly modified from those of Foster et al. (2012). Definition 1. Define $\\Lambda$ as the smallest element of $\\mu$, $\\Sigma^{-1}$, $\\Omega^{-1}$, and $K(\\cdot)$. In other words,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "$\\Lambda \\equiv \\min\\{\\min_i |\\mu_i|, \\min_{i,j} |\\Sigma^{-1}_{ij}|, \\min_{i,j} |\\Omega^{-1}_{ij}|, \\min_{i,j,k} |K_{ijk}|, \\min_{i,j} |\\Sigma_{ij}|, \\min_{i,j} |\\Omega_{ij}|\\}$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "where $K_{ijk} = K(\\delta_j)_{ik}$ are the elements of the tensor $K(\\cdot)$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "Definition 2. Define $\\sigma_k$ as the smallest singular value of $\\Sigma$ and $\\Omega$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "The proof relies on the fact that a row vector multiplied by a series of matrices, and finally by a column vector, amounts to a sum over all possible products of individual entries of those vectors and matrices. With this in mind, if we bound the largest relative error of any particular entry by, say, $\\omega$, and there are, say, $s$ parameters (vectors and matrices) being multiplied together, then by simple algebra the total relative error of the sum over the products is bounded by $(1+\\omega)^s - 1$: each product of $s$ entries has relative error at most $(1+\\omega)^s - 1$, and the sum over such products preserves this bound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
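{
"text": "This error-propagation argument can be checked numerically on toy positive matrices. The matrix sizes and the uniform perturbation below are illustrative assumptions; the multiplicative bound follows from each entry's relative error being at most $\\omega$, consistent with the $\\sqrt[s]{1+\\epsilon} - 1$ term in equation (7).

```python
import numpy as np

rng = np.random.default_rng(0)

k, s = 3, 6               # dimension of each factor and number of factors
omega = 0.01              # maximum entrywise relative error

# s positive k x k factors; the (0, 0) entry of their product is a sum of
# products of s individual entries, as in the proof sketch.
factors = [rng.uniform(0.5, 1.0, size=(k, k)) for _ in range(s)]
exact = np.linalg.multi_dot(factors)[0, 0]

# Perturb every entry by a relative error of at most omega.
noisy = [F * (1.0 + omega * rng.uniform(-1.0, 1.0, size=F.shape)) for F in factors]
approx = np.linalg.multi_dot(noisy)[0, 0]

rel_err = abs(approx - exact) / exact
bound = (1.0 + omega) ** s - 1.0   # multiplicative bound for s perturbed factors
assert rel_err <= bound
```

Because every entry here is positive, each of the perturbed products lies within a factor of $(1 \\pm \\omega)^s$ of its exact value, so the check holds for any draw of the perturbations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},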
{
"text": "The proof then follows from two basic steps. First, one must bound the maximal relative error, \u03c9 for any particular entry in the parameters, which can be done using central limit-type theorems and the quantity \u039b described above. Then, to calculate the exponent s one simply counts the number of parameters multiplied together when calculating the probability of a particular sequence of observations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "Since each hidden node is associated with exactly one observed node, it follows that s = 12m + 2L, where L is the number of levels (for instance, in our example \"Kilroy was here\" there are two levels). s can easily be computed for arbitrary tree topologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "It follows from Foster et al. (2012) that we achieve a sample complexity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N \\ge \\frac{128 k^2 s^2}{\\epsilon^2 \\Lambda^2 \\sigma_k^4} \\log \\frac{2k}{\\delta} \\cdot \\underbrace{\\frac{\\epsilon^2 / s^2}{(\\sqrt[s]{1+\\epsilon} - 1)^2}}_{\\approx 1}",
"eq_num": "(7)"
}
],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "leading to the theorem stated above. Lastly, note that in reality one does not see $\\Lambda$ and $\\sigma_k$ but only estimates of these quantities; Foster et al. (2012) show how to incorporate the accuracy of the estimates into the sample complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix",
"sec_num": "7"
},
{
"text": "Actually, instead of using the model of Hsu et al. (2008), we work with a related model proposed by Foster et al. (2012), which addresses some of the shortcomings of the earlier model, as we detail below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This can be useful in modeling phylogeny trees, for instance, but precludes most NLP applications, since parsing requires modeling the full set of possible tree topologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is essentially the matrix form of the standard dynamic program (forward algorithm) used to estimate HMMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that $\\Omega_R = \\Omega_L^\\top$, which is not immediately obvious from the matrix representations. 6 The details of the derivation follow directly from the matrix versions of the variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.cs.lth.se/software/treebank_converter/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "One might be able to come up with better features for dependency parse re-ranking. Our goal in this paper was just to get a reasonable baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgement: We would like to thank Emily Pitler for valuable feedback on the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Spectral learning of latent-variable pcfgs",
"authors": [
{
"first": "Shay",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Dean",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2012,
"venue": "Association of Computational Linguistics (ACL)",
"volume": "50",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay Cohen, Karl Stratos, Michael Collins, Dean Foster, and Lyle Ungar. Spectral learning of latent-variable pcfgs. In Association of Compu- tational Linguistics (ACL), volume 50, 2012.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Ranking algorithms for namedentity extraction: boosting and the voted perceptron",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02",
"volume": "",
"issue": "",
"pages": "489--496",
"other_ids": {
"DOI": [
"10.3115/1073083.1073165"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Collins. Ranking algorithms for named- entity extraction: boosting and the voted percep- tron. In Proceedings of the 40th Annual Meet- ing on Association for Computational Linguis- tics, ACL '02, pages 489-496, Stroudsburg, PA, USA, 2002. Association for Computational Lin- guistics. URL http://dx.doi.org/10. 3115/1073083.1073165.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Discriminative reranking for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
}
],
"year": 2005,
"venue": "Comput. Linguist",
"volume": "31",
"issue": "1",
"pages": "25--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Terry Koo. Discriminative reranking for natural language parsing. Comput. Linguist., 31(1):25-70, March 2005. ISSN 0891- 2017.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Online passive-aggressive algorithms",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. Online passive-aggressive algorithms. Journal of Ma- chine Learning Research, 7:551-585, 2006.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Maximum likelihood from incomplete data via the em algorithm",
"authors": [
{
"first": "A",
"middle": [
"P"
],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [
"M"
],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "JRSS, SERIES B",
"volume": "39",
"issue": "1",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. Max- imum likelihood from incomplete data via the em algorithm. JRSS, SERIES B, 39(1):1-38, 1977.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multi-view learning of word embeddings via cca",
"authors": [
{
"first": "S",
"middle": [],
"last": "Paramveer",
"suffix": ""
},
{
"first": "Dean",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "24",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paramveer S. Dhillon, Dean Foster, and Lyle Un- gar. Multi-view learning of word embeddings via cca. In Advances in Neural Information Process- ing Systems (NIPS), volume 24, 2011.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Efficient parsing for bilexical context-free grammars and headautomaton grammars",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "457--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and Giorgio Satta. Efficient pars- ing for bilexical context-free grammars and head- automaton grammars. In Proceedings of the 37th Annual Meeting of the Association for Computa- tional Linguistics (ACL), pages 457-464, Univer- sity of Maryland, June 1999. URL http://cs. jhu.edu/\u02dcjason/papers/#acl99.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Spectral dimensionality reduction for HMMs",
"authors": [
{
"first": "Dean",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Rodu",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dean Foster, Jordan Rodu, and Lyle Ungar. Spec- tral dimensionality reduction for HMMs. ArXiV http://arxiv.org/abs/1203.6130, 2012.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A spectral algorithm for learning hidden markov models",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Kakade",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:0811.4413v2"
]
},
"num": null,
"urls": [],
"raw_text": "D Hsu, S M. Kakade, and Tong Zhang. A spec- tral algorithm for learning hidden markov models. arXiv:0811.4413v2, 2008.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Simple semi-supervised dependency parsing",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. ACL/HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. Simple semi-supervised dependency parsing. In In Proc. ACL/HLT, 2008.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Spectral learning for non-deterministic dependency parsing",
"authors": [
{
"first": "F",
"middle": [],
"last": "Luque",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Quattoni",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Balle",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
}
],
"year": 2012,
"venue": "EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Luque, A. Quattoni, B. Balle, and X. Carreras. Spectral learning for non-deterministic depen- dency parsing. In EACL, 2012.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Building a large annotated corpus of english: the penn treebank",
"authors": [
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "Mary",
"middle": [
"Ann"
],
"last": "Marcinkiewicz",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
}
],
"year": 1993,
"venue": "Comput. Linguist",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of english: the penn treebank. Comput. Linguist., 19:313-330, June 1993. ISSN 0891- 2017.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Discriminative learning and spanning tree algorithms for dependency parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald. Discriminative learning and span- ning tree algorithms for dependency parsing. PhD thesis, University of Pennsylvania, Philadelphia, PA, USA, 2006. AAI3225503.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Tagging english text with a probabilistic model",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1994,
"venue": "Comput. Linguist",
"volume": "20",
"issue": "",
"pages": "155--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Merialdo. Tagging english text with a prob- abilistic model. Comput. Linguist., 20:155-171, June 1994. ISSN 0891-2017.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Unlexicalised hidden variable models of split dependency grammars",
"authors": [
{
"first": "Gabriele",
"middle": [],
"last": "Antonio Musillo",
"suffix": ""
},
{
"first": "Paola",
"middle": [],
"last": "Merlo",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, HLT-Short '08",
"volume": "",
"issue": "",
"pages": "213--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriele Antonio Musillo and Paola Merlo. Un- lexicalised hidden variable models of split de- pendency grammars. In Proceedings of the 46th Annual Meeting of the Association for Computa- tional Linguistics on Human Language Technolo- gies: Short Papers, HLT-Short '08, pages 213- 216, Stroudsburg, PA, USA, 2008. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A spectral algorithm for latent tree graphical models",
"authors": [
{
"first": "P",
"middle": [],
"last": "Ankur",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Song",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "1065--1072",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankur P. Parikh, Le Song, and Eric P. Xing. A spec- tral algorithm for latent tree graphical models. In ICML, pages 1065-1072, 2011.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning accurate, compact, and interpretable tree annotation",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Leon",
"middle": [],
"last": "Barrett",
"suffix": ""
},
{
"first": "Romain",
"middle": [],
"last": "Thibaux",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44",
"volume": "",
"issue": "",
"pages": "433--440",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact, and in- terpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the As- sociation for Computational Linguistics, ACL-44, pages 433-440, Stroudsburg, PA, USA, 2006. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Conditional random fields for object recognition",
"authors": [
{
"first": "Ariadna",
"middle": [],
"last": "Quattoni",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2004,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "1097--1104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ariadna Quattoni, Michael Collins, and Trevor Dar- rell. Conditional random fields for object recog- nition. In In NIPS, pages 1097-1104. MIT Press, 2004.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A tutorial on hidden markov models and selected applications in speech recognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence R. Rabiner. A tutorial on hidden markov models and selected applications in speech recog- nition. In Proceedings of the IEEE, pages 257- 286, 1989.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A Maximum Entropy Model for Part-Of-Speech Tagging",
"authors": [
{
"first": "Adwait",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adwait Ratnaparkhi. A Maximum Entropy Model for Part-Of-Speech Tagging. In Eric Brill and Kenneth Church, editors, Proceedings of the Em- pirical Methods in Natural Language Processing, pages 133-142, 1996.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "On the learnability of hidden markov models",
"authors": [
{
"first": "Sebastiaan",
"middle": [],
"last": "Terwijn",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 6th International Colloquium on Grammatical Inference: Algorithms and Applications, ICGI '02",
"volume": "",
"issue": "",
"pages": "261--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastiaan Terwijn. On the learnability of hidden markov models. In Proceedings of the 6th Inter- national Colloquium on Grammatical Inference: Algorithms and Applications, ICGI '02, pages 261-268, London, UK, UK, 2002. Springer- Verlag. ISBN 3-540-44239-1.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Sample dependency parsing tree for \"Kilroy was here\""
}
}
}
}