| { |
| "paper_id": "Q19-1041", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:09:24.141732Z" |
| }, |
| "title": "Perturbation Based Learning for Structured NLP Tasks with Application to Dependency Parsing", |
| "authors": [ |
| { |
| "first": "Amichay", |
| "middle": [], |
| "last": "Doitch", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Ram", |
| "middle": [], |
| "last": "Yazdi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Tamir", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "The best solution of structured prediction models in NLP is often inaccurate because of the limited expressive power of the model or because of non-exact parameter estimation. One way to mitigate this problem is sampling candidate solutions from the model's solution space, reasoning that effective exploration of this space should yield high-quality solutions. Unfortunately, sampling is often computationally hard and many works hence back off to sub-optimal strategies, such as extraction of the best scoring solutions of the model, which are not as diverse as sampled solutions. In this paper we propose a perturbation-based approach where sampling from a probabilistic model is computationally efficient. We present a learning algorithm for the variance of the perturbations, and empirically demonstrate its importance. Moreover, while finding the argmax in our model is intractable, we propose an efficient and effective approximation. We apply our framework to cross-lingual dependency parsing across 72 corpora from 42 languages and to lightly supervised dependency parsing across 13 corpora from 12 languages, and demonstrate strong results in terms of both the quality of the entire solution list and of the final solution. 1",
| "pdf_parse": { |
| "paper_id": "Q19-1041", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "The best solution of structured prediction models in NLP is often inaccurate because of the limited expressive power of the model or because of non-exact parameter estimation. One way to mitigate this problem is sampling candidate solutions from the model's solution space, reasoning that effective exploration of this space should yield high-quality solutions. Unfortunately, sampling is often computationally hard and many works hence back off to sub-optimal strategies, such as extraction of the best scoring solutions of the model, which are not as diverse as sampled solutions. In this paper we propose a perturbation-based approach where sampling from a probabilistic model is computationally efficient. We present a learning algorithm for the variance of the perturbations, and empirically demonstrate its importance. Moreover, while finding the argmax in our model is intractable, we propose an efficient and effective approximation. We apply our framework to cross-lingual dependency parsing across 72 corpora from 42 languages and to lightly supervised dependency parsing across 13 corpora from 12 languages, and demonstrate strong results in terms of both the quality of the entire solution list and of the final solution. 1",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Structured prediction problems are ubiquitous in Natural Language Processing (NLP) (Smith, 2011) . Although in most cases models for such problems are designed to predict the highest quality structure of the input example (e.g., a sentence or a document), in many cases a diverse list of meaningful structures is of fundamental importance.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 96, |
| "text": "(Smith, 2011)", |
| "ref_id": "BIBREF51" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This can stem from several reasons. First, it can be a defining property of the task. For example, in extractive summarization (Nenkova and McKeown, 2011) good summaries are those that consist of a high quality and diverse list of sentences extracted from the text. In other cases the members of the solution list are exploited when solving an end goal application. For example, dependency forests were used in order to improve machine translation (Tu et al., 2010; Ma et al., 2018) and sentiment analysis (Tu et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 154, |
| "text": "(Nenkova and McKeown, 2011)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 448, |
| "end": 465, |
| "text": "(Tu et al., 2010;", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 466, |
| "end": 482, |
| "text": "Ma et al., 2018)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 506, |
| "end": 523, |
| "text": "(Tu et al., 2012)", |
| "ref_id": "BIBREF63" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In yet other cases it is a first step towards learning a high quality structure that cannot be learned by the model through standard argmax inference. For example, in the well-studied reranking setup (Collins, 2002; Collins and Koo, 2005; Charniak and Johnson, 2005; Son et al., 2012; Kalchbrenner and Blunsom, 2013) , a K-best list of solutions is first extracted from a baseline learner, which typically has a limited feature space, and is then transferred to another feature-rich model that chooses the best solution from this list. Other examples include bagging (Breiman, 1996; Sun and Wan, 2013) and boosting (Bawden and Crabb\u00e9, 2016) as well as other ensemble methods (Surdeanu and Manning, 2010; Kuncoro et al., 2016) that are often applied when the data available for model training is limited, in cases where exact argmax inference in the model is infeasible, or when training is not deterministic. In such cases, an ensemble of approximated solutions is fed into another model that extracts a final high quality solution.",
| "cite_spans": [ |
| { |
| "start": 200, |
| "end": 215, |
| "text": "(Collins, 2002;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 216, |
| "end": 238, |
| "text": "Collins and Koo, 2005;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 239, |
| "end": 266, |
| "text": "Charniak and Johnson, 2005;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 267, |
| "end": 284, |
| "text": "Son et al., 2012;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 285, |
| "end": 316, |
| "text": "Kalchbrenner and Blunsom, 2013)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 567, |
| "end": 582, |
| "text": "(Breiman, 1996;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 583, |
| "end": 601, |
| "text": "Sun and Wan, 2013)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 615, |
| "end": 640, |
| "text": "(Bawden and Crabb\u00e9, 2016)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 675, |
| "end": 703, |
| "text": "(Surdeanu and Manning, 2010;", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 704, |
| "end": 725, |
| "text": "Kuncoro et al., 2016)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Unfortunately, both alternatives suffer from inherent limitations. K-best lists can be extracted by extensions of the argmax inference algorithm for many models: the K-best Viterbi algorithm (Golod, 2009) for Hidden Markov Models (Rabiner, 1989) and Conditional Random Fields (Lafferty et al., 2001 ), K-best Maximum Spanning Tree (MST) algorithms for graph-based dependency parsing (Camerini et al., 1980; Hall, 2007) , and so forth. However, the members of K-best lists are typically quite similar to each other and do not substantially deviate from the argmax solution of the model. 2 Ensemble techniques, in contrast, are often designed to encourage diversity of the K-list members, but they require the training of multiple models (often one model per solution in the K-list), which is prohibitive for large K values.",
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 204, |
| "text": "(Golod, 2009)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 230, |
| "end": 245, |
| "text": "(Rabiner, 1989)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 276, |
| "end": 298, |
| "text": "(Lafferty et al., 2001", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 383, |
| "end": 406, |
| "text": "(Camerini et al., 1980;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 407, |
| "end": 418, |
| "text": "Hall, 2007)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this work we propose a new method for learning K-lists from machine learning models, focusing on structured prediction models in NLP. Our method is based on the MAP-perturbations model (Hazan et al., 2016) . A particularly appealing property of the perturbations framework is that it supports computationally tractable sampling from the perturbated model, although this comes at the cost of the argmax operation often being intractable. This property allows us to sample high-quality and diverse K-lists of solutions, while training only the base (non-perturbated) learner and a smooth noise function. We propose a novel algorithm that automatically learns the noise parameter of the perturbation model and show the efficacy of this approach in generating high-quality K-lists ( \u00a7 2). To overcome the intractability of the argmax operation we use an approximation and experimentally demonstrate its efficacy.",
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 208, |
| "text": "(Hazan et al., 2016)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Particularly, we introduce a Gibbs-perturbation model: a model that augments a given machine learning model with an additive or multiplicative Gaussian noise function (Keshet et al., 2011; Hazan et al., 2013) . In order to approximate the argmax of the perturbated model we use a max over marginals (MOM) procedure over the K-list members. We learn the variance of the Gaussian noise function such that the final solution distilled from the K-list is as close to the gold standard solution as possible. To the best of our knowledge, the final solution distillation method and the variance learning algorithm are novel in the context of perturbation-based learning.", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 188, |
| "text": "(Keshet et al., 2011;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 189, |
| "end": 208, |
| "text": "Hazan et al., 2013)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "To evaluate our framework, we consider two dependency parsing setups: cross-language transfer and lightly supervised training. We focus on these tasks because they are prominent NLP challenges where the model (the non-perturbated dependency parser) is a good fit to the task and data, as indicated by the high-quality trees generated in mono-lingual setups with an abundance of in-domain training data, but the training setup makes parameter estimation challenging. Hence, the argmax solution of the model is often not the highest quality one. In such cases it is likely that a diverse list of high-quality solutions will be valuable.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Particularly, we experiment with the Universal Dependencies (UD) Treebanks (Nivre et al., 2016; . For cross-language parser transfer we consider 72 corpora from 42 languages. We train a perturbated delexicalized parser for each target language. The non-perturbated parser is first trained on data from all languages except the target language, and then we learn the variance of the noise distribution on additional data from those languages. Finally, we apply the trained perturbated parser K times to the target language test set, perturbating the parameters of the base parser using noise sampled from the trained noise distribution. The final solution is extracted from this K-list by the MOM algorithm. The experiments in the lightly supervised setup are similar, except that we consider 13 UD corpora (written in 12 languages) that have limited training data. This setup is monolingual: we train and test on data from the same corpus.",
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 95, |
| "text": "(Nivre et al., 2016;", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our results demonstrate the quality of the K-lists generated by our algorithm and of the tree returned by the MOM procedure. We compare our lists and final solution to those of a variety of alternative algorithms for K-list generation, including the K-best variant of the parser's argmax inference algorithm, and demonstrate substantial gains. Finally, even though we integrate our method into a linear parser (Huang and Sagae, 2010) , our modified parser outperforms a state-of-the-art (non-perturbated) BiLSTM parser (Kiperwasser and Goldberg, 2016) on our tasks.",
| "cite_spans": [ |
| { |
| "start": 409, |
| "end": 432, |
| "text": "(Huang and Sagae, 2010)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 518, |
| "end": 550, |
| "text": "(Kiperwasser and Goldberg, 2016)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Structured models in NLP Many NLP tasks, particularly tagging and parsing, involve the inference of a high-dimensional discrete structure y = (y_1, . . . , y_m). For example, in part-of-speech (POS) tagging of an n-word input sentence, each y_i variable corresponds to an input word (and hence m = n), and is assigned a value in {1, . . . , P}, where P is the number of POS tags. In dependency parsing, a graph G = (V, E) is defined over an n-word input sentence such that each vertex corresponds to a word in the input sentence (|V| = n) and each arc corresponds to an ordered word pair (|E| = m = n^2). In the structured model, each ordered pair of words in the input sentence is assigned a variable y_i, and the resulting parse tree is a vector (y_1, . . . , y_m) \u2208 {0, 1}^m that forms a spanning tree in the graph G. For every spanning tree, y_e = 1 if the arc e \u2208 E is in the spanning tree and y_e = 0 otherwise. In what follows, we proceed with the dependency parsing notation, although our ideas are equally relevant to any task defined over discrete structures. 3 The common practice in structured prediction is to score structures by a function that assigns high scores to favorable structures and low scores to unfavorable ones. The number of structures (|T|) is often exponential in m, as in our running dependency parsing example. Hence, in order to avoid exponential complexity, the scoring function has to factorize. In our running example this is done through:",
| "cite_spans": [ |
| { |
| "start": 1075, |
| "end": 1076, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "\\theta(y_1, \\ldots, y_m) = \\sum_{e \\in E} \\theta_e y_e",
| "eq_num": "(1)" |
| } |
| ], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "The standard approach is to train the model (estimate the \u03b8 parameters of the scoring function) so that the highest scoring configuration (namely, y^* = arg max_{y \u2208 T} \u03b8(y)) is as similar as possible to the human generated (''gold'') structure. For dependency parsing, this is equivalent to finding the maximum spanning tree of the graph G.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "Prediction with K-lists Unfortunately, oftentimes the highest scoring structure is not the best one. This may happen in cases where the model is not expressive enough, for example, in first-order dependency parsing, where only m local potentials (\u03b8_e) are used to score exponentially many structures. This may also happen in cases where the values of the potential functions are inaccurate, as learning inherently has both statistical and variational errors.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "A popular solution to this problem is exploiting the power of lists of structures. In the first stage of this framework the list members are extracted, and in the second stage the final solution is extracted from this list: either by selecting one list member, or by distilling a new solution based on the statistics of the list members.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "Ideally, such a list should be high-quality and diverse, in order to explore candidate structures that add information over the structure returned by the argmax inference problem. Yet, the prominent approach in past research constructs a list of the K best solutions according to the scoring function (Equation 1). On the positive side, this approach is computationally feasible as the argmax inference algorithms of prominent structured NLP models can be efficiently extended to find the top scoring K structures ( \u00a7 1). However, in practice the top-scoring K structures are similar to the top-scoring structure (see our analysis in \u00a7 6), and important parts of the solution space remain unexplored. 4 This calls for another approach that explores more diverse parts of the solution space. The approach we take here is based on sampling from probabilistic models.",
| "cite_spans": [ |
| { |
| "start": 700, |
| "end": 701, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "Sampling-based K-lists Sampling is a possible solution to the diversity problem. In practice, many sampling algorithms require that the structured model be defined as a probabilistic model. It is natural to impose a probabilistic interpretation on the model described in Equation 1. To do that, a posterior distribution over all structures (i.e., the Gibbs distribution) is realized from the scoring function:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "p_\\theta(y_1, \\ldots, y_m) \\propto \\exp\\Big(\\sum_{e \\in E} \\theta_e y_e\\Big)",
| "eq_num": "(2)" |
| } |
| ], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "The highest scoring structure under this probabilistic model is called the maximum a posteriori (MAP) assignment, and is identical to the top-scoring structure under the scoring function of Equation 1:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "y_1^*, \\ldots, y_m^* = \\arg\\max_{y_1, \\ldots, y_m \\in T} p_\\theta(y_1, \\ldots, y_m) = \\arg\\max_{y_1, \\ldots, y_m \\in T} \\sum_{e \\in E} \\theta_e y_e \\quad (3)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "Likewise, the top K-list of this model, consisting of the K most probable structures of the Gibbs distribution, is also identical to that of the unnormalized model. As noted above, these structures are likely to be of high quality but also quite similar to each other.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "The natural alternative that probabilistic models make possible is to sample from the Gibbs distribution instead. Such a strategy is likely to detect high-quality structures even if they are not very similar to the best scoring solution, particularly in cases where the estimated model parameters do not fit the test data well. A final tree distilled from such a candidate list is likely to be of higher quality than one distilled from the list of the top-scoring K structures, due to the better representation of the solution space.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "Unfortunately, this approach comes with a caveat: sampling a structure from the Gibbs distribution is often slower than finding the MAP assignment (Goldberg and Jerrum, 2007; Sontag et al., 2008) . In our running example, sampling a first-order graph-based dependency parse depends on the mean hitting time of a random walk in a graph (Wilson, 1996; Zhang et al., 2014) , which is slower than finding the maximum spanning tree of the same graph.",
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 174, |
| "text": "(Goldberg and Jerrum, 2007;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 175, |
| "end": 195, |
| "text": "Sontag et al., 2008)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 342, |
| "end": 356, |
| "text": "(Wilson, 1996;", |
| "ref_id": "BIBREF70" |
| }, |
| { |
| "start": 357, |
| "end": 376, |
| "text": "Zhang et al., 2014)", |
| "ref_id": "BIBREF72" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "Perturbation-based K-lists Perturbation models define probability distributions over high-dimensional discrete structures for which sampling is as fast as solving the MAP problem of a base, non-perturbated, model (Papandreou and Yuille, 2011; Tarlow et al., 2012; Hazan and Jaakkola, 2012; Maddison et al., 2014) . In our setting, perturbation models let us sample a spanning tree as fast as finding the highest scoring spanning tree of a base parser. In this setting, we can draw samples from the perturbated model by perturbing the potential functions of the base model and solving the resulting MAP problem. The MAP-perturbation approach samples random variables \u03b3_1, . . . , \u03b3_m from a posterior distribution around the base model weights \u03b8_1, . . . , \u03b8_m and solves the randomly perturbed argmax problem: 5",
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 241, |
| "text": "(Papandreou and Yuille, 2011;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 242, |
| "end": 262, |
| "text": "Tarlow et al., 2012;", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 263, |
| "end": 288, |
| "text": "Hazan and Jaakkola, 2012;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 289, |
| "end": 311, |
| "text": "Maddison et al., 2014)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "y^\\gamma = \\arg\\max_{y \\in T} \\sum_{e \\in E} \\gamma_e y_e",
| "eq_num": "(4)" |
| } |
| ], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "The posterior distribution q_\u03b8(\u03b3) around the model weights is defined such that it is centered around the model weights \u03b8, namely, E_{\u03b3 \u223c q_\u03b8}[\u03b3] = \u03b8. For example, q_\u03b8(\u03b3) can be a Gaussian probability density function:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "q_\u03b8(\u03b3) = \u220f_e (1/\u221a(2\u03c0)) e^{\u2212(\u03b3_e \u2212 \u03b8_e)^2 / 2}.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "For now we assume that the variance of the posterior q_\u03b8(\u03b3) is 1 and defer its learning to \u00a7 3. Perturbation models measure the probability that a structure attains the maximal score when considering all perturbations:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "p_\\gamma(y_1, \\ldots, y_m) = P_{\\gamma \\sim q_\\theta}[y^\\gamma = y]",
| "eq_num": "(5)" |
| } |
| ], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "A particularly appealing property of Gibbs models is that in many cases the most likely structure can be computed or approximated efficiently using dynamic programming or efficient optimization techniques (Koller et al., 2009; Wainwright and Jordan, 2008) . For example, finding the most likely dependency parse can be done by finding the maximum spanning tree of a graph (McDonald et al., 2005) . In this work we want to enjoy the best of both worlds, exploiting the capability of MAP-perturbation models to sample by solving the MAP problem of the base model, while building on the efficient MAP approximation in Gibbs models. We do that by composing a perturbation model on top of a Gibbs model. This construction allows us to effectively sample high-quality and diverse K-lists from MAP-perturbation models, and distill a high-quality final structure.",
| "cite_spans": [ |
| { |
| "start": 203, |
| "end": 224, |
| "text": "(Koller et al., 2009;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 225, |
| "end": 253, |
| "text": "Wainwright and Jordan, 2008)", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 370, |
| "end": 393, |
| "text": "(McDonald et al., 2005)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "K-lists in NLP", |
| "sec_num": "2" |
| }, |
| { |
"text": "A major practical issue when implementing perturbation models is the magnitude of the perturbation variables \u03b3, or their variance. It is easy to see that the variance of these variables greatly influences the quality of the resulting probability model. If this variance is too high, the perturbation noise can easily shadow the signal learned from data, that is, \u2211_e \u03b3_e y_e \u226b \u2211_e \u03b8_e y_e with non-negligible probability, so the max-perturbation value becomes meaningless. Therefore, in this work we learn the variance of the perturbation posterior. For example, for a Gaussian noise \u03b3 \u223c N(0, \u03c3_e^2) added to the Gibbs model parameters \u03b8 = [\u03b8_1, . . . , \u03b8_m], the variance is introduced as",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effective Sampling and Learning with MAP-Perturbation Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "(additive) \\quad q_{\\theta,\\sigma}(\\gamma) = \\prod_e \\frac{1}{\\sqrt{2\\pi}\\sigma_e} e^{-\\frac{(\\gamma_e - \\theta_e)^2}{2\\sigma_e^2}}.",
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Effective Sampling and Learning with MAP-Perturbation Models", |
| "sec_num": "3" |
| }, |
| { |
"text": "Our model is more flexible, and allows other types of noise. For example, we can assume a Gaussian multiplicative noise \u03b3 \u223c N(1, \u03c3_e^2) to get",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effective Sampling and Learning with MAP-Perturbation Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "(multiplicative) q \u03b8,\u03c3 (\u03b3) = e 1 \u221a", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effective Sampling and Learning with MAP-Perturbation Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "2\u03c0\u03c3 e e (\u03b3e \u2212\u03b8e ) 2 2\u03b8 2 e \u03c3 2 e . (7)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effective Sampling and Learning with MAP-Perturbation Models", |
| "sec_num": "3" |
| }, |
| { |
"text": "We divide this section into two parts. We first discuss our approach to variance learning in perturbation models. Then, we detail our recipe for learning with perturbation-based K-lists, so that each test example is eventually assigned a single structure.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effective Sampling and Learning with MAP-Perturbation Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Given a training set", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
"text": "S = {(x_i, y_i)}_{i=1}^N",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
"text": "consisting of examples (x_i) and the structures with which they are labeled (y_i), we learn the variance with respect to the oracle loss oracle_K(). This loss penalizes the perturbation parameters (\u03b3_1, . . . , \u03b3_m) according to the difference between the final structure extracted from the K-list of each example x_i and the gold tree of that example, y_i. In our running example, dependency parsing, it is straightforward to define this loss as:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
"text": "oracle_K({\u03b3^j}_{j=1}^K, x_i, y_i) = HamDist(MOM({\u03b3^j}_{j=1}^K, x_i), y_i) (8)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
"text": "\u03b3^j = (\u03b3^j_1, . . . , \u03b3^j_m)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
"text": "are the perturbation parameters of the i-th example, MOM is the max-over-marginals algorithm that distills a final tree from the K sampled trees ( \u00a7 4), and HamDist is the Hamming distance between the MOM tree and the gold tree y_i:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
"text": "HamDist(y^m, y_i) = \u2211_{j=1}^{n} [1 if h_{y^m}(j) \u2260 h_{y_i}(j); 0 otherwise] (9)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "where n is the number of words in the sentence, and h y (j) is the head of the j-th word in y. 6 We next define the expected empirical loss (EEL) with respect to the variance of the perturbation distribution:", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 96, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
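The quantities above are easy to make concrete. Below is a minimal sketch of the Hamming distance of Equation 9, assuming dependency trees are represented as head arrays where `heads[j]` is the head index of the j-th word; this representation and the function name are illustrative assumptions, not taken from the authors' code:

```python
def ham_dist(heads_pred, heads_gold):
    """Hamming distance between two dependency trees given as head arrays:
    the number of words whose predicted head differs from the gold head."""
    assert len(heads_pred) == len(heads_gold)
    return sum(1 for hp, hg in zip(heads_pred, heads_gold) if hp != hg)

# Example: a 4-word sentence, with 0 denoting the root.
gold = [2, 0, 2, 3]
pred = [2, 0, 2, 2]  # the 4th word attaches to word 2 instead of word 3
print(ham_dist(pred, gold))  # → 1, i.e. UAS = 3/4
```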
| { |
| "text": "EEL(\u03c3, S) = (10) (1/N) \u03a3_{(x_i, y_i) \u2208 S} E_{\u03b3^1, . . . , \u03b3^K \u223c q_{\u03b8,\u03c3}}[oracle_K({\u03b3^j}_{j=1}^K, x_i, y_i)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "The Hamming distance is equivalent to the Unlabeled Attachment Score (UAS) between the trees. The optimal \u03c3 is the one that minimizes this loss:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03c3^* = arg min_{\u03c3} EEL(\u03c3, S)", |
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
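Because the expectation in Equation 10 generally has no closed form, it can be approximated by Monte Carlo: repeatedly draw the K perturbation vectors and average the oracle loss. The sketch below uses placeholder callbacks (`sample_gammas`, `oracle_k`) that stand in for the paper's sampler and oracle; they are illustrative assumptions, not the authors' implementation:

```python
import random

def estimate_eel(data, sample_gammas, oracle_k, num_rounds=50):
    """Monte Carlo estimate of EEL (Equation 10): for every pair (x_i, y_i),
    average the oracle loss over num_rounds independent draws of the K
    perturbation vectors, then average over the data set."""
    total = 0.0
    for x_i, y_i in data:
        for _ in range(num_rounds):
            gammas = sample_gammas()  # one draw of the K perturbation vectors
            total += oracle_k(gammas, x_i, y_i)
    return total / (len(data) * num_rounds)

# Toy check with a synthetic oracle: squared mean of K = 10 Gaussian draws.
random.seed(0)
eel = estimate_eel(
    [(None, None)],
    sample_gammas=lambda: [random.gauss(0.0, 1.0) for _ in range(10)],
    oracle_k=lambda gammas, x, y: (sum(gammas) / len(gammas)) ** 2,
)
print(eel)  # close to Var of the mean, i.e. about 0.1
```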
| { |
| "text": "Whenever q_{\u03b8,\u03c3}(\u03b3), the perturbation probability density function (pdf), is smooth in \u03c3, the EEL is the integral of the product of a smooth function (the pdf q_{\u03b8,\u03c3}(\u03b3)) and the non-smooth oracle function. In the following we prove that this integral is a smooth function of \u03c3, and therefore the optimal variance can be learned from data by solving the problem in Equation 11 with a gradient method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "Claim 1. If the probability density function q_{\u03b8,\u03c3}(\u03b3) is smooth and its gradient is integrable, that is, \u222b |\u2202q_{\u03b8,\u03c3}(\u03b3^j)/\u2202\u03c3_e| d\u03b3^j < \u221e, then the gradient of the EEL function with respect to \u03c3_e as computed on (x_i, y_i) \u2208 S takes the form:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2202EEL(\u03c3)/\u2202\u03c3_e = (12) \u03a3_{(x_i, y_i) \u2208 S} \u222b (\u2202q_{\u03b8,\u03c3}(\u03b3^j)/\u2202\u03c3_e) oracle_K({\u03b3^j}_{j=1}^K, x_i, y_i) d\u03b3^j Proof. The expectation E_{\u03b3^1, . . . , \u03b3^K \u223c q_{\u03b8,\u03c3}}[oracle_K({\u03b3^j}_{j=1}^K, x_i, y_i)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "is the integral", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "\u222b \u03a0_{j=1}^{K} q_{\u03b8,\u03c3}(\u03b3^j) f({\u03b3^j}_{j=1}^K) d\u03b3^1 \u2022 \u2022 \u2022 d\u03b3^K, where f({\u03b3^j}_{j=1}^K) = oracle_K({\u03b3^j}_{j=1}^K, x_i, y_i) is a non-differentiable function. Notably, the function f({\u03b3^j}_{j=1}^K)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "is independent of \u03c3 and therefore its non-differentiability does not affect the differentiability of EEL(\u03c3). Moreover,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "f ({\u03b3 j } K j=1 ) \u2264 N for some constant N , therefore the function q \u03b8,\u03c3 (\u03b3 j )f ({\u03b3 j } K j=1 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "is bounded by the integrable function N q_{\u03b8,\u03c3}(\u03b3^j) and its derivative with respect to \u03c3 is bounded by the function N |\u2202q_{\u03b8,\u03c3}(\u03b3^j)/\u2202\u03c3_e|. Following Theorem 2.27 by Folland (1999), the function EEL(\u03c3) is differentiable and its gradient is attained by differentiating under the integral. This claim shows that the optimal variance of the random perturbation variables can be learned with a gradient method. Note that oracle_K, and hence also EEL(\u03c3, S), is defined with respect to a given K-list size K. K is a hyper-parameter that can be estimated, for example, with a grid search over development data. Our experiments use K = 10, 100, and 200 ( \u00a7 5).", |
| "cite_spans": [ |
| { |
| "start": 167, |
| "end": 181, |
| "text": "Folland (1999)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
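As a concrete illustration of putting Equation 12 to work, the integral can be estimated by Monte Carlo via the log-derivative identity \u2202q/\u2202\u03c3 = q \u00b7 \u2202log q/\u2202\u03c3 (the score-function trick). The sketch below assumes i.i.d. zero-mean Gaussian perturbation coordinates with a single shared \u03c3, for which \u2202log q(g)/\u2202\u03c3 = (g\u00b2 \u2212 \u03c3\u00b2)/\u03c3\u00b3 per coordinate; this simplified noise model and the placeholder `oracle_k` are assumptions for illustration, not the paper's exact parameterization:

```python
import random

def grad_sigma(data, oracle_k, sigma, K=10, m=5, rounds=500):
    """Score-function Monte Carlo estimate of dEEL/dsigma for i.i.d.
    N(0, sigma^2) perturbation coordinates: average, over draws, of
    oracle * sum over all coordinates of dlog q(gamma)/dsigma."""
    total = 0.0
    for x_i, y_i in data:
        for _ in range(rounds):
            gammas = [[random.gauss(0.0, sigma) for _ in range(m)]
                      for _ in range(K)]
            score = sum((g * g - sigma * sigma) / sigma ** 3
                        for vec in gammas for g in vec)
            total += oracle_k(gammas, x_i, y_i) * score
    return total / (len(data) * rounds)

# Sanity check: a constant oracle has zero gradient in expectation.
random.seed(0)
g = grad_sigma([(None, None)], lambda gs, x, y: 1.0, sigma=1.0)
print(g)  # noisy estimate of 0
```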
| { |
| "text": "Once \u03c3 and K are determined, we can generate meaningful samples; that is, the perturbation value \u03b3_{e, y_e} will not overshadow the data signal \u03b8_{e, y_e}. We are now ready to present a learning process with perturbation-based K-lists.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "Learning with perturbation-based K-lists Our goal is to train a model so that it eventually outputs a single high-quality structure, y^*, hopefully of higher quality than the MAP output of the Gibbs (base) model. Because joint learning of \u03b8 (the Gibbs model parameters) and \u03c3 (the variance of the perturbation distribution) is intractable, we first learn \u03b8 and then \u03c3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "We assume two training sets:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "S = {(x_i, y_i)}_{i=1}^{N} and S' = {(x'_i, y'_i)}_{i=1}^{N'}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "Our training recipe is as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "1. Learn the parameters \u03b8 of the Gibbs (base) model with the training set S.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "2. Learn the parameter \u03c3 and the hyperparameter K with the training set S' by minimizing EEL(\u03c3, S') while keeping the \u03b8 parameters learned at step (1) fixed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "The test-time recipe for the i-th test example is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "1. Sample K values of the perturbation variables:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "{\u03b3 j \u223c q \u03b8,\u03c3 |j \u2208 {1, . . . , K}}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "2. For j \u2208 {1, . . . , K}, find y^{\u03b3^j} according to Equation 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "3. Extract the final structure y^* from {y^{\u03b3^j}}_{j=1}^{K}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
| { |
| "text": "The only missing piece is the method for extracting y^* from {y^{\u03b3^j}}_{j=1}^K. Note that this method is employed both at step (2) of the training recipe (as it is part of the definition of EEL(\u03c3, S')) and at step (3) of the test-time recipe. In the next section we describe an approximation algorithm for this problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning the variance of the perturbation distribution", |
| "sec_num": null |
| }, |
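The three test-time steps can be sketched as one generic routine; `sample_gamma`, `decode`, and `extract_final` are placeholder callbacks standing in for the perturbed argmax of Equation 4 and the extraction method of \u00a7 4 (the names are illustrative, not from the authors' code):

```python
def predict(x, theta, sample_gamma, decode, extract_final, K=100):
    """Test-time recipe: (1) sample K perturbation vectors, (2) decode the
    best structure under each perturbed model, (3) distill one final
    structure y* from the resulting K-list."""
    k_list = []
    for _ in range(K):
        gamma = sample_gamma()                  # step 1: draw the noise
        k_list.append(decode(x, theta, gamma))  # step 2: perturbed argmax
    return extract_final(k_list)                # step 3: e.g., MOM inference

# Toy usage: structures are single bits; extraction is a majority vote.
from collections import Counter
y_star = predict(
    x=None, theta=0.2,
    sample_gamma=lambda: 0.0,                    # degenerate (zero) noise
    decode=lambda x, th, g: int(th + g > 0),     # "argmax" of a 1-bit model
    extract_final=lambda lst: Counter(lst).most_common(1)[0][0],
)
print(y_star)  # → 1
```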
| { |
| "text": "Our oracle loss considers the Hamming distance to the max-over-marginals (MOM) tree. To this end, consider the single-variable (candidate edge) marginal probabilities of the Gibbs-perturbation model:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max Over Marginals (MOM) Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03bc_e = P_{\u03b3}[y^{\u03b3}_e = 1]", |
| "eq_num": "(13)" |
| } |
| ], |
| "section": "Max Over Marginals (MOM) Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We then define the approximated argmax inference in the Gibbs-perturbation model as predicting the best spanning tree with respect to the log of these marginals:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max Over Marginals (MOM) Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "y^* = arg max_{y \u2208 T} \u03a3_{e \u2208 E} y_e log \u03bc_e", |
| "eq_num": "(14)" |
| } |
| ], |
| "section": "Max Over Marginals (MOM) Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Notice that for first order parsing, our running example in this paper, this approach is essentially identical to the inference algorithm of Kuncoro et al. (2016) , which was aimed at distilling a final solution from an ensemble of parsers. However, this MOM approach can naturally be extended beyond single variable potentials. For example, we can consider variable pair potentials or potentials over variable triplets and perform exact (Koo and Collins, 2010) or approximated (Martins et al., 2013; Tchernowitz et al., 2016) inference for second and third order problems. Here, for simplicity, we focus on single variable potentials and solve the resulting MOM problem directly with an exact MST algorithm.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 162, |
| "text": "Kuncoro et al. (2016)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 438, |
| "end": 461, |
| "text": "(Koo and Collins, 2010)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 478, |
| "end": 500, |
| "text": "(Martins et al., 2013;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 501, |
| "end": 526, |
| "text": "Tchernowitz et al., 2016)", |
| "ref_id": "BIBREF61" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max Over Marginals (MOM) Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In what follows we first show that the MOM approach (recovering the best spanning tree according to the log-marginals of one Gibbs-perturbation model) can be interpreted as a MAP approach over marginal probabilities of a continuous-discrete Gibbs model. We then discuss how we estimate the marginal probabilities \u03bc_e (Equation 13).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Max Over Marginals (MOM) Inference", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We show that MOM in one Gibbs-perturbation model can be interpreted as MAP over marginals in another continuous-discrete Gibbs model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MOM as MAP of a Continuous-discrete Gibbs Model", |
| "sec_num": null |
| }, |
| { |
| "text": "p_M(y_1, . . . , y_m) \u221d exp(\u03a3_{e \u2208 E} y_e log \u03bc_e) \u221d \u03a0_e \u03bc_e^{y_e} \u221d \u03a0_e (P_{\u03b3}[y^{\u03b3}_e = 1])^{y_e} \u221d \u03a0_e (E_{\u03b3} 1[y^{\u03b3}_e = 1])^{y_e} (*) \u221d E_{\u03b3^{(1)}, . . . , \u03b3^{(m)}} \u03a0_e (1[y^{\u03b3^{(e)}}_e = 1])^{y_e}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MOM as MAP of a Continuous-discrete Gibbs Model", |
| "sec_num": null |
| }, |
| { |
| "text": "The starred equivalence holds when the product of expectations equals the expectation of the corresponding product. This is the case when the random variables 1[y^{\u03b3}_e = 1] are independent. To enforce this independence assumption, the starred equivalence requires an independent perturbation vector \u03b3^{(e)} = (\u03b3^{(e)}_1, . . . , \u03b3^{(e)}_m) for each edge.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MOM as MAP of a Continuous-discrete Gibbs Model", |
| "sec_num": null |
| }, |
| { |
| "text": "Using this independence assumption we are able to represent p M (y 1 , . . . , y m ) as the expectation of a product of functions, q_{\u03b8,\u03c3}(\u03b3^{(e)}) 1[y^{\u03b3^{(e)}}_e = 1]. This factorization naturally lends itself to a Gibbs model over the factors \u03c8_e(\u03b3^{(e)}, y_e) := log(q_{\u03b8,\u03c3}(\u03b3^{(e)}) 1[y^{\u03b3^{(e)}}_e = 1]). Hence, the MAP assignment of Equation 14 is the MAP over the structure variables y of the marginals over the continuous variables \u03b3 of the discrete-continuous Gibbs model:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 60, |
| "end": 84, |
| "text": "p M (y 1 , . . . , y m )", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "MOM as MAP of a Continuous-discrete Gibbs Model", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p(y, \u03b3) \u221d exp(\u03a3_e \u03c8_e(\u03b3^{(e)}, y_e))", |
| "eq_num": "(15)" |
| } |
| ], |
| "section": "MOM as MAP of a Continuous-discrete Gibbs Model", |
| "sec_num": null |
| }, |
| { |
| "text": "Marginals Estimation The last detail required for the implementation of the MOM inference approach in Gibbs-perturbation models is recovering the marginals \u03bc e . Unfortunately, we are not aware of any direct way to do that. Instead, we propose to approximate the marginals by sampling K times from the model and computing the marginals using a maximum-likelihood approach on this sample. Particularly, in our first-order dependency parsing example we set \u03bc e to be the number of trees in the K-list that contain the edge e. As noted above, the idea of computing an MST over single-edge marginals has been proposed in Kuncoro et al. (2016) where the marginals were computed in a manner similar to ours, using the K parse trees of their K ensemble members. Our novelty is with respect to the way the dependency trees in the K-list are extracted: while they built on the non-convexity of neural networks and ran an LSTM-based parser (Dyer et al., 2015) from different random initializations, we develop a perturbation-based framework. Our method for K-list generation is often more efficient than that of Kuncoro et al. (2016) . Whereas we train a parser and a noise function and can then generate the K-list by solving K argmax problems, their method requires the training of K LSTM parsers.", |
| "cite_spans": [ |
| { |
| "start": 617, |
| "end": 638, |
| "text": "Kuncoro et al. (2016)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 930, |
| "end": 949, |
| "text": "(Dyer et al., 2015)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1102, |
| "end": 1123, |
| "text": "Kuncoro et al. (2016)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MOM as MAP of a Continuous-discrete Gibbs Model", |
| "sec_num": null |
| }, |
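The marginal-estimation step can be made concrete: count, for each candidate edge, how many of the K sampled trees contain it. For brevity, the final step below picks each word's highest-count head greedily; this greedy selection is an illustrative simplification of my own, since the paper solves an exact MST over the log-marginals, which additionally enforces the tree constraint:

```python
from collections import Counter

def mom_heads(k_list, n):
    """Estimate the edge marginals mu_e as edge counts over the K sampled
    trees (each tree a head array of length n), then return, per word,
    the head with the highest estimated marginal (greedy approximation)."""
    counts = [Counter() for _ in range(n)]
    for heads in k_list:
        for j, h in enumerate(heads):
            counts[j][h] += 1
    return [counts[j].most_common(1)[0][0] for j in range(n)]

# Three sampled trees for a 3-word sentence (0 = root).
k_list = [[2, 0, 2], [2, 0, 2], [3, 0, 2]]
print(mom_heads(k_list, 3))  # → [2, 0, 2]
```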
| { |
| "text": "5 Tasks, Models, and Experiments", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MOM as MAP of a Continuous-discrete Gibbs Model", |
| "sec_num": null |
| }, |
| { |
| "text": "Data. We consider two dependency parsing tasks: cross-lingual and monolingual but lightly supervised. For both tasks we consider Version 2.0 of the UD Treebanks (Nivre et al., 2016; . 7 The data set consists of 77 corpora from 45 languages. We use the gold POS tags in our experiments.", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 181, |
| "text": "(Nivre et al., 2016;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 184, |
| "end": 185, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We excluded 3 languages (Hindi, Urdu, and Japanese) with 5 corpora from the data set, as all models we experiment with (perturbated or not) demonstrated very poor results on these languages. An analysis revealed that the head-modifier distributions in these five corpora are very different from the corresponding distributions in the other corpora, which might explain the poor performance of the parsers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Task1: Cross-lingual Dependency Parsing. In this setup, for each corpus we train on all the training sets of the corpora in the data set as long as they are of another language (the source languages training sets), and test on the test set of the target corpus. For this purpose, for each of the 72 corpora we constructed a training set of 1000 sentences and a development set of 100 sentences, taken from the training and the development sets of the corpora, respectively. 8 Then, for each target corpus we train the parser parameters (\u03b8) on a training set that consists of the training sets of all the corpora except from those of the target language (the source languages corpora), where for the non-perturbated models (see below) this training set is augmented with the development sets of the source language corpora. For the perturbated models, the development sets of the source languages are used for learning the noise parameter (\u03c3). For test we keep the original test sets of the UD corpora.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To make the data suitable for cross-language transfer we discard the words from the corpora. The parsers are then fed the universal POS tags, which are identical across languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Task2: Lightly Supervised Monolingual Dependency Parsing. For this setup we chose 12 low-resource languages (13 corpora) that have between 300 and 5k training sentences: Danish, Estonian, Greek, Hungarian, Indonesian, Korean, Latvian, Old Church Slavonic, Persian, Turkish (2 corpora), Urdu, and Vietnamese. For each language we randomly sample 300 sentences for its training set and test on its UD Treebank test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In this setup, to keep with the low-resource spirit, we do not learn the noise parameter (\u03c3) but rather use fixed noise parameters for the perturbated models (see below). In contrast to the cross-lingual setup, all the parsers are lexicalized, as this is a monolingual setup.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Previous Work Recent years have seen substantial efforts devoted to our setups. For crosslingual parsing, the proposed approaches include the use of typological features (Naseem et al., 2012; Zhang and Barzilay, 2015; Ponti et al., 2018; Scholivet et al., 2019) , annotation projection and other means of using parallel text from the source and target languages (Hwa et al., 2005; Ganchev et al., 2009; McDonald et al., 2011; Tiedemann, 2014; Ma and Xia, 2014; Rasooli and Collins, 2015; Lacroix et al., 2016; Agi\u0107 et al., 2016; Vilares et al., 2016; , similarity modeling for parser selection (Rosa and Zabokrtsky, 2015) , late decoding and synthetic languages Eisner, 2016, 2018b,a) . Likewise, lightly supervised parsing has been addressed with a variety of approaches, including co-training (Steedman et al., 2003) , self-training (Reichart and Rappoport, 2007) and inter-sentence consistency constraints (Rush et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 191, |
| "text": "(Naseem et al., 2012;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 192, |
| "end": 217, |
| "text": "Zhang and Barzilay, 2015;", |
| "ref_id": "BIBREF71" |
| }, |
| { |
| "start": 218, |
| "end": 237, |
| "text": "Ponti et al., 2018;", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 238, |
| "end": 261, |
| "text": "Scholivet et al., 2019)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 362, |
| "end": 380, |
| "text": "(Hwa et al., 2005;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 381, |
| "end": 402, |
| "text": "Ganchev et al., 2009;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 403, |
| "end": 425, |
| "text": "McDonald et al., 2011;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 426, |
| "end": 442, |
| "text": "Tiedemann, 2014;", |
| "ref_id": "BIBREF62" |
| }, |
| { |
| "start": 443, |
| "end": 460, |
| "text": "Ma and Xia, 2014;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 461, |
| "end": 487, |
| "text": "Rasooli and Collins, 2015;", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 488, |
| "end": 509, |
| "text": "Lacroix et al., 2016;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 510, |
| "end": 528, |
| "text": "Agi\u0107 et al., 2016;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 529, |
| "end": 550, |
| "text": "Vilares et al., 2016;", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 594, |
| "end": 621, |
| "text": "(Rosa and Zabokrtsky, 2015)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 662, |
| "end": 684, |
| "text": "Eisner, 2016, 2018b,a)", |
| "ref_id": null |
| }, |
| { |
| "start": 795, |
| "end": 818, |
| "text": "(Steedman et al., 2003)", |
| "ref_id": "BIBREF56" |
| }, |
| { |
| "start": 835, |
| "end": 865, |
| "text": "(Reichart and Rappoport, 2007)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 909, |
| "end": 928, |
| "text": "(Rush et al., 2012)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Our goal is to provide a technique that can enhance any machine learning model for structured prediction in NLP in cases where high quality parameter estimation is challenging and the argmax solution is likely not to be the highest quality solution. We choose the tasks of crosslingual and lightly supervised dependency parsing since they form prominent NLP examples for our problem. We hence focus our experiments on an in-depth exploration of the impact of our framework on a dependency parser, rather than on a thorough comparison to previously proposed approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Parsing model. We implemented our method within the linear time incremental parser of Huang and Sagae (2010). 9 Although our method is applicable to any parameterized data-driven machine learning model, including deep neural networks, we chose to focus here on a linear parser in which noise injection is straight-forward: all the weights in the weight vector of the model are perturbated. We chose to avoid implementation within LSTM-based parsers (Dyer et al., 2015; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) , as in such models the perturbation parameters may be multiplied by each other (due to the deep, recurrent, nature of the network) causing second-order effects. We leave decisions relevant for neural parsing, (e.g., which subset of the LSTM parameter set should be perturbated in order to achieve the most effective model) for future research.", |
| "cite_spans": [ |
| { |
| "start": 449, |
| "end": 468, |
| "text": "(Dyer et al., 2015;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 469, |
| "end": 500, |
| "text": "Kiperwasser and Goldberg, 2016;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 501, |
| "end": 525, |
| "text": "Dozat and Manning, 2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models and Experiments", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We compare seven models. The main two models are our perturbation-based parsing models, where the variance is learned from data. We consider additive learned noise (ALN) and multiplicative learned noise (MLN) (Equations 6 and 7). In order to quantify the importance of data-driven noise learning we compare to two identical models where the variance is not learned from data but is rather fixed to be 1. 10 These baselines are denoted with AFN and MFN, for additive fixed noise and multiplicative fixed noise, respectively. As noted above, for the monolingual setup we do not implement the ALN and MLN models, to keep with the small training data spirit.", |
| "cite_spans": [ |
| { |
| "start": 404, |
| "end": 406, |
| "text": "10", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models and Baselines", |
| "sec_num": null |
| }, |
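The additive vs. multiplicative distinction can be sketched as follows; since Equations 6 and 7 are not reproduced in this excerpt, the exact noise laws below (Gaussian noise added to, or scaling, each weight) are an assumption for illustration only:

```python
import random

def perturb(theta, sigma, mode="additive"):
    """Return one perturbed copy of the weight vector theta.
    'additive': theta_e + gamma_e with gamma_e ~ N(0, sigma^2) (assumed law).
    'multiplicative': theta_e * gamma_e with gamma_e ~ N(1, sigma^2) (assumed law)."""
    if mode == "additive":
        return [t + random.gauss(0.0, sigma) for t in theta]
    return [t * random.gauss(1.0, sigma) for t in theta]

# Fixed noise (AFN/MFN) corresponds to sigma = 1; ALN/MLN learn sigma instead.
random.seed(0)
theta = [0.5, -1.2, 3.0]
print(perturb(theta, sigma=1.0, mode="additive"))
```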
| { |
| "text": "The fifth model is the baseline \"1-best\" parserthat is, the linear incremental parser with its original inference algorithm that outputs the solution with the best score under the model's scoring function. The sixth model, denoted as the ''K-best parser'' is a variant of the incremental parser that outputs the K top scoring solutions under the parser's scoring function. The K-best inference algorithm is described in Huang and Sagae (2010) and is implemented in the parser code that we use. Finally, although we do not explore the integration of perturbations into LSTM-based parsers in this paper, we do want to verify that our methods can boost a linear parser to improve over such neural parsers. For this aim, we also compare our results to the 1-best solution of the transition-based Table 1 : Results summary, cross-lingual parsing, K = 100. We report average (Av.) and median (Md.) UAS (across languages) of each model with MOM inference (M) and with an oracle that chooses the best tree out of the K-list produced by the model (O). The # Cor. columns report the number of corpora for which the model is the best scoring one (in case two models perform best on the same language, it counts for both). For 1-best and KG (1-best), both MOM (M) and Oracle (O) refer to the single tree produced by the model.", |
| "cite_spans": [ |
| { |
| "start": 420, |
| "end": 442, |
| "text": "Huang and Sagae (2010)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 792, |
| "end": 799, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Models and Baselines", |
| "sec_num": null |
| }, |
| { |
| "text": "BiLSTM parser of Kiperwasser and Goldberg (2016) . We refer to this parser as KG (1-best). 11 We further explored alternatives to the MOM inference algorithm for distilling the final tree from the various K-lists. Among these are training a feature-rich reranker to extract the best tree from the list, and extracting the tree that is most or least similar to the other trees. As all these alternatives were strongly outperformed by the MOM algorithm, we do not discuss them further.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 48, |
| "text": "Kiperwasser and Goldberg (2016)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 91, |
| "end": 93, |
| "text": "11", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models and Baselines", |
| "sec_num": null |
| }, |
| { |
| "text": "Hyper-Parameters The only hyper-parameter of the perturbation method is K, the size of the K-list. As noted in \u00a7 3, K can be estimated using, for example, a grid search for the optimal value on development data. We use K = 100 as the main K value throughout our experiments. However, to better understand the behavior of our models as a function of K, we also consider the setups where K = 10 and K = 200. 12 All hyper-parameters of both the incremental parser and the baseline BiLSTM parser are set to the default values that come with the authors' code.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models and Baselines", |
| "sec_num": null |
| }, |
| { |
| "text": "Cross-lingual Results: MOM Inference. Our results are summarized in Table 1 . The final trees extracted by the MOM inference algorithm from the K-lists of the perturbated models with learned noise (the additive model ALN and the multiplicative model MLN) are clearly the best ones, with MLN being the best model both in terms of averaged and median UAS (67.4 and 71.4, respectively) and in terms of the number of corpora for which it performs best (39 out of 72).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 68, |
| "end": 75, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Perturbation models with fixed noise (AFN and MFN) compare favorably to K-best inference. However, in comparison to 1-best inference, AFN performs very similarly and MFN is outperformed in terms of averaged and median UAS. This emphasizes the importance of noise (variance) learning from data. Interestingly, the final tree extracted by the MOM algorithm from the parser's K-best list is worse than the parser's 1-best tree (averaged UAS of 58.5 vs. 66.4, median UAS of 62.8 vs. 70.2). Neither the K-best nor the 1-best variant of the incremental parser provides the best UAS on any of the 72 corpora.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The 1-best solution of the KG BiLSTM parser is very similar to the 1-best solution of the incremental parser in terms of averaged and median UAS. This indicates that the incremental parser to which we integrate our perturbation algorithm does not lag behind a more modern neural parser when the training data is not a good representative of the test data-the case of interest in this work. Additionally, the KG parser is less stable-it is the best performing parser on 26 of 72 corpora, but on 34 corpora it is outperformed by the 1-best solution of the incremental parser, of which on 9 corpora the gap is larger than 3%. Detailed per language results are presented in Table 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 670, |
| "end": 677, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Cross-lingual Results: List Quality. Because the focus of this paper is on the quality of the K-list, the table also reports the quality of each model assuming an oracle that selects the best tree from the K-list. Here the table clearly shows that perturbation with learned variance (MLN and ALN) provides substantially better K-lists. For example, MLN achieves an averaged UAS of 80.3, a median UAS of 83.4, and it is the best performing model on 58 of 72 corpora.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
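The oracle evaluation described above is simple to state in code: per sentence, pick the K-list tree with the highest UAS against the gold tree. The sketch below is illustrative, not the authors' code; it assumes trees are represented as head arrays, where `heads[i]` is the index of the head of word `i`.

```python
# Hedged sketch of oracle selection from a K-list (assumed head-array trees).
def uas(pred_heads, gold_heads):
    # Unlabeled attachment score: fraction of words attached to the correct head.
    return sum(p == g for p, g in zip(pred_heads, gold_heads)) / len(gold_heads)

def oracle_best(k_list, gold_heads):
    # The oracle picks the candidate tree with the highest UAS.
    return max(k_list, key=lambda tree: uas(tree, gold_heads))
```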
| { |
| "text": "The gaps from the 1-best and K-best inference algorithms of the incremental parser as well as from the KG BiLSTM parser are substantial in this evaluation. For example, the average and median UAS of the KG BiLSTM parser are only 66.6 and 69.9, reflecting a gap of 13.7 and 13.5 UAS points from MLN. Moreover, the non-perturbated methods do not provide the best results on any of the 72 corpora in this oracle selection evaluation: MLN is the best performing inference algorithm in 58 cases and MFN in 14 cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "As in MOM inference, noise learning (MLN and ALN) continues to outperform perturbation with fixed noise (MFN and AFN) both in terms of averaged and median USA. For example, the averaged UAS of MLN is 80.3 compared to 77.1 for MFN, and the number of corpora on which Figure 2 : Cross-lingual parsing, K = 100. Graphs format is identical to Figure 1 , but the comparison is between the full K-list and the unique trees in the K-list for each model. MLN performs best is 58, compared to 14 of MFN.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 266, |
| "end": 274, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 339, |
| "end": 347, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The oracle results are very important as they indicate that improving the MOM inference method has a great potential to make cross-lingual parsing substantially better. None of the other models we consider extracts K-lists with candidate trees of the quality that our perturbated models do.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We next consider the quality of the full K-lists of the different methods, rather than of the oracle best solutions. Figure 1 (top) compares the averaged UAS of the trees in the 1, 25, 50, 75, and 100 percentiles of the K-lists produced by the various inference methods. The K-lists of the perturbation based methods are clearly better than those of the K-best list, with the ALN, AFN, and MLN methods performing particularly well. Likewise, Figure 1 (bottom) demonstrates that the percentage of trees that fall into higher 10% UAS bins is substantially higher for MLN and ALN compared to K-best inference (the figure considers all the K-lists from the 72 test sets). That is, the perturbated lists are of higher quality than the K-best lists both when the oracle solution is considered and when the full lists are evaluated. Table 2 : Cross-lingual parsing results as a function of K, the size of the K list for the K-best and MLN parsers. A-U and M-U refer to average and median UAS across languages, respectively. #-C refers to the number of corpora for which the model is the best scoring one. (M) refers to MOM inference, while (O) refers to oracle selection of the best tree from the list. A-U-T and M-U-T refer to the average and median number of unique trees in the list, respectively. As noted above, the K-best model cannot generate K trees for all sentences. Figure 2 compares the full lists of MLN and ALN to the unique trees of the lists, in terms of averaged UAS (the bottom graph is limited to MLN, but the pattern for ALN is similar). The consistent pattern we observe is that the average quality of the full lists is higher than that of the unique trees of the lists. This means that the full lists have multiple copies of their higher quality trees, a property we consider desirable as our goal is to sample from the score space of the model and hence higher quality trees should be overrepresented.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 131, |
| "text": "Figure 1 (top)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 442, |
| "end": 459, |
| "text": "Figure 1 (bottom)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 826, |
| "end": 833, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1370, |
| "end": 1378, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
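The percentile analysis above can be reproduced with a short script: score every tree in a sentence's K-list against the gold tree and read off the listed percentiles. This is a sketch under an assumed head-array tree representation, not the authors' evaluation code.

```python
import numpy as np

def uas(pred_heads, gold_heads):
    # Unlabeled attachment score for one tree, in [0, 1].
    pred, gold = np.asarray(pred_heads), np.asarray(gold_heads)
    return float((pred == gold).mean())

def k_list_percentiles(k_list, gold_heads, pcts=(1, 25, 50, 75, 100)):
    # UAS of the trees at the given percentiles of a single K-list.
    scores = [uas(t, gold_heads) for t in k_list]
    return {p: float(np.percentile(scores, p)) for p in pcts}
```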
| { |
| "text": "Cross-lingual Results: Results as a Function of K. Finally, Table 2 compares the K-lists of the MLN and the K-best inference algorithms for list size values (K) of 10 and 200. MLN is clearly much better both when the final tree is selected with MOM inference and when it is selected by the oracle. The two rightmost columns of the table indicate that the number of unique trees is much higher in the K-best list, as discussed above. Table 4 (which is equivalent to Table 1 for crosslingual parsing) and Figure 3 (which is equivalent to Figure 1 ) summarizes the results for the monolingual setup. We present these results more briefly due to space limitations. We recall that in this setup we do not learn the noise, due to the shortage of training data, but rather used the fixed noise variance parameter of 1 ( \u00a75.2).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 60, |
| "end": 67, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 433, |
| "end": 440, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 465, |
| "end": 472, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 503, |
| "end": 511, |
| "text": "Figure 3", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 536, |
| "end": 544, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The table shows that MFN is the best performing model both when MOM inference is used and when the best tree is selected by an oracle. As in the cross-lingual setup, the gap in the oracle selection case is much larger (e.g., an averaged UAS gap of 14.8 points from the 1-best parser, the second best model) than in the MOM inference setup (an averaged UAS gap of 1.5 points from 1-best).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lightly Supervised Mono-lingual Results", |
| "sec_num": null |
| }, |
| { |
| "text": "However, in certain aspects the results in this setup indicate a stronger impact of perturbations. First, MFN performs best on 12 of 13 corpora with MOM inference and in 13 of 13 corpora with oracle selection. Moreover, its gap from the BiLSTM parser is larger than in the cross-lingual setup, probably due to the strong dependence of neural models on large training corpora.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lightly Supervised Mono-lingual Results", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, Figure 3 presents a similar effect to Figure 1 . The K-lists of the perturbated models are clearly better than those of the K-best inference, which is reflected both by the percentile analysis (top graph) and the UAS histogram that is taken across all 13 experiments (bottom graph).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 17, |
| "text": "Figure 3", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 47, |
| "end": 55, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lightly Supervised Mono-lingual Results", |
| "sec_num": null |
| }, |
| { |
| "text": "Our experimental setup has made several limiting assumptions. Here we address three of these assumptions and explore the extent to which they reflect true limitations of our framework.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Additional Setups and Limitations", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Additional Task: Cross-lingual POS Tagging Our main results were achieved with a single incremental linear parser. We next explore the impact of our framework on another task: crosslingual POS tagging. Training and development are performed with the training and development portions of the English (en) UD corpus (16371 and 3414 sentences, respectively) and the trained model is applied to six languages (11 corpora) from four different families: Italian Portuguese (both are Italic, Romance), modern Hebrew and Arabic (both are Semitic), Chinese and Japanese.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Additional Setups and Limitations", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our POS tagger is a BiLSTM with two fully connected (FC) classification layers that are fed with the hidden vector produced for each input word. MLN noise was injected only to the final FC layer to avoid second-order effects where perturbation parameters are multiplied by each other. While we consider here a deep learning model, the noise injection scheme is very simple. 13 To close the lexical gap between languages we train the English model with the English fastText word embeddings (Bojanowski et al., 2017; Grave 13 BiLSTM layer sizes are: word embedding: 300, output representations: 256, first FC: 512, second FC: 216. et al.,2018 . Then, at test time the target language fastText embeddings are mapped to a bilingual space with the English embeddings using the Babylon alignment matrices (Smith et al., 2017) .", |
| "cite_spans": [ |
| { |
| "start": 374, |
| "end": 376, |
| "text": "13", |
| "ref_id": null |
| }, |
| { |
| "start": 489, |
| "end": 514, |
| "text": "(Bojanowski et al., 2017;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 515, |
| "end": 523, |
| "text": "Grave 13", |
| "ref_id": null |
| }, |
| { |
| "start": 620, |
| "end": 640, |
| "text": "FC: 216. et al.,2018", |
| "ref_id": null |
| }, |
| { |
| "start": 799, |
| "end": 819, |
| "text": "(Smith et al., 2017)", |
| "ref_id": "BIBREF52" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Additional Setups and Limitations", |
| "sec_num": "7" |
| }, |
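The embedding alignment step described above amounts to a matrix product: target-language vectors are multiplied by a pre-computed alignment matrix so that they become comparable with the English vectors. The function names below are illustrative assumptions; the actual matrices come from the Babylon resources of Smith et al. (2017).

```python
import numpy as np

def map_to_english_space(target_emb, alignment):
    # target_emb: (vocab_size, d) fastText vectors for the target language.
    # alignment: (d, d) matrix mapping the target space onto the English one.
    return target_emb @ alignment

def nearest_english(word_vec, alignment, english_emb):
    # Map one target word and return the index of its nearest English vector
    # by cosine similarity (a common sanity check for bilingual spaces).
    v = word_vec @ alignment
    norms = np.linalg.norm(english_emb, axis=1) * np.linalg.norm(v) + 1e-12
    sims = (english_emb @ v) / norms
    return int(np.argmax(sims))
```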
| { |
| "text": "We consider a K = 100 list size. For our MLN method we perform greed search over two ranges of the noise parameter: [0.001, 0.01] and [0.1, 0.5]. Noticing that BiLSTMs predict the POS of each word independently, beam search cannot be applied for K-best list generation in this model. Hence, we generate the K-best list with a greedy search strategy that gets the 1-best solution of the model as input and iteratively makes a single word-level POS change with the minimal (negative) impact on the model score. When we do that, we keep track of previously generated solutions so that to generate K unique solutions. We distilled the final solution from the K-lists (ours and the K-best) with a per-word majority vote.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Additional Setups and Limitations", |
| "sec_num": "7" |
| }, |
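The greedy K-list generation and per-word majority vote described above can be sketched as follows. This is a best-first variant of the stated procedure under assumed per-word score dictionaries; it is not the authors' implementation.

```python
import heapq
from collections import Counter

def greedy_k_list(word_scores, k):
    # word_scores: one {tag: score} dict per word (the tagger scores words
    # independently). Start from the 1-best tagging and repeatedly apply the
    # single word-level change with the smallest cumulative score loss,
    # tracking seen taggings so that the K solutions are unique.
    one_best = tuple(max(s, key=s.get) for s in word_scores)
    seen = {one_best}
    frontier = [(0.0, one_best)]  # (score loss w.r.t. the 1-best, tagging)
    out = []
    while frontier and len(out) < k:
        loss, tags = heapq.heappop(frontier)
        out.append(list(tags))
        for i, s in enumerate(word_scores):
            for tag in s:
                cand = tags[:i] + (tag,) + tags[i + 1:]
                if cand not in seen:
                    seen.add(cand)
                    heapq.heappush(frontier, (loss + s[tags[i]] - s[tag], cand))
    return out

def majority_vote(k_list):
    # Distill the final solution: per-word majority vote over the K-list.
    return [Counter(col).most_common(1)[0][0] for col in zip(*k_list)]
```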
| { |
| "text": "Our results indicate a clear advantage for the perturbated model. Particularly, for all 11 target corpora it is the final solution of this model that scores best. On average across the 11 corpora, the accuracy of our model is 53.05%, compared with 51.44% of the 1-best solution and 41.56% of the solution distilled from the K-best list. This low number of the latter solution is a result of its low quality lists which contain many poor solutions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Additional Setups and Limitations", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our main results were achieved with gold POS tags. However, in low-resource setups gold POS tags may be unavailable. To explore the impact of gold POS tags availability on our results we run a cross-lingual parsing setup identical to the one of \u00a7 5 with MLN and K = 100, except that the target language sentences are automatically POS tagged before they are fed to the parser. We consider the 11 target corpora of the 6 languages in our cross-lingual POS tagging experiments, and the English-trained non-perturbated BiLSTM tagger. The result pattern we observe is very similar to the cross-lingual parsing with gold POS tags, although the absolute numbers are lower. Particularly, the averaged UAS of the final solution of our model is 29.8, compared to 26.7 for K-best and 28.1 for 1-best. However, the quality of the perturbated list is much higher than that of the K-best list, as is indicated, for example, in the gap between their best oracle solutions (46 vs. 37.6). These results emphasize the importance of high quality POS tags for cross-lingual parsing. Presumably, manual POS tagging is a substantially easier task compared to dependency parsing so this requirement is hopefully not very restricting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-lingual Parsing with Predicted POS Tags", |
| "sec_num": null |
| }, |
| { |
| "text": "Well Resourced Monolingual Parsing Finally, our framework was developed with the motivation of addressing cases where the argmax solution of the model is likely not the highest quality one. We hence focused our experiments in cross-lingual and lightly supervised parsing setups. However, it is still interesting to evaluate our framework in setups where abundant labeled training data from the target language is available. For this aim we implemented an in-language well-resourced parsing setup, identical to the K = 100 lightly supervised parsing setup of \u00a7 5, except that the incremental linear parser and the MLN parameter are trained, developed and tested on the corresponding portions of a single UD corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-lingual Parsing with Predicted POS Tags", |
| "sec_num": null |
| }, |
| { |
| "text": "We run this experiment with 31 corpora of 14 UD languages: Arabic, German, English, Spanish, French, Hebrew, Japanese, Korean, Dutch, Portuguese, Slovenian, Swedish, Vietnamese, and Chinese. We chose these languages in order to experiment with a wide range of corpus sizes. As in \u00a7 5, for the perturbation model the parser is trained on the training set and the noise parameter is learned on the development set, while the base parser is trained on a concatenation of both sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-lingual Parsing with Predicted POS Tags", |
| "sec_num": null |
| }, |
| { |
| "text": "In this more challenging setup, the distilled solution of the perturbated parser does not outperform the 1-best solution: On average across corpora its UAS is 82.5 whereas the 1-best scores 82.3. Interestingly, the distilled solution of the K-best list achieves an average UAS of only 72.9. However, in terms of list quality the perturbation model still excels. For example, the averaged UAS of its oracle best solution is 91.7 compared to 87.3 of the K-best list. Likewise, its 25%, 50%, and 75% percentile solutions score 70.1, 75.2, and 79.6 on average, respectively, while the respective numbers for the K-best list are only 58.2, 63.6, and 69.3. From these results we conclude that our model can substantially contribute to the quality and diversity of the extracted list of solutions even in the well-resourced in-language setup, but that its potential impact on a single final solution is more limited.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cross-lingual Parsing with Predicted POS Tags", |
| "sec_num": null |
| }, |
| { |
| "text": "We presented a perturbation-based framework for structured prediction in NLP. Our algorithmic contribution includes an algorithm for data-driven estimation of the perturbation variance and a MOM algorithm for distilling a final solution from the K-list. An appealing theoretical property of our method is that it can augment any machine learning model, probabilistic or not, and draw samples from a probabilistic model defined on top of that base model. In setups like cross-lingual and lightly supervised parsing where the training and the test data are drawn from different distributions and the argmax solution of the base model is of low quality, our method is valuable in extracting a high quality solution list and it also modestly improves the quality of the final solution. Yet, we note that our current implementation mostly applies to linear models, although we demonstrate initial cross-lingual results with a BiLSTM POS tagger.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In future work we will aim to develop better algorithms for final solution distillation. Our stronger list quality results indicate that an improved distillation algorithm can increase the impact of our framework. Note, however, that MOM is used as part of the noise learning procedure ( \u00a73) which yields high quality lists. We would also like to develop means of effectively applying our ideas to deep learning models. While theoretically our framework equally applies to such models, their layered organization requires a careful selection of the perturbated parameters and noise values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Many machine translation (MT) works aimed to generate diverse K-lists of translated sentences (e.g.,Macherey et al., 2008;Gimpel et al., 2013;Li and Jurafsky, 2016). However, these methods are specific to MT, whereas we focus on a general framework for structured prediction in NLP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "To be more precise, our notation is that of the graphbased first-order dependency parsing problem, where weights are defined over individual candidate dependency arcs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "As noted in \u00a7 1, the other prominent approach, based on ensemble methods, is computationally demanding for high K value, as K different models have to be trained. In the rest of the paper we hence do not focus on this approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In practice, feature-based models are a bit more complicated. For example, linear models typically define \u03b8 e = W \u2022 f e and then the number of random noise variables in the MAP-perturbation approach is |W |. For simplicity of presentation we describe here a model with one parameter per candidate edge (\u03b8 e ) and m noise variables.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
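The setup in this footnote can be illustrated with a small perturb-and-MAP sketch: noise is injected into the weight vector W (one noise variable per weight), the perturbed edge scores θ_e = W · f_e are decoded with argmax, and repeating this K times yields a K-list. The additive Gaussian noise with a single scale σ and the generic `decode` callback are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sample_k_list(W, edge_feats, decode, sigma, k, seed=0):
    # W: (d,) weight vector; edge_feats: (m, d) features of the m candidate
    # edges; decode: maps the m perturbed edge scores to a structure (e.g.,
    # a maximum spanning tree); sigma: noise scale (learned from data in the
    # paper's ALN/MLN models, fixed in AFN/MFN).
    rng = np.random.default_rng(seed)
    k_list = []
    for _ in range(k):
        W_pert = W + sigma * rng.standard_normal(W.shape)  # one noise var per weight
        theta = edge_feats @ W_pert                        # perturbed edge scores
        k_list.append(decode(theta))
    return k_list
```

With sigma = 0 this degenerates to K copies of the 1-best solution; larger sigma trades score for diversity, which is exactly what the learned-variance models tune.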
| { |
| "text": "https://universaldependencies.org/. 8 Eight corpora had less than 1000 training sentences, and 8 corpora had less than 100 development sentences. For these we took the entire training or development set, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/lianghuang3/ lineardpparser.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In our models that do learn the variance, variance values were in the (0,2) range. We hence consider the value of 1 as a decent proxy to the condition where the variance is not learned from data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Code was downloaded from the first author's homepage.12 For K = 200, we set the beam width parameter of the parser's inference algorithm to 5000. Yet, even with this value the parser did not produce 200 trees for all sentences. The same pattern was observed for smaller K values, although less frequently.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the action editor and the reviewers, as well as the members of the IE@Technion NLP group for their valuable feedback and advice. This research was partially funded by ISF personal grants no. 1625/18 and 948/15.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Multilingual projection for parsing truly low-resource languages", |
| "authors": [ |
| { |
| "first": "Zeljko", |
| "middle": [], |
| "last": "Agi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Johannsen", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Plank", |
| "suffix": "" |
| }, |
| { |
| "first": "Natalie", |
| "middle": [], |
| "last": "H\u00e9ctor Mart\u00ednez Alonso", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Schluter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "301--312", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeljko Agi\u0107, Anders Johannsen, Barbara Plank, H\u00e9ctor Mart\u00ednez Alonso, Natalie Schluter, and Anders S\u00f8gaard. 2016. Multilingual projection for parsing truly low-resource languages. Trans- actions of the Association for Computational Linguistics, 4:301-312.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Boosting for efficient model selection for syntactic parsing", |
| "authors": [ |
| { |
| "first": "Rachel", |
| "middle": [], |
| "last": "Bawden", |
| "suffix": "" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Crabb\u00e9", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rachel Bawden and Beno\u00eet Crabb\u00e9. 2016. Boost- ing for efficient model selection for syntactic parsing.In Proceedings of COLING, pages 1-11.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Enriching word vectors with subword information", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association of Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "135--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vec- tors with subword information. Transactions of the Association of Computational Linguistics, 5:135-146.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Bagging predictors", |
| "authors": [ |
| { |
| "first": "Leo", |
| "middle": [], |
| "last": "Breiman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Machine Learning", |
| "volume": "24", |
| "issue": "", |
| "pages": "123--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123-140.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The k best spanning arborescences of a network", |
| "authors": [ |
| { |
| "first": "Paolo", |
| "middle": [ |
| "M" |
| ], |
| "last": "Camerini", |
| "suffix": "" |
| }, |
| { |
| "first": "Luigi", |
| "middle": [], |
| "last": "Fratta", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Maffioli", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Networks", |
| "volume": "10", |
| "issue": "2", |
| "pages": "91--109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paolo M. Camerini, Luigi Fratta, and Francesco Maffioli. 1980. The k best spanning arbores- cences of a network. Networks, 10(2):91-109.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Coarse-to-fine n-best parsing and MaxEnt discriminative reranking", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "173--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt dis- criminative reranking. In Proceedings of ACL, pages 173-180.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Ranking algorithms for named-entity extraction: Boosting and the voted perceptron", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "489--496", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2002. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron.In Proceedings of ACL, pages 489-496.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Discriminative reranking for natural language parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational Linguistics", |
| "volume": "31", |
| "issue": "1", |
| "pages": "25--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins and Terry Koo. 2005. Discrimi- native reranking for natural language parsing. Computational Linguistics, 31(1):25-70.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Deep biaffine attention for neural dependency parsing", |
| "authors": [ |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural depen- dency parsing. In Proceedings of ICLR.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Transitionbased dependency parsing with stack long short-term memory", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Wang", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Austin", |
| "middle": [], |
| "last": "Matthews", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "334--343", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition- based dependency parsing with stack long short-term memory. In Proceedings of ACL, pages 334-343.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Real analysis: Modern techniques and their applications", |
| "authors": [ |
| { |
| "first": "Gerald", |
| "middle": [], |
| "last": "Folland", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerald Folland. 1999. Real analysis: Modern techniques and their applications. John Wiley & Sons. New York.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Dependency grammar induction via bitext projection constraints", |
| "authors": [ |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [], |
| "last": "Gillenwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of ACL-AFNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "369--377", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceedings of ACL-AFNLP, pages 369-377.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A systematic exploration of diversity in machine translation", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Shakhnarovich", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1100--1111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Gimpel, Dhruv Batra, Chris Dyer, and Gregory Shakhnarovich. 2013. A systematic exploration of diversity in machine translation. In Proceedings of EMNLP, pages 1100-1111.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The complexity of ferromagnetic Ising with local fields", |
| "authors": [ |
| { |
| "first": "Ann", |
| "middle": [], |
| "last": "Leslie", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jerrum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Combinatorics Probability and Computing", |
| "volume": "16", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leslie Ann Goldberg and Mark Jerrum. 2007. The complexity of ferromagnetic Ising with local fields. Combinatorics Probability and Computing, 16(1):43.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The k-best paths in Hidden Markov Models. Algorithms and applications to transmembrane protein topology recognition", |
| "authors": [ |
| { |
| "first": "Daniil", |
| "middle": [], |
| "last": "Golod", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniil Golod. 2009. The k-best paths in Hidden Markov Models. Algorithms and applications to transmembrane protein topology recogni- tion. Master's thesis, University of Waterloo.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Learning word vectors for 157 languages", |
| "authors": [ |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Prakhar", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of LREC.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "K-best spanning tree parsing", |
| "authors": [ |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "392--399", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keith Hall. 2007. K-best spanning tree parsing. In Proceedings of ACL, pages 392-399.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "On the partition function and random maximum a-posteriori perturbations", |
| "authors": [ |
| { |
| "first": "Tamir", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "1667--1674", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tamir Hazan and Tommi Jaakkola. 2012. On the partition function and random maximum a-posteriori perturbations. In Proceedings of ICML, pages 1667-1674.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions", |
| "authors": [ |
| { |
| "first": "Tamir", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Subhransu", |
| "middle": [], |
| "last": "Maji", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Keshet", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "1887--1895", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tamir Hazan, Subhransu Maji, Joseph Keshet, and Tommi Jaakkola. 2013. Learning efficient random maximum a-posteriori predictors with non-decomposable loss functions. In Proceed- ings of NIPS, pages 1887-1895.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Perturbations, Optimization, and Statistics", |
| "authors": [ |
| { |
| "first": "Tamir", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Papandreou", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Tarlow", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tamir Hazan, George Papandreou, and Daniel Tarlow. 2016. Perturbations, Optimization, and Statistics, MIT Press.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Dynamic programming for linear-time incremental parsing", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Sagae", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1077--1086", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang and Kenji Sagae. 2010. Dynamic programming for linear-time incremental pars- ing. In Proceedings of ACL, pages 1077-1086.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Bootstrapping parsers via syntactic projection across parallel texts", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Hwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| }, |
| { |
| "first": "Amy", |
| "middle": [], |
| "last": "Weinberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Clara", |
| "middle": [], |
| "last": "Cabezas", |
| "suffix": "" |
| }, |
| { |
| "first": "Okan", |
| "middle": [], |
| "last": "Kolak", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Natural language engineering", |
| "volume": "11", |
| "issue": "3", |
| "pages": "311--325", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Boot- strapping parsers via syntactic projection across parallel texts. Natural language engineering, 11(3):311-325.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Recurrent continuous translation models", |
| "authors": [ |
| { |
| "first": "Nal", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1700--1709", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recur- rent continuous translation models. In Proceed- ings of EMNLP, pages 1700-1709.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "PAC-bayesian approach for minimization of phoneme error rate", |
| "authors": [ |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Keshet", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcallester", |
| "suffix": "" |
| }, |
| { |
| "first": "Tamir", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "2224--2227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joseph Keshet, David McAllester, and Tamir Hazan. 2011. PAC-bayesian approach for min- imization of phoneme error rate. In Proceed- ings of ICASSP, pages 2224-2227.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations", |
| "authors": [ |
| { |
| "first": "Eliyahu", |
| "middle": [], |
| "last": "Kiperwasser", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association of Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "313--327", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association of Computa- tional Linguistics, 4:313-327.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Probabilistic Graphical Models: Principles and Techniques", |
| "authors": [ |
| { |
| "first": "Daphne", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| }, |
| { |
| "first": "Nir", |
| "middle": [], |
| "last": "Friedman", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Bach", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daphne Koller, Nir Friedman, and Francis Bach. 2009. Probabilistic Graphical Models: Prin- ciples and Techniques, MIT Press.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Efficient third-order dependency parsers", |
| "authors": [ |
| { |
| "first": "Terry", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Terry Koo and Michael Collins. 2010. Efficient third-order dependency parsers. In Proceedings of ACL, pages 1-11.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Distilling an ensemble of greedy dependency parsers into one MST parser", |
| "authors": [ |
| { |
| "first": "Adhiguna", |
| "middle": [], |
| "last": "Kuncoro", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Lingpeng", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1744--1753", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one MST parser. In Proceedings of EMNLP, pages 1744-1753.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Frustratingly easy cross-lingual transfer for transitionbased dependency parsing", |
| "authors": [ |
| { |
| "first": "Oph\u00e9lie", |
| "middle": [], |
| "last": "Lacroix", |
| "suffix": "" |
| }, |
| { |
| "first": "Lauriane", |
| "middle": [], |
| "last": "Aufrant", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Wisniewski", |
| "suffix": "" |
| }, |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Yvon", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1058--1063", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oph\u00e9lie Lacroix, Lauriane Aufrant, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2016. Frustrat- ingly easy cross-lingual transfer for transition- based dependency parsing. In Proceedings of HLT-NAACL, pages 1058-1063.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Prob- abilistic models for segmenting and label- ing sequence data. In Proceedings of ICML, pages 282-289.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Mutual information and diverse decoding improve neural machine translation", |
| "authors": [ |
| { |
| "first": "Jiwei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1601.00372v2" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiwei Li and Dan Jurafsky. 2016. Mutual information and diverse decoding improve neural machine translation. arXiv preprint arXiv:1601.00372v2.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Forestbased neural machine translation", |
| "authors": [ |
| { |
| "first": "Chunpeng", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Akihiro", |
| "middle": [], |
| "last": "Tamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Masao", |
| "middle": [], |
| "last": "Utiyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Tiejun", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Eiichiro", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1253--1263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Tiejun Zhao, and Eiichiro Sumita. 2018. Forest- based neural machine translation. In Proceed- ings of ACL, pages 1253-1263.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Unsupervised dependency parsing with transferring distribution via parallel guidance and entropy regularization", |
| "authors": [ |
| { |
| "first": "Xuezhe", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Fei", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1337--1348", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xuezhe Ma and Fei Xia. 2014. Unsupervised dependency parsing with transferring distribu- tion via parallel guidance and entropy regular- ization.In Proceedings of ACL, pages 1337-1348.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Latticebased minimum error rate training for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Ignacio", |
| "middle": [], |
| "last": "Thayer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "725--734", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wolfgang Macherey, Franz Josef Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice- based minimum error rate training for statistical machine translation. In Proceedings of EMNLP, pages 725-734.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A* sampling", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Maddison", |
| "suffix": "" |
| }, |
| { |
| "first": "Danny", |
| "middle": [], |
| "last": "Tarlow", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Minka", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "2085--2093", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Maddison, Danny Tarlow, and Tom Minka. 2014. A* sampling. In Proceedings of NIPS, pages 2085-2093.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Turning on the turbo: Fast thirdorder non-projective turbo parsers", |
| "authors": [ |
| { |
| "first": "Andre", |
| "middle": [], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Almeida", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "617--622", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third- order non-projective turbo parsers. In Proceed- ings of ACL, pages 617-622.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Universal dependency annotation for multilingual parsing", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvonne", |
| "middle": [], |
| "last": "Quirmbach-Brundage", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipanjan", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Kuzman", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudia", |
| "middle": [], |
| "last": "Bedini", |
| "suffix": "" |
| }, |
| { |
| "first": "N\u00faria", |
| "middle": [], |
| "last": "Bertomeu Castell\u00f3", |
| "suffix": "" |
| }, |
| { |
| "first": "Jungmee", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "92--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Joakim Nivre,Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Universal dependency annotation for multilin- gual parsing.In Proceedings of ACL, pages 92-97.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Non-projective dependency parsing using spanning tree algorithms", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiril", |
| "middle": [], |
| "last": "Ribarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "523--530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Haji\u010d. 2005. Non-projective depen- dency parsing using spanning tree algorithms. In Proceedings of EMNLP, pages 523-530.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Multi-source transfer of delexicalized dependency parsers", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "62--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of EMNLP, pages 62-72.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Selective sharing for multilingual dependency parsing", |
| "authors": [ |
| { |
| "first": "Tahira", |
| "middle": [], |
| "last": "Naseem", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Globerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "629--637", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multi- lingual dependency parsing. In Proceedings of ACL, pages 629-637.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Automatic summarization. Foundations and Trends in Information Retrieval", |
| "authors": [ |
| { |
| "first": "Ani", |
| "middle": [], |
| "last": "Nenkova", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "5", |
| "issue": "", |
| "pages": "103--233", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ani Nenkova and Kathleen McKeown. 2011. Automatic summarization. Foundations and Trends in Information Retrieval, 5(2-3):103-233.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Universal dependencies v1: A multilingual treebank collection", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Sampo", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| }, |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Silveira", |
| "suffix": "" |
| }, |
| { |
| "first": "Reut", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "1659--1666", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Haji\u010d, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal depen- dencies v1: A multilingual treebank collection. In Proceedings of LREC, pages 1659-1666.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Papandreou", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Yuille", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "193--200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Papandreou and Alan Yuille. 2011. Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In Proceedings of ICCV, pages193-200.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Isomorphic transfer of syntactic structures in cross-lingual NLP", |
| "authors": [ |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Edoardo Maria Ponti", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Vuli\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1531--1542", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edoardo Maria Ponti, Roi Reichart, Anna Korhonen, and Ivan Vuli\u0107. 2018. Isomorphic transfer of syntactic structures in cross-lingual NLP. In Proceedings of ACL, pages 1531-1542.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "A tutorial on Hidden Markov Models and selected applications in speech recognition", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [ |
| "R" |
| ], |
| "last": "Rabiner", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proceedings of the IEEE", |
| "volume": "77", |
| "issue": "2", |
| "pages": "257--286", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence R. Rabiner. 1989. A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Density-driven cross-lingual transfer of dependency parsers", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Sadegh", |
| "suffix": "" |
| }, |
| { |
| "first": "Rasooli", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "328--338", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2015. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of EMNLP, pages 328-338.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Selftraining for enhancement and domain adaptation of statistical parsers trained on small datasets", |
| "authors": [ |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "616--623", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roi Reichart and Ari Rappoport. 2007. Self- training for enhancement and domain adapta- tion of statistical parsers trained on small datasets. In Proceedings of ACL, pages 616-623.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "KL cpos 3-a language similarity measure for delexicalized parser transfer", |
| "authors": [ |
| { |
| "first": "Rudolf", |
| "middle": [], |
| "last": "Rosa", |
| "suffix": "" |
| }, |
| { |
| "first": "Zdenek", |
| "middle": [], |
| "last": "Zabokrtsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of ACL-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "243--249", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rudolf Rosa and Zdenek Zabokrtsky. 2015. KL cpos 3-a language similarity measure for delexicalized parser transfer. In Proceedings of ACL-IJCNLP, pages 243-249.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Improved parsing and POS tagging using inter-sentence consistency constraints", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Globerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1434--1444", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander M. Rush, Roi Reichart, Michael Collins, and Amir Globerson. 2012. Improved parsing and POS tagging using inter-sentence consistency constraints. In Proceedings of EMNLP, pages 1434-1444.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Cross-lingual dependency parsing with late decoding for truly low-resource languages", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Schlichtkrull", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "220--229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Schlichtkrull and Anders S\u00f8gaard. 2017. Cross-lingual dependency parsing with late de- coding for truly low-resource languages. In Proceedings of EACL, pages 220-229.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Typological features for multilingual delexicalised dependency parsing", |
| "authors": [ |
| { |
| "first": "Manon", |
| "middle": [], |
| "last": "Scholivet", |
| "suffix": "" |
| }, |
| { |
| "first": "Franck", |
| "middle": [], |
| "last": "Dary", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Nasr", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Benoit Favre", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "3919--3930", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manon Scholivet, Franck Dary, Alexis Nasr, Benoit Favre, and Carlos Ramisch. 2019. Typo- logical features for multilingual delexicalised dependency parsing. In Proceedings of HLT- NAACL, pages 3919-3930.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Linguistic Structure Prediction", |
| "authors": [ |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Synthesis Lectures on Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Noah A. Smith. 2011. Linguistic Structure Predic- tion, Synthesis Lectures on Human Language Technologies, Morgan and Claypool.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax", |
| "authors": [ |
| { |
| "first": "Samuel", |
| "middle": [ |
| "L" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "H", |
| "P" |
| ], |
| "last": "Turban", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Hamblin", |
| "suffix": "" |
| }, |
| { |
| "first": "Nils", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Hammerla", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of ICLR.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Cross-lingual dependency parsing with late decoding for truly low-resource languages", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael Sejr", |
| "middle": [], |
| "last": "Schlichtkrull", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "220--229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders S\u00f8gaard and Michael Sejr Schlichtkrull. 2017. Cross-lingual dependency parsing with late decoding for truly low-resource languages. In Proceedings of EACL, pages 220-229.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Continuous space translation models with neural networks", |
| "authors": [ |
| { |
| "first": "Le", |
| "middle": [], |
| "last": "Hai Son", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [], |
| "last": "Allauzen", |
| "suffix": "" |
| }, |
| { |
| "first": "Fran\u00e7ois", |
| "middle": [], |
| "last": "Yvon", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "39--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Le Hai Son, Alexandre Allauzen, and Fran\u00e7ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of HLT-NAACL, pages 39-48.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Tightening LP relaxations for MAP using message passing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Sontag", |
| "suffix": "" |
| }, |
| { |
| "first": "Talya", |
| "middle": [], |
| "last": "Meltzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Globerson", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| }, |
| { |
| "first": "Yair", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of UAI", |
| "volume": "", |
| "issue": "", |
| "pages": "503--510", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Sontag, Talya Meltzer, Amir Globerson, Tommi Jaakkola, and Yair Weiss. 2008. Tightening LP relaxations for MAP using message passing. In Proceedings of UAI, pages 503-510.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Bootstrapping statistical parsers from small datasets", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| }, |
| { |
| "first": "Miles", |
| "middle": [], |
| "last": "Osborne", |
| "suffix": "" |
| }, |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Sarkar", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Hwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Hockenmaier", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Ruhlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeremiah", |
| "middle": [], |
| "last": "Crim", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "331--338", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Steedman, Miles Osborne, Anoop Sarkar, Stephen Clark, Rebecca Hwa, Julia Hockenmaier, Paul Ruhlen, Steven Baker, and Jeremiah Crim. 2003. Bootstrapping statistical parsers from small datasets. In Proceedings of EACL, pages 331-338.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Data-driven, PCFG-based and pseudo-PCFG-based models for Chinese dependency parsing", |
| "authors": [ |
| { |
| "first": "Weiwei", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojun", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "301--314", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Weiwei Sun and Xiaojun Wan. 2013. Data-driven, PCFG-based and pseudo-PCFG-based models for Chinese dependency parsing. Transactions of the Association for Computational Linguistics, 1:301-314.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Ensemble models for dependency parsing: Cheap and good?", |
| "authors": [ |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "649--652", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihai Surdeanu and Christopher D. Manning. 2010. Ensemble models for dependency parsing: Cheap and good? In Proceedings of HLT-NAACL, pages 649-652.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "Target language adaptation of discriminative transfer parsers", |
| "authors": [ |
| { |
| "first": "Oscar", |
| "middle": [], |
| "last": "T\u00e4ckstr\u00f6m", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1061--1071", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of HLT-NAACL, pages 1061-1071.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Randomized optimum models for structured prediction", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Tarlow", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Swersky", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zemel", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [ |
| "Prescott" |
| ], |
| "last": "Adams", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan", |
| "middle": [ |
| "J" |
| ], |
| "last": "Frey", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of AISTATS", |
| "volume": "", |
| "issue": "", |
| "pages": "1221--1229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Tarlow, Kevin Swersky, Richard S. Zemel, Ryan Prescott Adams, and Brendan J. Frey. 2012. Randomized optimum models for structured prediction. In Proceedings of AISTATS, pages 1221-1229.", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Effective greedy inference for graph-based non-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "Ilan", |
| "middle": [], |
| "last": "Tchernowitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Liron", |
| "middle": [], |
| "last": "Yedidsion", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "711--720", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilan Tchernowitz, Liron Yedidsion, and Roi Reichart. 2016. Effective greedy inference for graph-based non-projective dependency parsing. In Proceedings of EMNLP, pages 711-720.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Rediscovering annotation projection for cross-lingual parser induction", |
| "authors": [ |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "1854--1864", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J\u00f6rg Tiedemann. 2014. Rediscovering annotation projection for cross-lingual parser induction. In Proceedings of COLING, pages 1854-1864.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "Dependency forest for sentiment analysis", |
| "authors": [ |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenbin", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shouxun", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Natural Language Processing and Chinese Computing", |
| "volume": "", |
| "issue": "", |
| "pages": "69--77", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhaopeng Tu, Wenbin Jiang, Qun Liu, and Shouxun Lin. 2012. Dependency forest for sentiment analysis. In Natural Language Processing and Chinese Computing, pages 69-77. Springer.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Dependency forest for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Young-Sook", |
| "middle": [], |
| "last": "Hwang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Shouxun", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "1092--1100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhaopeng Tu, Yang Liu, Young-Sook Hwang, Qun Liu, and Shouxun Lin. 2010. Dependency forest for statistical machine translation. In Proceedings of COLING, pages 1092-1100.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "One model, two languages: Training bilingual parsers with harmonized treebanks", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vilares", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "G\u00f3mez-Rodr\u00edguez", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [ |
| "A" |
| ], |
| "last": "Alonso", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "425--431", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Vilares, Carlos G\u00f3mez-Rodr\u00edguez, and Miguel A. Alonso. 2016. One model, two languages: Training bilingual parsers with harmonized treebanks. In Proceedings of ACL, pages 425-431.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "Graphical models, exponential families, and variational inference", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [ |
| "J" |
| ], |
| "last": "Wainwright", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Foundations and Trends in Machine Learning", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--305", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin J. Wainwright and Michael I. Jordan. 2008. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "The Galactic Dependencies treebanks: Getting more data by synthesizing new languages", |
| "authors": [ |
| { |
| "first": "Dingquan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "4", |
| "issue": "", |
| "pages": "491--505", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dingquan Wang and Jason Eisner. 2016. The Galactic Dependencies treebanks: Getting more data by synthesizing new languages. Transactions of the Association for Computational Linguistics, 4:491-505.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Surface statistics of an unknown language indicate how to parse it", |
| "authors": [ |
| { |
| "first": "Dingquan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "6", |
| "issue": "", |
| "pages": "667--685", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dingquan Wang and Jason Eisner. 2018a. Surface statistics of an unknown language indicate how to parse it. Transactions of the Association for Computational Linguistics, 6:667-685.", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "Synthetic data made to order: The case of parsing", |
| "authors": [ |
| { |
| "first": "Dingquan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1325--1337", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dingquan Wang and Jason Eisner. 2018b. Synthetic data made to order: The case of parsing. In Proceedings of EMNLP, pages 1325-1337.", |
| "links": null |
| }, |
| "BIBREF70": { |
| "ref_id": "b70", |
| "title": "Generating random spanning trees more quickly than the cover time", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "Bruce" |
| ], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of STOC", |
| "volume": "", |
| "issue": "", |
| "pages": "296--303", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Bruce Wilson. 1996. Generating random spanning trees more quickly than the cover time. In Proceedings of STOC, pages 296-303.", |
| "links": null |
| }, |
| "BIBREF71": { |
| "ref_id": "b71", |
| "title": "Hierarchical low-rank tensors for multilingual transfer parsing", |
| "authors": [ |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1857--1867", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuan Zhang and Regina Barzilay. 2015. Hierarchical low-rank tensors for multilingual transfer parsing. In Proceedings of EMNLP, pages 1857-1867.", |
| "links": null |
| }, |
| "BIBREF72": { |
| "ref_id": "b72", |
| "title": "Steps to excellence: Simple inference with refined scoring of dependency trees", |
| "authors": [ |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Lei", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [], |
| "last": "Globerson", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "197--207", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuan Zhang, Tao Lei, Regina Barzilay, Tommi Jaakkola, and Amir Globerson. 2014. Steps to excellence: Simple inference with refined scoring of dependency trees. In Proceedings of ACL, pages 197-207.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Method Av. UAS (M) Md. UAS (M) Av. UAS (O) Md. UAS (O) # Cor. (M) # Cor.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Cross-lingual parsing, K = 100. Top: Averaged UAS of the trees in the M-th percentile of the K-list of each model (values were computed for M = 1, 25, 50, 75, 100). Bottom: Percentage of trees in each 10% UAS bin, for the K-list of each model. In both cases the values are calculated across all the trees in the lists produced for all test sets.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "text": "MethodA-U (M) M-U (M) A-U (O) M-U (O) #-C (M) #-C (O) A-U-T M-U-T", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Method Av. UAS (M) Md. UAS (M) Av. UAS (O) Md. UAS (O) # Cor. (M) # Cor.", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "num": null, |
| "text": "Mono-lingual parsing, K = 100. Graph format is identical to Figure 1.", |
| "uris": null |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Corpus UAS, cross-lingual parsing," |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Results summary, mono-lingual parsing, K = 100. Table format is identical to Table 1." |
| } |
| } |
| } |
| } |