{
"paper_id": "Q18-1046",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:00.752450Z"
},
"title": "Surface Statistics of an Unknown Language Indicate How to Parse It",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce a novel framework for delexicalized dependency parsing in a new language. We show that useful features of the target language can be extracted automatically from an unparsed corpus, which consists only of gold part-of-speech (POS) sequences. Providing these features to our neural parser enables it to parse sequences like those in the corpus. Strikingly, our system has no supervision in the target language. Rather, it is a multilingual system that is trained end-to-end on a variety of other languages, so it learns a feature extractor that works well. We show experimentally across multiple languages: (1) Features computed from the unparsed corpus improve parsing accuracy. (2) Including thousands of synthetic languages in the training yields further improvement. (3) Despite being computed from unparsed corpora, our learned task-specific features beat previous work's interpretable typological features that require parsed corpora or expert categorization of the language. Our best method improved attachment scores on held-out test languages by an average of 5.6 percentage points over past work that does not inspect the unparsed data (McDonald et al., 2011), and by 20.7 points over past \"grammar induction\" work that does not use training languages (Naseem et al., 2010).",
"pdf_parse": {
"paper_id": "Q18-1046",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce a novel framework for delexicalized dependency parsing in a new language. We show that useful features of the target language can be extracted automatically from an unparsed corpus, which consists only of gold part-of-speech (POS) sequences. Providing these features to our neural parser enables it to parse sequences like those in the corpus. Strikingly, our system has no supervision in the target language. Rather, it is a multilingual system that is trained end-to-end on a variety of other languages, so it learns a feature extractor that works well. We show experimentally across multiple languages: (1) Features computed from the unparsed corpus improve parsing accuracy. (2) Including thousands of synthetic languages in the training yields further improvement. (3) Despite being computed from unparsed corpora, our learned task-specific features beat previous work's interpretable typological features that require parsed corpora or expert categorization of the language. Our best method improved attachment scores on held-out test languages by an average of 5.6 percentage points over past work that does not inspect the unparsed data (McDonald et al., 2011), and by 20.7 points over past \"grammar induction\" work that does not use training languages (Naseem et al., 2010).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing is one of the core natural language processing tasks. It aims to parse a given sentence into its dependency tree: a directed graph of labeled syntactic relations between words. Supervised dependency parsers-which are trained using a \"treebank\" of known parses in the target language-have been very successful (McDonald, 2006; Nivre, 2008; Kiperwasser and Goldberg, 2016). By contrast, the progress of unsupervised dependency parsers has been slow, and they have apparently not been used in any downstream NLP systems. An unsupervised parser does not have access to a treebank, but only to a corpus of unparsed sentences in the target language.",
"cite_spans": [
{
"start": 328,
"end": 344,
"text": "(McDonald, 2006;",
"ref_id": "BIBREF43"
},
{
"start": 345,
"end": 357,
"text": "Nivre, 2008;",
"ref_id": "BIBREF50"
},
{
"start": 358,
"end": 389,
"text": "Kiperwasser and Goldberg, 2016)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unsupervised parsing has been studied for decades. The most common approach is grammar induction (Lari and Young, 1990; Carroll and Charniak, 1992). Grammar induction induces an explicit grammar from the unparsed corpus, such as a probabilistic context-free grammar (PCFG), and uses that to parse sentences of the language. This approach has encountered two major difficulties:",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "(Lari and Young, 1990;",
"ref_id": "BIBREF36"
},
{
"start": 120,
"end": 147,
"text": "Carroll and Charniak, 1992;",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Search error: Most formulations of grammar induction involve optimizing a highly non-convex objective function such as likelihood. The optimization is typically NP-hard (Cohen and , and approximate local search methods tend to get stuck in local optima.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Model error: Likelihood does not correlate well with parsing accuracy anyway (Smith, 2006, Figure 3.2) . Likelihood optimization seeks latent trees that help to predict the observed sentences, but these unsupervised trees may use a non-standard syntactic analysis or even be optimized to predict nonsyntactic properties such as topic. We seek a standard syntactic analysis-what calls the MATCHLINGUIST task.",
"cite_spans": [
{
"start": 79,
"end": 104,
"text": "(Smith, 2006, Figure 3.2)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We address both difficulties by using a supervised learning framework-one whose objective function is easier to optimize and explicitly tries to match linguists' standard syntactic analyses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach is inspired by Wang and Eisner (2017) , who use an unparsed but tagged corpus to predict the fine-grained syntactic typology of a language. For example, they may predict that about 70% of the direct objects fall to the right of the verb. Their system is trained on a large number of (unparsed corpus, true typology) pairs, each representing a different language. With this training, it can generalize to predict typology from the unparsed corpus of a new language. Our approach is similar except that we predict parses rather than just a typology. In both cases, the system is trained to optimize a task-specific quality measure. The system's parameterization can be chosen to simplify optimization (strikingly, the training objective could even be made convex by using a conditional random field architecture) and/or to incorporate linguistically motivated features.",
"cite_spans": [
{
"start": 28,
"end": 50,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The positive results of Wang and Eisner (2017) demonstrate that there are indeed surface clues to syntactic structure in the input corpus, at least if it is POS-tagged (as in their work and ours). However, their method only found global typological information: it did not establish which 70% of the direct objects fell to the right of their verbs, let alone identify which nouns were in fact direct objects of which verbs. That requires a token-level analysis of each sentence, which we undertake in this paper. Again, the basic idea is that instead of predicting interpretable typological properties of a language as Wang and Eisner (2017) did, we will predict a language-specific version of the scoring function that a parser uses to choose among various actions or substructures.",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
},
{
"start": 619,
"end": 641,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our fundamental question is whether gold part-of-speech (POS) sequences carry useful information about the syntax of a language. 1 As we will show, the answer is yes, and the information can be extracted and used to obtain actual parses. This is the same question that has been implicitly asked by previous papers in the unsupervised parsing tradition (see \u00a75). Unsupervised parsing of gold POS sequences is an artificial task, to be sure. 2 Nonetheless, it is a starting point for more ambitious settings that would learn from words and real-world grounding (with or without the POS tags). Even this starting point has proved surprisingly difficult over decades of research, so it has not been clear whether the POS sequences even contain the necessary information.",
"cite_spans": [
{
"start": 439,
"end": 440,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parsing with Supervised Tuning",
"sec_num": "2"
},
{
"text": "Yet this task-like others that engineers, linguists, or human learners might face-might be solvable with general knowledge about the distribution of human languages. An experienced linguist can sometimes puzzle out the structure of a new language. The reader may be willing to guess a parse for the gold POS sequence VERB DET NOUN ADJ DET NOUN. After all, adjectives usually attach to nouns (Naseem et al., 2010), and the adjective in this example seems to attach to the first noun-not to the second, since determiners usually fall at the edge of a noun phrase. Meanwhile, the sequence's sole verb is apparently followed by two noun phrases, which suggests either VSO (verb-subject-object) or VOS order-and VSO is a good guess as it is more common (Dryer and Haspelmath, 2013). Observing a corpus of additional POS sequences might help resolve the question of whether this language is primarily VSO or VOS, for example, by guessing that short noun phrases in the corpus (for example, unmodified pronouns) are more often subjects.",
"cite_spans": [
{
"start": 391,
"end": 412,
"text": "(Naseem et al., 2010)",
"ref_id": "BIBREF48"
},
{
"start": 748,
"end": 776,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parsing with Supervised Tuning",
"sec_num": "2"
},
{
"text": "Thus, we propose to solve the task by training a kind of \"artificial linguist\" that can do such analysis on corpora of new languages. This is a general approach to developing an unsupervised method for a specific type of dataset: tune its structure and hyperparameters so that it works well on actual datasets of that sort, and then apply it to new datasets. For example, consider clustering-the canonical unsupervised problem. What constitutes a useful cluster depends on the type of data and the application. Basu et al. (2013) develop a text clustering system specifically to aid teachers. Their \"Powergrading\" system can group all the student-written answers to a novel question, having been trained on human judgments of answer similarity for other questions. Their novel questions are analogous to our novel languages: their unsupervised system is specifically tailored to match teachers' semantic similarity judgments within any corpus of student answers, just as ours is tailored to match linguists' syntactic judgments within any corpus of human-language POS sequences. Other NLP work on supervised tuning of unsupervised learners includes strapping (Eisner and Karakos, 2005; Karakos et al., 2007) , which tunes with the help of both real and synthetic datasets, just as we will ( \u00a73).",
"cite_spans": [
{
"start": 511,
"end": 529,
"text": "Basu et al. (2013)",
"ref_id": "BIBREF3"
},
{
"start": 1159,
"end": 1185,
"text": "(Eisner and Karakos, 2005;",
"ref_id": "BIBREF14"
},
{
"start": 1186,
"end": 1207,
"text": "Karakos et al., 2007)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parsing with Supervised Tuning",
"sec_num": "2"
},
{
"text": "Are such systems really \"unsupervised\"? Yes, in the sense that they are able to discover desirable structure in a new dataset. Unsupervised learners are normally crafted using assumptions about the data domain. Their structure and hyperparameters may have been manually tuned to produce pleasing results for typical datasets in that domain. In the domain of POS corpora, we simply scale up this practice to automatically tune a large set of parameters, which later guide our system's search for linguist-approved structure on each new human-language dataset. Our system should be regarded as \"supervised\" if the examples are taken to be entire languages: after all, we train it to map unlabeled corpora to usefully labeled corpora. But once trained, it is \"unsupervised\" if the examples are taken to be the sentences within a given corpus: by analyzing the corpus, our system figures out how to map sentences of that language to parses, without any labeled examples in that language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Parsing with Supervised Tuning",
"sec_num": "2"
},
{
"text": "We use two datasets in our experiment: UD: Universal Dependencies version 1.2 (Nivre et al., 2015) A collection of 37 dependency treebanks of 33 languages, tokenized and annotated with a common set of POS tags and dependency relations. 3 In principle, our trained system could be applied to predict UD-style dependency relations in any tokenized natural-language corpus with UD-style POS tags.",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "(Nivre et al., 2015)",
"ref_id": null
},
{
"start": 236,
"end": 237,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "GD: Galactic Dependencies version 1.0 (Wang and Eisner, 2016) A collection of dependency treebanks for 53,428 synthetic languages (of which we will use a subset). A GD treebank is generated by starting with some UD treebank and stochastically permuting the child subtrees of nouns and/or verbs to match their orders in other UD treebanks. For example, one of the GD treebanks reflects what the English UD treebank might have looked like if English had been both VSO (like Irish) and postpositional (like Japanese). This typologically diverse collection of resource-rich synthetic languages aims to propel the development of NLP systems that can handle diverse natural languages, such as multilingual parsers and taggers.",
"cite_spans": [
{
"start": 38,
"end": 61,
"text": "(Wang and Eisner, 2016)",
"ref_id": "BIBREF74"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We hope for our system to do well, on average, at matching real linguist-parsed corpora of real human languages. We therefore tune its parameters \u0398 on such treebanks. UD provides training examples actually drawn from that distribution D over treebanks-but alas, rather few. Thus to better estimate the expected performance of \u0398 under D, we follow Wang and Eisner (2017) and augment our training data with GD's synthetic treebanks.",
"cite_spans": [
{
"start": 347,
"end": 369,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why Synthetic Training Languages?",
"sec_num": "3.1"
},
{
"text": "Ideally we would have sampled these synthetic treebanks from a careful estimate D\u0302 of D: for example, the mean of a Bayesian posterior for D, derived from prior assumptions and UD evidence. However, such adventurous \"extrapolation\" of unseen languages would have required actually constructing such an estimate D\u0302-which would embody a distribution over semantic content and a full theory of universal grammar! The GD treebanks were derived more simply and more conservatively by \"interpolation\" among the actual UD corpora. They combine observed parse trees (which provide attested semantic content) with stochastic word order models trained on observed languages (which attempt to mimic attested patterns for presenting that content). GD's sampling distribution D\u0302 still offers moderately varied synthetic datasets, which remain moderately realistic, as they are limited to phenomena observed in UD.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Synthetic Training Languages?",
"sec_num": "3.1"
},
{
"text": "As Wang and Eisner (2016) pointed out, synthetic examples have been used in many other supervised machine learning settings. A common technique is to exploit invariance: if real image z should be classified as a cat, then so should a rotated version of image z. Our technique is the same! We assume that if real corpus u should be parsed as having certain dependencies among the word tokens, then so should a version of corpus u in which those tokens have been systematically permuted in a linguistically plausible way. 4 This is analogous to how rotation systematically transforms the image (rotating all pixels through the same angle) in a physically plausible way (as real objects do rotate relative to the camera). The systematicity is needed to ensure that the task on synthetic data is feasible. In our case, the synthetic corpus then provides many sentences that have been similarly permuted, which may jointly provide enough clues to guess the word order of this synthetic language (for example, VSO vs. VOS in \u00a72) and thus recover the dependencies. See Wang and Eisner (2018, \u00a72) for related discussion. With enough good synthetic languages to use for training, even nearest-neighbor could be an effective method. That is, one could obtain the parser for a test corpus simply by copying the trained parser for the most similar training corpus (under some metric). Wang and Eisner (2016) explored this approach of \"single-source transfer\" from synthetic languages. Yet with only thousands of synthetic languages, perhaps no single training corpus is sufficiently similar. 5 To draw on patterns in many training corpora to figure out how to parse the test corpus, we will train a single parser that can handle all of the training corpora (Ammar et al., 2016), much as we trained our typological classifier in earlier work (Wang and Eisner, 2017).",
"cite_spans": [
{
"start": 3,
"end": 25,
"text": "Wang and Eisner (2016)",
"ref_id": "BIBREF74"
},
{
"start": 1062,
"end": 1088,
"text": "Wang and Eisner (2018, \u00a72)",
"ref_id": null
},
{
"start": 1373,
"end": 1395,
"text": "Wang and Eisner (2016)",
"ref_id": "BIBREF74"
},
{
"start": 1745,
"end": 1765,
"text": "(Ammar et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1830,
"end": 1853,
"text": "(Wang and Eisner, 2017)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why Synthetic Training Languages?",
"sec_num": "3.1"
},
{
"text": "An unsupervised parser for language \u2113 is built without any gold parse trees for \u2113. However, we assume a corpus u of unparsed but POS-tagged sentences of \u2113 is available. From u, we will extract statistics T(u) that are informative about the syntactic structure of \u2113, to guide us in parsing POS-tagged sentences of \u2113.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "Overall, our approach is to train a \"language-agnostic\" parser-one that does not know what language it is parsing in. It produces a parse tree \u0177 = Parse_\u0398(x; u) from a sentence x, constructing T(u) as an intermediate quantity that carries (for example) typological information about \u2113. The parameters \u0398 are shared by all languages, and determine how to construct and use T. To learn them, we will allow \u2113 to range over training languages, and then test our ability to parse when \u2113 ranges over novel test languages. Our Parse_\u0398(x; u) system has two stages. First it uses a neural network to compute T(u) \u2208 R^m, a vector that represents the typological properties of \u2113 and resembles the language embedding of Ammar et al. (2016). Then it parses sentence x while taking T(u) as an additional input. We will give details of these two components in \u00a76 and \u00a77.",
"cite_spans": [
{
"start": 715,
"end": 721,
"text": "(2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "We assume in this paper that the input sentence x is given as a POS sequence: that is, our parser is delexicalized. This spares us from also needing language-specific lexical parameters associated with the specific vocabulary of each language, a problem that we leave to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "We will choose our universal parameter values by minimizing an estimate of their expected loss,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0398 = argmin_\u0398 mean_{\u2113 \u2208 L_train} Loss(\u0398; x^{(\u2113)}, y^{(\u2113)}, u^{(\u2113)})",
"eq_num": "(1)"
}
],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "where L_train is a collection of training languages (ideally drawn IID from the distribution D of possible human languages) for which some syntactic information is available. Specifically, each training language \u2113 has a treebank (x^{(\u2113)}, y^{(\u2113)}), where x^{(\u2113)} is a collection of POS-tagged sentences whose correct dependency trees are given by y^{(\u2113)}. Each \u2113 also has an unparsed corpus u^{(\u2113)} (possibly equal to x^{(\u2113)} or containing x^{(\u2113)}). We can therefore define the parser's loss on training language \u2113 as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "Loss(\u0398; x^{(\u2113)}, y^{(\u2113)}, u^{(\u2113)}) = mean_{(x,y) \u2208 (x^{(\u2113)}, y^{(\u2113)})} loss(\u0177, y), where \u0177 = Parse_\u0398(x; u^{(\u2113)}) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "where loss(. . .) is a task-specific per-sentence loss (defined in \u00a78.1) that evaluates the parser's output \u0177 on sentence x against x's correct tree y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "5 Related Work",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "4"
},
{
"text": "Many papers rely on some universal learning procedure to determine T(u) (see \u00a74) for a target language. For example, T(\u2022) may be the Expectation-Maximization (EM) algorithm, yielding a PCFG T(u) that fully determines a CKY parser (Carroll and Charniak, 1992; . Since EM and CKY are fixed algorithms, this approach has no trainable parameters. Grammar induction tries to turn an unsupervised corpus into a generative grammar. The approach of the previous paragraph is often modified to reduce model error or search error ( \u00a71). To reduce model error, many papers have used dependency grammar, with training objectives that incorporate notions like lexical attraction (Yuret, 1998) and grammatical bigrams (Paskin, 2001 (Paskin, , 2002 . The dependency model with valence (DMV) was the first method to beat a simple right-branching heuristic. Headden III et al. (2009) and Spitkovsky et al. (2012) made the DMV more expressive by considering higher-order valency or punctuation. To reduce search error, strategies for eliminating or escaping local optima have included convexified objectives (Wang et al., 2008; Gimpel and Smith, 2012) , smart initialization Mare\u010dek and Straka, 2013) , search bias Eisner, 2005, 2006; Naseem et al., 2010; Gillenwater et al., 2010) , branch-and-bound search (Gormley and Eisner, 2013) , and switching objectives (Spitkovsky et al., 2013) .",
"cite_spans": [
{
"start": 230,
"end": 258,
"text": "(Carroll and Charniak, 1992;",
"ref_id": "BIBREF4"
},
{
"start": 666,
"end": 679,
"text": "(Yuret, 1998)",
"ref_id": "BIBREF80"
},
{
"start": 704,
"end": 717,
"text": "(Paskin, 2001",
"ref_id": "BIBREF55"
},
{
"start": 718,
"end": 733,
"text": "(Paskin, , 2002",
"ref_id": "BIBREF56"
},
{
"start": 841,
"end": 866,
"text": "Headden III et al. (2009)",
"ref_id": "BIBREF26"
},
{
"start": 1090,
"end": 1109,
"text": "(Wang et al., 2008;",
"ref_id": "BIBREF77"
},
{
"start": 1110,
"end": 1133,
"text": "Gimpel and Smith, 2012)",
"ref_id": "BIBREF20"
},
{
"start": 1157,
"end": 1182,
"text": "Mare\u010dek and Straka, 2013)",
"ref_id": "BIBREF40"
},
{
"start": 1197,
"end": 1216,
"text": "Eisner, 2005, 2006;",
"ref_id": null
},
{
"start": 1217,
"end": 1237,
"text": "Naseem et al., 2010;",
"ref_id": "BIBREF48"
},
{
"start": 1238,
"end": 1263,
"text": "Gillenwater et al., 2010)",
"ref_id": "BIBREF19"
},
{
"start": 1290,
"end": 1316,
"text": "(Gormley and Eisner, 2013)",
"ref_id": "BIBREF22"
},
{
"start": 1344,
"end": 1369,
"text": "(Spitkovsky et al., 2013)",
"ref_id": "BIBREF67"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Per-Language Learning",
"sec_num": "5.1"
},
{
"text": "Unsupervised parsing (which is also our task) tries to turn the same corpus directly into a treebank, without necessarily finding a grammar. We discuss some recent milestones here. Grave and Elhadad (2015) propose a transductive learning objective for unsupervised parsing, and a convex relaxation of it. (Jiang et al. (2017) combined that work with grammar induction.) Mart\u00ednez Alonso et al. (2017) create an unsupervised dependency parser that is formally similar to ours in that it uses cross-linguistic knowledge as well as statistics computed from a corpus of POS sequences in the target language. However, its cross-linguistic knowledge is hand-coded: namely, the set of POS-to-POS dependencies that are allowed by the UD annotation scheme, and the typical directions for some of these dependencies. The only corpus statistic extracted from u is whether ADP-NOMINAL or NOMINAL-ADP bigrams are more frequent, 6 which distinguishes prepositional from postpositional languages. The actual parser starts by identifying the head word as the most \"central\" word according to a PageRank (Page et al., 1999) analysis of the graph of candidate edges, and proceeds by greedily attaching words of decreasing PageRank at lower depths in the tree.",
"cite_spans": [
{
"start": 181,
"end": 205,
"text": "Grave and Elhadad (2015)",
"ref_id": "BIBREF23"
},
{
"start": 305,
"end": 325,
"text": "(Jiang et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 1083,
"end": 1102,
"text": "(Page et al., 1999)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Per-Language Learning",
"sec_num": "5.1"
},
{
"text": "This approach parses a \"target\" language using the treebanks of other resource-rich languages as \"source\" languages. There are two main variants.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Language Learning",
"sec_num": "5.2"
},
{
"text": "Memory-based. This method trains a supervised parsing model on each source treebank. It uses these (delexicalized) source-language models to help parse the target sentence, favoring sources that are similar to the target language. A common similarity measure (Rosa and \u017dabokrtsk\u00fd, 2015a) considers the probability of the target language's POS-corpus u under a trigram language model of source-language POS sequences.",
"cite_spans": [
{
"start": 259,
"end": 287,
"text": "(Rosa and \u017dabokrtsk\u00fd, 2015a)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Language Learning",
"sec_num": "5.2"
},
{
"text": "Single-source transfer (SST) (Rosa and \u017dabokrtsk\u00fd, 2015a; Wang and Eisner, 2016) simply uses the parser for the most similar source treebank. Multi-source transfer (MST) (Rosa and \u017dabokrtsk\u00fd, 2015a) parses the target POS sequence with each of the source parsers, and then combines these parses into a consensus tree using the Chu-Liu-Edmonds algorithm (Chu, 1965; Edmonds, 1967) . As a faster variant, model interpolation (Rosa and \u017dabokrtsk\u00fd, 2015b ) builds a consensus model for the target language (via a weighted average of source models' parameters), rather than a consensus parse for each target sentence separately.",
"cite_spans": [
{
"start": 29,
"end": 57,
"text": "(Rosa and \u017dabokrtsk\u00fd, 2015a;",
"ref_id": "BIBREF59"
},
{
"start": 58,
"end": 80,
"text": "Wang and Eisner, 2016)",
"ref_id": "BIBREF74"
},
{
"start": 170,
"end": 198,
"text": "(Rosa and \u017dabokrtsk\u00fd, 2015a)",
"ref_id": "BIBREF59"
},
{
"start": 352,
"end": 363,
"text": "(Chu, 1965;",
"ref_id": "BIBREF7"
},
{
"start": 364,
"end": 378,
"text": "Edmonds, 1967)",
"ref_id": null
},
{
"start": 422,
"end": 449,
"text": "(Rosa and \u017dabokrtsk\u00fd, 2015b",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Language Learning",
"sec_num": "5.2"
},
{
"text": "Memory-based methods require storing models for all source treebanks, which is expensive when we include thousands of GD treebanks ( \u00a73).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Language Learning",
"sec_num": "5.2"
},
{
"text": "Model-based. This method trains a single language-agnostic model. McDonald et al. (2011) train a delexicalized parser on the concatenation of all source treebanks, achieving a large gain over grammar induction. This parser can learn universals such as the preference for determiners to attach to nouns (which was hard-coded by Naseem et al. (2010)). However, it is expected to parse a sentence x without being told the language or even a corpus u, possibly by guessing properties of the language from the configurations it encounters in the single sentence x alone.",
"cite_spans": [
{
"start": 324,
"end": 344,
"text": "Naseem et al. (2010)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Language Learning",
"sec_num": "5.2"
},
{
"text": "Further gains were achieved (Naseem et al., 2012; T\u00e4ckstr\u00f6m et al., 2013b; Zhang and Barzilay, 2015; Ammar et al., 2016) by providing the parser with about 10 typological properties of x's language-for example, whether direct objects generally fall to the right of the verb-as listed in the World Atlas of Linguistic Structures (Dryer and Haspelmath, 2013) .",
"cite_spans": [
{
"start": 28,
"end": 49,
"text": "(Naseem et al., 2012;",
"ref_id": "BIBREF47"
},
{
"start": 50,
"end": 74,
"text": "T\u00e4ckstr\u00f6m et al., 2013b;",
"ref_id": "BIBREF69"
},
{
"start": 75,
"end": 100,
"text": "Zhang and Barzilay, 2015;",
"ref_id": "BIBREF81"
},
{
"start": 101,
"end": 120,
"text": "Ammar et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 328,
"end": 356,
"text": "(Dryer and Haspelmath, 2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Language Learning",
"sec_num": "5.2"
},
{
"text": "However, relying on WALS raises some issues. (1) The unknown language might not be in WALS. 7 (2) Some typological features are missing for some languages. (3) All the WALS features are categorical values, which loses useful information about tendencies (for example, how often the canonical word order is violated). (4) Not all WALS features are useful-only 56 of them pertain to word order, and only 8 of those have been used in past work. (5) With a richer parser (a stack LSTM dependency parser), WALS features do not appear to help at all on unknown languages (Ammar et al., 2016, footnote 30).",
"cite_spans": [
{
"start": 92,
"end": 93,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Language Learning",
"sec_num": "5.2"
},
{
"text": "Some other work on generalizing from source to target languages assumes the availability of source-target parallel data, or bitext. Two uses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Parallel Data",
"sec_num": "5.3"
},
{
"text": "Induction of multilingual word embeddings. Similar to universal POS tags, multilingual word embeddings serve as a universal representation that bridges the lexical differences among languages. Guo et al. (2016) proposed two approaches: (1) Training a variant of the skip-gram model (Mikolov et al., 2013) by using bilingual sets of context words. (2) Generating the embedding of each target word by averaging the embeddings of the source words to which it is aligned.",
"cite_spans": [
{
"start": 193,
"end": 210,
"text": "Guo et al. (2016)",
"ref_id": "BIBREF25"
},
{
"start": 282,
"end": 304,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Parallel Data",
"sec_num": "5.3"
},
{
"text": "Annotation projection. Given aligned bitext, one can generate an approximate parse for a target sentence by \"projecting\" the parse tree of the corresponding source sentence. A target-language parser can then be trained from these approximate parses. The idea was originally proposed by Yarowsky et al. (2001) , and then applied to dependency parsing on low-resource languages (Hwa et al., 2005; Ganchev et al., 2009; Smith and Eisner, 2009; Tiedemann, 2014, inter alia) . McDonald et al. (2011) extend this approach to multiple source languages by projected transfer. Later work in this vein mainly tries to improve the approximate parses, including translating the source treebanks into the target language with an off-the-shelf machine translation system (Tiedemann et al., 2014), augmenting the trees with weights (Agi\u0107 et al., 2016), and using only partial trees with high-confidence alignments Collins, 2015, 2017; Lacroix et al., 2016) .",
"cite_spans": [
{
"start": 286,
"end": 308,
"text": "Yarowsky et al. (2001)",
"ref_id": "BIBREF79"
},
{
"start": 376,
"end": 394,
"text": "(Hwa et al., 2005;",
"ref_id": "BIBREF27"
},
{
"start": 395,
"end": 416,
"text": "Ganchev et al., 2009;",
"ref_id": "BIBREF17"
},
{
"start": 417,
"end": 440,
"text": "Smith and Eisner, 2009;",
"ref_id": "BIBREF62"
},
{
"start": 441,
"end": 469,
"text": "Tiedemann, 2014, inter alia)",
"ref_id": null
},
{
"start": 899,
"end": 919,
"text": "Collins, 2015, 2017;",
"ref_id": null
},
{
"start": 920,
"end": 941,
"text": "Lacroix et al., 2016)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Exploiting Parallel Data",
"sec_num": "5.3"
},
{
"text": "Our own approach can be categorized as model-based multi-language learning with no parallel text or target-side supervision. However, we also analyze an unparsed corpus u of the target language, as the per-language systems of \u00a75.1 do. Our analysis of u does not produce a specialized target grammar or parser, but only extracts a target vector T(u) to be fed to the language-agnostic parser. The analyzer is trained jointly with the parser, over many languages. 6 The Typology Component Wang and Eisner (2017) extract typological properties of a language from its POS-tagged corpus u, in effect predicting syntactic structure from superficial features. Like them, we compute a hidden layer T(u) using a standard multilayer perceptron architecture, for example,",
"cite_spans": [
{
"start": 486,
"end": 508,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Situating Our Work",
"sec_num": "5.4"
},
{
"text": "T(u) = \u03c8(W\u03c0(u) + b_W) \u2208 R^h (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Situating Our Work",
"sec_num": "5.4"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Situating Our Work",
"sec_num": "5.4"
},
{
"text": "\u03c0(u) \u2208 R^d is the vector of surface features of u, W \u2208 R^{h\u00d7d} maps \u03c0(u) into an h-dimensional space, b_W \u2208 R^h",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Situating Our Work",
"sec_num": "5.4"
},
{
"text": "is a bias vector, and \u03c8 is an element-wise activation function. While equation (3) has only 1 layer, we explore versions with from 0 to 3 layers (where T(u) = \u03c0(u) in the 0-layer case). A 2-layer version is shown in Figure 1. The number of layers is chosen by cross-validation, as are h and the \u03c8 function.",
"cite_spans": [],
"ref_spans": [
{
"start": 216,
"end": 222,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Situating Our Work",
"sec_num": "5.4"
},
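The one-layer case of equation (3) is just an affine map followed by an element-wise nonlinearity. A minimal NumPy sketch, with hypothetical dimensions (the paper tunes h and the activation ψ by cross-validation; `np.tanh` here is only a stand-in):

```python
import numpy as np

def typology_vector(pi_u, W, b_W, psi=np.tanh):
    """One-layer version of equation (3): T(u) = psi(W pi(u) + b_W).

    pi_u : (d,) surface-feature vector of the corpus u
    W    : (h, d) projection matrix
    b_W  : (h,) bias vector
    psi  : element-wise activation (chosen by cross-validation in the paper)
    """
    return psi(W @ pi_u + b_W)

d, h = 6, 4                       # hypothetical feature and hidden sizes
rng = np.random.default_rng(0)
pi_u = rng.random(d)              # stand-in for the real surface features
T_u = typology_vector(pi_u, rng.standard_normal((h, d)), np.zeros(h))
```

Stacking a second call on the output of the first gives the 2-layer variant shown in Figure 1; the 0-layer case simply passes π(u) through unchanged.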
{
"text": "To define \u03c0(u), we used development data to select the following fast but effective subset of the features proposed by Wang and Eisner (2017) .",
"cite_spans": [
{
"start": 119,
"end": 141,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
{
"text": "Hand-engineered features. Given a token j in a sentence, let its right window R_j be the sequence of POS tags p_{j+1}, . . . , p_{j+w} (padding the sentence as needed with # symbols), where w is the window size. Define g_w(t | j) \u2208 [0, 1] to be the fraction of words in R_j tagged with t. Now, given a corpus u, define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
{
"text": "\u03c0^w_t = mean_j g_w(t | j), \u03c0^w_{t|s} = mean_{j: t_j = s} g_w(t | j)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
{
"text": "where j ranges over tokens of u. The unigram prevalence \u03c0^w_t measures the frequency of t overall, while the bigram prevalence \u03c0^w_{t|s} measures the frequency with which t can be found to the right of an average s tag (in a window of size w). For each of these quantities, we have a corresponding mirror-image quantity (denoted by negating w) by computing it on a reversed version of the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
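The prevalences above are simple corpus statistics. The following sketch computes them for one (t, s, w) triple on a hypothetical toy corpus of POS sequences; names like `prevalences` are illustrative, not from the paper's code:

```python
def right_window_fraction(sent, j, t, w):
    """g_w(t | j): fraction of the w positions after j tagged t ('#' pads)."""
    window = (sent[j + 1:j + 1 + w] + ["#"] * w)[:w]
    return window.count(t) / w

def prevalences(corpus, t, s, w):
    """Return (pi^w_t, pi^w_{t|s}), averaging g_w(t | j) over all tokens j
    of the corpus, and over only the tokens j tagged s, respectively."""
    g_all, g_s = [], []
    for sent in corpus:
        for j, tag in enumerate(sent):
            g = right_window_fraction(sent, j, t, w)
            g_all.append(g)
            if tag == s:
                g_s.append(g)
    pi_t = sum(g_all) / len(g_all)
    pi_t_given_s = sum(g_s) / len(g_s) if g_s else 0.0
    return pi_t, pi_t_given_s

# Toy corpus: NOUN immediately follows every ADP, so pi^1_{NOUN|ADP} = 1.
corpus = [["ADP", "NOUN", "VERB"], ["NOUN", "ADP", "NOUN"]]
pi, pi_cond = prevalences(corpus, t="NOUN", s="ADP", w=1)
```

The mirror-image quantities (negative w) come from running the same code on a reversed copy of each sentence.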
{
"text": "The final hand-engineered \u03c0(u) includes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
{
"text": "\u2022 \u03c0^w_t, for each tag type t and each w \u2208 {1, 3, 8, 100}. This quantity measures how frequently t appears in u.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
{
"text": "\u2022 \u03c0^w_{t|s}//\u03c0^w_t and \u03c0^{\u2212w}_{t|s}//\u03c0^{\u2212w}_t, for each tag type pair s, t and each w \u2208 {1, 3, 8, 100}. We define x//y = min(x/y, 1) to bound the feature values for better generalization. Notice that if w = 1, the log of \u03c0^w_{t|s}/\u03c0^w_t is the bigram pointwise mutual information. Each matched pair of these quantities is intuitively related to the word order typology-for example, if ADPs are more likely to have closely following than closely preceding NOUNs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
{
"text": "(\u03c0^w_{NOUN|ADP}//\u03c0^w_NOUN > \u03c0^{\u2212w}_{NOUN|ADP}//\u03c0^{\u2212w}_NOUN)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
{
"text": ", the language is more likely to be prepositional than postpositional.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
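Two small pieces make the paired features work: the clipped ratio x//y, and the reversed-corpus trick that yields the negative-w mirror features. A sketch (the divide-by-zero guard is my assumption; the paper does not specify it):

```python
def clip_ratio(x, y):
    """x//y = min(x/y, 1), bounding feature values for generalization.
    Guard for y == 0 is an assumption: an unbounded ratio clips to 1."""
    if y == 0:
        return 1.0 if x > 0 else 0.0
    return min(x / y, 1.0)

def reverse_corpus(corpus):
    """Mirror-image (negative-w) features are the same statistics
    computed on a reversed copy of every sentence."""
    return [list(reversed(sent)) for sent in corpus]
```

Comparing clip_ratio of the forward and reversed prevalences for (NOUN, ADP) then gives the prepositional-vs-postpositional signal described above.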
{
"text": "Neural features. In contrast, our neural features automatically learn to extract arbitrary predictive configurations. As Figure 2 shows, we encode each POS-tagged sentence u i \u2208 u using a recurrent neural network, which reads one-hot POS embeddings from left to right, then outputs its final hidden state vector f i as the encoding. The final neural \u03c0(u) is the average encoding of all sentences (average-pooling): that is, the average of all sentence-level configurations. We specifically use a gated recurrent unit (GRU) network (Cho et al., 2014) . The GRU is jointly trained with all other parameters in the system so that it focuses on detecting word-order properties of u that are useful for parsing.",
"cite_spans": [
{
"start": 531,
"end": 549,
"text": "(Cho et al., 2014)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 121,
"end": 129,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Design of the Surface Features \u03c0(u)",
"sec_num": "6.1"
},
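The neural π(u) is the average of per-sentence GRU final states. Below is a from-scratch sketch with untrained random parameters and hypothetical sizes (the paper's GRU is trained end-to-end with the parser; this only shows the encode-then-average-pool data flow):

```python
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

def gru_step(h, x, P):
    """One GRU update (Cho et al., 2014): update gate z, reset gate r,
    candidate state h~, then interpolate."""
    z = sigmoid(P["Wz"] @ x + P["Uz"] @ h + P["bz"])
    r = sigmoid(P["Wr"] @ x + P["Ur"] @ h + P["br"])
    h_tilde = np.tanh(P["Wh"] @ x + P["Uh"] @ (r * h) + P["bh"])
    return (1 - z) * h + z * h_tilde

def encode_corpus(corpus, tag_ids, P, h_dim):
    """pi(u): average-pool the GRU's final hidden state f_i over sentences."""
    finals = []
    for sent in corpus:
        h = np.zeros(h_dim)
        for tag in sent:
            x = np.zeros(len(tag_ids))   # one-hot POS embedding
            x[tag_ids[tag]] = 1.0
            h = gru_step(h, x, P)
        finals.append(h)                 # final state encodes the sentence
    return np.mean(finals, axis=0)       # average-pooling over sentences

tag_ids = {"NOUN": 0, "VERB": 1, "ADP": 2}
h_dim, x_dim = 5, 3                      # hypothetical sizes
rng = np.random.default_rng(1)
P = {k: rng.standard_normal((h_dim, x_dim)) for k in ("Wz", "Wr", "Wh")}
P |= {k: rng.standard_normal((h_dim, h_dim)) for k in ("Uz", "Ur", "Uh")}
P |= {k: np.zeros(h_dim) for k in ("bz", "br", "bh")}
pi_u = encode_corpus([["NOUN", "VERB"], ["ADP", "NOUN", "VERB"]],
                     tag_ids, P, h_dim)
```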
{
"text": "To construct Parse(x; u), we can extend any statistical parsing architecture Parse(x) to be sensitive to T(u). For our experiments, we extend the delexicalized graph-based implementation of the BIST parser (Kiperwasser and Goldberg, 2016)-an arc-factored dependency model with neural context features extracted by a bidirectional LSTM. This recent parser was the state of the art when it was published.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "Given a POS-sentence x and a corpus u, our parser first computes an unlabeled projective tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "argmax y\u2208Y(x) score(x, y; u)",
"eq_num": "(4)"
}
],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "where, letting a range over the arcs in tree y,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "score(x, y; u) = a\u2208y s(\u03c6(a; x, u))",
"eq_num": "(5)"
}
],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "With this definition, the argmax in (4) is computed efficiently by the algorithm of Eisner (1996) .",
"cite_spans": [
{
"start": 84,
"end": 97,
"text": "Eisner (1996)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "s(\u00b7) is a neural scoring function on vectors, s(\u03c6(\u00b7\u00b7\u00b7)) = v \u00b7 tanh(V\u03c6(\u00b7\u00b7\u00b7) + b_V) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "where V is a matrix, b V is a bias vector, and v is a vector, all being parameters in \u0398.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "The function \u03c6(a; x, u) extracts the feature vector of arc a given x and u. BIST scores unlabeled arcs, so a denotes a pair (i, j)-the indices of the parent and child, respectively. We define",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "\u03c6(a; x, u) = [B(x, i; T(u)); B(x, j; T(u))] (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "which concatenates contextual representations of tokens i and j. B(x, i) is itself a concatenation of the hidden states of a left-to-right LSTM and a right-to-left LSTM (Graves, 2012) when each has read sentence x up through word i (really POS tag i). These LSTM parameters are included in \u0398.",
"cite_spans": [
{
"start": 169,
"end": 183,
"text": "(Graves, 2012)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
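Equations (6)-(7) reduce to a small MLP over the concatenated contextual vectors of the parent and child. A sketch with random stand-ins for the trained BiLSTM states and scorer parameters (all sizes hypothetical):

```python
import numpy as np

def arc_score(B_i, B_j, v, V, b_V):
    """Equations (6)-(7): phi(a; x, u) = [B_i; B_j] concatenates the
    parent's and child's contextual vectors; s(phi) = v . tanh(V phi + b_V)."""
    phi = np.concatenate([B_i, B_j])
    return float(v @ np.tanh(V @ phi + b_V))

# Hypothetical sizes: 4-dim contextual states per token, hidden size 3.
rng = np.random.default_rng(2)
B = rng.standard_normal((2, 4))          # stand-ins for B(x, 0), B(x, 1)
v = rng.standard_normal(3)
V = rng.standard_normal((3, 8))          # 8 = 2 x 4 after concatenation
b_V = np.zeros(3)
s_01 = arc_score(B[0], B[1], v, V, b_V)  # arc with parent 0, child 1
s_10 = arc_score(B[1], B[0], v, V, b_V)  # order matters: different phi
```

Because φ concatenates parent before child, the arc (i, j) and its reverse (j, i) get different scores, which is what lets the arc-factored decoder of equation (4) choose attachment directions.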
{
"text": "The POS tags in x are provided to the LSTMs as one-hot vectors. Crucially, T(u) is also provided to the LSTM at each step, as shown in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "After selecting the best tree via equation (4), we use each arc's \u03c6 vector again to predict its label. This yields the labeled tree \u0177 = Parse_\u0398(x; u).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "The only extension that this makes to BIST is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "6 The root of the tree is always position 0, where x_0 is a distinguished \"root\" symbol that is prepended to the input sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "to supply T(u) to the BiLSTM. 8 This extension is not a significant slowdown at test time, since T(u) only needs to be computed once per test language, not once per test sentence. Since T(u) can be computed for any novel language at test time, this differs from the \"many languages, one parser\" architecture (Ammar et al., 2016) , in which a testtime language must have been seen at training time or at least must have known WALS features.",
"cite_spans": [
{
"start": 308,
"end": 328,
"text": "(Ammar et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "Product of experts. We also consider a variant of the function (6) for scoring arc a, namely",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bbs H (a) + (1 \u2212 \u03bb)s N (a)",
"eq_num": "(8)"
}
],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "where s_H(a) and s_N(a) are the scores produced by separately trained systems using, respectively, the hand-engineered and neural features from \u00a76.1. Hyperparameter \u03bb \u2208 [0, 1] is tuned on dev data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "8 Training the System",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Parsing Architecture",
"sec_num": "7"
},
{
"text": "We exactly follow the training method of Kiperwasser and Goldberg (2016), who minimize a structured max-margin hinge loss (Taskar et al., 2004; McDonald et al., 2005; LeCun et al., 2007) . 8 An alternative would be to concatenate T(u) with the representation computed by the BiLSTM. This gets empirically worse results, probably because the BiLSTM does not have advance knowledge of language-specific word order as it reads the sentence. We also tried an architecture that does both, with no notable improvement.",
"cite_spans": [
{
"start": 122,
"end": 143,
"text": "(Taskar et al., 2004;",
"ref_id": "BIBREF71"
},
{
"start": 144,
"end": 166,
"text": "McDonald et al., 2005;",
"ref_id": "BIBREF44"
},
{
"start": 167,
"end": 186,
"text": "LeCun et al., 2007)",
"ref_id": "BIBREF37"
},
{
"start": 189,
"end": 190,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "8.1"
},
{
"text": "We want the correct tree y to beat each tree y by a margin equal to the number of errors in y (we count spurious edges). Formally, loss(x, y; u) is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "8.1"
},
{
"text": "max(0, \u2212score(x, y; u) + max_{y\u2032} (score(x, y\u2032; u) + \u2211_{a\u2208y\u2032} 1_{a\u2209y})) (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "8.1"
},
{
"text": "where a ranges over the arcs of a tree y\u2032, and 1_{a\u2209y} is an indicator that is 1 if a \u2209 y. Thus, this loss function is high if there exists a tree y\u2032 that has a high score relative to y yet low precision. 9 The training algorithm makes use of loss-augmented inference (Taskar et al., 2005) , a variant on the ordinary inference of (4). The most violating tree y\u2032 (in the max_{y\u2032} above) is computed again by an arc-factored dependency algorithm (Eisner, 1996) , where the score of any candidate arc a is s(\u03c6(a; x, u)) + 1_{a\u2209y}. Actually, the above method would only train the score function to predict the correct unlabeled tree as above (since a ranges over unlabeled arcs as before). In practice, we also jointly train the labeler to predict the correct labels on the gold arcs, using a separate hinge-loss objective. Because these two components share parameters through \u03c6(a; x, u), this is a multi-task learning problem.",
"cite_spans": [
{
"start": 206,
"end": 207,
"text": "9",
"ref_id": null
},
{
"start": 268,
"end": 289,
"text": "(Taskar et al., 2005)",
"ref_id": "BIBREF70"
},
{
"start": 441,
"end": 455,
"text": "(Eisner, 1996)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Objective",
"sec_num": "8.1"
},
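For intuition, here is equation (9) on a toy scale: trees are sets of (parent, child) arcs, the margin for y′ counts its arcs missing from gold y, and loss-augmented inference is simulated by brute force over an explicit candidate list rather than the Eisner (1996) dynamic program. All scores and trees are hypothetical:

```python
def hinge_loss(gold, candidates, arc_score):
    """Equation (9), brute-force version.

    gold       : set of gold arcs (parent, child)
    candidates : explicit list of candidate trees (stands in for Y(x))
    arc_score  : maps an arc to its model score s(phi(a; x, u))
    """
    def score(tree):
        return sum(arc_score(a) for a in tree)

    def augmented(tree):
        # Loss augmentation: +1 for every arc not in the gold tree.
        return score(tree) + sum(1 for a in tree if a not in gold)

    worst = max(candidates, key=augmented)   # most violating tree y'
    return max(0.0, augmented(worst) - score(gold))

# Hypothetical arc scores for a 2-word sentence (0 is the root symbol).
scores = {(0, 1): 2.0, (1, 2): 1.5, (0, 2): 1.8, (2, 1): 1.0}
gold = {(0, 1), (1, 2)}
cands = [gold, {(0, 2), (2, 1)}]
loss = hinge_loss(gold, cands, scores.get)
```

Here the wrong tree scores 2.8 and has 2 spurious arcs, so its augmented score 4.8 beats the gold score 3.5, giving a positive loss of 1.3 even though the model already prefers the gold tree.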
{
"text": "Augment training data. Unlike ordinary NLP problems whose training examples are sentences, each training example in equation (1) is an entire language. Unfortunately, UD ( \u00a73) only provides a few dozen languages-presumably not enough to generalize well to novel languages. We therefore augment our training dataset L_train with thousands of synthetic languages from the GD dataset ( \u00a73), as already discussed in \u00a73.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Algorithm",
"sec_num": "8.2"
},
{
"text": "Stochastic gradient descent (SGD). 10 Treating each language as a single large example during training would lead to slow SGD steps. Instead, we take our SGD examples to be individual sentences, by regarding equations (1)-(2) together as an objective averaged over sentences. Each example (x, y, u) is sampled hierarchically, by first drawing a language \u2113 from L_train and setting u = u^{(\u2113)}, then drawing the sentence (x, y) uniformly from (x^{(\u2113)}, y^{(\u2113)}). We train using mini-batches of 100 sentences; each mini-batch can mix many languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Algorithm",
"sec_num": "8.2"
},
{
"text": "Encourage real languages. To sample from L train , we first flip a coin with weight \u03b2 \u2208 [0, 1] to choose \"real\" vs. \"synthetic,\" and then sample uniformly within that set. Why? The test sentences will come from real languages, so the synthetic languages are out-of-domain. Including them reduces variance but increases bias. We raise \u03b2 to keep them from overwhelming the real languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Algorithm",
"sec_num": "8.2"
},
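The β-weighted coin makes the hierarchical sampler favor real languages even though synthetic ones vastly outnumber them. A sketch with hypothetical language lists:

```python
import random

def sample_language(real_langs, synth_langs, beta, rng):
    """Flip a beta-weighted coin for 'real' vs. 'synthetic', then pick
    uniformly within the chosen set."""
    pool = real_langs if rng.random() < beta else synth_langs
    return rng.choice(pool)

rng = random.Random(0)
draws = [sample_language(["en", "de"], ["synth1", "synth2", "synth3"],
                         beta=0.2, rng=rng)
         for _ in range(10000)]
real_frac = sum(d in ("en", "de") for d in draws) / len(draws)
```

With β = 0.2, about 20% of examples come from real languages regardless of how many synthetic languages exist, which is exactly the up-weighting described above.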
{
"text": "Sample efficiently. The sentences (x, y) are stored in different files by language. To reduce disk accesses, we do not visit a file on each sample. Rather, for each language \u2113, we maintain in memory a subset of (x^{(\u2113)}, y^{(\u2113)}), obtained by reservoir sampling. Samples from (x^{(\u2113)}, y^{(\u2113)}) are drawn sequentially from this \"chunk,\" and when it is used up we fetch a new chunk. We also maintain u^{(\u2113)} and the hand-engineered features from \u03c0(u^{(\u2113)}) in memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Algorithm",
"sec_num": "8.2"
},
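Reservoir sampling keeps a uniform size-k subset of a file in one sequential pass, which is why a chunk can be refreshed without random disk access. The standard Algorithm R:

```python
import random

def reservoir_sample(stream, k, rng):
    """Keep a uniform size-k sample of an arbitrarily long stream in one
    pass (Algorithm R): item n replaces a reservoir slot with prob k/(n+1)."""
    chunk = []
    for n, item in enumerate(stream):
        if n < k:
            chunk.append(item)           # fill the reservoir first
        else:
            j = rng.randrange(n + 1)     # uniform in [0, n]
            if j < k:
                chunk[j] = item          # replace a random slot
    return chunk

rng = random.Random(0)
chunk = reservoir_sample(range(1000), 5, rng)   # e.g. 1000 sentences on disk
```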
{
"text": "Our data split follows that of Wang and Eisner (2017) , as shown in Table 2 , 11 which has 18 training languages (20 treebanks) and 17 test languages. All hyperparameters are tuned via 5-fold cross-validation on the 20 training treebanks-that is, we evaluate each fold (4 treebanks) using the model trained on the remaining folds (16 treebanks). However, a model trained on a treebank of language \u2113 is never evaluated on another treebank of language \u2113. We selected the hyperparameters that maximized the average unlabeled attachment score (UAS) (K\u00fcbler et al., 2009) , which is the evaluation metric that is reported by most previous work on unsupervised parsing. We also report labeled attachment score (LAS). 12 When augmenting the data, the 16 training treebanks are \"mixed and matched\" to get GD treebanks for 16\u00d717\u00d717 = 4624 additional synthetic training languages (Wang and Eisner, 2016, \u00a75) . 11 However, as we are interested in transfer to unseen languages, our Table 2 follows the principle of Eisner and Wang (n.d.) and does not test on the Finnish_ftb or Latin treebanks because other treebanks of those languages appeared in training data. Specifically, Latin_itt and Latin_proiel fall in the same training folds as French and Italian, respectively. For the same reason, Table 2 does not show cross-validation development results on these Latin treebanks-nor on the Ancient Greek_grc and Ancient Greek_grc_proiel treebanks, which fall in the same training folds as Czech and Danish, respectively.",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
},
{
"start": 542,
"end": 563,
"text": "(K\u00fcbler et al., 2009)",
"ref_id": "BIBREF34"
},
{
"start": 731,
"end": 753,
"text": "Eisner and Wang (n.d.)",
"ref_id": null
},
{
"start": 1313,
"end": 1315,
"text": "12",
"ref_id": null
},
{
"start": 1472,
"end": 1499,
"text": "(Wang and Eisner, 2016, \u00a75)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 2",
"ref_id": null
},
{
"start": 698,
"end": 705,
"text": "Table 2",
"ref_id": null
},
{
"start": 1008,
"end": 1015,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Basic Setup",
"sec_num": "9.1"
},
{
"text": "The next sections analyze these cross-validation results. Finally, \u00a79.8 will evaluate on 15 previously unseen languages (excluding Latin and Finnish_ftb) with our model trained on all 18 training languages (20 treebanks for UD, plus 20\u00d721\u00d721 = 8820 when adding GD) with the hyperparameters that achieved the best average unlabeled attachment score during cross-validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic Setup",
"sec_num": "9.1"
},
{
"text": "The UD and GD corpora provide a train/dev/test split of each treebank, denoted as (x_train, y_train), (x_dev, y_dev) and (x_test, y_test). Throughout this paper, for both training and testing languages, we take (x^{(\u2113)}, y^{(\u2113)}) = (x_train, y_train). We take u^{(\u2113)} to consist of all x_train sentences with \u2264 40 tokens. Table 1 shows the cross-validation parsing results over different systems discussed so far. For each architecture, we show the best average unlabeled attachment score (the UAS column) chosen by cross-validation, and the corresponding labeled attachment score (the LAS column). In brief, the main sources of improvement are twofold:",
"cite_spans": [
{
"start": 268,
"end": 271,
"text": "( )",
"ref_id": null
}
],
"ref_spans": [
{
"start": 326,
"end": 333,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Basic Setup",
"sec_num": "9.1"
},
{
"text": "Synthetic languages. We observe that +GD consistently outperforms UD across all architectures. It even helps with the baseline system that we tried, which simply ignores the target corpus u^{(\u2113)}. In that system (similar to McDonald et al. (2011)), the BiLSTM may still manage to extract \u2113-specific information from the single sentence x \u2208 x^{(\u2113)} that it is parsing. 13 The additional GD training languages apparently help it learn to do so in a way that generalizes to new languages. To better understand the trend, we study how the performance varies when more synthetic languages are used. As shown in Figure 4 , when \u03b2 = 1, all the training languages are sampled from real languages. By gradually increasing the proportion of GD languages (reducing \u03b2 from \u00a78.2), the baseline UAS increases dramatically from 63.95 to 67.97. However, if all languages are uniformly sampled (\u03b2 = 16/(4624+16) \u2248 0.003) or only synthetic languages are used (\u03b2 = 0), the UAS falls back slightly to 67.42 or 67.36. The best \u03b2 value is 0.2, which treats each real language as (0.2/16)/(0.8/4624) \u2248 72 times more helpful than each synthetic language, yet 80% of the training data is contributed by synthetic languages. \u03b2 = 0.2 was also optimal for the non-baseline systems in Table 1 .",
"cite_spans": [
{
"start": 364,
"end": 366,
"text": "13",
"ref_id": null
}
],
"ref_spans": [
{
"start": 602,
"end": 610,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 1246,
"end": 1253,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison Among Architectures",
"sec_num": "9.2"
},
{
"text": "Unparsed corpora. The systems that exploit unparsed corpora consistently outperform the baseline system in both the UD and +GD conditions. To investigate, we examine the impact of reducing u^{(\u2113)} when parsing a held-out language \u2113. We used the system in row N and column +GD of Table 1 , which was trained on full-sized u corpora. When testing on a held-out language \u2113, we compute T(u^{(\u2113)}) using only a random size-t subset of u^{(\u2113)}. Figure 5 : Effect of the size |u^{(\u2113)}| of the unparsed corpus. The y-axis represents the cross-validation UAS and LAS scores, averaged over the 7 languages that have |u^{(\u2113)}| \u2265 9000 sentences, when using only a subset of the sentences from u^{(\u2113)}. Using all of u^{(\u2113)} would achieve 64.61 UAS and 49.04 LAS. The plot shows the average over 10 runs with different random subsets; the error bars indicate the 10th to the 90th percentile of those runs. The 7 languages are Finnish (Finnic), Norwegian (Germanic), Dutch (Germanic), Czech (Slavic), German (Germanic), Hindi (Indic), and English (Germanic). As shown in Figure 5 , the system does",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 283,
"text": "Table 1",
"ref_id": null
},
{
"start": 445,
"end": 453,
"text": "Figure 5",
"ref_id": null
},
{
"start": 472,
"end": 480,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison Among Architectures",
"sec_num": "9.2"
},
{
"text": "not need a very large unparsed corpus-most of the benefit is obtained by t = 256. Nonetheless, a larger corpus always achieves a better and more stable performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Among Architectures",
"sec_num": "9.2"
},
{
"text": "Besides Baseline, another directly comparable approach is SST ( \u00a75.2). As shown in Table 1 , SST gives a stronger baseline on the UD column-as good as H+N. However, this advantage does not carry over to the +GD column, meaning that SST cannot exploit the extra training data. Wang and Eisner (2016, Figure 5 ) already found that GD languages provide diminishing benefit to SST as more UD languages get involved. 14 For H+N, however, the extra GD languages do help to identify the truly useful surface patterns in u.",
"cite_spans": [
{
"start": 276,
"end": 307,
"text": "Wang and Eisner (2016, Figure 5",
"ref_id": null
}
],
"ref_spans": [
{
"start": 83,
"end": 90,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to SST",
"sec_num": "9.3"
},
{
"text": "We also considered trying model interpolation (Rosa and \u017dabokrtsk\u00fd, 2015b) . Unfortunately, as mentioned in \u00a75.2, this method is impractical with GD languages, because it requires storing 4624 ( \u00a79.1) additional local models. Nonetheless, we can estimate an \"upper bound\" on how well the interpolation might do. Our upper bound is SST where an oracle is used to choose the source language; Rosa and \u017dabokrtsk\u00fd (2015b) found that in practice, this does better than interpolation. This approximate upper bound is 68.03 UAS and 52.10 LAS, neither of which is significantly better than H+N on UD, but both of which are significantly outperformed by H+N on +GD.",
"cite_spans": [
{
"start": 46,
"end": 74,
"text": "(Rosa and \u017dabokrtsk\u00fd, 2015b)",
"ref_id": "BIBREF60"
},
{
"start": 390,
"end": 417,
"text": "Rosa and \u017dabokrtsk\u00fd (2015b)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to SST",
"sec_num": "9.3"
},
{
"text": "The results in Table 1 demonstrate that we learned to extract features T(u), from the unparsed target corpus u, that improve the baseline parser. We consider replacing T(u) by an oracle that has access to the true syntax of the target language. We consider two different oracles, T D and T W .",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Oracle Typology vs. Our Learned T(u)",
"sec_num": "9.4"
},
{
"text": "T_D is the directionality typology that was studied by Liu (2010) and used as a training target by Wang and Eisner (2017) . Specifically, T_D \u2208 [0, 1]^57 is a vector of the directionalities of each type of dependency relation; it specifies what fraction of direct objects fall to the right of the verb, and so on. 15 In principle, this should be very helpful for parsing, but it must be extracted from a treebank, which is presumably unavailable for unknown languages.",
"cite_spans": [
{
"start": 57,
"end": 67,
"text": "Liu (2010)",
"ref_id": "BIBREF38"
},
{
"start": 101,
"end": 123,
"text": "Wang and Eisner (2017)",
"ref_id": "BIBREF75"
},
{
"start": 316,
"end": 318,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Typology vs. Our Learned T(u)",
"sec_num": "9.4"
},
{
"text": "We also consider T_W -the WALS features-as the typological classification given by linguists. This resembles the previous multi-language learning approaches (Naseem et al., 2012; T\u00e4ckstr\u00f6m et al., 2013b; Zhang and Barzilay, 2015; Ammar et al., 2016) that exploited the WALS features. In particular, we use 81A, 82A, 83A, 85A, 86A, 87A, 88A and 89A-a union of WALS features used by those works. In order to derive the WALS features for a synthetic GD language, we first copy the features from its substrate language. 14 The number of real treebanks in our cross-validation setting is 16, greater than the 10 in Wang and Eisner (2016) . 15 The directionality of a relation a in language \u2113 is given",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "(Naseem et al., 2012;",
"ref_id": "BIBREF47"
},
{
"start": 178,
"end": 202,
"text": "T\u00e4ckstr\u00f6m et al., 2013b;",
"ref_id": "BIBREF69"
},
{
"start": 203,
"end": 228,
"text": "Zhang and Barzilay, 2015;",
"ref_id": "BIBREF81"
},
{
"start": 229,
"end": 248,
"text": "Ammar et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 514,
"end": 516,
"text": "14",
"ref_id": null
},
{
"start": 608,
"end": 630,
"text": "Wang and Eisner (2016)",
"ref_id": "BIBREF74"
},
{
"start": 633,
"end": 635,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Typology vs. Our Learned T(u)",
"sec_num": "9.4"
},
{
"text": "by count(a\u2192) /",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Typology vs. Our Learned T(u)",
"sec_num": "9.4"
},
{
"text": "count (a) , where count ( a \u2192) is the count of a-relations that point from left to right, and count (a) is the count of all arelations. (Wang and Eisner, 2016) . We then replace the 81A, 82A, 83A features-which concern the order between verbs and their dependents-by those of its V-superstrate language 16 (if any). We replace 85A, 86A, 87A, 88A and 89A-which concern the order between nouns and their dependentsby those of its N-superstrate language (if any).",
"cite_spans": [
{
"start": 136,
"end": 159,
"text": "(Wang and Eisner, 2016)",
"ref_id": "BIBREF74"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Typology vs. Our Learned T(u)",
"sec_num": "9.4"
},
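The directionality statistic defined in footnote 15 can be computed in a few lines. This is an illustrative sketch over a toy list of arcs, not the paper's implementation; the (relation, points_right) representation is an assumption for this example.

```python
from collections import Counter

def directionalities(arcs):
    """Compute count(a->)/count(a) for each relation type a.

    `arcs` is an iterable of (relation, points_right) pairs, where
    points_right is True if the arc points from left to right.
    """
    total = Counter()
    rightward = Counter()
    for rel, points_right in arcs:
        total[rel] += 1
        if points_right:
            rightward[rel] += 1
    return {rel: rightward[rel] / total[rel] for rel in total}

# Toy example: 3 dobj arcs, 2 of which point rightward.
arcs = [("dobj", True), ("dobj", True), ("dobj", False), ("nsubj", False)]
print(directionalities(arcs))  # {'dobj': 0.6666666666666666, 'nsubj': 0.0}
```

Stacking these per-relation fractions over the 57 relation types yields a vector like T D above.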
{
"text": "As a pleasant surprise, we find that our best system (H+N) is competitive with both oracle methods. It outperforms both of them on both UAS and LAS, and the improvements are significant and substantial in 3 of these 4 cases. Our parser has learned to extract information T(u) that is not only cheap (no treebank needed), but also at least as useful as \"gold\" typology for parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Typology vs. Our Learned T(u)",
"sec_num": "9.4"
},
{
"text": "For the rest of the experiments, we use the H+N system, as it wins under cross-validation on both UD and +GD (Table 1) . This is a combination via (8) of the best H system and the best N system under cross-validation, with the mixture hyperparameter \u03bb also chosen by cross-validation.",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 118,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Selected Hyperparameter Settings",
"sec_num": "9.5"
},
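A minimal sketch of the kind of combination described above: the H+N system interpolates the two component systems with weight lambda. The paper's equation (8) defines the actual combination; simple linear interpolation of arc-score matrices is assumed here for illustration.

```python
def combine_arc_scores(scores_h, scores_n, lam=0.5):
    """Linearly interpolate two systems' arc-score matrices.

    `lam` weights the H system and (1 - lam) the N system.  This is an
    assumed form of the combination, not the paper's exact equation (8).
    """
    assert len(scores_h) == len(scores_n)
    return [
        [lam * h + (1 - lam) * n for h, n in zip(row_h, row_n)]
        for row_h, row_n in zip(scores_h, scores_n)
    ]

# Toy 1x2 score matrices from hypothetical H and N systems.
combined = combine_arc_scores([[1.0, 0.0]], [[0.0, 1.0]], lam=0.4)
print(combined)  # [[0.4, 0.6]]
```

The mixture hyperparameter `lam` would itself be chosen by cross-validation, as the text describes.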
{
"text": "For both UD and +GD, cross-validation selected 125 as the sizes of the LSTM hidden states and 100 as the sizes of the hidden layers for scoring arcs (the length of v in equation 6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selected Hyperparameter Settings",
"sec_num": "9.5"
},
{
"text": "Hyperparameters for UD. The H system computes T(u) with a 1-layer network (as in equation (3)), with hidden size h = 128 and \u03c8 = tanh as the activation function. For the N system, T(u) is a 1-layer network with hidden size h = 64 and \u03c8 = sigmoid as the activation function. The size of the hidden state of GRU as shown in Figure 2 is 128. The mixture weight for the final H+N system is \u03bb = 0.5.",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 330,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Selected Hyperparameter Settings",
"sec_num": "9.5"
},
{
"text": "Hyperparameters for +GD. The H system computes T(u) with a 2-layer network (as shown in Figure 1) , with h = 128 and \u03c8 = sigmoid for both hidden layers. For N, T(u) is a 1-layer network with hidden size h = 64 and \u03c8 = sigmoid. The size of the hidden state of GRU is 256. Both H and N set \u03b2 = 0.2 (see \u00a78.2). The mixture weight for the final H+N system is \u03bb = 0.4.",
"cite_spans": [],
"ref_spans": [
{
"start": 88,
"end": 97,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Selected Hyperparameter Settings",
"sec_num": "9.5"
},
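The 1-layer networks described above have the form T(u) = psi(Wx + b). The sketch below uses toy weights and a tiny hidden size purely for illustration (the trained systems use h = 64 or 128 and learned parameters).

```python
import math

def one_layer(x, W, b, psi=math.tanh):
    """T(u) = psi(W x + b): the 1-layer feature-extractor form
    described above.  W, b here are toy values, not trained weights."""
    return [psi(sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i)
            for row, b_i in zip(W, b)]

# Toy: 2-dimensional input, hidden size 2 (the paper uses h = 64 or 128).
x = [1.0, -1.0]
W = [[0.5, 0.5], [1.0, 0.0]]
b = [0.0, 0.0]
print(one_layer(x, W, b))  # [0.0, 0.7615941559557649]
```

Swapping `psi` to a sigmoid gives the N system's variant; a 2-layer version (as in the +GD H system) just composes two such maps.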
{
"text": "We test our trained system in a more realistic scenario where both u and x for held-out languages Figure 6 : Performance on noisy input over 16 training languages. Each dot is an experiment annotated by the number of sentences used to train the tagger. (The rightmost \"\u221e\" point uses gold tags instead of a tagger, which is the result from Table 1 .) The x-axis gives the average accuracy of the trained RDRPOSTagger. The y-axis gives the average parsing performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 106,
"text": "Figure 6",
"ref_id": null
},
{
"start": 339,
"end": 346,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance on Noisy Tag Sequences",
"sec_num": "9.6"
},
{
"text": "consist of noisy POS tags rather than gold POS tags. Following Wang and Eisner (2016, Appendix B) , at test time, the gold POS tags in a corpus are replaced by a noisy version produced by the RDRPOSTagger (Nguyen et al., 2014) trained on a subset of the original gold-tagged corpus. 17 Figure 6 shows a linear relationship between the performance of our best model (H+N with +GD) and the noisiness of the POS tags, which is controlled by altering the amount of training data. With only 100 training sentences, the performance suffers greatly-the UAS drops from 70.65 to 51.57. Nonetheless, even this is comparable to Naseem et al. (2010) on gold POS tags, which yields a UAS of 50.00. That system was the first grammar induction approach to exploit knowledge of the distribution of natural languages, and remained state-of-the-art (Noji et al., 2016) until the work of and Mart\u00ednez Alonso et al. (2017). Figure 7 breaks down the results by dependency relation type-showing that using u and synthetic data improves results almost across the board.",
"cite_spans": [
{
"start": 63,
"end": 97,
"text": "Wang and Eisner (2016, Appendix B)",
"ref_id": null
},
{
"start": 617,
"end": 637,
"text": "Naseem et al. (2010)",
"ref_id": "BIBREF48"
},
{
"start": 831,
"end": 850,
"text": "(Noji et al., 2016)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [
{
"start": 286,
"end": 294,
"text": "Figure 6",
"ref_id": null
},
{
"start": 904,
"end": 912,
"text": "Figure 7",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Performance on Noisy Tag Sequences",
"sec_num": "9.6"
},
{
"text": "We also notice large differences between labeled and unlabeled F1 scores for some relations, especially rarer ones. In other words, the system mislabels the arcs that it correctly recov-ers. (Remember from \u00a79.2 that the hyperparameters were selected to maximize unlabeled scores (UAS) rather than labeled (LAS).) Figure 8 gives the label confusion matrix. While the dark NONE column shows that arcs of each type are often missed altogether (recall errors), the dark diagonal shows that they are usually labeled correctly if found. That said, it is relatively common to confuse the different labels for nominal dependents of verbs (nsubj, dobj, nmod) . We suspect that lexical information could help sort out these roles via distributional semantics. Some other mistakes arise from discrepancies in the annotation scheme. For example, neg can be easily confused with advmod, as some languages (for example, Spanish) use ADV instead of PART for negations.",
"cite_spans": [
{
"start": 630,
"end": 649,
"text": "(nsubj, dobj, nmod)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 313,
"end": 321,
"text": "Figure 8",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Analysis by Dependency Relation Type",
"sec_num": "9.7"
},
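A label confusion matrix of the kind analyzed above can be sketched as follows. The arc representation (a dict from (head, dependent) positions to labels) and the NONE convention for unrecovered gold arcs are conveniences for this illustration.

```python
from collections import Counter

def label_confusion(gold_arcs, predicted_arcs):
    """Row-normalized confusion counts p(predicted-label | gold-label).

    Arcs are dicts mapping (head, dependent) -> label; a gold arc absent
    from the predictions is counted under the special label 'NONE'.
    """
    counts = Counter()
    for arc, gold_label in gold_arcs.items():
        pred_label = predicted_arcs.get(arc, "NONE")
        counts[(gold_label, pred_label)] += 1
    totals = Counter()
    for (g, _), c in counts.items():
        totals[g] += c
    return {gp: c / totals[gp[0]] for gp, c in counts.items()}

gold = {(2, 1): "nsubj", (2, 3): "dobj"}
pred = {(2, 1): "dobj"}  # nsubj mislabeled; dobj arc missed entirely
print(label_confusion(gold, pred))
# {('nsubj', 'dobj'): 1.0, ('dobj', 'NONE'): 1.0}
```

Dark diagonal entries correspond to high labeled recall; mass in the NONE column corresponds to low unlabeled recall, as in Figure 8.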
{
"text": "In all previous sections, we evaluated on the 16 languages in the training set by cross-validation. For the final test, we combine all the 20 treebanks and train the system with the hyperparameters given in \u00a79.5, then test on the 15 unseen test languages. Table 2 displays results on these 15 test languages (top) as well as the cross-validation results on the 16 languages (bottom).",
"cite_spans": [],
"ref_spans": [
{
"start": 256,
"end": 263,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final Evaluation on Test Data",
"sec_num": "9.8"
},
{
"text": "We see that we improve significantly over baseline on almost every language. Indeed, on the test languages, +T(u) improves both UAS and LAS by > 3.5 percentage points on average. The improvement grows to > 5.6 if we augment the training data as well (+GD, meaning +T(u)+GD).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Evaluation on Test Data",
"sec_num": "9.8"
},
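For reference, the UAS and LAS metrics reported throughout can be computed as below. The parse representation (dependent position -> (head, label)) is a convenience for this sketch.

```python
def attachment_scores(gold, predicted):
    """Unlabeled and labeled attachment scores (UAS, LAS), in percent.

    Each parse maps a dependent position to a (head, label) pair.
    """
    assert gold.keys() == predicted.keys()
    uas_hits = las_hits = 0
    for dep, (g_head, g_label) in gold.items():
        p_head, p_label = predicted[dep]
        if p_head == g_head:
            uas_hits += 1
            if p_label == g_label:  # LAS also requires the right label
                las_hits += 1
    n = len(gold)
    return 100.0 * uas_hits / n, 100.0 * las_hits / n

gold = {1: (2, "nsubj"), 3: (2, "dobj")}
pred = {1: (2, "nsubj"), 3: (2, "nmod")}  # right head, wrong label
print(attachment_scores(gold, pred))  # (100.0, 50.0)
```

Since LAS additionally requires the correct label, it is always at most the UAS, which is why mislabeled-but-attached arcs (as in \u00a79.7) widen the gap between the two.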
{
"text": "One disappointment concerns the added benefit on the LAS of +GD over just +T(u) : while this data augmentation helped significantly on nearly every one of the 16 development languages, it produced less consistent improvements on the test languages and hurt some of them. We suspect that this is because we tuned the hyperparameters to maximize UAS, not LAS ( \u00a79.2). As a result, while the average benefit across our 15 test languages was fairly large, this sample was not large enough to establish that it was significantly greater from 0, that is, that future test languages would also see an improvement from data augmentation.",
"cite_spans": [
{
"start": 74,
"end": 79,
"text": "+T(u)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Final Evaluation on Test Data",
"sec_num": "9.8"
},
{
"text": "We also notice that there seems to be a small difference between the pattern of results on development versus test languages. This may simply reflect overfitting to the development languages, but we also note that the test languages (chosen by The pattern is that F1, precision, and recall-both labeled and unlabeled-are improved over baseline when we exploit unlabeled corpora (+T(u)), and improved again when we augment training data (+T(u)+GD). The relations are sorted by their average gold proportion in the 16 languages, shown by the gray area and right vertical axis. For example, nmod is the most common relation, accounting for 15.5% of all arcs. Altogether, the 20 most frequent relations (shown here) account for 94% of the arcs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final Evaluation on Test Data",
"sec_num": "9.8"
},
{
"text": "Wang and Eisner (2016)) tended to have considerably smaller unparsed corpora u, so there may be a domain mismatch problem. To ameliorate this problem, one could include training examples with versions of u that are truncated to lengths seen in test data (cf. Figure 5 ). One could also include the size |u| explicitly in T(u).",
"cite_spans": [],
"ref_spans": [
{
"start": 259,
"end": 267,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Final Evaluation on Test Data",
"sec_num": "9.8"
},
{
"text": "We showed how to build a \"language-agnostic\" delexicalized dependency parser that can better parse sentences of an unknown language by exploiting an unparsed (but POS-tagged) corpus of that language. Unlike grammar induction, which estimates a PCFG from the unparsed corpus, we train a neural network to extract a feature vector from the unparsed corpus that helps a subsequent neural parser. By end-to-end training on the treebanks of many languages (optionally including synthetic languages), our neural network can extract linguistic information that helps neural dependency parsing. Variants of our architecture are possible. In future work, the neural parser could use attention to look at individual relevant sentences of u, which are posited to be triggers in some theories of child grammar acquisition (Gibson and Wexler, 1994; Frank and Kapur, 1996) . We could also try injecting T(u) into the neural parser by means other than concatenating it with the input POS embeddings. We might also consider parsing architec-tures other than BIST, such as the LSTM-Minus architecture for scoring spans (Cross and Huang, 2016) , or the recent attention-based arc-factored model (Dozat and Manning, 2017) . Finally, our approach is applicable to tasks other than dependency parsing, such as constituent parsing or semantic parsing-if suitable treebanks are available for many training languages.",
"cite_spans": [
{
"start": 810,
"end": 835,
"text": "(Gibson and Wexler, 1994;",
"ref_id": "BIBREF18"
},
{
"start": 836,
"end": 858,
"text": "Frank and Kapur, 1996)",
"ref_id": "BIBREF16"
},
{
"start": 1102,
"end": 1125,
"text": "(Cross and Huang, 2016)",
"ref_id": "BIBREF9"
},
{
"start": 1177,
"end": 1202,
"text": "(Dozat and Manning, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
{
"text": "For applied uses, it would be interesting to combine the unsupervised techniques of this paper with low-resource techniques that make use of some annotated or parallel data in the target language. It would also be interesting to include further synthetic languages that have been modified to better resemble the actual target languages, using the method of (Wang and Eisner, 2018) .",
"cite_spans": [
{
"start": 357,
"end": 380,
"text": "(Wang and Eisner, 2018)",
"ref_id": "BIBREF76"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
{
"text": "It is important to relax the delexicalized assumption. As shown in \u00a79.6, the performance of our system relies heavily on the gold POS tags, which are presumably not available for unknown languages. What is available is lexical information-which has proved to be very important for supervised parsing, and should help unsupervised parsers as well. As discussed in \u00a79.7, some errors seem easily fixable by considering word distributions. In the future, we will explore ways to extend our cross-linguistic parser to work with word sequences rather than POS sequences, perhaps by learning a cross-language word representation that is shared among training and test languages (Ruder et al., 2017 Each row is normalized to sum to 1 and represents a frequent gold relation. For example, the nsubj row shows how well we recovered the gold nsubj arcs; the (nsubj, dobj) entry shows p(predicted = dobj | gold = nsubj), which measures the fraction of nsubj relations that are recovered but mislabeled as dobj. The diagonal represents correct arcs: where dark, it indicates high labeled recall for that relation. The final column represents gold arcs that were not recovered with any label: where dark, it indicates low unlabeled recall for that relation. We show the top 20 relations sorted by gold frequency.",
"cite_spans": [
{
"start": 671,
"end": 690,
"text": "(Ruder et al., 2017",
"ref_id": "BIBREF61"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
{
"text": "One takeaway message from this work is contained in our title. Surface statistics of a language-mined from the surface part-of-speech order-provide clues about how to find the underlying syntactic dependencies. Chomsky (1965) imagined that such clues might be exploited by a Language Acquisition Device, so it is interesting to know that they do exist.",
"cite_spans": [
{
"start": 211,
"end": 225,
"text": "Chomsky (1965)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
{
"text": "Another takeaway message is that synthetic training languages are useful for NLP. Using synthetic examples in training is a way to encourage a system to be invariant to superficial variation. We created synthetic languages by varying the surface structure in a way that \"should\" preserve the deep structure. This allows our trained system to be invariant to variation in surface structure, just as object recognition wants to be invariant to an image's angle or lighting conditions ( \u00a73.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
{
"text": "Our final takeaway goes beyond language: one can treat unsupervised structure discovery as a supervised learning problem. As \u00a7 \u00a71-2 discussed, this approach inherits the advantages of supervised learning. Training may face an easier optimization landscape, and we can train the system to find the Table 2 : Data splits and final evaluation on the 15 test languages (top), along with cross-validation results on the 16 development languages (bottom) grouped by 5 folds (separated by dashed lines). For languages with multiple treebanks, we identify them by subscripts. We use \"Slavonic\" for Old Church Slavonic. Column B is the baseline that doesn't use T(u) (McDonald et al., 2011) . +T(u) is our H+N system, and +GD is that system when the training data is augmented with synthetic languages. In comparing among these three systems, we boldface the highest score as well as all scores that are not significantly worse (paired permutation test, p < 0.05). If a row is an average over many sentences of a single language, then each paired datapoint is a sentence, so a significant improvement should generalize to new sentences. But if a row is an average, then each paired datapoint is a language (as in Table 1) , so a significant improvement should generalize to new languages. specific kind of structure that we desire, using any features that we think may be discriminative.",
"cite_spans": [
{
"start": 658,
"end": 681,
"text": "(McDonald et al., 2011)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [
{
"start": 297,
"end": 304,
"text": "Table 2",
"ref_id": null
},
{
"start": 1204,
"end": 1212,
"text": "Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "10"
},
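The paired permutation test used for the significance marks in Table 2 can be sketched as follows. This is a standard sign-flipping implementation, not necessarily the authors' exact procedure; the number of permutations and the two-sided statistic are assumptions.

```python
import random

def paired_permutation_test(a, b, trials=10000, seed=0):
    """Two-sided paired permutation test on paired scores a, b.

    Randomly flips the sign of each paired difference and reports the
    fraction of permutations whose |mean difference| is at least the
    observed one.
    """
    rng = random.Random(seed)
    diffs = [x - y for x, y in zip(a, b)]
    observed = abs(sum(diffs)) / len(diffs)
    hits = 0
    for _ in range(trials):
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(sum(flipped)) / len(flipped) >= observed:
            hits += 1
    return hits / trials

# Identical paired scores: the observed difference is 0, so every
# permutation matches it and the p-value is 1.0 (no significance).
p = paired_permutation_test([70.1, 65.2, 80.3], [70.1, 65.2, 80.3])
print(p)  # 1.0
```

Whether each paired datapoint is a sentence or a language (as the Table 2 caption distinguishes) determines what population the resulting significance statement generalizes to.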
{
"text": "We also include an experiment on noisy POS sequences. 2 It is clearly not the task setting faced by human language learners. Nor is it a plausible engineering setting: a language with gold POS sequences often also has at least a small treebank of gold parses, or at least parallel text in a language from which noisy parses can be noisily projected(Agi\u0107 et al., 2016). There is also no practical reason to consider POS tags without their attached words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While it might have been preferable to use the expanded and revised UD version 2.0, we wished to compare fairly with GD 1.0, which is based on UD 1.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Another example is back-translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Wang and Eisner (2018) do investigate synthesis \"on demand\" of a permuted training corpus that is as similar as possible to the test corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our notation of \u00a76.1, below, this asks whether t\u2208{NOUN,PRON,PROPN} \u03c0 w t|ADP is greater for w = 1 or w = \u22121.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "2,679 out of about 7,000 world languages are in WALS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Formally, for this loss function to be used in equation(2), we must interpret Parse \u0398 in that equation as returning a forest of scored parses, not just a single parse.10 More precisely, we use Adam(Kingma and Ba, 2015), a popular variant of SGD. The parameters \u0398 are initialized by \"Xavier initialization\"(Glorot and Bengio, 2010).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "When reporting LAS and when studying the labeling errors in \u00a79.7, we would ideally have first re-tuned our system to optimize LAS via cross-validation. Unfortunately, these potentially improved LAS results would have required months of additional computation. The optimal hyperparameters may not be very different, however, since UAS and LAS rose and fell together when we varied other training conditions inFigures 4-6.13 That is, our baseline system has learned a single parser that can handle a cross-linguistic variety of POS sequences(cf. McDonald et al., 2011; Ammar et al., 2016, section 4.2), just as the reader was able to parse VERB DET NOUN ADJ DET NOUN in \u00a72.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The language whose word order model is used to permute the dependents of the verbs. SeeWang and Eisner (2016) for details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Another way to get noisy tags, as a reviewer notes, would have been to use a cross-lingual POS tagger designed for lowresource settings(T\u00e4ckstr\u00f6m et al., 2013a;Kim et al., 2017).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgements This material is based upon work supported by the U.S. National Science Foundation under Grants No. 1423276 and 1718846. We are grateful to the state of Maryland for providing indispensable computing resources via the Maryland Advanced Research Computing Center (MARCC). Thanks to Kiperwasser and Goldberg (2016) for releasing their Dynet parsing code, which was the basis for our reimplementation in PyTorch. We thank the Argo lab members for useful discussions. Finally, we thank TACL action editor Marco Kuhlmann and the anonymous reviewers for high-quality suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "UAS LAS Language B +T(u) +GD B +T(u) +GD Basque",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "UAS LAS Language B +T(u) +GD B +T(u) +GD Basque 49.89 54.34 57.59 27.07 31.46 35.32",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Multilingual projection for parsing truly low-resource languages",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "References \u017deljko Agi\u0107",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Johannsen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "H\u00e9ctor Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Schluter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "301--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References \u017deljko Agi\u0107, Anders Johannsen, Barbara Plank, H\u00e9ctor Mart\u00ednez Alonso, Natalie Schluter, and Anders S\u00f8gaard. 2016. Multilingual projec- tion for parsing truly low-resource languages. Transactions of the Association for Computa- tional Linguistics, 4:301-312.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Many languages, one parser",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "431--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Miguel Balles- teros, Chris Dyer, and Noah Smith. 2016. Many languages, one parser. Transactions of the As- sociation of Computational Linguistics, 4:431- 444.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Powergrading: A clustering approach to amplify human effort for short answer grading",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Basu",
"suffix": ""
},
{
"first": "Chuck",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Lucy",
"middle": [],
"last": "Vanderwende",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "391--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Basu, Chuck Jacobs, and Lucy Vander- wende. 2013. Powergrading: A clustering ap- proach to amplify human effort for short an- swer grading. Transactions of the Association for Computational Linguistics, 1:391-402.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Two experiments on learning probabilistic dependency grammars from corpora",
"authors": [
{
"first": "Glenn",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1992,
"venue": "Working Notes of the Workshop on Statistically-Based NLP Techniques",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic depen- dency grammars from corpora. In Working Notes of the Workshop on Statistically-Based NLP Techniques, pages 1-13.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "On the properties of neural machine translation: Encoder-decoder approaches",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {
"DOI": [
"10.3115/v1/W14-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103- 111.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Aspects of the Theory of Syntax",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On the shortest arborescence of a directed graph",
"authors": [
{
"first": "Yoeng-Jin",
"middle": [],
"last": "Chu",
"suffix": ""
}
],
"year": 1965,
"venue": "Science Sinica",
"volume": "14",
"issue": "",
"pages": "1396--1400",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoeng-Jin Chu. 1965. On the shortest arbores- cence of a directed graph. Science Sinica, 14:1396-1400.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Empirical risk minimization for probabilistic grammars: Sample complexity and hardness of learning",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Computational Linguistics",
"volume": "38",
"issue": "3",
"pages": "479--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B. Cohen and Noah A. Smith. 2012. Empirical risk minimization for probabilis- tic grammars: Sample complexity and hard- ness of learning. Computational Linguistics, 38(3):479-526.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Incremental parsing with minimal features using bidirectional LSTM",
"authors": [
{
"first": "James",
"middle": [],
"last": "Cross",
"suffix": ""
},
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "32--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Cross and Liang Huang. 2016. Incre- mental parsing with minimal features using bi- directional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 32-37, Berlin.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural depen- dency parsing. In Proceedings of the Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The World Atlas of Language Structures Online",
"authors": [
{
"first": "Matthew",
"middle": [
"S"
],
"last": "Dryer",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Haspelmath",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew S. Dryer and Martin Haspelmath, editors. 2013. The World Atlas of Language Structures Online. Max Planck Institute for Evolutionary Anthropology. http://wals.info/.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Optimum branchings. Journal of Research of the National Bureau of Standards B",
"authors": [],
"year": 1967,
"venue": "",
"volume": "71",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Edmonds. 1967. Optimum branchings. Jour- nal of Research of the National Bureau of Stan- dards B, 71(4):233-240.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Three new probabilistic models for dependency parsing: An exploration",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "340--345",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner. 1996. Three new probabilistic mod- els for dependency parsing: An exploration. In Proceedings of the 16th International Confer- ence on Computational Linguistics, pages 340- 345.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bootstrapping without the boot",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "395--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and Damianos Karakos. 2005. Boot- strapping without the boot. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 395-402.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On evaluating cross-lingual generalization",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Eisner and Dingquan Wang. No date. On evaluating cross-lingual generalization. To be posted on arXiv.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On the use of triggers in parameter setting",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Shyam",
"middle": [],
"last": "Kapur",
"suffix": ""
}
],
"year": 1996,
"venue": "Linguistic Inquiry",
"volume": "27",
"issue": "",
"pages": "623--660",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Frank and Shyam Kapur. 1996. On the use of triggers in parameter setting. Linguistic In- quiry, 27:623-660.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Dependency grammar induction via bitext projection constraints",
"authors": [
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "369--377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuzman Ganchev, Jennifer Gillenwater, and Ben Taskar. 2009. Dependency grammar induction via bitext projection constraints. In Proceed- ings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th Interna- tional Joint Conference on Natural Language Processing of the AFNLP, pages 369-377.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Triggers",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Gibson",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Wexler",
"suffix": ""
}
],
"year": 1994,
"venue": "Linguistic Inquiry",
"volume": "25",
"issue": "3",
"pages": "407--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Gibson and Kenneth Wexler. 1994. Trig- gers. Linguistic Inquiry, 25(3):407-454.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Sparsity in dependency grammar induction",
"authors": [
{
"first": "Jennifer",
"middle": [],
"last": "Gillenwater",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Gra Ca",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 Conference Short Papers",
"volume": "",
"issue": "",
"pages": "194--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jennifer Gillenwater, Kuzman Ganchev, Jo\u00e3o Gra ca, Fernando Pereira, and Ben Taskar. 2010. Sparsity in dependency grammar induction. In Proceedings of the ACL 2010 Conference Short Papers, pages 194-199.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Concavity and initialization for unsupervised dependency parsing",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "577--581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel and Noah A. Smith. 2012. Concav- ity and initialization for unsupervised depen- dency parsing. In Proceedings of the 2012 Con- ference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 577-581.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the International Conference on Artificial Intelligence and Statistics",
"volume": "9",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot and Yoshua Bengio. 2010. Under- standing the difficulty of training deep feedfor- ward neural networks. In Proceedings of the International Conference on Artificial Intelli- gence and Statistics, volume 9, pages 249-256.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Nonconvex global optimization for latent-variable models",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Gormley",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "444--454",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Gormley and Jason Eisner. 2013. Non- convex global optimization for latent-variable models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 444-454.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A convex and feature-rich discriminative approach to dependency grammar induction",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "No\u00e9mie",
"middle": [],
"last": "Elhadad",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1375--1384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave and No\u00e9mie Elhadad. 2015. A convex and feature-rich discriminative ap- proach to dependency grammar induction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing, pages 1375-1384.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Supervised Sequence Labelling with Recurrent Neural Networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2012. Supervised Sequence Labelling with Recurrent Neural Networks. Springer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A representation learning framework for multi-source transfer parsing",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2734--2740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A repre- sentation learning framework for multi-source transfer parsing. In Proceedings of the Thirti- eth AAAI Conference on Artificial Intelligence, pages 2734-2740.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Improving unsupervised dependency parsing with richer contexts and smoothing",
"authors": [
{
"first": "P",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Headden",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "101--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William P. Headden III, Mark Johnson, and David McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. In Proceedings of Human Lan- guage Technologies: The 2009 Annual Confer- ence of the North American Chapter of the As- sociation for Computational Linguistics, pages 101-109.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Bootstrapping parsers via syntactic projection across parallel texts",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Weinberg",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Cabezas",
"suffix": ""
},
{
"first": "Okan",
"middle": [],
"last": "Kolak",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural Language Engineering",
"volume": "11",
"issue": "3",
"pages": "311--325",
"other_ids": {
"DOI": [
"10.1017/S1351324905003840"
]
},
"num": null,
"urls": [],
"raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Boot- strapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311-325.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Combining generative and discriminative approaches to unsupervised dependency parsing via dual decomposition",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Wenjuan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Kewei",
"middle": [],
"last": "Tu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1689--1694",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Jiang, Wenjuan Han, and Kewei Tu. 2017. Combining generative and discriminative ap- proaches to unsupervised dependency parsing via dual decomposition. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 1689-1694.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Crossinstance tuning of unsupervised document clustering algorithms",
"authors": [
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "Carey",
"middle": [
"E"
],
"last": "Priebe",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies: Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "252--259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Damianos Karakos, Jason Eisner, Sanjeev Khu- danpur, and Carey E. Priebe. 2007. Cross- instance tuning of unsupervised document clus- tering algorithms. In Human Language Tech- nologies: Proceedings of the Annual Confer- ence of the North American Chapter of the As- sociation for Computational Linguistics, pages 252-259.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Cross-lingual transfer learning for POS tagging without cross-lingual resources",
"authors": [
{
"first": "Joo-Kyung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Young-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Sarikaya",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2832--2838",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual resources. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2832-2838.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceed- ings of the International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing us- ing bidirectional LSTM feature representations. Transactions of the Association of Computa- tional Linguistics, 4:313-327.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Corpus-based induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "478--485",
"other_ids": {
"DOI": [
"10.3115/1218955.1219016"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 478-485.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Dependency Parsing",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.2200/S00169ED1V01Y200901HLT002"
]
},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency Parsing. Morgan and Claypool.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Frustratingly easy cross-lingual transfer for transition-based dependency parsing",
"authors": [
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Lauriane",
"middle": [],
"last": "Aufrant",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1058--1063",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1121"
]
},
"num": null,
"urls": [],
"raw_text": "Oph\u00e9lie Lacroix, Lauriane Aufrant, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2016. Frustratingly easy cross-lingual transfer for transition-based dependency parsing. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1058-1063.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "The estimation of stochastic context-free grammars using the Inside-Outside algorithm",
"authors": [
{
"first": "Karim",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "Steve",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "1",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karim Lari and Steve J. Young. 1990. The estima- tion of stochastic context-free grammars using the Inside-Outside algorithm. Computer Speech and Language, 4(1):35-56.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A tutorial on energy-based learning",
"authors": [
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Raia",
"middle": [],
"last": "Hadsell",
"suffix": ""
},
{
"first": "Fu Jie",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2007,
"venue": "Predicting Structured Data",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann LeCun, Sumit Chopra, Raia Hadsell, and Fu Jie Huang. 2007. A tutorial on energy-based learning. In G\u00f6khan Bak\u0131r, Thomas Hofmann, Bernhard Sch\u00f6lkopf, Alexander J. Smola, Ben Taskar, and S. V. N. Vishwanathan, editors, Pre- dicting Structured Data. MIT Press.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Dependency direction as a means of word-order typology: A method based on dependency treebanks",
"authors": [
{
"first": "Haitao",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "Lingua",
"volume": "120",
"issue": "6",
"pages": "1567--1578",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitao Liu. 2010. Dependency direction as a means of word-order typology: A method based on dependency treebanks. Lingua, 120(6):1567-1578.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Delexicalized and minimally supervised parsing on universal dependencies",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
}
],
"year": 2016,
"venue": "Statistical Language and Speech Processing: 4th International Conference, SLSP 2016",
"volume": "",
"issue": "",
"pages": "30--42",
"other_ids": {
"DOI": [
"10.1007/978-3-319-45925-7_3"
]
},
"num": null,
"urls": [],
"raw_text": "David Mare\u010dek. 2016. Delexicalized and min- imally supervised parsing on universal de- pendencies. In Statistical Language and Speech Processing: 4th International Confer- ence, SLSP 2016, Pilsen, Czech Republic, Oc- tober 11-12, 2016, Proceedings, pages 30-42, Cham.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Stopprobability estimates computed on a large corpus improve unsupervised dependency parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "281--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mare\u010dek and Milan Straka. 2013. Stop- probability estimates computed on a large cor- pus improve unsupervised dependency parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 281-290.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Twelve years of unsupervised dependency parsing",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 16th ITAT Conference Information Technologies-Applications and Theory",
"volume": "",
"issue": "",
"pages": "56--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mare\u010dek. 2016. Twelve years of un- supervised dependency parsing. In Proceed- ings of the 16th ITAT Conference Information Technologies-Applications and Theory, pages 56-62.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Parsing universal dependencies without training",
"authors": [
{
"first": "\u017deljko",
"middle": [],
"last": "H\u00e9ctor Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Plank",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "230--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H\u00e9ctor Mart\u00ednez Alonso, \u017deljko Agi\u0107, Barbara Plank, and Anders S\u00f8gaard. 2017. Parsing uni- versal dependencies without training. In Pro- ceedings of the 15th Conference of the Euro- pean Chapter of the Association for Compu- tational Linguistics: Volume 1, Long Papers, pages 230-240.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Discriminative Learning and Spanning Tree Algorithms for Dependency Parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald. 2006. Discriminative Learning and Spanning Tree Algorithms for Dependency Parsing. Ph.D. thesis, University of Pennsylva- nia.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Online large-margin training of dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd Annual Meeting of the Association for Compu- tational Linguistics (ACL'05), pages 91-98.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Multi-source transfer of delexicalized dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized de- pendency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62-72.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Infor- mation Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Selective sharing for multilingual dependency parsing",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "629--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multi- lingual dependency parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 629-637.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Using universal linguistic knowledge to guide grammar induction",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1234--1244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Harr Chen, Regina Barzilay, and Mark Johnson. 2010. Using universal linguis- tic knowledge to guide grammar induction. In Proceedings of the 2010 Conference on Empir- ical Methods in Natural Language Processing, pages 1234-1244.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "RDRPOSTagger: A ripple down rules-based part-of-speech tagger",
"authors": [],
"year": 2014,
"venue": "Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "17--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Dai Quoc Nguyen, Dang Duc Pham, and Son Bao Pham. 2014. RDRPOSTag- ger: A ripple down rules-based part-of-speech tagger. In Proceedings of the Demonstrations at the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 17-20.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {
"DOI": [
"https://www.mitpressjournals.org/doi/pdf/10.1162/coli.07-056-R1-07-027"
]
},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2008. Algorithms for determinis- tic incremental dependency parsing. Computa- tional Linguistics, 34(4):513-553.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Prokopis Prokopidis, Sampo Pyysalo, Loganathan Ramasamy",
"authors": [
{
"first": "John",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Bauer",
"suffix": ""
},
{
"first": "Riyaz",
"middle": [
"Ahmad"
],
"last": "Bengoetxea",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Bhat",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Bosco",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [
"G A"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Celano",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Kaja",
"middle": [],
"last": "Diaz De Ilarraza",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Dobrovoljc",
"suffix": ""
},
{
"first": "Toma\u017e",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Rich\u00e1rd",
"middle": [],
"last": "Erjavec",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Farkas",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Galbraith",
"suffix": ""
},
{
"first": "Iakes",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Koldo",
"middle": [],
"last": "Goenaga",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Gojenola",
"suffix": ""
},
{
"first": "Berta",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Bruno",
"middle": [],
"last": "Gonzales",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Guillaume",
"suffix": ""
},
{
"first": "Dag",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Haug",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Ion",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Irimia",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Johannsen",
"suffix": ""
},
{
"first": "Jenna",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Kanerva",
"suffix": ""
},
{
"first": "Veronika",
"middle": [],
"last": "Krek",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Laippala",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Teresa",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Lynn",
"suffix": ""
},
{
"first": "C\u01cet\u01celina",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "M\u01cer\u01cenduc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "H\u00e9ctor Mart\u00ednez Alonso",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Ma\u0161ek",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Verginica",
"middle": [],
"last": "Missil\u00e4",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Mititelu",
"suffix": ""
},
{
"first": "Simonetta",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Shunsuke",
"middle": [],
"last": "Montemagni",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Petya",
"middle": [],
"last": "Nurmi",
"suffix": ""
},
{
"first": "Lilja",
"middle": [],
"last": "Osenova",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "\u00d8vrelid",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Cristina Bosco, Sam Bow- man, Giuseppe G. A. Celano, Miriam Connor, Marie-Catherine de Marneffe, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Toma\u017e Erjavec, Rich\u00e1rd Farkas, Jennifer Foster, Daniel Galbraith, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Yoav Goldberg, Berta Gon- zales, Bruno Guillaume, Jan Haji\u010d, Dag Haug, Radu Ion, Elena Irimia, Anders Johannsen, Hi- roshi Kanayama, Jenna Kanerva, Simon Krek, Veronika Laippala, Alessandro Lenci, Nikola Ljube\u0161i\u0107, Teresa Lynn, Christopher Manning, C\u01cet\u01celina M\u01cer\u01cenduc, David Mare\u010dek, H\u00e9ctor Mart\u00ednez Alonso, Jan Ma\u0161ek, Yuji Matsumoto, Ryan McDonald, Anna Missil\u00e4, Verginica Mi- titelu, Yusuke Miyao, Simonetta Montemagni, Shunsuke Mori, Hanna Nurmi, Petya Osenova, Lilja \u00d8vrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Slav Petrov, Jussi Piitu- lainen, Barbara Plank, Martin Popel, Prokopis Prokopidis, Sampo Pyysalo, Loganathan Ra- masamy, Rudolf Rosa, Shadi Saleh, Sebastian Schuster, Wolfgang Seeker, Mojgan Seraji, Na- talia Silveira, Maria Simi, Radu Simionescu, Katalin Simk\u00f3, Kiril Simov, Aaron Smith, Jan \u0160t\u011bp\u00e1nek, Alane Suhr, Zsolt Sz\u00e1nt\u00f3, Takaaki Tanaka, Reut Tsarfaty, Sumire Uematsu, Lar- raitz Uria, Viktor Varga, Veronika Vincze, Zden\u011bk \u017dabokrtsk\u00fd, Daniel Zeman, and Hanzhi Zhu. 2015. Universal dependencies 1.2. LIN- DAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles Uni- versity in Prague.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Using left-corner parsing to encode universal structural constraints in grammar induction",
"authors": [
{
"first": "Hiroshi",
"middle": [],
"last": "Noji",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "33--43",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1004"
]
},
"num": null,
"urls": [],
"raw_text": "Hiroshi Noji, Yusuke Miyao, and Mark Johnson. 2016. Using left-corner parsing to encode uni- versal structural constraints in grammar induc- tion. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Pro- cessing, pages 33-43.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "The PageRank citation ranking: Bringing order to the Web",
"authors": [
{
"first": "Lawrence",
"middle": [],
"last": "Page",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "Rajeev",
"middle": [],
"last": "Motwani",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank cita- tion ranking: Bringing order to the Web. Tech- nical Report 1999-66, Stanford InfoLab.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Cubic-time parsing and learning algorithms for grammatical bigram",
"authors": [
{
"first": "Mark",
"middle": [
"A."
],
"last": "Paskin",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark A. Paskin. 2001. Cubic-time parsing and learning algorithms for grammatical bigram.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Grammatical bigrams",
"authors": [
{
"first": "Mark",
"middle": [
"A."
],
"last": "Paskin",
"suffix": ""
}
],
"year": 2002,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "91--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark A. Paskin. 2002. Grammatical bigrams. In Advances in Neural Information Processing Systems, pages 91-97.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Density-driven cross-lingual transfer of dependency parsers",
"authors": [
{
"first": "Mohammad",
"middle": [
"Sadegh"
],
"last": "Rasooli",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "328--338",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2015. Density-driven cross-lingual transfer of dependency parsers. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 328-338.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Cross-lingual syntactic transfer with limited resources",
"authors": [
{
"first": "Mohammad",
"middle": [
"Sadegh"
],
"last": "Rasooli",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "279--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2017. Cross-lingual syntactic transfer with lim- ited resources. Transactions of the Association for Computational Linguistics, 5:279-293.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "KL cpos 3 -a language similarity measure for delexicalized parser transfer",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "243--249",
"other_ids": {
"DOI": [
"10.3115/v1/P15-2040"
]
},
"num": null,
"urls": [],
"raw_text": "Rudolf Rosa and Zden\u011bk \u017dabokrtsk\u00fd. 2015a. KL cpos 3 -a language similarity measure for delexicalized parser transfer. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Processing, pages 243-249.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Mstparser model interpolation for multi-source delexicalized transfer",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "Zden\u011bk",
"middle": [],
"last": "\u017dabokrtsk\u00fd",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 14th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "71--75",
"other_ids": {
"DOI": [
"10.18653/v1/W15-2209"
]
},
"num": null,
"urls": [],
"raw_text": "Rudolf Rosa and Zden\u011bk \u017dabokrtsk\u00fd. 2015b. Mstparser model interpolation for multi-source delexicalized transfer. In Proceedings of the 14th International Conference on Parsing Tech- nologies, pages 71-75.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2017,
"venue": "Computing Research Repository",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.04902"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2017. A survey of cross-lingual word embed- ding models. Computing Research Repository, arXiv:1706.04902.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Parser adaptation and projection with quasisynchronous grammar features",
"authors": [
{
"first": "David",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "822--831",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David A. Smith and Jason Eisner. 2009. Parser adaptation and projection with quasi- synchronous grammar features. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 822-831.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text",
"authors": [
{
"first": "Noah",
"middle": [
"A."
],
"last": "Smith",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith. 2006. Novel Estimation Methods for Unsupervised Discovery of Latent Structure in Natural Language Text. Ph.D. thesis, Johns Hopkins University.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Guiding unsupervised grammar induction using contrastive estimation",
"authors": [
{
"first": "Noah",
"middle": [
"A."
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2005,
"venue": "International Joint Conference on Artificial Intelligence (IJCAI) Workshop on Grammatical Inference Applications",
"volume": "",
"issue": "",
"pages": "73--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. 2005. Guid- ing unsupervised grammar induction using con- trastive estimation. In International Joint Con- ference on Artificial Intelligence (IJCAI) Work- shop on Grammatical Inference Applications, pages 73-82.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Annealing structural bias in multilingual weighted grammar induction",
"authors": [
{
"first": "Noah",
"middle": [
"A."
],
"last": "Smith",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Conference on Computational Linguistics and the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "569--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah A. Smith and Jason Eisner. 2006. Annealing structural bias in multilingual weighted gram- mar induction. In Proceedings of the Interna- tional Conference on Computational Linguis- tics and the Association for Computational Lin- guistics, pages 569-576.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Three dependency-andboundary models for grammar induction",
"authors": [
{
"first": "Valentin",
"middle": [
"I."
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "688--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2012. Three dependency-and- boundary models for grammar induction. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Pro- cessing and Computational Natural Language Learning, pages 688-698.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Breaking out of local optima with count transforms and model recombination: A study in grammar induction",
"authors": [
{
"first": "Valentin",
"middle": [
"I."
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1983--1995",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Jurafsky. 2013. Breaking out of local optima with count transforms and model recombina- tion: A study in grammar induction. In Pro- ceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 1983-1995.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Token and type constraints for cross-lingual partof-speech tagging",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013a. To- ken and type constraints for cross-lingual part- of-speech tagging. Transactions of the Associa- tion for Computational Linguistics, 1:1-12.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Target language adaptation of discriminative transfer parsers",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1061--1071",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013b. Target language adaptation of discriminative transfer parsers. In Proceedings of the 2013 Conference of the North Ameri- can Chapter of the Association for Computa- tional Linguistics: Human Language Technolo- gies, pages 1061-1071.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Learning structured prediction models: A large margin approach",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Vassil",
"middle": [],
"last": "Chatalbashev",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "896--903",
"other_ids": {
"DOI": [
"http://doi.acm.org/10.1145/1102351.1102464"
]
},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the 22nd International Confer- ence on Machine Learning, pages 896-903.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Maxmargin parsing",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher Manning. 2004. Max- margin parsing. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Rediscovering annotation projection for cross-lingual parser induction",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1854--1864",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2014. Rediscovering annotation projection for cross-lingual parser induction. In Proceedings of COLING 2014, the 25th Inter- national Conference on Computational Linguis- tics: Technical Papers, pages 1854-1864.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "Treebank translation for cross-lingual parser induction",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "130--140",
"other_ids": {
"DOI": [
"10.3115/v1/W14-1614"
]
},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann, \u017deljko Agi\u0107, and Joakim Nivre. 2014. Treebank translation for cross-lingual parser induction. In Proceedings of the Eigh- teenth Conference on Computational Natural Language Learning, pages 130-140.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "The Galactic Dependencies treebanks: Getting more data by synthesizing new languages",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "491--505",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingquan Wang and Jason Eisner. 2016. The Galactic Dependencies treebanks: Getting more data by synthesizing new languages. Transactions of the Association of Computa- tional Linguistics, 4:491-505. Data available at https://github.com/gdtreebank/ gdtreebank.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Finegrained prediction of syntactic typology: Discovering latent structure with supervised learning",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "147--161",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingquan Wang and Jason Eisner. 2017. Fine- grained prediction of syntactic typology: Dis- covering latent structure with supervised learn- ing. Transactions of the Association for Com- putational Linguistics, 5:147-161.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "Synthetic data made to order: The case of parsing",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1325--1337",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingquan Wang and Jason Eisner. 2018. Syn- thetic data made to order: The case of parsing. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing (EMNLP), pages 1325-1337, Brussels.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Semi-supervised convex training for dependency parsing",
"authors": [
{
"first": "Qin Iris",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Iris Wang, Dale Schuurmans, and Dekang Lin. 2008. Semi-supervised convex training for dependency parsing. In Proceedings of the 46th",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "Annual Meeting of Annual Meeting of the Association for Computational Linguistics and the 2008 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "532--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of Annual Meeting of the Asso- ciation for Computational Linguistics and the 2008 Conference of the North American Chap- ter of the Association for Computational Lin- guistics: Human Language Technologies, pages 532-540.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the First International Conference on Human Language Technology Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky, Grace Ngai, and Richard Wicen- towski. 2001. Inducing multilingual text anal- ysis tools via robust projection across aligned corpora. In Proceedings of the First Interna- tional Conference on Human Language Tech- nology Research.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "Discovery of Linguistic Relations Using Lexical Attraction",
"authors": [
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deniz Yuret. 1998. Discovery of Linguistic Re- lations Using Lexical Attraction. Ph.D. thesis, Massachusetts Institute of Technology.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Hierarchical low-rank tensors for multilingual transfer parsing",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1857--1867",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1213"
]
},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang and Regina Barzilay. 2015. Hierar- chical low-rank tensors for multilingual trans- fer parsing. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Lan- guage Processing, pages 1857-1867.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "A 2-layer typology component. The bias vectors (b W ) are suppressed for readability.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"text": "Computing the neural feature vector \u03c0(u).",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "The architecture of the delexicalized graphbased BIST parser with the introduction of T(u), where s i,j in each cell is the arc score s(\u03c6(a; x, T(u)) from equation",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF3": {
"text": "Effect of \u03b2. The UAS and LAS (y-axis) of the baseline system as a function of \u03b2 (x-axis).",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF5": {
"text": "Evaluation by dependency relation type, showing an equal-weighted average of the 16 development languages. Each vertical bar spans the range from labeled F1 (bottom) to unlabeled F1 (top), with error bars given by bootstrap resampling over the 16 languages. Precision and recall are also indicated.",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF6": {
"text": "The confusion matrix of our parser, as an equal-weight average over 16 development languages.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "In each column, we star the best result as well as all results that are not significantly worse.",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td>UAS</td><td/><td>LAS</td></tr><tr><td/><td/><td colspan=\"2\">System UD</td><td>+GD</td><td>UD</td><td>+GD</td></tr><tr><td/><td/><td>SST</td><td colspan=\"2\">66.22* 65.70</td><td colspan=\"2\">50.40 50.54</td></tr><tr><td/><td/><td colspan=\"3\">Baseline 63.95 67.97</td><td>48.46</td><td>52.78</td></tr><tr><td/><td/><td>H</td><td>64.83</td><td>69.41</td><td>49.41</td><td>53.63</td></tr><tr><td/><td/><td>N</td><td>65.30</td><td>70.06</td><td>49.43</td><td>54.19</td></tr><tr><td/><td/><td>H;N</td><td colspan=\"2\">65.26 69.62</td><td>49.67</td><td>53.68</td></tr><tr><td/><td/><td>H+N</td><td colspan=\"4\">67.34* 70.65* 52.02* 55.18*</td></tr><tr><td>oracle</td><td>features</td><td>T D T W</td><td>65.94 64.84</td><td colspan=\"2\">70.01* 49.77 69.75 49.30</td><td>53.43 53.79</td></tr><tr><td colspan=\"7\">Table 1: Average parsing results over 16 languages,</td></tr><tr><td colspan=\"7\">computed by 5-fold cross-validation. We compare</td></tr><tr><td colspan=\"7\">training on real languages only (the UD column) ver-</td></tr><tr><td colspan=\"7\">sus augmenting with synthetic languages at \u03b2 = 0.2</td></tr><tr><td colspan=\"7\">(the +GD column). Baseline is the ablated system that</td></tr><tr><td colspan=\"7\">omits T(u) ( \u00a79.2). SST is the single-source transfer</td></tr><tr><td colspan=\"7\">approach ( \u00a75.2). H and N use only hand-engineered</td></tr><tr><td colspan=\"7\">features or neural features, while H;N defines \u03c0(u)</td></tr><tr><td colspan=\"7\">to concatenate both ( \u00a76.1) and H+N is the product-of-</td></tr><tr><td colspan=\"7\">experts model ( \u00a77). T D and T W that incorporate oracle</td></tr><tr><td colspan=\"7\">knowledge of the target-language syntax ( \u00a79.4). 
For</td></tr><tr><td colspan=\"7\">each comparison between UD and +GD, we boldface</td></tr><tr><td colspan=\"7\">the better (higher) result, or both if they are not signifi-</td></tr><tr><td colspan=\"7\">cantly different (paired permutation test over languages</td></tr><tr><td colspan=\"3\">with p &lt; 0.05).</td><td/><td/><td/></tr></table>",
"html": null
},
"TABREF2": {
"text": ").",
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>nmod punct case det nsubj root amod dobj advmod conj cc mark aux acl advcl compound cop nummod name xcomp OTHERS NONE Predicted relations</td><td/></tr><tr><td>Gold relations</td><td>OTHERS xcomp name nummod cop compound advcl acl aux mark cc conj advmod dobj amod root nmod punct case det nsubj</td><td>0.0 0.5</td></tr></table>",
"html": null
}
}
}
}