{
"paper_id": "K19-1029",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:05:15.241591Z"
},
"title": "Low-Resource Parsing with Crosslingual Contextualized Representations",
"authors": [
{
"first": "Phoebe",
"middle": [],
"last": "Mulcaire",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Allen Institute for Artificial Intelligence",
"location": {
"settlement": "Seattle",
"region": "WA",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Despite advances in dependency parsing, languages with small treebanks still present challenges. We assess recent approaches to multilingual contextual word representations (CWRs), and compare them for crosslingual transfer from a language with a large treebank to a language with a small or nonexistent treebank, by sharing parameters between languages in the parser itself. We experiment with a diverse selection of languages in both simulated and truly low-resource scenarios, and show that multilingual CWRs greatly facilitate low-resource dependency parsing even without crosslingual supervision such as dictionaries or parallel text. Furthermore, we examine the non-contextual part of the learned language models (which we call a \"decontextual probe\") to demonstrate that polyglot language models better encode crosslingual lexical correspondence compared to aligned monolingual language models. This analysis provides further evidence that polyglot training is an effective approach to crosslingual transfer.",
"pdf_parse": {
"paper_id": "K19-1029",
"_pdf_hash": "",
"abstract": [
{
"text": "Despite advances in dependency parsing, languages with small treebanks still present challenges. We assess recent approaches to multilingual contextual word representations (CWRs), and compare them for crosslingual transfer from a language with a large treebank to a language with a small or nonexistent treebank, by sharing parameters between languages in the parser itself. We experiment with a diverse selection of languages in both simulated and truly low-resource scenarios, and show that multilingual CWRs greatly facilitate low-resource dependency parsing even without crosslingual supervision such as dictionaries or parallel text. Furthermore, we examine the non-contextual part of the learned language models (which we call a \"decontextual probe\") to demonstrate that polyglot language models better encode crosslingual lexical correspondence compared to aligned monolingual language models. This analysis provides further evidence that polyglot training is an effective approach to crosslingual transfer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing has achieved new states of the art using distributed word representations in neural networks, trained with large amounts of annotated data (Ma et al., 2018; Che et al., 2018). However, many languages are low-resource, with small or no treebanks, which presents a severe challenge in developing accurate parsing systems in those languages. One way to address this problem is with a crosslingual solution that makes use of a language with a large treebank and raw text in both languages. The hypothesis behind this approach is that, although each language is unique, different languages manifest similar characteristics (e.g., morphological, lexical, syntactic) which can be exploited by training a single polyglot model with data from multiple languages (Ammar, 2016). (\u21e4 Equal contribution. Random order.)",
"cite_spans": [
{
"start": 158,
"end": 174,
"text": "Ma et al., 2018;",
"ref_id": "BIBREF30"
},
{
"start": 175,
"end": 192,
"text": "Che et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 147,
"end": 160,
"text": "(Ammar, 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent work has extended contextual word representations (CWRs) multilingually either by training a polyglot language model (LM) on a mixture of data from multiple languages (joint training approach; Mulcaire et al., 2019; Lample and Conneau, 2019) or by aligning multiple monolingual language models crosslingually (retrofitting approach; Schuster et al., 2019; Aldarmaki and Diab, 2019) . These multilingual representations have been shown to facilitate crosslingual transfer on several tasks, including Universal Dependencies parsing and natural language inference. In this work, we assess these two types of methods by using them for low-resource dependency parsing, and discover that the joint training approach substantially outperforms the retrofitting approach. We further apply multilingual CWRs produced by the joint training approach to diverse languages, and show that it is still effective in transfer between distant languages, though we find that phylogenetically related source languages are generally more helpful.",
"cite_spans": [
{
"start": 200,
"end": 222,
"text": "Mulcaire et al., 2019;",
"ref_id": "BIBREF34"
},
{
"start": 223,
"end": 248,
"text": "Lample and Conneau, 2019)",
"ref_id": "BIBREF24"
},
{
"start": 340,
"end": 362,
"text": "Schuster et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 363,
"end": 388,
"text": "Aldarmaki and Diab, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We hypothesize that joint polyglot training is more successful than retrofitting because it induces a degree of lexical correspondence between languages that the linear transformation used in retrofitting methods cannot capture. To test this hypothesis, we design a decontextual probe. We decontextualize CWRs into non-contextual word vectors that retain much of CWRs' task-performance benefit, and evaluate the crosslingual transferability of language models via word translation. In our decontextualization framework, we use a single LSTM cell without recurrence to obtain a context-independent vector, thereby allowing for a direct probe into the LSTM networks independent of a particular corpus. We show that decontextualized vectors from the joint training approach yield representations that score higher on a word translation task than the retrofitting approach or word type vectors such as fastText (Bojanowski et al., 2017). This finding provides evidence that polyglot language models encode crosslingual similarity, specifically crosslingual lexical correspondence, that a linear alignment between monolingual language models does not.",
"cite_spans": [
{
"start": 907,
"end": 932,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We examine crosslingual solutions to low-resource dependency parsing, which make crucial use of multilingual CWRs. All models are implemented in AllenNLP, version 0.7.2, and the hyperparameters and training details are given in the appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "2"
},
{
"text": "Prior methods to produce multilingual contextual word representations (CWRs) can be categorized into two major classes, which we call joint training and retrofitting. 1 The joint training approach trains a single polyglot language model (LM) on a mixture of texts in multiple languages (Mulcaire et al., 2019; Lample and Conneau, 2019; Devlin et al., 2019), 2 while the retrofitting approach trains separate LMs on each language and aligns the learned representations later (Schuster et al., 2019; Aldarmaki and Diab, 2019). We compare example approaches from these two classes using the same LM training data, and discover that the joint training approach generally yields better performance in low-resource dependency parsing, even without crosslingual supervision.",
"cite_spans": [
{
"start": 286,
"end": 309,
"text": "(Mulcaire et al., 2019;",
"ref_id": "BIBREF34"
},
{
"start": 310,
"end": 335,
"text": "Lample and Conneau, 2019;",
"ref_id": "BIBREF24"
},
{
"start": 336,
"end": 356,
"text": "Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 475,
"end": 498,
"text": "(Schuster et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 499,
"end": 524,
"text": "Aldarmaki and Diab, 2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
{
"text": "Retrofitting Approach Following Schuster et al. (2019), we first train a bidirectional LM with two-layer LSTMs on top of character CNNs for each language (ELMo, Peters et al., 2018), and then align the monolingual LMs across languages. Denote the hidden state in the jth layer for word i in context c by h^(j)_{i,c}. We use a trainable weighted average of the three layers (character CNN and two LSTM layers) to compute the contextual representation for the word: e_{i,c} = \u03a3_{j=0}^{2} \u03bb_j h^(j)_{i,c}. In the first step, we compute an \"anchor\" h^(j)_i for each word by averaging h^(j)_{i,c} over all occurrences in an LM corpus. We then apply a standard dictionary-based technique to create multilingual word embeddings (Conneau et al., 2018). In particular, suppose that we have a word-translation dictionary from source language s to target language t. Let H^(j)_s, H^(j)_t be matrices whose columns are the anchors in the jth layer for the source and corresponding target words in the dictionary. For each layer j, find the linear transformation W^*(j) such that W^*(j) = argmin_W ||W H^(j)_s - H^(j)_t||_F. The linear transformations are then used to map the LM hidden states for the source language to the target LM space. Specifically, contextual representations for the source and target languages are computed by \u03a3_{j=0}^{2} \u03bb_j W^*(j) h^(j)_{i,c} and \u03a3_{j=0}^{2} \u03bb_j h^(j)_{i,c} respectively. We use publicly available dictionaries from Conneau et al. (2018) and align all languages to the English LM space, again following Schuster et al. (2019).",
"cite_spans": [
{
"start": 152,
"end": 179,
"text": "(ELMo, Peters et al., 2018)",
"ref_id": null
},
{
"start": 131,
"end": 152,
"text": "Conneau et al., 2018)",
"ref_id": "BIBREF9"
},
{
"start": 150,
"end": 172,
"text": "Schuster et al. (2019)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
{
"text": "1 This term was originally used by Faruqui et al. (2015) to describe updates to word vectors, after estimating them from corpora, using semantic lexicons. We generalize it to capture the notion of a separate update to fit something other than the original data, applied after conventional training.",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "Faruqui et al. (2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
{
"text": "2 Multilingual BERT is documented in https://github.com/google-research/bert/blob/master/multilingual.md.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
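The per-layer mapping W^*(j) = argmin_W ||W H_s - H_t||_F described above is an ordinary least-squares problem. A minimal sketch with numpy, not the authors' code; the names `fit_alignment`, `anchors_src`, and `anchors_tgt` are illustrative assumptions:

```python
import numpy as np

def fit_alignment(anchors_src, anchors_tgt):
    """Solve argmin_W ||W H_s - H_t||_F for one LM layer.

    anchors_src, anchors_tgt: (n_pairs, dim) arrays whose rows are the
    anchor vectors of source words and their dictionary translations.
    Returns W (dim, dim) such that W @ h maps a source-language hidden
    state into the target LM space.
    """
    # lstsq solves min_X ||anchors_src @ X - anchors_tgt||_F, so X = W.T.
    W_T, *_ = np.linalg.lstsq(anchors_src, anchors_tgt, rcond=None)
    return W_T.T
```

Note that common alignment implementations additionally constrain W to be orthogonal (a Procrustes solution via SVD); the unconstrained least-squares form above matches the argmin exactly as written in the text.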
{
"text": "Joint Training Approach Another approach to multilingual CWRs is to train a single LM on multiple languages (Tsvetkov et al., 2016; Ragni et al., 2016; \u00d6stling and Tiedemann, 2017). We train a single bidirectional LM with character CNNs and two-layer LSTMs on multiple languages (Rosita, Mulcaire et al., 2019). We then use the polyglot LM to provide contextual representations. Similarly to the retrofitting approach, we represent word i in context c as a trainable weighted average of the hidden states in the trained polyglot LM: \u03a3_{j=0}^{2} \u03bb_j h^(j)_{i,c}. In contrast to retrofitting, crosslinguality is learned implicitly by sharing all network parameters during LM training; no crosslingual dictionaries are used.",
"cite_spans": [
{
"start": 108,
"end": 131,
"text": "(Tsvetkov et al., 2016;",
"ref_id": "BIBREF48"
},
{
"start": 132,
"end": 151,
"text": "Ragni et al., 2016;",
"ref_id": "BIBREF41"
},
{
"start": 152,
"end": 180,
"text": "\u00d6stling and Tiedemann, 2017)",
"ref_id": null
},
{
"start": 288,
"end": 310,
"text": "Mulcaire et al., 2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
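The trainable weighted average \u03a3_j \u03bb_j h^(j) used by both approaches (often called a scalar mix) can be sketched as follows. This is an illustrative numpy sketch assuming softmax-normalized layer scalars as in ELMo, not the authors' implementation:

```python
import numpy as np

def scalar_mix(layer_states, scalars):
    """Combine LM layer outputs h^(0..2) into one word representation.

    layer_states: (num_layers, dim) array, one row per layer
                  (character CNN, LSTM 1, LSTM 2).
    scalars: (num_layers,) trainable unnormalized weights.
    """
    w = np.exp(scalars - np.max(scalars))
    w /= w.sum()                 # softmax -> lambda_j
    return w @ layer_states      # sum_j lambda_j * h^(j)_{i,c}
```

The scalars (and, in ELMo, a global scale factor) are learned jointly with the downstream parser, letting the task choose how much weight each layer receives.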
{
"text": "Refinement after Joint Training It is possible to combine the two approaches above; the alignment procedure used in the retrofitting approach can serve as a refinement step on top of an already-polyglot language model. We will see only a limited gain in parsing performance from this refinement in our experiments, suggesting that polyglot LMs are already producing high-quality multilingual CWRs even without crosslingual dictionary supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
{
"text": "FastText Baseline We also compare the multilingual CWRs to a subword-based, non-contextual word embedding baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
{
"text": "We train 300-dimensional word vectors on the same LM data using the fastText method (Bojanowski et al., 2017), and use the same bilingual dictionaries to align them (Conneau et al., 2018).",
"cite_spans": [
{
"start": 83,
"end": 108,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 165,
"end": 187,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual CWRs",
"sec_num": "2.1"
},
{
"text": "We train polyglot parsers for multiple languages on top of multilingual CWRs. All parser parameters are shared between the source and target languages. Prior work suggests that sharing parameters between languages can alleviate the low-resource problem in syntactic parsing, but those experiments are limited to (relatively similar) European languages. Mulcaire et al. (2019) also include experiments with dependency parsing using polyglot contextual representations between two language pairs (English/Chinese and English/Arabic), but focus on high-resource tasks. Here we explore a wider range of languages, and analyze the particular efficacy of a crosslingual approach to dependency parsing in a low-resource setting.",
"cite_spans": [
{
"start": 341,
"end": 363,
"text": "Mulcaire et al. (2019)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Parsers",
"sec_num": "2.2"
},
{
"text": "We use a strong graph-based dependency parser with BiLSTM and biaffine attention, which is also used in related work (Schuster et al., 2019; Mulcaire et al., 2019). Crucially, our parser takes only word representations as input. Universal parts of speech have been shown useful for low-resource dependency parsing (Duong et al., 2015; Ahmad et al., 2019), but many realistic low-resource scenarios lack reliable part-of-speech taggers; here, we do not use parts of speech as input, and thus avoid the error-prone part-of-speech tagging pipeline. For the fastText baseline, word embeddings are not updated during training, to preserve crosslingual alignment.",
"cite_spans": [
{
"start": 118,
"end": 141,
"text": "(Schuster et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 142,
"end": 164,
"text": "Mulcaire et al., 2019)",
"ref_id": "BIBREF34"
},
{
"start": 316,
"end": 336,
"text": "(Duong et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 337,
"end": 356,
"text": "Ahmad et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Parsers",
"sec_num": "2.2"
},
{
"text": "We first conduct a set of experiments to assess the efficacy of multilingual CWRs for low-resource dependency parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "Following prior work on low-resource dependency parsing and crosslingual transfer (Zhang and Barzilay, 2015; Guo et al., 2015; Schuster et al., 2019) , we conduct multi-source experiments on six languages (German, Spanish, French, Italian, Portuguese, and Swedish) from Google universal dependency treebank version 2.0 (McDonald et al., 2013). 6 We train language models on the six languages and English to produce multilingual CWRs. For each tested language, we train a polyglot parser with the multilingual CWRs on the five other languages and English, and apply the parser to the test data for the target language. Importantly, the parsing annotation scheme is shared among the seven languages. Our results will show that the joint training approach for CWRs substantially outperforms the retrofitting approach.",
"cite_spans": [
{
"start": 82,
"end": 108,
"text": "(Zhang and Barzilay, 2015;",
"ref_id": "BIBREF59"
},
{
"start": 109,
"end": 126,
"text": "Guo et al., 2015;",
"ref_id": "BIBREF19"
},
{
"start": 127,
"end": 149,
"text": "Schuster et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 344,
"end": 345,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-Target Dependency Parsing",
"sec_num": "3.1"
},
{
"text": "The previous experiment compares the joint training and retrofitting approaches in low-resource dependency parsing only for relatively similar languages. In order to study its effectiveness more extensively, we apply it to a more typologically diverse set of languages. We use five pairs of languages for \"low-resource simulations,\" in which we reduce the size of a large treebank, and four languages for \"true low-resource experiments,\" where only small UD treebanks are available, allowing us to compare to other work in the low-resource condition (Table 1). Following de Lhoneux et al. (2018), we selected these language pairs to represent linguistic diversity. For each target language, we produce multilingual CWRs by training a polyglot language model with its related language (e.g., Arabic and Hebrew) as well as English (e.g., Arabic and English). We then train a polyglot dependency parser on each language pair and assess the crosslingual transfer in terms of target parsing accuracy. Each pair of related languages shares features like word order, morphology, or script. For example, Arabic and Hebrew are similar in their rich transfixing morphology, and Dutch and German share most of their word order features. We chose Chinese and Japanese as an example of a language pair which does not share a language family but does share characters.",
"cite_spans": [],
"ref_spans": [
{
"start": 550,
"end": 558,
"text": "(Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Diverse Low-Resource Parsing",
"sec_num": "3.2"
},
{
"text": "We chose Hungarian, Vietnamese, Uyghur, and Kazakh as true low-resource target languages because they had comparatively small amounts of annotated text in the UD corpus (Vietnamese: 1,400 sentences, 20,285 tokens; Hungarian: 910 sentences, 20,166 tokens; Uyghur: 1,656 sentences, 19,262 tokens; Kazakh: 31 sentences, 529 tokens), yet had convenient sources of text for LM pretraining (Zeman et al., 2018). 7 Other small treebanks exist, but in most cases another larger treebank exists for the same language, making domain adaptation a more likely option than crosslingual transfer. Also, recent work (Che et al., 2018) using contextual embeddings was top-ranked for most of these languages in the CoNLL 2018 shared task on UD parsing (Zeman et al., 2018). 8 We use the same Universal Dependencies (UD) treebanks (Nivre et al., 2018) and train/development/test splits as the CoNLL 2018 shared task (Zeman et al., 2018). 9 The annotation scheme is again shared across languages, which facilitates crosslingual transfer. For each triple of two related languages and English, we downsample training and development data to match the language with the smallest treebank size. This allows for fairer comparisons because within each triple, the source language for any parser will have the same amount of training data. We further downsample sentences from the target train/development data to simulate low-resource scenarios. The ratio of training to development data is kept at 5:1 throughout the simulations, and we denote the number of sentences in the training data by |D_\u03c4|. For testing, we use the CoNLL 2018 script on the gold word segmentations. For the truly low-resource languages, we also present results with word segmentations from the system outputs of Che et al. (2018).",
"cite_spans": [
{
"start": 385,
"end": 405,
"text": "(Zeman et al., 2018)",
"ref_id": "BIBREF58"
},
{
"start": 408,
"end": 409,
"text": "7",
"ref_id": null
},
{
"start": 603,
"end": 621,
"text": "(Che et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 736,
"end": 756,
"text": "(Zeman et al., 2018)",
"ref_id": "BIBREF58"
},
{
"start": 759,
"end": 760,
"text": "8",
"ref_id": null
},
{
"start": 900,
"end": 919,
"text": "(Zeman et al., 2018",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Low-Resource Parsing",
"sec_num": "3.2"
},
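The downsampling protocol above (match the smallest treebank, then subsample with a fixed 5:1 train/development ratio) can be sketched as follows; the function and variable names are illustrative, not from the authors' code:

```python
import random

def downsample(train_sents, dev_sents, n_train, seed=0):
    """Simulate a low-resource treebank with |D_tau| = n_train training
    sentences and n_train // 5 development sentences (the 5:1 ratio used
    in the simulations), sampled without replacement.
    """
    rng = random.Random(seed)  # fixed seed -> reproducible splits
    return (rng.sample(train_sents, n_train),
            rng.sample(dev_sents, max(1, n_train // 5)))
```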
{
"text": "In this section we describe the results of the various parsing experiments. Table 2 shows results on zero-target dependency parsing. First, we see that all CWRs greatly improve upon the fastText baseline. The joint training approach (Rosita), which uses no dictionaries, consistently outperforms the dictionary-dependent retrofitting approach (ELMos+Alignment). As discussed in the previous section, we can apply the alignment method to refine the already-polyglot Rosita using dictionaries. However, we observe a relatively limited gain in overall performance (74.5 vs. 73.9 LAS points), suggesting that Rosita (polyglot language model) is already developing useful multilingual CWRs for parsing without crosslingual supervision. Note that the degraded overall performance of our ELMos+Alignment compared to Schuster et al. (2019) may be due to differences in LM training data; both use the dictionaries of Conneau et al. (2018). The absence of a dictionary yields much worse performance (69.2 vs. 73.1), in contrast with the joint training approach of Rosita, which also does not use a dictionary (73.9). We also present results using gold universal parts of speech to compare to previous work in Table 3. We again see Rosita's effectiveness and a marginal benefit from refinement with dictionaries. It should also be noted that the reported results for French, Italian and German in Schuster et al. (2019) outperform all results from our controlled comparison; this may be due to the use of abundant LM training data. Nevertheless, joint training, with or without refinement, performs best on average in both gold and predicted POS settings.",
"cite_spans": [
{
"start": 828,
"end": 850,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 76,
"end": 83,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4"
},
{
"text": "Low-Resource Simulations Figure 1 shows simulated low-resource results. 11 Of greatest interest are the significant improvements over monolingual parsers when adding English or related-language data. This improvement is consistent across languages and suggests that crosslingual transfer is a viable solution for a wide range of languages, even when (as in our case) language-specific tuning or annotated resources like parallel corpora or bilingual dictionaries are not available. See Figure 2 for a visualization of the differences in performance with varying training size. The polyglot advantage is minor when the target language treebank is large, but dramatic in the condition where the target language has only 100 sentences. The fastText approaches consistently underperform the language model approaches, but show the same pattern. In addition, related-language polyglot (\"+rel.\") outperforms English polyglot in most cases in the low-resource condition. The exceptions to this pattern are Italian (whose treebank is of a different genre from the Spanish one), and Japanese and Chinese, which differ significantly in morphology and word order. The CMN/JPN result suggests that such typological features influence the degree of crosslingual transfer more than orthographic properties like shared characters. This result in crosslingual transfer also mirrors the observation from prior work (Gerz et al., 2018) that typological features of the language are predictive of monolingual LM performance. The related-language improvement also vanishes in the full-data condition (Figure 2), implying that the importance of shared linguistic features can be overcome with sufficient annotated data. It is also noteworthy that variations in word order, such as the order of adjective and noun, do not affect performance: Italian, Arabic, and others use a noun-adjective order while English uses an adjective-noun order, but their +ENG and +rel. results are comparable. The Croatian and Russian results are notable because of shared heritage but different scripts. Though Croatian uses the Latin alphabet and Russian uses Cyrillic, transfer between HRV+RUS is clearly more effective than HRV+ENG (82.00 vs. 79.21 LAS points when |D_\u03c4| = 100). This suggests that character-based LMs can implicitly learn to transliterate between related languages with different scripts, even without parallel supervision.",
"cite_spans": [
{
"start": 1396,
"end": 1415,
"text": "(Gerz et al., 2018)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 484,
"end": 492,
"text": "Figure 2",
"ref_id": "FIGREF3"
},
{
"start": 1578,
"end": 1587,
"text": "(Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Diverse Low-Resource Parsing",
"sec_num": "4.2"
},
{
"text": "Truly Low-Resource Languages Finally, we present \"true low-resource\" experiments for four languages in which little UD data is available (see Section 3.2). Consistent with our simulations, we see that training parsers with the target's related language is more effective than with the more distant language, English, in Hungarian, Vietnamese, and Kazakh. It is particularly noteworthy that the Rosita models, which do not use a parallel corpus or dictionary, dramatically improve over the best previously reported results, from Schuster et al. (2019) and from the approach of Rosa and Mare\u010dek (2018) derived from parallel text. This result further corroborates our finding that the joint training approach to multilingual CWRs is more effective than retrofitting monolingual LMs.",
"cite_spans": [
{
"start": 546,
"end": 569,
"text": "Rosa and Mare\u010dek (2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diverse Low-Resource Parsing",
"sec_num": "4.2"
},
{
"text": "We also evaluate the diverse low-resource language pairs using pretrained multilingual BERT (Devlin et al., 2019) as text embeddings (Figure 3). Here, the same language model (multilingual cased BERT, 12 covering 104 languages) is used for all parsers, with the only variation being in the training treebanks provided to each parser. Parsers are trained using the same hyperparameters and data as in Section 3.2. 13 There are two critical differences from our previous experiments: multilingual BERT is trained on much larger amounts of Wikipedia data compared to other LMs used in this work, and the WordPiece vocabulary (Wu et al., 2016) used in the cased multilingual BERT model has been shown to have a distribution skewed toward Latin alphabets (\u00c1cs, 2019). These results are thus not directly comparable to those in Figure 1; nevertheless, it is interesting to see that the results obtained with ELMo-like LMs are comparable to, and in some cases better than, results using a BERT model trained on over a hundred languages. Our results broadly fit with those of Pires et al. (2019), who found that multilingual BERT was useful for zero-shot crosslingual syntactic transfer. In particular, we find nearly no performance benefit from cross-script transfer using BERT in a language pair (English-Japanese) for which they reported poor performance in zero-shot transfer, contrary to our results using Rosita (Section 4.2). (12 Available at https://github.com/google-research/bert/. 13 AllenNLP version 0.9.0 was used for these experiments.)",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
},
{
"start": 416,
"end": 418,
"text": "13",
"ref_id": null
},
{
"start": 625,
"end": 642,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF56"
},
{
"start": 1070,
"end": 1089,
"text": "Pires et al. (2019)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [
{
"start": 135,
"end": 145,
"text": "Figure 3)",
"ref_id": "FIGREF4"
},
{
"start": 825,
"end": 833,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Comparison to Multilingual BERT Embeddings",
"sec_num": "4.3"
},
{
"text": "The previous section showed the success of joint polyglot training for multilingual CWRs over the retrofitting approach. We hypothesize that CWRs from joint training provide useful representations for parsers by inducing nonlinear similarity in the vector spaces of different languages, which we cannot retrieve with a simple alignment of monolingual pretrained language models. In order to test this hypothesis, we conduct a decontextual probe comprising two steps. The decontextualization step effectively distills CWRs into word type vectors, where each unique word is mapped to exactly one embedding regardless of context. We then conduct linear transformation-based word translation on the decontextualized vectors to quantify the degree of crosslingual similarity in the multilingual CWRs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decontextual Probe",
"sec_num": "5"
},
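The word-translation step of the probe can be sketched as cosine nearest-neighbor retrieval scored by precision at 1. An illustrative sketch with assumed names (`src_vecs`, `tgt_vecs`, `gold`); the paper's actual retrieval criterion may differ (e.g., CSLS):

```python
import numpy as np

def precision_at_1(src_vecs, tgt_vecs, gold):
    """Score word translation by cosine nearest neighbor.

    src_vecs: (n_src, d) decontextualized source-word vectors, already
              mapped into the target space by the learned linear map.
    tgt_vecs: (n_tgt, d) target-word vectors.
    gold: dict mapping source row index -> correct target row index.
    """
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    nearest = (src @ tgt.T).argmax(axis=1)  # cosine nearest neighbor
    return float(np.mean([nearest[s] == t for s, t in gold.items()]))
```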
{
"text": "Recall from Section 2 that we produce CWRs from bidirectional LMs with character CNNs and two-layer LSTMs. We propose a method to remove the dependence on context c for the two LSTM layers (the CNN layer is already context-independent by design). During LM training, the hidden states of each layer h_t are computed by the standard LSTM equations below. (Table 5: Context-independent vs. context-dependent performance in English. All embeddings are 512-dimensional and trained on the same English corpus of approximately 50M tokens for fair comparisons. We also concatenate 128-dimensional character LSTM representations with the word vectors in every configuration to ensure all models have character input. UD scores are LAS, and SRL and NER are F1.)",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decontextualization",
"sec_num": "5.1"
},
{
"text": "i t = (W i x t + +U i h t 1 + b i ) f t = (W f x t + U f h t 1 + b f ) c t = tanh (W c x t + U c h t 1 + b c ) o t = (W o x t + U o h t 1 + b o ) c t = f t c t 1 + i t c t h t = o t tanh (c t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decontextualization",
"sec_num": "5.1"
},
{
"text": "We produce contextless vectors from pretrained LMs by removing recursion in the computation (i.e. setting h t 1 and c t 1 to 0):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decontextualization",
"sec_num": "5.1"
},
{
"text": "i t = (W i x t + b i ) f t = (W f x t + b f ) c t = tanh (W c x t + b c ) o t = (W o x t + b o ) c t = i t c t h t = o t tanh (c t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decontextualization",
"sec_num": "5.1"
},
{
"text": "This method is fast to compute, as it does not require recurrent computation and only needs to see each word once. This way, each word is associated with a set of exactly three vectors from the three layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decontextualization",
"sec_num": "5.1"
},
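The decontextualization step can be made concrete with a short sketch. This is an illustrative NumPy implementation (ours, not the authors' released code), using small random matrices as stand-ins for pretrained LM weights; it also checks the key property that dropping the recurrent terms is exactly equivalent to running one LSTM step from a zero initial state.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden/input size (hypothetical; the paper's LMs use 512)

# Random stand-ins for pretrained LM parameters (input, forget, cell, output gates).
W = {g: rng.normal(size=(d, d)) for g in "ifco"}
U = {g: rng.normal(size=(d, d)) for g in "ifco"}
b = {g: rng.normal(size=d) for g in "ifco"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev):
    """One step of the standard LSTM recurrence."""
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])
    c_tilde = np.tanh(W["c"] @ x + U["c"] @ h_prev + b["c"])
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])
    c = f * c_prev + i * c_tilde
    return o * np.tanh(c), c

def decontextualize(x):
    """Contextless vector: drop the recurrent (previous-state) terms."""
    i = sigmoid(W["i"] @ x + b["i"])
    c_tilde = np.tanh(W["c"] @ x + b["c"])
    o = sigmoid(W["o"] @ x + b["o"])
    return o * np.tanh(i * c_tilde)

x = rng.normal(size=d)  # a word's context-independent input representation
h_full, _ = lstm_step(x, np.zeros(d), np.zeros(d))
assert np.allclose(decontextualize(x), h_full)
```

Because the forget gate multiplies a zero cell state, the contextless computation coincides with a single LSTM step from a zero state, which is why each word needs to be seen only once.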
{
"text": "We perform a brief experiment to find what information is successfully retained by the decontextualized vectors, by using them as inputs to three tasks (in a monolingual English setting, for simplicity). For Universal Dependencies (UD) parsing, semantic role labeling (SRL), and named entity recognition (NER), we used the standard train/development/test splits from UD English EWT (Zeman et al., 2018) and Ontonotes (Pradhan et al., 2013) . Following Mulcaire et al. 2019, we use strong existing neural models for each task: for UD parsing, He et al. (2017) for SRL, and Peters et al. (2017) for NER. Table 5 compares the decontextualized vectors with the original CWRs (ELMo) and the conventional word type vectors, GloVe (Pennington et al., 2014) and fastText (Bojanowski et al., 2017) . In all three tasks, the decontextualized vectors substantially improve over fastText and GloVe vectors, and perform nearly on par with contextual Table 6 : Crosslingual alignment results (precision at 1) from decontextual probe. Layers 0, 1, and 2 denote the character CNN, first LSTM, and second LSTM layers in the language models respectively.",
"cite_spans": [
{
"start": 382,
"end": 402,
"text": "(Zeman et al., 2018)",
"ref_id": "BIBREF58"
},
{
"start": 417,
"end": 439,
"text": "(Pradhan et al., 2013)",
"ref_id": "BIBREF40"
},
{
"start": 542,
"end": 558,
"text": "He et al. (2017)",
"ref_id": "BIBREF21"
},
{
"start": 563,
"end": 592,
"text": "SRL, and Peters et al. (2017)",
"ref_id": null
},
{
"start": 718,
"end": 749,
"text": "GloVe (Pennington et al., 2014)",
"ref_id": null
},
{
"start": 763,
"end": 788,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 602,
"end": 609,
"text": "Table 5",
"ref_id": null
},
{
"start": 937,
"end": 944,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of decontextualized vectors",
"sec_num": null
},
{
"text": "ELMo. This suggests that while part of the advantage of CWRs is in the incorporation of context, they also benefit from rich context-independent representations present in deeper networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of decontextualized vectors",
"sec_num": null
},
{
"text": "Given the decontextualized vectors from each layer of the bidirectional language models, we can measure the crosslingual lexical correspondence in the multilingual CWRs by performing word translation. Concretely, suppose that we have training and evaluation word translation pairs from the source to the target language. Using the same word alignment objective discussed as in Section 2.1, we find a linear transform by aligning the decontextualized vectors for the training source-target word pairs. Then, we apply this linear transform to the decontextualized vector for each source word in the evaluation pairs. The closest target vector is found using the cross-domain similarity local scaling (CSLS) measure (Conneau et al., 2018) , which is designed to remedy the hubness problem (where a few \"hub\" points are nearest neighbors to many other points each) in word translation by normalizing the cosine similarity according to the degree of hubness.",
"cite_spans": [
{
"start": 713,
"end": 735,
"text": "(Conneau et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Translation Test",
"sec_num": "5.2"
},
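CSLS retrieval can be sketched in a few lines of NumPy. This is our illustrative implementation (not the authors' code): each pairwise cosine similarity is discounted by the mean similarity of the source and target vectors to their k nearest cross-domain neighbors, so "hub" vectors that are close to everything are penalized.

```python
import numpy as np

def csls(src, tgt, k=2):
    """CSLS scores (Conneau et al., 2018): cosine similarity normalized by
    each vector's mean similarity to its k nearest cross-domain neighbors."""
    s = src / np.linalg.norm(src, axis=1, keepdims=True)
    t = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sims = s @ t.T                                      # pairwise cosine similarities
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # hubness of each source vector
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # hubness of each target vector
    return 2 * sims - r_src[:, None] - r_tgt[None, :]

# Toy check with a permuted orthonormal basis: each source vector should
# retrieve its exact counterpart under argmax-CSLS retrieval.
src = np.eye(4)[[2, 0, 3, 1]]
tgt = np.eye(4)
assert csls(src, tgt).argmax(axis=1).tolist() == [2, 0, 3, 1]
```

In practice k is typically set to around 10; the toy value here just keeps the example small.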
{
"text": "We again take the dictionaries from Conneau et al. (2018) with the given train/test split, and always use English as the target language. For each language, we take all words that appear three times or more in our LM training data and compute decontextualized vectors for them. Word translation is evaluated by choosing the closest vector among the English decontextualized vectors.",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "Conneau et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Translation Test",
"sec_num": "5.2"
},
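The supervised linear transform itself has a closed-form solution via orthogonal Procrustes, as in the alignment objective above. The sketch below is ours, with synthetic stand-ins for decontextualized vectors in place of real dictionary pairs; in this noiseless construction the alignment recovers a planted rotation exactly.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal map W minimizing ||X W^T - Y||_F over orthogonal W;
    the solution is U V^T from the SVD of Y^T X."""
    U, _, Vt = np.linalg.svd(Y.T @ X)
    return U @ Vt

# Synthetic stand-ins for decontextualized training pairs: the target
# vectors are an exact rotation of the source vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                    # source-language vectors
R = np.linalg.qr(rng.normal(size=(6, 6)))[0]    # hidden "true" orthogonal map
Y = X @ R.T                                     # target-language vectors
W = procrustes_align(X, Y)
assert np.allclose(W, R)        # exact recovery in the noiseless case
assert np.allclose(X @ W.T, Y)  # mapped source vectors match the targets
```

With real embeddings the fit is of course approximate, which is precisely what the precision-at-1 numbers in Table 6 quantify.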
{
"text": "We present word translation results from our decontextual probe in Table 6 . We see that the first LSTM layer generally achieves the best crosslingual alignment both in ELMos and Rosita. This finding mirrors recent studies on layerwise transferability; representations from the first LSTM layer in a language model are most transferable across a range of tasks (Liu et al., 2019) . Our decontextual probe demonstrates that the first LSTM layer learns the most generalizable representations not only across tasks but also across languages. In all six languages, Rosita (joint LM training approach) outperforms ELMos (retrofitting approach) and the fastText vectors. This shows that for the polyglot (jointly trained) LMs, there is a preexisting similarity between languages' vector spaces beyond what a linear transform provides. The resulting language-agnostic representations lead to polyglot training's success in lowresource dependency parsing.",
"cite_spans": [
{
"start": 361,
"end": 379,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "In addition to the work mentioned above, much previous work has proposed techniques to transfer knowledge from a high-resource to a lowresource language for dependency parsing. Many of these methods use an essentially (either lexicalized or delexicalized) joint polyglot training setup (e.g., McDonald et al., 2011; Cohen et al., 2011; Duong et al., 2015; Guo et al., 2016; Vilares et al., 2016; Falenska and \u00c7 etinoglu, 2017 as well as many of the CoNLL 2017/2018 shared task participants: Lim 2018). Some use typological information to facilitate crosslingual transfer (e.g., Naseem et al., 2012; Zhang and Barzilay, 2015; Wang and Eisner, 2016; Rasooli and Collins, 2017; . Others use bitext (Zeman et al., 2018) , manually-specified rules (Naseem et al., 2012) , or surface statistics from gold universal part of speech (Wang and Eisner, 2018a,b) to map the source to target. The methods examined in this work to produce multilingual CWRs do not rely on such external information about the languages, and instead use relatively abundant LM data to learn crosslinguality that abstracts away from typological divergence.",
"cite_spans": [
{
"start": 293,
"end": 315,
"text": "McDonald et al., 2011;",
"ref_id": "BIBREF32"
},
{
"start": 316,
"end": 335,
"text": "Cohen et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 336,
"end": 355,
"text": "Duong et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 356,
"end": 373,
"text": "Guo et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 374,
"end": 395,
"text": "Vilares et al., 2016;",
"ref_id": "BIBREF50"
},
{
"start": 396,
"end": 425,
"text": "Falenska and \u00c7 etinoglu, 2017",
"ref_id": "BIBREF15"
},
{
"start": 491,
"end": 494,
"text": "Lim",
"ref_id": null
},
{
"start": 578,
"end": 598,
"text": "Naseem et al., 2012;",
"ref_id": "BIBREF35"
},
{
"start": 599,
"end": 624,
"text": "Zhang and Barzilay, 2015;",
"ref_id": "BIBREF59"
},
{
"start": 625,
"end": 647,
"text": "Wang and Eisner, 2016;",
"ref_id": "BIBREF53"
},
{
"start": 648,
"end": 674,
"text": "Rasooli and Collins, 2017;",
"ref_id": "BIBREF42"
},
{
"start": 695,
"end": 715,
"text": "(Zeman et al., 2018)",
"ref_id": "BIBREF58"
},
{
"start": 743,
"end": 764,
"text": "(Naseem et al., 2012)",
"ref_id": "BIBREF35"
},
{
"start": 824,
"end": 850,
"text": "(Wang and Eisner, 2018a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Further Related Work",
"sec_num": "6"
},
{
"text": "Recent work has developed several probing methods for (monolingual) contextual representations (Liu et al., 2019; Hewitt and Manning, 2019; Tenney et al., 2019) . Wada and Iwata (2018) showed that the (contextless) input and output word vectors in a polyglot word-based language model manifest a certain level of lexical correspondence between languages. Our decontextual probe demonstrated that the internal layers of polyglot language models capture crosslinguality and produce useful multilingual CWRs for downstream low-resource dependency parsing.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "(Liu et al., 2019;",
"ref_id": "BIBREF29"
},
{
"start": 114,
"end": 139,
"text": "Hewitt and Manning, 2019;",
"ref_id": "BIBREF22"
},
{
"start": 140,
"end": 160,
"text": "Tenney et al., 2019)",
"ref_id": "BIBREF47"
},
{
"start": 163,
"end": 184,
"text": "Wada and Iwata (2018)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Further Related Work",
"sec_num": "6"
},
{
"text": "We assessed recent approaches to multilingual contextual word representations, and compared them in the context of low-resource dependency parsing. Our parsing results illustrate that a joint training approach for polyglot language models outperforms a retrofitting approach of aligning monolingual language models. Our decontextual probe showed that jointly trained LMs learn a better crosslingual lexical correspondence than the one produced by aligning monolingual language models or word type vectors. Our results provide a strong basis for multilingual representation learning and for further study of crosslingual transfer in a low-resource setting beyond dependency parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Schuster et al. (2019) only used the first LSTM layer, but we found a performance benefit from using all layers in preliminary results.4 Conneau et al. (2018) developed an unsupervised alignment technique that does not require a dictionary. We found that their unsupervised alignment yielded substantially degraded performance in downstream parsing in line with the findings ofSchuster et al. (2019).5 https://github.com/facebookresearch/ MUSE#ground-truth-bilingual-dictionaries",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://github.com/ryanmcd/uni-dep-tb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The one exception is Uyghur where we only have 3M words in the raw LM data fromZeman et al. (2018).8 In Kazakh,Che et al. (2018) did not use CWRs due to the extremely small treebank size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See Appendix for a list of UD treebanks used.10 System outputs for all shared task systems are available at https://lindat.mff.cuni.cz/repository/ xmlui/handle/11234/1-2885",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank Nikolaos Pappas and Tal Schuster as well as the anonymous reviewers for their helpful feedback. This research was funded in part by NSF grant IIS-1562364, a Google research award to NAS, the Funai Overseas Scholarship to JK, and the NVIDIA Corporation through the donation of a GeForce GPU. Jeffrey Pennington, Richard Socher, and Christo-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Exploring BERT's vocabulary",
"authors": [
{
"first": "",
"middle": [],
"last": "Judit\u00e1cs",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judit\u00c1cs. 2019. Exploring BERT's vocabulary.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing",
"authors": [
{
"first": "Wasi",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1253"
]
},
"num": null,
"urls": [],
"raw_text": "Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order dif- ferences: A case study on dependency parsing. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextaware cross-lingual mapping",
"authors": [
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanan Aldarmaki and Mona Diab. 2019. Context- aware cross-lingual mapping. In Proc. of NAACL- HLT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards a Universal Analyzer of Natural Languages",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar. 2016. Towards a Universal Analyzer of Natural Languages. Ph.D. thesis, Carnegie Mel- lon University.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "One billion word benchmark for measuring progress in statistical language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1312.3005"
]
},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2013. One billion word benchmark for mea- suring progress in statistical language modeling. arXiv:1312.3005.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised structure prediction with nonparallel multilingual guidance",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Shay B Cohen",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Das",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B Cohen, Dipanjan Das, and Noah A. Smith. 2011. Unsupervised structure prediction with non- parallel multilingual guidance. In Proc. of EMNLP.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2018. Word translation without parallel data. In Proc. of ICLR.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep biaffine attention for neural dependency parsing",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat and Christopher Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proc. of ICLR.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Stanford's graph-based neural dependency parser at the conll 2017 shared task",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the conll 2017 shared task. In Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adaptive subgradient methods for online learning and stochastic optimization",
"authors": [
{
"first": "John",
"middle": [
"C"
],
"last": "Duchi",
"suffix": ""
},
{
"first": "Elad",
"middle": [],
"last": "Hazan",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C. Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Trevor Cohn, Steven Bird, and Paul Cook. 2015. Low resource dependency parsing: Cross-lingual parameter sharing in a neural network parser. In Proc. of ACL-IJCNLP.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Lexicalized vs. delexicalized parsing in low-resource scenarios",
"authors": [
{
"first": "Agnieszka",
"middle": [],
"last": "Falenska",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Etinoglu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agnieszka Falenska and\u00d6zlem \u00c7 etinoglu. 2017. Lex- icalized vs. delexicalized parsing in low-resource scenarios. In Proc. of IWPT.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1184"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "AllenNLP: A deep semantic natural language processing platform",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Joel",
"middle": [],
"last": "Grus",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Oyvind",
"middle": [],
"last": "Tafjord",
"suffix": ""
},
{
"first": "Pradeep",
"middle": [],
"last": "Dasigi",
"suffix": ""
},
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Schmitz",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NLP-OSS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2501"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- cessing platform. In Proc. of NLP-OSS.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On the relation between linguistic typology and (limitations of) multilingual language modeling",
"authors": [
{
"first": "Daniela",
"middle": [],
"last": "Gerz",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Edoardo",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Ponti",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniela Gerz, Ivan Vuli\u0107, Edoardo Maria Ponti, Roi Reichart, and Anna Korhonen. 2018. On the re- lation between linguistic typology and (limitations of) multilingual language modeling. In Proc. of EMNLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cross-lingual dependency parsing based on distributed representations",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1119"
]
},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual depen- dency parsing based on distributed representations. In Proc. of ACL-IJCNLP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A representation learning framework for multi-source transfer parsing",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learn- ing framework for multi-source transfer parsing. In Proc. of AAAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep semantic role labeling: What works and what's next",
"authors": [
{
"first": "Luheng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luheng He, Kenton Lee, Mike Lewis, and Luke Zettle- moyer. 2017. Deep semantic role labeling: What works and what's next. In Proc. of ACL.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A structural probe for finding syntax in word representations",
"authors": [
{
"first": "John",
"middle": [],
"last": "Hewitt",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word represen- tations. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "ADAM: A Method for Stochastic Optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Lei Ba. 2015. ADAM: A Method for Stochastic Optimization. In Proc. of ICLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. In Proc. of NeurIPS.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Parameter sharing between dependency parsers for related languages",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Miryam De Lhoneux",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Bjerva",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Augenstein",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miryam de Lhoneux, Johannes Bjerva, Isabelle Augen- stein, and Anders S\u00f8gaard. 2018. Parameter sharing between dependency parsers for related languages. In Proc. of EMNLP.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "From raw text to universal dependencies -look, no tags!",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Miryam De Lhoneux",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Eliyahu",
"middle": [],
"last": "Basirat",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3022"
]
},
"num": null,
"urls": [],
"raw_text": "Miryam de Lhoneux, Yan Shao, Ali Basirat, Eliyahu Kiperwasser, Sara Stymne, Yoav Goldberg, and Joakim Nivre. 2017. From raw text to universal de- pendencies -look, no tags! In Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "SEx BiST: A multi-source trainable parser with deep contextualized lexical representations",
"authors": [
{
"first": "Kyungtae",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Cheoneum",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Changki",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Poibeau",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K18-2014"
]
},
"num": null,
"urls": [],
"raw_text": "KyungTae Lim, Cheoneum Park, Changki Lee, and Thierry Poibeau. 2018. SEx BiST: A multi-source trainable parser with deep contextualized lexical representations. In Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A system for multilingual dependency parsing based on bidirectional LSTM feature representations",
"authors": [
{
"first": "Kyungtae",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Poibeau",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3006"
]
},
"num": null,
"urls": [],
"raw_text": "KyungTae Lim and Thierry Poibeau. 2017. A system for multilingual dependency parsing based on bidi- rectional LSTM feature representations. In Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Linguistic knowledge and transferability of contextual representations",
"authors": [
{
"first": "Nelson",
"middle": [
"F"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Belinkov",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew Peters, and Noah A. Smith. 2019. Lin- guistic knowledge and transferability of contextual representations. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Stackpointer networks for dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zecong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jingzhou",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack- pointer networks for dependency parsing. In Proc. of ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Universal dependency annotation for multilingual parsing",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yvonne",
"middle": [],
"last": "Quirmbach-Brundage",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kuzman",
"middle": [],
"last": "Ganchev",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Bedini",
"suffix": ""
},
{
"first": "N\u00faria",
"middle": [],
"last": "Bertomeu Castell\u00f3",
"suffix": ""
},
{
"first": "Jungmee",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuz- man Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T\u00e4ckstr\u00f6m, Claudia Bedini, N\u00faria Bertomeu Castell\u00f3, and Jungmee Lee. 2013. Uni- versal dependency annotation for multilingual pars- ing. In Proc. of ACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multi-source transfer of delexicalized dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proc. of EMNLP.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for ma- chine translation. arXiv:1309.4168.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Polyglot contextual representations improve crosslingual transfer",
"authors": [
{
"first": "Phoebe",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Polyglot contextual representations improve crosslingual transfer. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Selective sharing for multilingual dependency parsing",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proc. of ACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "GloVe: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "pher D. Manning. 2014. GloVe: Global vectors for word representation. In Proc. of EMNLP.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Semi-supervised sequence tagging with bidirectional language models",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Waleed Ammar, Chandra Bhagavat- ula, and Russell Power. 2017. Semi-supervised se- quence tagging with bidirectional language models. In Proc. of ACL.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1493"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proc. of ACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Towards robust linguistic analysis using ontonotes",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Bj\u00f6rkelund",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Uryupina",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhi",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards ro- bust linguistic analysis using ontonotes. In Proc. of CoNLL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Multi-language neural network language models",
"authors": [
{
"first": "Anton",
"middle": [],
"last": "Ragni",
"suffix": ""
},
{
"first": "Edgar",
"middle": [],
"last": "Dakin",
"suffix": ""
},
{
"first": "Xie",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J",
"F"
],
"last": "Gales",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Knill",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of INTERSPEECH",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anton Ragni, Edgar Dakin, Xie Chen, Mark J. F. Gales, and Kate Knill. 2016. Multi-language neural net- work language models. In Proc. of INTERSPEECH.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Cross-lingual syntactic transfer with limited resources. TACL, 5",
"authors": [
{
"first": "Mohammad Sadegh",
"middle": [],
"last": "Rasooli",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00061"
]
},
"num": null,
"urls": [],
"raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2017. Cross-lingual syntactic transfer with limited resources. TACL, 5.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "CUNI x-ling: Parsing under-resourced languages in CoNLL 2018 UD shared task",
"authors": [
{
"first": "Rudolf",
"middle": [],
"last": "Rosa",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mare\u010dek",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rudolf Rosa and David Mare\u010dek. 2018. CUNI x-ling: Parsing under-resourced languages in CoNLL 2018 UD shared task. In Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Cross-lingual alignment of contextual word embeddings, with applications to zeroshot dependency parsing",
"authors": [
{
"first": "Tal",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Ori",
"middle": [],
"last": "Ram",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of con- textual word embeddings, with applications to zero- shot dependency parsing. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "82 treebanks, 34 models: Universal dependency parsing with multi-treebank models",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018. 82 treebanks, 34 models: Universal dependency pars- ing with multi-treebank models. In Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Target language adaptation of discriminative transfer parsers",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "BERT rediscovers the classical NLP pipeline",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Tenney",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proc. of ACL.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Polyglot neural language models: A case study in cross-lingual phonetic representation learning",
"authors": [
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Alan",
"middle": [
"W"
],
"last": "Black",
"suffix": ""
},
{
"first": "Lori",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1161"
]
},
"num": null,
"urls": [],
"raw_text": "Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W Black, Lori Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. In Proc. of NAACL-HLT.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "UParse: the Edinburgh system for the CoNLL 2017 UD shared task",
"authors": [
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Xingxing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3010"
]
},
"num": null,
"urls": [],
"raw_text": "Clara Vania, Xingxing Zhang, and Adam Lopez. 2017. UParse: the Edinburgh system for the CoNLL 2017 UD shared task. In Proc. of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "One model, two languages: training bilingual parsers with harmonized treebanks",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"A"
],
"last": "Alonso",
"suffix": ""
}
],
"year": 2016,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2069"
]
},
"num": null,
"urls": [],
"raw_text": "David Vilares, Carlos G\u00f3mez-Rodr\u00edguez, and Miguel A. Alonso. 2016. One model, two lan- guages: training bilingual parsers with harmonized treebanks. In Proc. of ACL.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Unsupervised cross-lingual word embedding by multilingual neural language models",
"authors": [
{
"first": "Takashi",
"middle": [],
"last": "Wada",
"suffix": ""
},
{
"first": "Tomoharu",
"middle": [],
"last": "Iwata",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.02306"
]
},
"num": null,
"urls": [],
"raw_text": "Takashi Wada and Tomoharu Iwata. 2018. Unsuper- vised cross-lingual word embedding by multilingual neural language models. arXiv:1809.02306.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "IBM research at the CoNLL 2018 shared task on multilingual parsing",
"authors": [
{
"first": "Hui",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Young-Suk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Castelli",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/K18-2009"
]
},
"num": null,
"urls": [],
"raw_text": "Hui Wan, Tahira Naseem, Young-Suk Lee, Vittorio Castelli, and Miguel Ballesteros. 2018. IBM re- search at the CoNLL 2018 shared task on multilin- gual parsing. In Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Uni- versal Dependencies.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "The galactic dependencies treebanks: Getting more data by synthesizing new languages",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingquan Wang and Jason Eisner. 2016. The galactic dependencies treebanks: Getting more data by syn- thesizing new languages. TACL, 4.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Surface statistics of an unknown language indicate how to parse it",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingquan Wang and Jason Eisner. 2018a. Surface statistics of an unknown language indicate how to parse it. TACL, 6.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Synthetic data made to order: The case of parsing",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dingquan Wang and Jason Eisner. 2018b. Synthetic data made to order: The case of parsing. In Proc. of EMNLP.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.08144"
]
},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv:1609.08144.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "Matthew",
"middle": [
"D"
],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D Zeiler. 2012. ADADELTA: an adaptive learning rate method. arxiv:1212.5701.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Haji\u010d",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Potthast",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Jan Haji\u010d, Martin Popel, Martin Pot- thast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multi- lingual parsing from raw text to universal dependen- cies. In Proc. of the CoNLL 2018 Shared Task: Mul- tilingual Parsing from Raw Text to Universal Depen- dencies.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Hierarchical low-rank tensors for multilingual transfer parsing",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1213"
]
},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang and Regina Barzilay. 2015. Hierarchical low-rank tensors for multilingual transfer parsing. In Proc. of EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "11 A table with full details including different size simulations is provided in the appendix.ARA HEB HRV RUS NLD DEU SPA ITA CMN JPN 50",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "LAS for UD parsing results in a simulated low-resource setting where the size of the target language treebank (|D \u2327 |) is set to 100 sentences.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Plots of parsing performance vs. target language treebank size for several example languages. The size 0 target treebank point indicates a parser trained only on the source language treebank but with polyglot representations, allowing transfer to the target test treebank using no target language training trees. See Appendix for results with zero-target-treebank and intermediate size data (|D \u2327 | 2 {0, 100, 500, 1000}) for all languages.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "LAS for UD parsing results in a simulated low-resource setting ((|D \u2327 | = 100) using multilingual BERT embeddings in place of Rosita. Cf.Figure 1.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "and Poibeau (2017); Vania et al. (2017); de Lhoneux et al. (2017); Che et al. (2018); Wan et al. (2018); Smith et al. (2018); Lim et al.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF1": {
"content": "<table/>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"content": "<table/>",
"text": "(HUN, VIE, UIG) andSmith et al. (2018) (KAZ) for a direct comparison to those languages' best previously reported parsers.10",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table><tr><td>Model</td><td>DEU</td><td>SPA</td><td>FRA</td><td>ITA</td><td>POR</td><td>SWE</td><td>AVG</td></tr><tr><td>Schuster et al. (2019) (retrofitting)</td><td colspan=\"6\">61.4 77.5 77.0 77.6 73.9 71.0</td><td>73.1</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>69.2</td></tr><tr><td>fastText + Alignment</td><td colspan=\"6\">45.2 68.5 62.8 58.9 61.1 50.4</td><td>57.8</td></tr><tr><td>ELMos + Alignment (retrofitting)</td><td colspan=\"6\">57.3 75.4 73.7 71.6 75.1 74.2</td><td>71.2</td></tr></table>",
"text": "'s reported results (71.2 vs. 73.1) is likely Schuster et al. (2019) (retrofitting, no dictionaries) 61.7 76.6 76.3 77.1 69.1 54.2 Rosita (joint training, no dictionaries) 58.0 81.8 75.6 74.8 77.1 76.2 73.9 Rosita + Refinement (joint training + retrofitting) 61.7 79.7 75.8 76.0 76.8 76.7 74.5",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"content": "<table><tr><td>Model</td><td>DEU</td><td>SPA</td><td>FRA</td><td>ITA</td><td>POR</td><td>SWE</td><td>AVG</td></tr><tr><td>Zhang and Barzilay (2015)</td><td colspan=\"6\">54.1 68.3 68.8 69.4 72.5 62.5</td><td>65.9</td></tr><tr><td>Guo et al. (2016)</td><td colspan=\"6\">55.9 73.1 71.0 71.2 78.6 69.5</td><td>69.9</td></tr><tr><td>Ammar et al. (2016)</td><td colspan=\"6\">57.1 74.6 73.9 72.5 77.0 68.1</td><td>70.5</td></tr><tr><td>Schuster et al. (2019) (retrofitting)</td><td colspan=\"6\">65.2 80.0 80.8 79.8 82.7 75.4</td><td>77.3</td></tr><tr><td colspan=\"7\">Schuster et al. (2019) (retrofitting, no dictionaries) 64.1 77.8 79.8 79.7 79.1 69.6</td><td>75.0</td></tr><tr><td>Rosita (joint training, no dictionaries)</td><td colspan=\"6\">63.6 83.4 78.9 77.8 83.0 79.6</td><td>77.7</td></tr><tr><td>Rosita + Refinement (joint training + retrofitting)</td><td colspan=\"6\">64.8 82.1 78.7 78.8 84.1 79.1</td><td>77.9</td></tr></table>",
"text": "Zero-target results in LAS. Results reported in prior work (above the line) use an unknown amount of LM training data; all models below the line are limited to approximately 50M words per language.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"content": "<table/>",
"text": "Zero-target results in LAS with gold UPOS.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"content": "<table><tr><td>shows these results.</td></tr><tr><td>Consistent with our simulations, our parsers on</td></tr><tr><td>top of Rosita (multilingual CWRs from the joint</td></tr><tr><td>training approach) substantially outperform the</td></tr><tr><td>parsers with ELMos (monolingual CWRs) in all</td></tr><tr><td>languages, and establish a new state of the art</td></tr></table>",
"text": "",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF7": {
"content": "<table/>",
"text": "LAS (F 1 ) comparison for truly low-resource languages. The gold and pred. columns show results under gold segmentation and predicted segmentation. The languages in the parentheses indicate the languages used in parser training.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF11": {
"content": "<table><tr><td>Vector</td><td>DEU</td><td>SPA</td><td>FRA</td><td>ITA</td><td>POR</td><td>SWE</td></tr><tr><td colspan=\"7\">fastText 31.6 54.8 56.7 50.2 55.5 43.9</td></tr><tr><td/><td/><td/><td>ELMos</td><td/><td/><td/></tr><tr><td colspan=\"2\">Layer 0 19</td><td/><td/><td/><td/><td/></tr></table>",
"text": ".7 41.5 41.1 36.9 44.6 27.5 Layer 1 24.4 46.4 47.6 44.2 48.3 36.3 Layer 2 19.9 40.5 41.9 38.1 42.5 30.9 Rosita Layer 0 37.9 56.6 58.2 57.5 56.6 50.6 Layer 1 40.3 56.3 57.2 58.1 56.5 53.7 Layer 2 38.8 51.1 52.7 53.6 50.7 50.8",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}