{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:42:55.943748Z"
},
"title": "Cross-Lingual Domain Adaptation for Dependency Parsing",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Uppsala University",
"location": {}
},
"email": "sara.stymne@lingfil.uu.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We show how we can adapt parsing to low-resource domains by combining treebanks across languages for a parser model with treebank embeddings. We demonstrate how we can take advantage of in-domain treebanks from other languages, and show that this is especially useful when only out-of-domain treebanks are available for the target language. The method is also extended to low-resource languages by using out-of-domain treebanks from related languages. Two parameter-free methods for applying treebank embeddings at test time are proposed, which give results competitive with tuned methods when applied to Twitter data and transcribed speech. This gives us a method for selecting treebanks and training a parser targeted at any combination of domain and language.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We show how we can adapt parsing to low-resource domains by combining treebanks across languages for a parser model with treebank embeddings. We demonstrate how we can take advantage of in-domain treebanks from other languages, and show that this is especially useful when only out-of-domain treebanks are available for the target language. The method is also extended to low-resource languages by using out-of-domain treebanks from related languages. Two parameter-free methods for applying treebank embeddings at test time are proposed, which give results competitive with tuned methods when applied to Twitter data and transcribed speech. This gives us a method for selecting treebanks and training a parser targeted at any combination of domain and language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advances in dependency parsing have enabled high-quality parsing for a relatively high number of languages. However, satisfactory results are mainly limited to text types for which there are treebanks for a specific language. Even for high-resource languages, treebanks are typically only available for a small number of domains and genres. In this work we show how we can improve parsing for non-canonical text types by using in-domain annotated data from other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on two low-resource text types that stand out in different respects from canonical written texts: Twitter data and (transcribed) spoken data, for which annotated treebanks exist for only a small number of languages. Twitter data often contains non-standard language and specific features such as hashtags and emoticons. Spoken data tends to be more informal than written texts, and contains features such as fillers, restarts, and reparanda. While Twitter can be regarded as a genre, and spoken data as a medium (Lee, 2001), we will follow previous work in NLP and use the term domain to cover both these types of text. 1 The main novelty in this work is that we combine domain adaptation with cross-lingual learning for dependency parsing. We note that treebanks for a specific domain (IND: in-domain) often exist for some languages, and we show that we can take advantage of such data for parsing this domain in other languages. Our main focus is on the case where we want to parse data for a language that has some resources, but none for the domain in question (OOD: out-of-domain). While there is plenty of work both on cross-lingual parsing (Ammar et al., 2016a; Ahmad et al., 2019; Kondratyuk and Straka, 2019) and domain adaptation for parsing (Kim et al., 2016; Sato et al., 2017; Xiuming et al., 2019), there have been, to the best of our knowledge, no attempts to combine these approaches in a uniform framework for dependency parsing.",
"cite_spans": [
{
"start": 632,
"end": 633,
"text": "1",
"ref_id": null
},
{
"start": 1159,
"end": 1180,
"text": "(Ammar et al., 2016a;",
"ref_id": "BIBREF1"
},
{
"start": 1181,
"end": 1200,
"text": "Ahmad et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 1201,
"end": 1229,
"text": "Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF12"
},
{
"start": 1264,
"end": 1282,
"text": "(Kim et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 1283,
"end": 1301,
"text": "Sato et al., 2017;",
"ref_id": "BIBREF22"
},
{
"start": 1302,
"end": 1323,
"text": "Xiuming et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We adapt the parsing framework of Smith et al. (2018a), which incorporates treebank embeddings to represent treebanks, similarly to how language embeddings have been used to represent the languages (Ammar et al., 2016b; de Lhoneux et al., 2017a). In this framework each parsing model is trained on a concatenation of different treebanks, and the representation of each input token includes an embedding representing the treebank from which the token comes. Depending on the mix of treebanks, the treebank embedding can encode aspects such as differences between languages, domains, and annotation style. Parsing with treebank embeddings has previously been applied monolingually (Stymne et al., 2018; Wagner et al., 2020) and cross-lingually for related languages, but without taking domain into account (Smith et al., 2018a; Lim et al., 2018). 2 In this paper, we show that joint training with treebank embeddings can be applied simultaneously across both languages and domains, in effect addressing the task of cross-lingual domain adaptation. It is a simple and efficient method, which does not require expensive pre-processing, pre-training, translation, or similar tasks required by many other cross-lingual approaches, while giving competitive results across many settings. In this work we explore how such a resource-lean method can be applied to cross-domain parsing on its own. We leave to future work an investigation of how the proposed technique interacts with other techniques for domain adaptation, for instance those based on pre-trained contextualized embeddings like BERT (Devlin et al., 2019).",
"cite_spans": [
{
"start": 34,
"end": 54,
"text": "Smith et al. (2018a)",
"ref_id": "BIBREF23"
},
{
"start": 196,
"end": 217,
"text": "(Ammar et al., 2016b;",
"ref_id": "BIBREF3"
},
{
"start": 218,
"end": 243,
"text": "de Lhoneux et al., 2017a)",
"ref_id": "BIBREF5"
},
{
"start": 683,
"end": 704,
"text": "(Stymne et al., 2018;",
"ref_id": "BIBREF25"
},
{
"start": 705,
"end": 725,
"text": "Wagner et al., 2020)",
"ref_id": "BIBREF28"
},
{
"start": 808,
"end": 829,
"text": "(Smith et al., 2018a;",
"ref_id": "BIBREF23"
},
{
"start": 830,
"end": 850,
"text": "Lim et al., 2018), 2",
"ref_id": null
},
{
"start": 1595,
"end": 1616,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "At test time, there is a need to determine which treebank embedding to use, which is straightforward for test data from a treebank used during training. However, when the input sentence comes from a treebank not used during training, the treebank embedding has to be chosen in some other way. One option is to use a proxy treebank (Stymne et al., 2018), i.e. to choose the embedding of one of the treebanks used during training, which can be determined based on development data. Wagner et al. (2020) show that it is often advantageous to instead interpolate the embeddings of the treebanks used for training. They show in a monolingual setting how interpolation weights can be learnt based on sentence similarity. However, their equal-weight baseline performs just as well in the majority of cases, and avoids the need to learn interpolation weights, which would also be less straightforward in the cross-lingual setting. We thus adopt equal-weight interpolation. We also propose the use of an ensembling strategy applied to the trees obtained by using all possible proxy treebank embeddings.",
"cite_spans": [
{
"start": 321,
"end": 342,
"text": "(Stymne et al., 2018)",
"ref_id": "BIBREF25"
},
{
"start": 471,
"end": 491,
"text": "Wagner et al. (2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that using in-domain data from another language is useful when no in-domain data is available for the target language. Using the proposed methods, we can potentially train a parser for any combination of domain and language, as long as that domain has training data in some language, without the need for tuning on target development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Data We mainly use data from the Universal Dependencies (UD) project (Nivre et al., 2020), version 2.4. We put our main focus on languages with a single-domain dependency treebank of either spoken data or Twitter data, including both training and test data, and with additional treebank data for other domains. While several UD treebanks contain some data from these domains mixed with other domains, it is often not easy to identify which domain individual sentences come from. We thus use the three UD single-domain treebanks of spoken data, for French, Norwegian, and Slovenian, which fulfill our requirements. In addition we evaluate our methods on Komi-Zyrian and Naija, which both have spoken test data, but no training data for any domain in UD. For Twitter we use two treebanks from UD, for Italian and code-switching Hindi-English. In addition we use the English Tweebank v2, which is annotated in UD style (Liu et al., 2018). We convert sentences in the English Tweebank with multiple roots to have only one root, which is a UD requirement, by keeping only the first root and joining the other roots to it with the parataxis relation. This happens when a single Tweet contains more than one sentence, and it is the solution adopted in the Italian PoSTWITA treebank.",
"cite_spans": [
{
"start": 69,
"end": 89,
"text": "(Nivre et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 914,
"end": 932,
"text": "(Liu et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "In addition to the in-domain treebanks we use additional treebanks from the same language, when available, or for related languages otherwise. For Komi-Zyrian, a Uralic language, we also use a Russian treebank, since Russian is a contact language which also shares the Cyrillic script, in contrast to the other Uralic treebanks with training data. Table 1 lists the data used for each language. Note that in all cases, the additional data is much larger than the in-domain data, which is typically quite small. For Slovenian SST, no development data was available, so we split off 5% of the training data. In all other cases we use the original splits. While UD treebanks have standard annotation guidelines, there are several inconsistencies between the treebanks used, especially for the rather unusual features of spoken data and Twitter. For instance, see Liu et al. (2018) for a discussion of differences between the English and Italian Twitter treebanks, or the Naija-NSC documentation for known deviations from UD standards. 3 To be able to compare the effect of adding in-domain data, we create a contrastive treebank for each IND language of the same size, counted in the number of tokens. We use data from the treebank(s) marked with italics in Table 1.",
"cite_spans": [
{
"start": 857,
"end": 874,
"text": "Liu et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 1025,
"end": 1026,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1248,
"end": 1255,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "We think the language sample is interesting and covers many aspects. Even though the majority of the languages are Indo-European, they mostly belong to different genera. They range from having hardly any resources, like Komi-Zyrian, to large resources, like English, and cover some interesting special cases, such as code-switching (Hindi-English), a creole language (Naija), and a language with two written varieties (Norwegian).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "Parser We use uuparser 4 (de Lhoneux et al., 2017b), a transition-based dependency parser using the arc-hybrid transition system, with the addition of a swap transition and a static-dynamic oracle to be able to handle non-projectivity. The parser uses a two-layer BiLSTM as a feature extractor, followed by a multi-layer perceptron predicting transitions, in the style of Kiperwasser and Goldberg (2016). Each word, w_i, is represented by the concatenation of a word embedding, e_w(w_i), a character-level embedding, obtained by running a BiLSTM over the characters ch_j (1 \u2264 j \u2264 m) of w_i, where m is the word length in characters, and a treebank embedding, e_tb(t*):",
"cite_spans": [
{
"start": 380,
"end": 411,
"text": "Kiperwasser and Goldberg (2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e_i = [e_w(w_i); BiLSTM(ch_1:m); e_tb(t*)]",
"eq_num": "(1)"
}
],
"section": "Experimental Setup",
"sec_num": "2"
},
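To make Eq. (1) concrete, here is a minimal, hypothetical sketch in plain Python (illustrative only, not taken from the uuparser code) of how the three vectors are concatenated into the token representation:

```python
def token_representation(word_emb, char_bilstm_emb, treebank_emb):
    """Eq. (1): e_i = [e_w(w_i); BiLSTM(ch_1:m); e_tb(t*)].
    The token vector is simply the concatenation of the word embedding,
    the character-BiLSTM output, and the treebank embedding, so its
    dimensionality is the sum of the three input dimensionalities."""
    return list(word_emb) + list(char_bilstm_emb) + list(treebank_emb)

# Toy dimensions: 2-dim word embedding, 1-dim char embedding, 2-dim treebank embedding.
e_i = token_representation([0.1, 0.2], [0.3], [0.4, 0.5])
```

In the real parser these vectors are trainable embedding lookups rather than fixed lists; the sketch only shows how the treebank embedding enters every token representation.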
{
"text": "The treebank embedding represents a treebank, t*, chosen from the set of k treebanks used when training the model. During training, t* is chosen as the treebank to which the current word/sentence belongs. When applying the model, the treebank of the sentence can be used only if the test sentence comes from a treebank that was used during training. In other cases some other method has to be used. In this work we explore the following methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "\u2022 Proxy treebank: when dev data is available, we can try all possible proxy treebanks, i.e. all treebanks used when training the model, and choose the treebank, t*, that performs best on dev data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "\u2022 Interpolation: We interpolate the embeddings from all treebanks used during training by averaging them with equal weights: t* = (1/k) \u2211_{t=1}^{k} e_tb(t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
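A minimal sketch of the equal-weight interpolation, assuming embeddings are represented as plain lists of floats (illustrative only; the parser averages its learned embedding vectors):

```python
def interpolate_treebank_embeddings(embeddings):
    """Average the treebank embeddings of all k training treebanks with
    equal weights 1/k: t* = (1/k) * sum_t e_tb(t). Parameter-free: no
    interpolation weights need to be learned or tuned on dev data."""
    k = len(embeddings)
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / k for i in range(dim)]

# Three toy 4-dimensional treebank embeddings.
t_star = interpolate_treebank_embeddings([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])
```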
{
"text": "\u2022 Ensemble: We run the model with each possible proxy treebank, obtaining k output trees. Then we apply the reparsing technique of Sagae and Lavie (2006), which applies the Chu-Liu-Edmonds (Edmonds, 1967) algorithm, with each arc weighted by the number of trees for which that arc was predicted. 5",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "Sagae and Lavie (2006)",
"ref_id": "BIBREF21"
},
{
"start": 172,
"end": 203,
"text": "Chu-Liu-Edmonds (Edmonds, 1967)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
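As a simplified illustration of the ensembling idea (hypothetical code, not from the paper): count, for every dependent, how many of the k output trees proposed each head, and pick the most-voted head. Note that a per-token majority vote can create cycles; the paper instead runs Chu-Liu-Edmonds over the vote-weighted arc graph, which guarantees a well-formed tree.

```python
from collections import Counter

def vote_heads(trees):
    """trees: k predicted parses of the same sentence, each given as a
    list of head indices (0 = artificial root), one entry per token.
    Returns the per-token majority-vote head."""
    n_tokens = len(trees[0])
    voted = []
    for dep in range(n_tokens):
        # Each arc (head -> dep) gets one vote per tree that predicted it.
        votes = Counter(tree[dep] for tree in trees)
        voted.append(votes.most_common(1)[0][0])
    return voted

# Three parsers agree on tokens 1 and 3, disagree on token 2.
heads = vote_heads([[0, 3, 1], [0, 3, 1], [0, 2, 1]])
```

Replacing the final per-token vote with a maximum spanning arborescence over the vote counts would recover the Sagae and Lavie (2006) reparsing step.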
{
"text": "Note that in all cases we only apply these techniques at test time. Ensembling is heavier, requiring k test runs, followed by an application of the Chu-Liu-Edmonds algorithm. Interpolation and ensembling both have the advantage of being parameter-free, while the proxy treebank method requires dev data. For languages without dev data we also compare our results to the oracle score, where we pick the best proxy treebank based on test performance. We use the default hyperparameters of uuparser, as specified in Smith et al. (2018a). Note that no POS tags are used, since POS tagging in these difficult domains would lead to the same issues as for parsing. In addition, character embeddings compensate for the lack of POS tags to a large extent across several typologically different languages (Smith et al., 2018b), and in order for universal POS tags, the most feasible choice cross-lingually, to be useful for parsing, the tagging quality would have to be prohibitively high (G\u00f3mez-Rodr\u00edguez, 2020). The parser is trained end-to-end on treebank data, without any pre-training. All embeddings are initialized randomly at training time. Each model is trained for 30 epochs, and the best epoch is chosen based on the average development score among the treebanks used at training time.",
"cite_spans": [
{
"start": 479,
"end": 499,
"text": "Smith et al. (2018a)",
"ref_id": "BIBREF23"
},
{
"start": 761,
"end": 782,
"text": "(Smith et al., 2018b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "Evaluation Metrics We use unlabelled and labelled attachment score, UAS and LAS, as evaluation metrics. Our system was optimized based on development UAS scores, since we believe that UAS is a good fit for the case of inconsistent labeling in the treebanks for each target domain. Overall, the test results reflect the trends seen in the development data relatively well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "2"
},
{
"text": "We first present results using different sources of training data, IND or OOD, from the same or another language, choosing the best proxy treebank based on development UAS scores. We use the full set of treebanks from Table 1. 6 For out-of-language OOD data, we use the contrastive datasets sampled from the same languages as the out-of-language IND data.",
"cite_spans": [
{
"start": 228,
"end": 229,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 218,
"end": 225,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Our main interest is the middle part of Table 2, lines 3-5, where we investigate the effect of adding IND data from other languages to in-language OOD data. Adding out-of-language IND data leads to average improvements of 2.1 LAS points and 1.3 UAS points. It always helps for Twitter, and helps in all cases except Norwegian for spoken data. If we instead add an equivalent amount of out-of-language OOD data, we see minor average gains and a performance that is considerably worse than for IND data. Norwegian is an outlier here as well, with good results for OOD data. We leave an investigation of why this is so to future work. These results confirm that our treebank combination strategy is useful.",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 47,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "The two top lines of Table 2 simulate results when no in-language data is available. As expected these scores are considerably lower than when using in-language OOD data, being so poor that these parsers are hardly useful, confirming previous research, e.g. Meechan-Maddon and Vania et al. (2019). In this case there is no clear difference between IND and OOD data. The scores for English and Hindi-English with IND data are closer to the in-language OOD scores, which can be explained by the partial language match between these two treebanks.",
"cite_spans": [
{
"start": 282,
"end": 301,
"text": "Vania et al. (2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 21,
"end": 28,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "As a point of comparison, the bottom part of Table 2 shows the results when data matching both language and domain is available. As expected, it leads to large gains. Table 3: Test scores for models trained on all available in-language OOD data and IND data from the other languages, using different methods for applying it to the target treebank. For all languages, the model trained",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 204,
"end": 211,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "on only the relatively small in-language IND data beats all models trained without it, even though the gap is quite small for French and Slovenian. The gains are especially pronounced for the code-switched Hindi-English and for Norwegian. When in-language IND data is available we see no average gains from adding out-of-language IND data, whereas adding in-language OOD data always helps considerably. We also note that the gap between UAS and LAS gets smaller when the training data fits the test data better, supporting our intuition that out-of-language OOD data helps more with structure than with labels. Next, we focus on our main scenario of interest, where we have in-language OOD data and out-of-language IND data. We use the model from line 5 in Table 2 and also show results for Slovenian without the additional Slavic languages. We investigate how best to apply the model at test time for cases where the treebank, i.e. the combination of language and domain, has not been seen at training time. We compare using a proxy treebank, matching either language or domain, interpolation, and ensembling. Table 3 summarizes the results.",
"cite_spans": [],
"ref_spans": [
{
"start": 752,
"end": 759,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1106,
"end": 1113,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "When choosing a single proxy it is on average 3.3 LAS points and 1.6 UAS points better to use the same language than the same domain, but there is some variation between languages. The interpolation method works well on both metrics, giving the best average LAS scores and competitive UAS scores. Ensembling gives the highest UAS scores by a small margin, but does worse on LAS. We also note that including the related Slavic languages improves parsing for Slovenian considerably, with an LAS gain of 3.4 for the interpolation strategy. Table 3 also shows the best proxy used, matching either domain or language. For language proxies we note some surprises: Norwegian Bokmaal is a better fit than the matching language variety Nynorsk, and the Serbian corpus is better than Slovenian in the Slavic setting. We also note that the ParTUT treebank is often a good proxy. The differences between proxies are typically small, though. The domain proxies seem more straightforward, with Norwegian and English being preferred over the other options. The only small surprise is that Italian was a better fit for English than the partially matching Hindi-English treebank. There could, however, be many reasons for this, such as more similar annotation schemes for Italian and English, or the fact that while there is a partial overlap with English, Hindi is less related to English than Italian is.",
"cite_spans": [],
"ref_spans": [
{
"start": 537,
"end": 544,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "Finally we apply our methods to the two low-resource languages without any in-language training data. Here, we have no development data for choosing a proxy treebank, so the focus is on our two parameter-free methods: interpolation and ensembling. As a point of comparison we give the oracle score of the proxy treebank with the highest UAS score. We compare three models: using only the close OOD languages from Table 1, and adding either all three IND spoken treebanks or the contrastive OOD treebanks. Results are shown in Table 4. Interestingly, adding the small data from the unrelated languages helps somewhat regardless of whether this data is OOD or IND. Adding the IND data does give the overall best scores, though, with the highest UAS scores for Komi-Zyrian and the highest LAS scores for Naija. For our target model, interpolation and ensembling work quite well, often tying with the oracle scores, and typically not falling far behind the oracle. However, in the setting with only related languages, these two methods fall behind the oracle, indicating that they work better with a more diverse mix of training languages and domains. 7 Table 4: Test set scores for languages without any training data, using different training data combinations, with the oracle proxy treebank, interpolation, or ensembling. Our experiments confirm the usefulness of our proposed method of mixing training treebanks and",
"cite_spans": [
{
"start": 1163,
"end": 1164,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 413,
"end": 420,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 527,
"end": 534,
"text": "Table 4",
"ref_id": null
},
{
"start": 1260,
"end": 1267,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "applying the model to new data. Treebank embeddings seem to be capable of encoding aspects of both domain and language. 8 Both interpolation and ensembling have the advantage that they do not require any tuning on development data, which choosing a single proxy does. Interpolation has the further advantage that it requires no extra processing, and seems preferable since it gives the best LAS scores as well as competitive UAS scores.",
"cite_spans": [
{
"start": 120,
"end": 121,
"text": "8",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "In this paper we have shown how we can improve parsing for specific domains by combining data in that domain, but from another language, with in-language out-of-domain data. We show that it is possible to do so using a parsing model with treebank embeddings. We also propose the use of two parameter-free methods for applying treebank embeddings to new data at test time, which give competitive results compared to selecting a proxy treebank based on development data. This indicates that treebank embeddings are able to capture aspects of both text type and language. We also think it is worth noting that, in contrast to much previous work, e.g. Smith et al. (2018a), we see gains for languages which are not closely related.",
"cite_spans": [
{
"start": 649,
"end": 669,
"text": "Smith et al. (2018a)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "In future work we want to apply our methods also to other text types and to explore how the data selection strategies work with other parsing frameworks. We also want to extend the work on weighted interpolation by Wagner et al. (2020) to the cross-lingual case, to be able to combine it with the proposed methods. Another line of work is to investigate how much annotated data is needed in order to see gains of the same size as when adding IND treebanks from other languages.",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "Wagner et al. (2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "In this work we did not take advantage of any type of pre-trained word embeddings. It is likely that either cross-lingual static word embeddings (Ruder et al., 2019) or multilingual dynamic word embeddings, like multilingual BERT (Devlin et al., 2019) could improve the results overall. Using either of these resources would also allow us to utilize IND in-language unlabeled data in the pre-training step, which might potentially lead to improvements. We do believe that seeing labelled data, with arc types that are specific to the text types in question, as we do in this work, is also useful. It is an open question, which we leave to future work, how pre-training would interact with our proposed method.",
"cite_spans": [
{
"start": 145,
"end": 165,
"text": "(Ruder et al., 2019)",
"ref_id": "BIBREF20"
},
{
"start": 230,
"end": 251,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "The term domain has often been used as a catch-all term in NLP, to cover many different types of text type differences, often without being clearly defined, see e.g. (Weiss et al., 2016; Chu and Wang, 2018), even though there have been some attempts to investigate different aspects of domains, e.g. (van der Wees et al., 2015; Ruder et al., 2016).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "With the exception of a footnote in Smith et al. (2018a), where this type of data combination is mentioned for spoken French and Naija. However, no details or experimental results are provided.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/UniversalDependencies/UD_Naija-NSC/blob/master/README.md 4 https://github.com/UppsalaNLP/uuparser 5 Weighting the arcs by development UAS or LAS instead had little impact on the results, but requires development data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Thank you to current and former members of the Uppsala parsing group for many fruitful discussions: Ali Basirat, Daniel Dakota, Miryam de Lhoneux, Artur Kulmizev, Joakim Nivre, and Aaron Smith. I would also like to thank the anonymous reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing",
"authors": [
{
"first": "Wasi",
"middle": [],
"last": "Ahmad",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2440--2452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 2440-2452, Minneapolis, Minnesota, US.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Many languages, one parser",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "431--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah Smith. 2016a. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "We also experimented with separate embeddings for domain and language, which gave lower scores",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "We also experimented with separate embeddings for domain and language, which gave lower scores.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Many languages, one parser",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "431--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016b. Many languages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A survey of domain adaptation for neural machine translation",
"authors": [
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1304--1319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1304-1319, Santa Fe, New Mexico, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "From raw text to universal dependencies -look, no tags!",
"authors": [
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Basirat",
"suffix": ""
},
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "207--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miryam de Lhoneux, Yan Shao, Ali Basirat, Eliyahu Kiperwasser, Sara Stymne, Yoav Goldberg, and Joakim Nivre. 2017a. From raw text to universal dependencies -look, no tags! In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 207-217, Vancouver, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Arc-hybrid non-projective dependency parsing with a static-dynamic oracle",
"authors": [
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "99--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2017b. Arc-hybrid non-projective dependency parsing with a static-dynamic oracle. In Proceedings of the 15th International Conference on Parsing Technologies, pages 99-104, Pisa, Italy.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 4171-4186, Minneapolis, Minnesota, US.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Optimum branchings",
"authors": [],
"year": 1967,
"venue": "Journal of Research of the National Bureau of Standards",
"volume": "71",
"issue": "",
"pages": "233--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71B:233- 240.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On the frailty of universal POS tags for neural UD parsers",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2020,
"venue": "Accepted to CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Anderson Carlos G\u00f3mez-Rodr\u00edguez. 2020. On the frailty of universal POS tags for neural UD parsers. In Accepted to CoNLL 2020.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Frustratingly easy neural domain adaptation",
"authors": [
{
"first": "Young-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Stratos",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Sarikaya",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "387--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Young-Bum Kim, Karl Stratos, and Ruhi Sarikaya. 2016. Frustratingly easy neural domain adaptation. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 387-396, Osaka, Japan.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Simple and accurate dependency parsing using bidirectional LSTM feature representations",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "313--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "75 languages, 1 model: Parsing universal dependencies universally",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Kondratyuk",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2779--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 languages, 1 model: Parsing universal dependencies universally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, pages 2779-2795, Hong Kong, China.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Genres, registers, text types, domains, and styles: Clarifying the concepts and navigating a path through the BNC jungle",
"authors": [
{
"first": "David",
"middle": [
"Y",
"W"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2001,
"venue": "Language Learning & Technology",
"volume": "5",
"issue": "3",
"pages": "37--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Y. W. Lee. 2001. Genres, registers, text types, domains, and styles: Clarifying the concepts and navigating a path through the BNC jungle. Language Learning & Technology, 5(3):37-72.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "SEx BiST: A multi-source trainable parser with deep contextualized lexical representations",
"authors": [
{
"first": "Kyungtae",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Cheoneum",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Changki",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Poibeau",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "143--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "KyungTae Lim, Cheoneum Park, Changki Lee, and Thierry Poibeau. 2018. SEx BiST: A multi-source train- able parser with deep contextualized lexical representations. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 143-152, Brussels, Belgium.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Parsing tweets into universal dependencies",
"authors": [
{
"first": "Yijia",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "965--975",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, and Noah A. Smith. 2018. Parsing tweets into universal dependencies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 965-975, New Orleans, Louisiana.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "How to parse low-resource languages: Cross-lingual parsing, target language annotation, or both?",
"authors": [
{
"first": "Ailsa",
"middle": [],
"last": "Meechan-Maddon",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)",
"volume": "",
"issue": "",
"pages": "112--120",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ailsa Meechan-Maddon and Joakim Nivre. 2019. How to parse low-resource languages: Cross-lingual parsing, target language annotation, or both? In Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 112-120, Paris, France.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Universal dependencies 2.4. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Abrams",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Faculty of Mathematics and Physics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Mitchell Abrams, \u017deljko Agi\u0107, et al. 2019. Universal dependencies 2.4. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Universal dependencies v2: An evergrowing multilingual treebank collection",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Marie-Catherine",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "Filip",
"middle": [],
"last": "Ginter",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Tyers",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference (LREC 2020)",
"volume": "",
"issue": "",
"pages": "4034--4043",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajic, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of The 12th Language Resources and Evaluation Conference (LREC 2020), pages 4034-4043, Marseille, France.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Towards a continuous modeling of natural language domains",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Parsa",
"middle": [],
"last": "Ghaffari",
"suffix": ""
},
{
"first": "John",
"middle": [
"G"
],
"last": "Breslin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achievements to Robust Methods",
"volume": "",
"issue": "",
"pages": "53--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. 2016. Towards a continuous modeling of natural language domains. In Proceedings of the Workshop on Uphill Battles in Language Processing: Scaling Early Achieve- ments to Robust Methods, pages 53-57, Austin, TX.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Sogaard",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Artificial Intelligence Research",
"volume": "65",
"issue": "",
"pages": "69--631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders Sogaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research, 65:69-631.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Parser combination by reparsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "129--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 129-132, New York City, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Adversarial training for cross-domain universal dependency parsing",
"authors": [
{
"first": "Motoki",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Manabe",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Noji",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Motoki Sato, Hitoshi Manabe, Hiroshi Noji, and Yuji Matsumoto. 2017. Adversarial training for cross-domain universal dependency parsing. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 71-79, Vancouver, Canada.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "82 treebanks, 34 models: Universal dependency parsing with multi-treebank models",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Bohnet",
"suffix": ""
},
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Shao",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "113--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018a. 82 tree- banks, 34 models: Universal dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113-123, Brussels, Belgium.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "An investigation of the interactions between pre-trained word embeddings, character models and POS tags in dependency parsing",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2711--2720",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Smith, Miryam de Lhoneux, Sara Stymne, and Joakim Nivre. 2018b. An investigation of the interactions between pre-trained word embeddings, character models and POS tags in dependency parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2711-2720, Brussels, Belgium.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Parser training with heterogeneous treebanks",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Miryam",
"middle": [],
"last": "de Lhoneux",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "619--625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Stymne, Miryam de Lhoneux, Aaron Smith, and Joakim Nivre. 2018. Parser training with heterogeneous treebanks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 619-625, Melbourne, Australia.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "What's in a domain? Analyzing genre and topic differences in statistical machine translation",
"authors": [
{
"first": "Marlies",
"middle": [],
"last": "van der Wees",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Wouter",
"middle": [],
"last": "Weerkamp",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, Short Papers",
"volume": "",
"issue": "",
"pages": "560--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marlies van der Wees, Arianna Bisazza, Wouter Weerkamp, and Christof Monz. 2015. What's in a domain? Analyzing genre and topic differences in statistical machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, Short Papers, pages 560-566, Beijing, China.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages",
"authors": [
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Sogaard",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1105--1116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clara Vania, Yova Kementchedjhieva, Anders Sogaard, and Adam Lopez. 2019. A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1105-1116, Hong Kong, China.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Treebank embedding vectors for out-of-domain dependency parsing",
"authors": [
{
"first": "Joachim",
"middle": [],
"last": "Wagner",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Barry",
"suffix": ""
},
{
"first": "Jennifer",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8812--8818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachim Wagner, James Barry, and Jennifer Foster. 2020. Treebank embedding vectors for out-of-domain depen- dency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8812-8818, Online.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey of transfer learning",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Taghi",
"middle": [
"M"
],
"last": "Khoshgoftaar",
"suffix": ""
},
{
"first": "DingDing",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Big Data",
"volume": "3",
"issue": "1",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Weiss, Taghi M. Khoshgoftaar, and DingDing Wang. 2016. A survey of transfer learning. Journal of Big Data, 3(1):1-40.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning domain invariant word representations for parsing domain adaptation",
"authors": [
{
"first": "Xiuming",
"middle": [],
"last": "Qiao",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "Natural Language Processing and Chinese Computing (NLPCC 2019)",
"volume": "",
"issue": "",
"pages": "801--813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiao Xiuming, Zhang Yue, and Zhao Tiejun. 2019. Learning domain invariant word representations for parsing domain adaptation. In Natural Language Processing and Chinese Computing (NLPCC 2019), pages 801-813, Dunhuang, China.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>Language</td><td>IND Treebank</td><td>Train</td><td>Dev</td><td>Test</td><td>Additional OOD data</td></tr><tr><td>French</td><td>Spoken</td><td>15.0K</td><td>10.2K</td><td>10.2K</td><td>GSD (364K), Partut (24.9K), Sequoia (51.9K)</td></tr><tr><td>Norwegian</td><td>NynorskLIA</td><td>35.2K</td><td>10.2K</td><td>10.0K</td><td>Nynorsk (245K), Bokmaal (244K)</td></tr><tr><td>Slovenian</td><td>SSJ</td><td>18.6K</td><td>906</td><td>10.0K</td><td>SST (113K), Croatian_SET (153K), Serbian_SET (74.3K)</td></tr><tr><td>Komi Zyrian</td><td>IKDP</td><td>-</td><td>-</td><td>1.3K</td><td>Finnish_TDT (163K), North_Sami_Giella (16.8K), Russian_Taiga (18.1K)</td></tr><tr><td>Naija</td><td>NSC</td><td>-</td><td>-</td><td>12.9K</td><td>English: EWT (205K), GUM (66.2K), LinES (50.1K) ParTUT (43.5K)</td></tr><tr><td>English</td><td>Tweebank</td><td>24.8K</td><td>11.8K</td><td>19.1K</td><td>EWT (205K), GUM (66.2K), LinES (50.1K) ParTUT (43.5K)</td></tr><tr><td>Hindi-English CS</td><td>HIENCS</td><td>19.3K</td><td>3.3K</td><td>3.1K</td><td>English: Hindi_HDTB (281K)</td></tr><tr><td>Italian</td><td>PoSTWITA</td><td>104K</td><td>12.8K</td><td>13.2K</td><td>ISDT (294K), ParTUT (52.4K), VIT (241K)</td></tr></table>",
"html": null,
"text": "EWT (205K), GUM (66.2K), LinES (50.1K) ParTUT (43.5K),",
"num": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table/>",
"html": null,
"text": "Treebanks and number of tokens in train, dev, and test data sets for the target treebanks. Top of table is spoken data, and bottom is for Twitter data. Additional data lists treebanks used for each target treebank, which is in-language unless otherwise noted, and the number of tokens in the training set for each treebank. Treebanks in italics are used in the contrastive data sets.",
"num": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"html": null,
"text": "Test set scores for spoken data with different combinations of training data, using the best proxy treebank. For each line, only data sources marked 'X' are used, sources marked '-' are not used. Note that 'Same language' also includes related Slavic languages for Slovenian.",
"num": null,
"type_str": "table"
}
}
}
}