{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:51:44.987458Z"
},
"title": "Not All Linearizations Are Equally Data-Hungry in Sequence Labeling Parsing",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Mu\u00f1oz-Ortiz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CITIC",
"location": {
"country": "Spain"
}
},
"email": "alberto.munoz.ortiz@udc.es"
},
{
"first": "Michalina",
"middle": [],
"last": "Strzyz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CITIC",
"location": {
"country": "Spain"
}
},
"email": "michalina.strzyz@priberam.pt"
},
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CITIC",
"location": {
"country": "Spain"
}
},
"email": "david.vilares@udc.es"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Different linearizations have been proposed to cast dependency parsing as sequence labeling and solve the task as: (i) a head selection problem, (ii) finding a representation of the token arcs as bracket strings, or (iii) associating partial transition sequences of a transition-based parser to words. Yet, there is little understanding about how these linearizations behave in low-resource setups. Here, we first study their data efficiency, simulating data-restricted setups from a diverse set of rich-resource treebanks. Second, we test whether such differences manifest in truly low-resource setups. The results show that head selection encodings are more data-efficient and perform better in an ideal (gold) framework, but that such advantage greatly vanishes in favour of bracketing formats when the running setup resembles a real-world low-resource configuration.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Different linearizations have been proposed to cast dependency parsing as sequence labeling and solve the task as: (i) a head selection problem, (ii) finding a representation of the token arcs as bracket strings, or (iii) associating partial transition sequences of a transition-based parser to words. Yet, there is little understanding about how these linearizations behave in low-resource setups. Here, we first study their data efficiency, simulating data-restricted setups from a diverse set of rich-resource treebanks. Second, we test whether such differences manifest in truly low-resource setups. The results show that head selection encodings are more data-efficient and perform better in an ideal (gold) framework, but that such advantage greatly vanishes in favour of bracketing formats when the running setup resembles a real-world low-resource configuration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Dependency parsing (Mel'cuk et al., 1988; K\u00fcbler et al., 2009) has achieved clear improvements in recent years, to the point that graph-based (Martins et al., 2013; Dozat et al., 2017) and transition-based (Ma et al., 2018; Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez, 2019) parsers are already very accurate on certain setups, such as English news. In this line, Berzak et al. (2016) have pointed out that the performance on these setups is already on par with that expected from experienced human annotators.",
"cite_spans": [
{
"start": 19,
"end": 41,
"text": "(Mel'cuk et al., 1988;",
"ref_id": "BIBREF33"
},
{
"start": 42,
"end": 62,
"text": "K\u00fcbler et al., 2009)",
"ref_id": "BIBREF21"
},
{
"start": 142,
"end": 164,
"text": "(Martins et al., 2013;",
"ref_id": "BIBREF30"
},
{
"start": 165,
"end": 184,
"text": "Dozat et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 206,
"end": 223,
"text": "(Ma et al., 2018;",
"ref_id": "BIBREF29"
},
{
"start": 224,
"end": 269,
"text": "Fern\u00e1ndez-Gonz\u00e1lez and G\u00f3mez-Rodr\u00edguez, 2019)",
"ref_id": "BIBREF14"
},
{
"start": 359,
"end": 379,
"text": "Berzak et al. (2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Thus, efforts have started to focus on related problems such as parsing different domains or multi-lingual scenarios (Sato et al., 2017; Song et al., 2019; Ammar et al., 2016) , creating faster models (Volokh, 2013; Chen and Manning, 2014) , designing low-resource and cross-lingual parsing techniques (Tiedemann et al., 2014; , or infusing syntactic knowledge into models (Strubell et al., 2018; Rotman and Reichart, 2019) . This work lies at the intersection between fast parsing and low-resource languages. Recent work has proposed encodings to cast parsing as sequence labeling (Spoustov\u00e1 and Spousta, 2010; Strzyz et al., 2019; Li et al., 2018; Kiperwasser and Ballesteros, 2018) . This approach computes a linearized tree for a sentence of length n in n tagging actions, providing a good speed/accuracy trade-off. Also, it offers a na\u00efve way to infuse syntactic information as an embedding or feature (Ma et al., 2019; . Such encodings have been evaluated on English and multi-lingual setups, but there is no study of their behaviour on low-resource setups, nor of what strengths and weaknesses they might exhibit.",
"cite_spans": [
{
"start": 121,
"end": 140,
"text": "(Sato et al., 2017;",
"ref_id": "BIBREF43"
},
{
"start": 141,
"end": 159,
"text": "Song et al., 2019;",
"ref_id": "BIBREF47"
},
{
"start": 160,
"end": 179,
"text": "Ammar et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 205,
"end": 219,
"text": "(Volokh, 2013;",
"ref_id": "BIBREF57"
},
{
"start": 220,
"end": 243,
"text": "Chen and Manning, 2014)",
"ref_id": "BIBREF6"
},
{
"start": 306,
"end": 330,
"text": "(Tiedemann et al., 2014;",
"ref_id": "BIBREF54"
},
{
"start": 377,
"end": 400,
"text": "(Strubell et al., 2018;",
"ref_id": "BIBREF50"
},
{
"start": 401,
"end": 427,
"text": "Rotman and Reichart, 2019)",
"ref_id": "BIBREF40"
},
{
"start": 590,
"end": 619,
"text": "(Spoustov\u00e1 and Spousta, 2010;",
"ref_id": "BIBREF49"
},
{
"start": 620,
"end": 640,
"text": "Strzyz et al., 2019;",
"ref_id": "BIBREF51"
},
{
"start": 641,
"end": 657,
"text": "Li et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 658,
"end": 692,
"text": "Kiperwasser and Ballesteros, 2018)",
"ref_id": "BIBREF19"
},
{
"start": 914,
"end": 931,
"text": "(Ma et al., 2019;",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We study the behaviour of linearizations for dependency parsing as sequence labeling in low-resource setups. First, we explore their data efficiency, i.e. whether they can exploit their full potential with less supervised data. To do so, we simulate different data-restricted setups from a diverse set of rich-resource treebanks. Second, we shed light on their performance on truly low-resource treebanks. The goal is to determine whether the tendencies from the experiments in the first phase hold when the language is truly low-resource, and whether secondary effects of real-world low-resource setups, such as using predicted part-of-speech (PoS) tags or no PoS tags, affect certain types of linearizations more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contribution",
"sec_num": null
},
{
"text": "Low-resource parsing has been explored from perspectives such as unsupervised parsing, data augmentation, cross-lingual learning, or data-efficiency of models. For instance, on unsupervised parsing, Klein and Manning (2004) and Spitkovsky et al. (2010) have worked on generative models to determine whether to continue or stop attaching dependents to a token, while others (Le and Zuidema, 2015; Mohananey et al., 2020) have studied how to use self-training for unsupervised parsing.",
"cite_spans": [
{
"start": 198,
"end": 222,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF20"
},
{
"start": 227,
"end": 251,
"text": "Spitkovsky et al. (2010)",
"ref_id": "BIBREF48"
},
{
"start": 373,
"end": 395,
"text": "(Le and Zuidema, 2015;",
"ref_id": "BIBREF26"
},
{
"start": 396,
"end": 419,
"text": "Mohananey et al., 2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "On data augmentation, McClosky et al. (2006) used self-training to annotate extra data, while others have focused on linguistically motivated approaches to augment treebanks. This is the case of Vania et al. (2019) or , who have proposed methods to replace subtrees within a given sentence.",
"cite_spans": [
{
"start": 22,
"end": 44,
"text": "McClosky et al. (2006)",
"ref_id": "BIBREF31"
},
{
"start": 195,
"end": 214,
"text": "Vania et al. (2019)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "On cross-lingual learning, authors such as S\u00f8gaard (2011) or McDonald et al. (2011) trained delexicalized parsers on a source rich-resource treebank, which are then used to parse a low-resource target language. Falenska and \u00c7 etinoglu (2017) explored lexicalized versus delexicalized parsers and compared them on low-resource treebanks, depending on factors such as the treebank size and the PoS tagging performance. Wang and Eisner (2018) created synthetic treebanks that resemble the target language by permuting constituents of distant treebanks. Naseem et al. (2012) and T\u00e4ckstr\u00f6m et al. (2013) tackled this same issue, but from the model side, training on rich-resource languages in such a way that the model learns to detect the aspects of the source languages that are relevant for the target language. Recently, Mulcaire et al. (2019) used an LSTM to build a polyglot language model, which is then used to train a parser on top of it that shows cross-lingual abilities in zero-shot setups.",
"cite_spans": [
{
"start": 43,
"end": 57,
"text": "S\u00f8gaard (2011)",
"ref_id": "BIBREF46"
},
{
"start": 61,
"end": 83,
"text": "McDonald et al. (2011)",
"ref_id": "BIBREF32"
},
{
"start": 211,
"end": 241,
"text": "Falenska and \u00c7 etinoglu (2017)",
"ref_id": "BIBREF13"
},
{
"start": 414,
"end": 436,
"text": "Wang and Eisner (2018)",
"ref_id": "BIBREF58"
},
{
"start": 547,
"end": 567,
"text": "Naseem et al. (2012)",
"ref_id": "BIBREF36"
},
{
"start": 572,
"end": 595,
"text": "T\u00e4ckstr\u00f6m et al. (2013)",
"ref_id": "BIBREF53"
},
{
"start": 810,
"end": 832,
"text": "Mulcaire et al. (2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "On data-efficiency, research has explored the impact of training with different amounts of data, motivated by the lack of annotated data or by its limited quality. For instance, Lacroix et al. (2016a) showed how a transition-based parser with a dynamic oracle can be used without any modifications to parse partially annotated data. They found that this setup is useful to train low-resource parsers on sentence-aligned texts, from a rich-resource treebank to an automatically translated low-resource language, where only precisely aligned tokens are used for the projection in the target dataset. Lacroix et al. (2016b) studied the effect that pre-processing and post-processing have on annotation projection, and concluded that quality should prevail over quantity. Related to training with restricted data, showed that when distilling a graph-based parser for faster inference time, models with smaller treebanks suffered less. also distilled models for Enhanced Universal Dependencies (EUD) parsing with different amounts of data, observing that less training data usually translated into slightly lower performance, while offering better energy consumption. Garcia et al. (2018) showed, in the context of Romance languages, that picking samples from related languages and adapting them to the target language is useful to train a model that performs on par with one trained on fully (but still limited) manually annotated data. Restricted to constituent parsing, Shi et al. (2020) analyzed the role of the dev data in unsupervised parsing. They pointed out that many unsupervised parsers use the score on the dev set as a signal for hyperparameter updates, and showed that by using a handful of samples from that development set to train a counterpart supervised model, the results outperformed those of the unsupervised setup. Finally, there is work describing the impact that the size of the parsing training data has on downstream tasks that use syntactic information as part of the input (Sagae et al., 2008; .",
"cite_spans": [
{
"start": 186,
"end": 208,
"text": "Lacroix et al. (2016a)",
"ref_id": "BIBREF24"
},
{
"start": 606,
"end": 628,
"text": "Lacroix et al. (2016b)",
"ref_id": "BIBREF25"
},
{
"start": 1170,
"end": 1190,
"text": "Garcia et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 1475,
"end": 1492,
"text": "Shi et al. (2020)",
"ref_id": "BIBREF45"
},
{
"start": 2002,
"end": 2022,
"text": "(Sagae et al., 2008;",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "In what follows, we review the existing families of encodings for parsing as sequence labeling ( \u00a73.1) and the models that we will be using ( \u00a73.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3"
},
{
"text": "Sequence labeling assigns one output label to every input token. Many problems are cast as sequence labeling due to its fast and simple nature, like PoS tagging, chunking, supertagging, named-entity recognition, semantic role labeling, and parsing. For dependency parsing, to create a linearized tree it suffices to assign each word w i a discrete label of the form (x i , l i ), where l i is the dependency type and x i encodes a subset of the arcs of the tree related to such word. Although only labels seen in the training data can be predicted, show that the coverage is almost complete. We distinguish three families of encodings, which we now review (see also Figure 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 666,
"end": 674,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Encodings for sequence labeling dependency parsing",
"sec_num": "3.1"
},
{
"text": "Head-selection encodings (Spoustov\u00e1 and Spousta, 2010; Li et al., 2018; Strzyz et al., 2019) . Each word label component x i encodes its head as an index or an (abstracted) offset. This can be done by labeling the target word with the (absolute) index of its head token, or by using a relative offset that accounts for the difference between the dependent and head indexes. In this work, we chose a relative PoS-based encoding (rp h ) that has been shown to perform consistently better among the linearizations of this family. Here, x i is a tuple (p i , o i ), such that if o i > 0 the head of w i is the o i th word to the right of w i whose PoS tag is p i ; if o i < 0, the head of w i is the \u2212o i th word to the left of w i whose PoS tag is p i . Among its advantages, we find the capacity to encode any non-projective tree and the fact that words are directly and only linked to their head; on the other hand, it depends on external factors (e.g. PoS tags). 1",
"cite_spans": [
{
"start": 25,
"end": 54,
"text": "(Spoustov\u00e1 and Spousta, 2010;",
"ref_id": "BIBREF49"
},
{
"start": 55,
"end": 71,
"text": "Li et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 72,
"end": 92,
"text": "Strzyz et al., 2019)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encodings for sequence labeling dependency parsing",
"sec_num": "3.1"
},
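To make the rp h scheme concrete, the following is a minimal, hypothetical sketch (not the authors' code) of how the relative PoS-based labels could be computed from gold heads and PoS tags; the "ROOT" label convention is an assumption for illustration:

```python
def encode_rel_pos(heads, pos):
    """Sketch of the relative PoS-based head-selection encoding (rp_h).

    heads: 1-based head index per word (0 = root); pos: PoS tag per word.
    Returns (p_i, o_i) labels: the head of w_i is the o_i-th word with
    tag p_i to the right of w_i (o_i > 0) or to the left (o_i < 0).
    """
    labels = []
    for i, h in enumerate(heads):          # i is 0-based, h is 1-based
        if h == 0:
            labels.append(("ROOT", 0))     # hypothetical root-label convention
            continue
        hp = pos[h - 1]                    # PoS tag of the head word
        if h - 1 > i:                      # head lies to the right
            o = sum(1 for j in range(i + 1, h) if pos[j] == hp)
        else:                              # head lies to the left
            o = -sum(1 for j in range(h - 1, i) if pos[j] == hp)
        labels.append((hp, o))
    return labels
```

For the toy sentence "The dog barks" with heads [2, 3, 0], each word is labeled with its head's PoS tag and an offset of 1, which illustrates why mispredicted PoS tags corrupt decoding: the offset counts only words carrying the predicted tag.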
{
"text": "Bracketing-based encodings (Yli-Jyr\u00e4 and G\u00f3mez-Rodr\u00edguez, 2017) . Each x i encodes a subset of the incoming and outgoing arcs of a given word and its neighbors, represented as bracket strings. More particularly, in Strzyz et al. (2019) each x i is a string that follows the expression (<)?((\\) * |(/) * )(>)?, where < means that w i\u22121 has an incoming arc from the right, k times \\ means that w i has k outgoing arcs towards the left, k times / means that w i\u22121 has k outgoing arcs to the right, and > means that w i has an arc coming from the left. This encoding produces a compressed label set while not relying on external features, such as PoS tags. However, when it comes to non-projectivity, it can only handle crossing arcs in opposite directions. To counteract this, it is possible to define a linearization using a second independent pair of brackets (denoted with '*') to encode a 2-planar tree . 2 In this work we consider experiments with both the restricted non-projective (rx b ) and the 2-planar bracketing encodings (2p b ).",
"cite_spans": [
{
"start": 27,
"end": 63,
"text": "(Yli-Jyr\u00e4 and G\u00f3mez-Rodr\u00edguez, 2017)",
"ref_id": "BIBREF62"
},
{
"start": 209,
"end": 229,
"text": "Strzyz et al. (2019)",
"ref_id": "BIBREF51"
},
{
"start": 901,
"end": 902,
"text": "2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encodings for sequence labeling dependency parsing",
"sec_num": "3.1"
},
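As an illustration only, a naive sketch that follows the textual description of the bracket symbols above (not the reference implementation of Strzyz et al. (2019), which imposes additional constraints) could derive each word's bracket label from gold heads like this:

```python
def encode_brackets(heads):
    """Naive sketch of a bracketing label per word (heads are 1-based, 0 = root):
    '<'  if w_{i-1} has an incoming arc from the right,
    k '\\' if w_i has k outgoing arcs towards the left,
    k '/' if w_{i-1} has k outgoing arcs to the right,
    '>'  if w_i has an incoming arc from the left."""
    n = len(heads)
    labels = []
    for i in range(1, n + 1):                       # 1-based word index
        lab = ""
        if i > 1 and heads[i - 2] > i - 1:          # w_{i-1}'s head is to its right
            lab += "<"
        # outgoing arcs of w_i towards the left
        lab += "\\" * sum(1 for j in range(1, n + 1) if heads[j - 1] == i and j < i)
        if i > 1:
            # outgoing arcs of w_{i-1} towards the right
            lab += "/" * sum(1 for j in range(1, n + 1) if heads[j - 1] == i - 1 and j > i - 1)
        if 0 < heads[i - 1] < i:                    # incoming arc from the left
            lab += ">"
        labels.append(lab)
    return labels
```

On "The dog barks" (heads [2, 3, 0]) this yields the labels "", "<\" and "<\", showing how each arc is announced once from the right endpoint's neighborhood without any reference to PoS tags.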
{
"text": "Transition-based encodings . Each x i encodes a sub-sequence of the transitions to be generated by a left-to-right transition-based parser. Given a sequence of transitions t = t 1 , ..., t m with exactly n read transitions 3 , it splits t into n chunks and assigns the ith chunk to the ith word. Its main advantage is its more abstract nature, which allows automatically deriving encodings from any left-to-right transition-based parser (including dependency, constituency and semantic parsers). According to G\u00f3mez-Rodr\u00edguez et al. (2020), they produce worse results than the bracketing encodings, but we include them in this work for completeness. In particular, we consider mappings from arc-hybrid (Kuhlmann et al., 2011 ) (ah tb ) and Covington (2001) (c tb ), which are a projective and a non-projective transition-based algorithm, respectively.",
"cite_spans": [
{
"start": 691,
"end": 713,
"text": "(Kuhlmann et al., 2011",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encodings for sequence labeling dependency parsing",
"sec_num": "3.1"
},
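The chunking step can be sketched as follows; this is an illustrative splitting convention (chunks end at each read transition, trailing actions attach to the last word), and the action names "SH", "LA", "RA" are hypothetical placeholders rather than a specific parser's transition set:

```python
def chunk_transitions(transitions, read_actions=("SH",)):
    """Split a transition sequence with exactly n read transitions into
    n chunks, assigning the i-th chunk to the i-th word (sketch)."""
    chunks, current = [], []
    for t in transitions:
        current.append(t)
        if t in read_actions:       # a read transition closes the current chunk
            chunks.append(current)
            current = []
    if current:                     # trailing reduce actions go to the last word
        chunks[-1].extend(current)
    return chunks
```

For example, a 6-transition sequence with 3 reads is mapped to 3 word labels, one per token, which is what lets a sequence labeling model predict a full transition sequence word by word.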
{
"text": "To post-process corrupted predicted labels, we follow the heuristics described in each encoding paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encodings for sequence labeling dependency parsing",
"sec_num": "3.1"
},
{
"text": "Let w be a sequence of words [w 1 , w 2 , ..., w |w| ]; then w is a sequence of word vectors that will be used as the input to our models. Each w i will be a concatenation of: (i) a word embedding, (ii) a second word embedding computed through a char-LSTM, and (iii) optionally a PoS tag embedding (we will discuss this last point further in \u00a74).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence labeling framework",
"sec_num": "3.2"
},
{
"text": "We use bidirectional long short-term memory networks (biLSTMs; Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997) to train our sequence labeling parsers. BiLSTMs are a strong baseline used in recent work across a number of tasks (Yang and Zhang, 2018; Reimers and Gurevych, 2017) . More particularly, we use two layers of biLSTMs, and each hidden vector h i from the last biLSTM layer (associated with each input vector w i ) is fed to separate feed-forward networks that are in charge of predicting each of the label components of the linearization (i.e. x i and l i ) using softmaxes, relying on hard-sharing multi-task learning (MTL; Caruana, 1997; Ruder, 2017) . Following \u00a73.1, for all the encodings except the 2-planar encoding, we will use a 2-task MTL setup: one task will predict x i according to each encoding's specifics, and the other one will predict the dependency type, l i . For the 2-planar bracketing encoding, which uses a second pair of brackets to predict the arcs from the second plane, we use instead a 3-task MTL setup, where the difference is that the prediction of x i is split into two tasks: one that predicts the first-plane brackets and another that predicts the brackets from the second plane. 4 It is worth noting that for this particular work we skipped computationally expensive models, such as BERT (Devlin et al., 2019) . There are three main reasons for this. First, the experiments in this paper imply training a total of 760 parsing models (see more details in \u00a74), making training on BERT (or variants) less practical. Second, there is no multilingual or language-specific BERT model available for all languages, and this could be the source of uncontrolled variables that could have an impact on the performance, and thereby on the conclusions. 5 Third, even under the assumption that all language-specific BERT models were available, these are pre-trained on different data that add extra noise, which could be undesirable for our purpose.",
"cite_spans": [
{
"start": 63,
"end": 96,
"text": "Hochreiter and Schmidhuber, 1997;",
"ref_id": "BIBREF18"
},
{
"start": 97,
"end": 124,
"text": "Schuster and Paliwal, 1997)",
"ref_id": "BIBREF44"
},
{
"start": 240,
"end": 262,
"text": "(Yang and Zhang, 2018;",
"ref_id": "BIBREF61"
},
{
"start": 263,
"end": 290,
"text": "Reimers and Gurevych, 2017)",
"ref_id": "BIBREF39"
},
{
"start": 646,
"end": 660,
"text": "Caruana, 1997;",
"ref_id": "BIBREF5"
},
{
"start": 661,
"end": 673,
"text": "Ruder, 2017)",
"ref_id": "BIBREF41"
},
{
"start": 1238,
"end": 1239,
"text": "4",
"ref_id": null
},
{
"start": 1345,
"end": 1366,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence labeling framework",
"sec_num": "3.2"
},
{
"text": "We design two studies, detailed in \u00a74.1 and 4.2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and experiments",
"sec_num": "4"
},
{
"text": "1. We explore if some encodings are more data-efficient than others. To do so, we will simulate data-restricted setups, selecting rich-resource languages and using partial data. The goal is to test if some encodings are learnable with less data, or if others could instead obtain better performance, but only under the assumption of very large amounts of data being available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and experiments",
"sec_num": "4"
},
{
"text": "2. We focus on truly low-resource setups. This can be seen as a confirmation experiment to see if the findings under data-restricted setups hold for under-studied languages, and to confirm which sequence labeling linearizations are more recommendable under these conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and experiments",
"sec_num": "4"
},
{
"text": "Experimental setups For experiments 1 and 2, we consider three setups that might have a different impact across the encodings:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and experiments",
"sec_num": "4"
},
{
"text": "1. Gold PoS tags setup: We train and run the models under an ideal framework that uses gold PoS tags as part of the input. The reason is that encodings such as rp h rely on PoS tags to rebuild the linearized tree. This way, using gold PoS tags helps estimate the optimal data-efficiency and learnability of these parsers under perfect (but unrealistic) conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and experiments",
"sec_num": "4"
},
{
"text": "2. Predicted PoS tags setup: Setup 1 cannot truly reflect the performance that the encodings would obtain under real-world data-restricted conditions. Predicted PoS tags will be less helpful because their quality will degrade. This issue can affect the rp h encoding more, since it requires them to rebuild the tree from the labels, and mispredicted PoS tags could propagate errors during decoding. Here, we train taggers for each treebank, using the same architecture used for the parsers. To be consistent with the data-restricted setups, taggers will be trained on the same amount of data used for the parsers. Appendix A discusses the PoS taggers' performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology and experiments",
"sec_num": "4"
},
{
"text": "We train the models without using any PoS tags as part of the input. It is worth noting that this setup is somewhat forced for the rp h encoding, since we will still need to externally run the taggers to obtain the PoS tags and rebuild the tree. Yet, we include the PoS-based encoding for completeness, and to have a better understanding of how different families of encodings suffer from not (or minimally) using PoS tags. For instance, this is a simple way to obtain simpler and faster parsing models, as part of the pipeline does not need to be executed, and the input vectors to the models will be smaller, also translating into faster execution. Also, in low-resource setups, PoS tags might not be available or the tagging models might not be accurate enough to help deep learning models (Zhou et al., 2020; Anderson and G\u00f3mez-Rodr\u00edguez, 2021) .",
"cite_spans": [
{
"start": 790,
"end": 809,
"text": "(Zhou et al., 2020;",
"ref_id": "BIBREF65"
},
{
"start": 810,
"end": 845,
"text": "Anderson and G\u00f3mez-Rodr\u00edguez, 2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "No PoS tags setup:",
"sec_num": "3."
},
{
"text": "Data We chose 11 treebanks from UD2.7 (Zeman et al., 2020) with more than 10 000 training sentences: German HDT , Czech PDT , Russian SynTagRus , Classical Chinese Kyoto , Persian PDT , Estonian EDT , Romanian Nonstandard , Korean Kaist , Ancient Greek PROIEL , Hindi HDTB and Latvian LVTB . These treebanks cover different language families, scripts and levels of non-projectivity (see Appendix B). To simulate data-restricted setups, we created training subsets of 100, 500, 1 000, 5 000 and 10 000 samples, as well as the total training set. The training sets were shuffled before the division.",
"cite_spans": [
{
"start": 38,
"end": 58,
"text": "(Zeman et al., 2020)",
"ref_id": "BIBREF63"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Encodings data-efficiency",
"sec_num": "4.1"
},
{
"text": "Setup To assess the data-efficiency, we proceed as follows. As the rp h encoding has shown the strongest performance in previous work for multi-lingual setups (Strzyz et al., 2019; , we take these models as the reference and as an a priori upper bound. Then, we compute the difference of the mean UAS (across the 11 treebanks) between rp h and each of the other linearizations, for all the models trained up to 10 000 sentences. The goal is to determine which encodings suffer more when training with limited data and to monitor to what extent the tendency holds as more data is introduced. We compute the statistical significance of the difference between rp h and the other encodings, using the p-value (p < 0.05) of a paired t-test on the score distributions, following recommended practices for dependency parsing (Dror et al., 2018) . Finally, we show specific results for the models trained on the whole treebanks. In this work, we report UAS rather than LAS, since the differences between the encodings lie in how they encode the dependency arcs and not their types.",
"cite_spans": [
{
"start": 160,
"end": 181,
"text": "(Strzyz et al., 2019;",
"ref_id": "BIBREF51"
},
{
"start": 821,
"end": 840,
"text": "(Dror et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Encodings data-efficiency",
"sec_num": "4.1"
},
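The significance test described above can be sketched as a paired t-statistic over per-treebank UAS scores. This is a minimal illustration with hypothetical scores, not the paper's evaluation code; in practice the statistic would be compared against the t-distribution with n\u22121 degrees of freedom (e.g. via scipy.stats.ttest_rel) to obtain the p-value:

```python
import math

def paired_t(scores_a, scores_b):
    """Paired t-statistic over matched score lists (one UAS per treebank)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    # sample variance of the per-treebank differences
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical UAS scores for rp_h vs. another encoding on 3 treebanks
t = paired_t([90.0, 85.0, 80.0], [89.0, 84.5, 78.0])
```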
{
"text": "Results Tables 1, 2 and 3 show the difference of the mean UAS for each encoding with respect to the rp h one, for the gold PoS tags, predicted PoS tags and no PoS tags setups, respectively. For the gold PoS tags setup, the rp h encoding performs better than the bracketing (rx b and 2p b ) and the transition-based (ah tb and c tb ) encodings, for all the training splits. Yet, the gap narrows as the number of training sentences increases. For the predicted PoS tags setup, the relative PoS-based encoding performs better for the smallest set of 100 sentences, but slightly worse than rx b and 2p b for the sets of 500 and 1 000 sentences. With more data, the tendency resembles the one from the gold PoS tags setup. Third, for the setup without PoS tags, the tendency reverses. The bracketing encodings perform better, particularly for the smallest training sets, but the gap narrows as the number of training sentences increases. Discussion The results from the experiments shed light on the differences across encodings and running configurations. First, under an ideal, gold environment, the rp h encoding makes better use of limited data than the bracketing and transition-based encodings. Second, the predicted PoS tags setup shows that the performance of the PoS taggers can have a significant impact on the performance of the rp h encoding. More interestingly, weaknesses of the different encodings seem to manifest to different extents depending on the amount of training data. For instance, when training data is scarce (100 sentences), bracketing encodings still cannot outperform the rp h encoding, despite the lower performance of the PoS taggers. However, when working with setups ranging from 500 to 1 000 sentences, there is a slight advantage of the bracketing encodings with respect to rp h , suggesting that with this amount of data, bracketing encodings could be the preferable choice, since they seem able to exploit their potential better than the rp h encoding can exploit not fully accurate PoS tags. With more training samples, the relative PoS-based encoding is again the best performing model across the board. In \u00a74.2 we discuss in more depth how, for truly low-resource languages, the advantage in favour of bracketing representations becomes even larger for the predicted and no PoS tags setups. Table 4 : UAS for the rich-resource treebanks, using the whole training set and the gold PoS tags setup. The red (--) and green cells (++) show that a given encoding performed worse or better than the rp h model, and that the difference is statistically significant. Lime and yellow cells mean that there is no significant difference between a given encoding and rp h , appending a + or a \u2212 when they performed better or worse than rp h . Table 5 : UAS for the rich-resource treebanks, using the whole training set and the predicted PoS tags setup.",
"cite_spans": [
{
"start": 2467,
"end": 2471,
"text": "(--)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 2354,
"end": 2361,
"text": "Table 4",
"ref_id": null
},
{
"start": 2803,
"end": 2810,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1: Encodings data-efficiency",
"sec_num": "4.1"
},
{
"text": "Tables 4, 5 and 6 show the UAS on the full training sets of the rich-resource treebanks for the gold PoS tags, predicted PoS tags, and no PoS tags setups. The goal is to show whether, with large amounts of data, some of the encodings could perform on par with rp h , since Tables 1 and 2 indicated that the differences in performance across encodings decreased as the number of training samples increased. Although performance across encodings becomes closer, their ranking remains the same. (see Appendix B). Their sizes range between 153 and 1 350 training sentences, most being around or between 500 and 1 000 (see Appendix C).",
"cite_spans": [
{
"start": 483,
"end": 499,
"text": "(see Appendix B)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: Encodings data-efficiency",
"sec_num": "4.1"
},
{
"text": "We rerun a subset of the experiments from \u00a74.1, to check if the results follow the same trends, and conclusions are therefore similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": null
},
{
"text": "Results Tables 7, 8 be an outlier. For the predicted PoS tags setup, the bracketing-based encodings perform consistently better for most of the treebanks. For the no PoS tags setup, the bracketing-based encodings obtain, on average, more than 3 points over the relative PoS-head selection encoding, which even performs worse than the transition-based encodings.",
"cite_spans": [],
"ref_spans": [
{
"start": 8,
"end": 19,
"text": "Tables 7, 8",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": null
},
{
"text": "Discussion These experiments help elaborate on the findings of \u00a74.1. With respect to the ideal gold PoS tags setup, little changes: the relative PoS-based encoding still performs best overall. Still, this should not be taken as ground truth about how the encodings will perform in real-world setups. For instance, in the predicted PoS tags setup, the bracketing-based encodings perform consistently better in most of the treebanks. This reinforces some of the suspicions raised by the experiments of Table 2 , where training on rich-resource languages, but with limited data, revealed that bracketing encodings performed better, although just slightly. Also, it is worth noting that most of the low-resource treebanks tested in this work have a number of training sentences in the range where the bracketing-based encodings performed better for the predicted PoS tags setup in Table 2 , i.e. from 500 to 1,000 sentences (see Appendix C). Yet, the better performance of bracketing encodings is even more evident when running on real low-resource treebanks. This not only suggests that bracketing encodings are better for real low-resource sequence labeling parsing; it also points to a more general limitation for other low-resource NLP tasks that are evaluated only on simulated low-resource setups, which could lead to incomplete or even misleading conclusions. Overall, the results suggest that bracketing encodings are the most suitable linearizations for real low-resource sequence labeling parsing.",
"cite_spans": [],
"ref_spans": [
{
"start": 513,
"end": 520,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 890,
"end": 897,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": null
},
{
"text": "We have studied sequence labeling encodings for dependency parsing in low-resource setups. First, we explored which encodings are more data-efficient under different conditions that include the use of gold PoS tags, predicted PoS tags and no PoS tags as part of the input. By restricting training data for rich-resource treebanks, we observe that although bracketing encodings are less data-efficient than head-selection ones under ideal conditions, this disadvantage can vanish when the input conditions are not gold and data is limited. Second, we studied their performance under the same running configurations, but on truly low-resource languages. These results show even more clearly the advantage of bracketing encodings over the alternatives when training data is limited and the quality of external factors, such as PoS tags, is affected by the low-resource nature of the problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Other head-selection variants encode arcs based on word properties other than PoS tags (Lacroix, 2019). 2 An x-planar tree can be separated into x planes, where arcs belonging to the same plane do not cross.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In left-to-right parsers, a read transition is an action that moves a word from the buffer onto the stack. For algorithms such as the arc-standard or arc-hybrid this is only the shift action, while in the arc-eager parser both the shift and right-arc actions are read transitions. See also (Nivre, 2008).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used 3 tasks because it establishes a fairer comparison in terms of label sparsity and follows previous work. 5 Also, in the case of multilingual models, the literature draws different conclusions about what makes one language beneficial for another under a BERT-based framework. For instance, Wu and Dredze (2019) conclude that sharing a large number of sub-word pieces is important, while authors such as Pires et al. (2019) or Artetxe et al. (2020) state otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code-switching treebanks and small treebanks of rich-resource languages were not considered.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "FBBVA accepts no responsibility for the opinions, statements and contents included in the project and/or the results thereof, which are entirely the responsibility of the authors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by a 2020 Leonardo Grant for Researchers and Cultural Creators from the FBBVA. 7 The work also receives funding from the European Research Council (FASTPARSE, grant agreement No 714150), from ERDF/MICINN-AEI (ANSWER-ASAP, TIN2017-85160-C2-1-R, SCANNER, PID2020-113230RB-C21), from Xunta de Galicia (ED431C 2020/11), and from Centro de Investigaci\u00f3n de Galicia 'CITIC', funded by Xunta de Galicia and the European Union (European Regional Development Fund-Galicia 2014-2020 Program) by grant ED431G 2019/01.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Many languages, one parser",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "431--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A Smith. 2016. Many lan- guages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Distilling neural networks for greener and faster dependency parsing",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies",
"volume": "",
"issue": "",
"pages": "2--13",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwpt-1.2"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Anderson and Carlos G\u00f3mez-Rodr\u00edguez. 2020. Distilling neural networks for greener and faster de- pendency parsing. In Proceedings of the 16th In- ternational Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into En- hanced Universal Dependencies, pages 2-13, On- line. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "What taggers fail to learn, parsers need the most",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.01083"
]
},
"num": null,
"urls": [],
"raw_text": "Mark Anderson and Carlos G\u00f3mez-Rodr\u00edguez. 2021. What taggers fail to learn, parsers need the most. arXiv preprint arXiv:2104.01083.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the cross-lingual transferability of monolingual representations",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4623--4637",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.421"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 4623-4637, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Anchoring and agreement in syntactic annotations",
"authors": [
{
"first": "Yevgeni",
"middle": [],
"last": "Berzak",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Barbu",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
},
{
"first": "Boris",
"middle": [],
"last": "Katz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2215--2224",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1239"
]
},
"num": null,
"urls": [],
"raw_text": "Yevgeni Berzak, Yan Huang, Andrei Barbu, Anna Ko- rhonen, and Boris Katz. 2016. Anchoring and agree- ment in syntactic annotations. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2215-2224, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multitask learning. Machine learning",
"authors": [
{
"first": "Rich",
"middle": [],
"last": "Caruana",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "28",
"issue": "",
"pages": "41--75",
"other_ids": {
"DOI": [
"https://link.springer.com/article/10.1023/A:1007379606734"
]
},
"num": null,
"urls": [],
"raw_text": "Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41-75.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A fast and accurate dependency parser using neural networks",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "740--750",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1082"
]
},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural net- works. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A fundamental algorithm for dependency parsing",
"authors": [
{
"first": "Michael A",
"middle": [],
"last": "Covington",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th annual ACM southeast conference",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A Covington. 2001. A fundamental algorithm for dependency parsing. In Proceedings of the 39th annual ACM southeast conference, volume 1. Cite- seer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient EUD parsing",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Dehouck",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies",
"volume": "",
"issue": "",
"pages": "192--205",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwpt-1.20"
]
},
"num": null,
"urls": [],
"raw_text": "Mathieu Dehouck, Mark Anderson, and Carlos G\u00f3mez- Rodr\u00edguez. 2020. Efficient EUD parsing. In Pro- ceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependen- cies, pages 192-205, Online. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Data augmentation via subtree swapping for dependency parsing of low-resource languages",
"authors": [
{
"first": "Mathieu",
"middle": [],
"last": "Dehouck",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3818--3830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mathieu Dehouck and Carlos G\u00f3mez-Rodr\u00edguez. 2020. Data augmentation via subtree swapping for de- pendency parsing of low-resource languages. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3818-3830, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task",
"authors": [
{
"first": "Timothy",
"middle": [],
"last": "Dozat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "20--30",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3002"
]
},
"num": null,
"urls": [],
"raw_text": "Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceed- ings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20-30, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The hitchhiker's guide to testing statistical significance in natural language processing",
"authors": [
{
"first": "Rotem",
"middle": [],
"last": "Dror",
"suffix": ""
},
{
"first": "Gili",
"middle": [],
"last": "Baumer",
"suffix": ""
},
{
"first": "Segev",
"middle": [],
"last": "Shlomov",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1383--1392",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1128"
]
},
"num": null,
"urls": [],
"raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re- ichart. 2018. The hitchhiker's guide to testing statis- tical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Lexicalized vs. delexicalized parsing in low-resource scenarios",
"authors": [
{
"first": "Agnieszka",
"middle": [],
"last": "Falenska",
"suffix": ""
},
{
"first": "\u00d6zlem",
"middle": [],
"last": "\u00c7etinoglu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th International Conference on Parsing Technologies",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agnieszka Falenska and \u00d6zlem \u00c7etinoglu. 2017. Lexicalized vs. delexicalized parsing in low-resource scenarios. In Proceedings of the 15th International Conference on Parsing Technologies, pages 18-24, Pisa, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Left-to-right dependency parsing with pointer networks",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Fern\u00e1ndez-Gonz\u00e1lez",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "710--716",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1076"
]
},
"num": null,
"urls": [],
"raw_text": "Daniel Fern\u00e1ndez-Gonz\u00e1lez and Carlos G\u00f3mez- Rodr\u00edguez. 2019. Left-to-right dependency parsing with pointer networks. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 710-716, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "New treebank or repurposed? on the feasibility of cross-lingual parsing of romance languages with universal dependencies",
"authors": [
{
"first": "Marcos",
"middle": [],
"last": "Garcia",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Miguel A",
"middle": [],
"last": "Alonso",
"suffix": ""
}
],
"year": 2018,
"venue": "Natural Language Engineering",
"volume": "24",
"issue": "1",
"pages": "91--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcos Garcia, Carlos G\u00f3mez-Rodr\u00edguez, and Miguel A Alonso. 2018. New treebank or repur- posed? on the feasibility of cross-lingual parsing of romance languages with universal dependencies. Natural Language Engineering, 24(1):91-122.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "How important is syntactic parsing accuracy? an empirical evaluation on rulebased sentiment analysis",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Iago",
"middle": [],
"last": "Alonso-Alonso",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
}
],
"year": 2019,
"venue": "Artificial Intelligence Review",
"volume": "52",
"issue": "3",
"pages": "2081--2097",
"other_ids": {
"DOI": [
"https://link.springer.com/article/10.1007/s10462-017-9584-0"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, Iago Alonso-Alonso, and David Vilares. 2019. How important is syntactic parsing accuracy? an empirical evaluation on rule- based sentiment analysis. Artificial Intelligence Re- view, 52(3):2081-2097.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A unifying theory of transitionbased and sequence labeling parsing",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Michalina",
"middle": [],
"last": "Strzyz",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3776--3793",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlos G\u00f3mez-Rodr\u00edguez, Michalina Strzyz, and David Vilares. 2020. A unifying theory of transition- based and sequence labeling parsing. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 3776-3793, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scheduled multi-task learning: From syntax to translation",
"authors": [
{
"first": "Eliyahu",
"middle": [],
"last": "Kiperwasser",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "225--240",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00017"
]
},
"num": null,
"urls": [],
"raw_text": "Eliyahu Kiperwasser and Miguel Ballesteros. 2018. Scheduled multi-task learning: From syntax to trans- lation. Transactions of the Association for Computa- tional Linguistics, 6:225-240.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Corpusbased induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)",
"volume": "",
"issue": "",
"pages": "478--485",
"other_ids": {
"DOI": [
"10.3115/1218955.1219016"
]
},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher Manning. 2004. Corpus- based induction of syntactic structure: Models of de- pendency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Com- putational Linguistics (ACL-04), pages 478-485, Barcelona, Spain.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Dependency parsing. Synthesis lectures on human language technologies",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra K\u00fcbler, Ryan McDonald, and Joakim Nivre. 2009. Dependency parsing. Synthesis lectures on human language technologies, 1(1):1-127.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dynamic programming algorithms for transition-based dependency parsers",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Kuhlmann",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "673--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Kuhlmann, Carlos G\u00f3mez-Rodr\u00edguez, and Gior- gio Satta. 2011. Dynamic programming algorithms for transition-based dependency parsers. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 673-682, Portland, Ore- gon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Dependency parsing as sequence labeling with head-based encoding and multi-task learning",
"authors": [
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)",
"volume": "",
"issue": "",
"pages": "136--143",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oph\u00e9lie Lacroix. 2019. Dependency parsing as sequence labeling with head-based encoding and multi-task learning. In Proceedings of the Fifth In- ternational Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 136-143.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Frustratingly easy cross-lingual transfer for transition-based dependency parsing",
"authors": [
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Lauriane",
"middle": [],
"last": "Aufrant",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1058--1063",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1121"
]
},
"num": null,
"urls": [],
"raw_text": "Oph\u00e9lie Lacroix, Lauriane Aufrant, Guillaume Wis- niewski, and Fran\u00e7ois Yvon. 2016a. Frustratingly easy cross-lingual transfer for transition-based de- pendency parsing. In Proceedings of the 2016 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 1058-1063, San Diego, California. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Cross-lingual dependency transfer : What matters? assessing the impact of pre-and postprocessing",
"authors": [
{
"first": "Oph\u00e9lie",
"middle": [],
"last": "Lacroix",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wisniewski",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Multilingual and Cross-lingual Methods in NLP",
"volume": "",
"issue": "",
"pages": "20--29",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1203"
]
},
"num": null,
"urls": [],
"raw_text": "Oph\u00e9lie Lacroix, Guillaume Wisniewski, and Fran\u00e7ois Yvon. 2016b. Cross-lingual dependency transfer : What matters? assessing the impact of pre-and post- processing. In Proceedings of the Workshop on Mul- tilingual and Cross-lingual Methods in NLP, pages 20-29, San Diego, California. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised dependency parsing: Let's use supervised parsers",
"authors": [
{
"first": "Phong",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Willem",
"middle": [],
"last": "Zuidema",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "651--661",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1067"
]
},
"num": null,
"urls": [],
"raw_text": "Phong Le and Willem Zuidema. 2015. Unsupervised dependency parsing: Let's use supervised parsers. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 651-661, Denver, Colorado. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Seq2seq dependency parsing",
"authors": [
{
"first": "Zuchao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiaxun",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Shexia",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3203--3214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zuchao Li, Jiaxun Cai, Shexia He, and Hai Zhao. 2018. Seq2seq dependency parsing. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3203-3214, Santa Fe, New Mex- ico, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Improving neural machine translation with neural syntactic distance",
"authors": [
{
"first": "Chunpeng",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Akihiro",
"middle": [],
"last": "Tamura",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2032--2037",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1205"
]
},
"num": null,
"urls": [],
"raw_text": "Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Ei- ichiro Sumita, and Tiejun Zhao. 2019. Improving neural machine translation with neural syntactic dis- tance. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 2032-2037, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Stackpointer networks for dependency parsing",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zecong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Jingzhou",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nanyun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1403--1414",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1130"
]
},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma, Zecong Hu, Jingzhou Liu, Nanyun Peng, Graham Neubig, and Eduard Hovy. 2018. Stack- pointer networks for dependency parsing. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1403-1414, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Turning on the turbo: Fast third-order non-projective turbo parsers",
"authors": [
{
"first": "Andr\u00e9",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "Miguel",
"middle": [
"B"
],
"last": "Almeida",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "617--622",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andr\u00e9 FT Martins, Miguel B Almeida, and Noah A Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 617-622.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Reranking and self-training for parser adaptation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "337--344",
"other_ids": {
"DOI": [
"10.3115/1220175.1220218"
]
},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adapta- tion. In Proceedings of the 21st International Con- ference on Computational Linguistics and 44th An- nual Meeting of the Association for Computational Linguistics, pages 337-344, Sydney, Australia. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Multi-source transfer of delexicalized dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62-72, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Dependency syntax: theory and practice",
"authors": [
{
"first": "Igor",
"middle": [
"Aleksandrovic"
],
"last": "Mel'cuk",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Aleksandrovic Mel'cuk et al. 1988. Dependency syntax: theory and practice. SUNY press.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Self-training for unsupervised parsing with PRPN",
"authors": [
{
"first": "Anhad",
"middle": [],
"last": "Mohananey",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies",
"volume": "",
"issue": "",
"pages": "105--110",
"other_ids": {
"DOI": [
"10.18653/v1/2020.iwpt-1.11"
]
},
"num": null,
"urls": [],
"raw_text": "Anhad Mohananey, Katharina Kann, and Samuel R. Bowman. 2020. Self-training for unsupervised pars- ing with PRPN. In Proceedings of the 16th Interna- tional Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 105-110, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Low-resource parsing with crosslingual contextualized representations",
"authors": [
{
"first": "Phoebe",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "304--315",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1029"
]
},
"num": null,
"urls": [],
"raw_text": "Phoebe Mulcaire, Jungo Kasai, and Noah A. Smith. 2019. Low-resource parsing with crosslingual con- textualized representations. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL), pages 304-315, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Selective sharing for multilingual dependency parsing",
"authors": [
{
"first": "Tahira",
"middle": [],
"last": "Naseem",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Globerson",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "629--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahira Naseem, Regina Barzilay, and Amir Globerson. 2012. Selective sharing for multilingual dependency parsing. In Proceedings of the 50th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 629-637, Jeju Is- land, Korea. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Algorithms for deterministic incremental dependency parsing",
"authors": [
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2008,
"venue": "Computational Linguistics",
"volume": "34",
"issue": "4",
"pages": "513--553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joakim Nivre. 2008. Algorithms for deterministic in- cremental dependency parsing. Computational Lin- guistics, 34(4):513-553.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "How multilingual is multilingual BERT?",
"authors": [
{
"first": "Telmo",
"middle": [],
"last": "Pires",
"suffix": ""
},
{
"first": "Eva",
"middle": [],
"last": "Schlinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4996--5001",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1493"
]
},
"num": null,
"urls": [],
"raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996- 5001, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "338--348",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1035"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Deep contextualized self-training for low resource dependency parsing",
"authors": [
{
"first": "Guy",
"middle": [],
"last": "Rotman",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "695--713",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00294"
]
},
"num": null,
"urls": [],
"raw_text": "Guy Rotman and Roi Reichart. 2019. Deep contextual- ized self-training for low resource dependency pars- ing. Transactions of the Association for Computa- tional Linguistics, 7:695-713.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "An overview of multi-task learning in deep neural networks",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.05098"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Evaluating the effects of treebank size in a practical application for parsing",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Rune",
"middle": [],
"last": "Saetre",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "14--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Sagae, Yusuke Miyao, Rune Saetre, and Jun'ichi Tsujii. 2008. Evaluating the effects of treebank size in a practical application for parsing. In Software En- gineering, Testing, and Quality Assurance for Natu- ral Language Processing, pages 14-20, Columbus, Ohio. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Adversarial training for crossdomain Universal Dependency parsing",
"authors": [
{
"first": "Motoki",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Manabe",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Noji",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"volume": "",
"issue": "",
"pages": "71--79",
"other_ids": {
"DOI": [
"10.18653/v1/K17-3007"
]
},
"num": null,
"urls": [],
"raw_text": "Motoki Sato, Hitoshi Manabe, Hiroshi Noji, and Yuji Matsumoto. 2017. Adversarial training for cross- domain Universal Dependency parsing. In Proceed- ings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 71-79, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "On the role of supervision in unsupervised constituency parsing",
"authors": [
{
"first": "Haoyue",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7611--7621",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.614"
]
},
"num": null,
"urls": [],
"raw_text": "Haoyue Shi, Karen Livescu, and Kevin Gimpel. 2020. On the role of supervision in unsupervised con- stituency parsing. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 7611-7621, Online. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Data point selection for crosslanguage adaptation of dependency parsers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "682--686",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2011. Data point selection for cross- language adaptation of dependency parsers. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 682-686, Portland, Ore- gon, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Leveraging dependency forest for neural medical relation extraction",
"authors": [
{
"first": "Linfeng",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "208--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linfeng Song, Yue Zhang, Daniel Gildea, Mo Yu, Zhiguo Wang, et al. 2019. Leveraging dependency forest for neural medical relation extraction. In Pro- ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 208-218.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "From baby steps to leapfrog: How \"less is more\" in unsupervised dependency parsing",
"authors": [
{
"first": "Valentin",
"middle": [
"I"
],
"last": "Spitkovsky",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "751--759",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin I. Spitkovsky, Hiyan Alshawi, and Daniel Ju- rafsky. 2010. From baby steps to leapfrog: How \"less is more\" in unsupervised dependency parsing. In Human Language Technologies: The 2010 An- nual Conference of the North American Chapter of the Association for Computational Linguistics, pages 751-759, Los Angeles, California. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Dependency parsing as a sequence labeling task",
"authors": [
{
"first": "Drahom\u00edra",
"middle": [],
"last": "Spoustov\u00e1",
"suffix": ""
},
{
"first": "Miroslav",
"middle": [],
"last": "Spousta",
"suffix": ""
}
],
"year": 2010,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "94",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drahom\u00edra Spoustov\u00e1 and Miroslav Spousta. 2010. De- pendency parsing as a sequence labeling task. The Prague Bulletin of Mathematical Linguistics, 94:7.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Linguistically-informed self-attention for semantic role labeling",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andor",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "5027--5038",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1548"
]
},
"num": null,
"urls": [],
"raw_text": "Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing, pages 5027-5038, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Viable dependency parsing as sequence labeling",
"authors": [
{
"first": "Michalina",
"middle": [],
"last": "Strzyz",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "717--723",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1077"
]
},
"num": null,
"urls": [],
"raw_text": "Michalina Strzyz, David Vilares, and Carlos G\u00f3mez- Rodr\u00edguez. 2019. Viable dependency parsing as se- quence labeling. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 717-723, Minneapolis, Minnesota. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Bracketing encodings for 2-planar dependency parsing",
"authors": [
{
"first": "Michalina",
"middle": [],
"last": "Strzyz",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilares",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2472--2484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michalina Strzyz, David Vilares, and Carlos G\u00f3mez- Rodr\u00edguez. 2020. Bracketing encodings for 2-planar dependency parsing. In Proceedings of the 28th In- ternational Conference on Computational Linguis- tics, pages 2472-2484, Barcelona, Spain (Online). International Committee on Computational Linguis- tics.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Target language adaptation of discriminative transfer parsers",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1061--1071",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Joakim Nivre. 2013. Target language adaptation of discrimina- tive transfer parsers. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1061-1071, Atlanta, Georgia. Association for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Treebank translation for cross-lingual parser induction",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "130--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann,\u017deljko Agi\u0107, and Joakim Nivre. 2014. Treebank translation for cross-lingual parser induc- tion. In Proceedings of the Eighteenth Confer- ence on Computational Natural Language Learning, pages 130-140.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages",
"authors": [
{
"first": "Clara",
"middle": [],
"last": "Vania",
"suffix": ""
},
{
"first": "Yova",
"middle": [],
"last": "Kementchedjhieva",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lopez",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "1105--1116",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1102"
]
},
"num": null,
"urls": [],
"raw_text": "Clara Vania, Yova Kementchedjhieva, Anders S\u00f8gaard, and Adam Lopez. 2019. A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 1105-1116, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Performance-oriented dependency parsing",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Volokh",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Volokh. 2013. Performance-oriented depen- dency parsing.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Synthetic data made to order: The case of parsing",
"authors": [
{
"first": "Dingquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1325--1337",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Dingquan Wang and Jason Eisner. 2018. Synthetic data made to order: The case of parsing. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1325-1337, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "How to best use syntax in semantic role labelling",
"authors": [
{
"first": "Yufei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Yifang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5338--5343",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1529"
]
},
"num": null,
"urls": [],
"raw_text": "Yufei Wang, Mark Johnson, Stephen Wan, Yifang Sun, and Wei Wang. 2019. How to best use syntax in se- mantic role labelling. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 5338-5343, Florence, Italy. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT",
"authors": [
{
"first": "Shijie",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "833--844",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1077"
]
},
"num": null,
"urls": [],
"raw_text": "Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "NCRF++: An opensource neural sequence labeling toolkit",
"authors": [
{
"first": "Jie",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "74--79",
"other_ids": {
"DOI": [
"10.18653/v1/P18-4013"
]
},
"num": null,
"urls": [],
"raw_text": "Jie Yang and Yue Zhang. 2018. NCRF++: An open- source neural sequence labeling toolkit. In Proceed- ings of ACL 2018, System Demonstrations, pages 74-79, Melbourne, Australia. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Generic axiomatization of families of noncrossing graphs in dependency parsing",
"authors": [
{
"first": "Anssi",
"middle": [],
"last": "Yli-Jyr\u00e4",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "G\u00f3mez-Rodr\u00edguez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1745--1755",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1160"
]
},
"num": null,
"urls": [],
"raw_text": "Anssi Yli-Jyr\u00e4 and Carlos G\u00f3mez-Rodr\u00edguez. 2017. Generic axiomatization of families of noncrossing graphs in dependency parsing. In Proceedings of the 55th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 1745-1755, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Universal dependencies 2.7",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Zeman",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2020,
"venue": "LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Zeman, Joakim Nivre, et al. 2020. Universal de- pendencies 2.7. LINDAT/CLARIAH-CZ digital li- brary at the Institute of Formal and Applied Linguis- tics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Cross-lingual dependency parsing using code-mixed TreeBank",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "997--1006",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1092"
]
},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2019. Cross-lingual dependency parsing using code-mixed TreeBank. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 997-1006, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Is POS tagging necessary or even helpful for neural dependency parsing?",
"authors": [
{
"first": "Houquan",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhenghua",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2020,
"venue": "CCF International Conference on Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "179--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Houquan Zhou, Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Is pos tagging necessary or even help- ful for neural dependency parsing? In CCF Interna- tional Conference on Natural Language Processing and Chinese Computing, pages 179-191. Springer.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"type_str": "table",
"num": null,
"html": null,
"text": "Average UAS difference for the subsets of the rich-resource treebanks under the gold PoS tags setup. Blue and yellow cells show the UAS increase and decrease with respect to the rp h encoding, respectively.",
"content": "<table><tr><td colspan=\"2\"># Sentences rp h rx b 2p b ah tb c tb</td></tr><tr><td>100</td><td>41.87 -0.42 -0.19 -1.9 -3.59</td></tr><tr><td>500</td><td>63.45 -0.01 0.14 -1.96 -5.73</td></tr><tr><td>1000</td><td>68.10 0.25 0.17 -2.44 -5.53</td></tr><tr><td>5 000</td><td>78.56 -0.62 -0.63 -2.53 -5.44</td></tr><tr><td>10 000</td><td>82.29 -0.37 -0.36 -2.49 -4.44</td></tr></table>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"html": null,
"text": "Average UAS difference for the subsets of the rich-resource treebanks under the predicted PoS tags setup.",
"content": "<table><tr><td colspan=\"2\"># Sentences rp h rx b 2p b ah tb c tb</td></tr><tr><td>100</td><td>35.60 9.06 9.31 7.57 4.83</td></tr><tr><td>500</td><td>58.63 3.04 2.45 0.99 -2.26</td></tr><tr><td>1 000</td><td>63.99 3.59 3.42 0.83 -2.24</td></tr><tr><td>5 000</td><td>75.57 1.47 1.55 -0.19 -3.07</td></tr><tr><td>10 000</td><td>79.90 1.22 1.54 -0.87 -2.93</td></tr></table>"
},
"TABREF4": {
"type_str": "table",
"num": null,
"html": null,
"text": "Average UAS difference for the subsets of the rich-resource treebanks under the no PoS tags setup.",
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"html": null,
"text": "79.89 \u2212\u2212 81.7 \u2212 78.57 \u2212\u2212 79.86 \u2212\u2212 lzh 90.21 89.56 \u2212 89.24 \u2212 89.04 \u2212\u2212 89.18 \u2212 cs 91.61 90.49 \u2212\u2212 90.91 \u2212\u2212 88.18 \u2212\u2212 85.64 \u2212\u2212 et 85.62 84.79 \u2212 84.91 \u2212 81.86 \u2212\u2212 81.11 \u2212\u2212 de 96.69 95.95 \u2212\u2212 96.38 \u2212\u2212 95.15 \u2212\u2212 86.51 \u2212\u2212 hi 94.69 94.09 \u2212\u2212 94.43 \u2212 93.05 \u2212\u2212 85.02 \u2212\u2212 ko 87.26 86.24 \u2212\u2212 86.52 \u2212 85.68 \u2212\u2212 84.06 \u2212\u2212 lv 85.3 83.88 \u2212 84.01 \u2212 80.88 \u2212\u2212 81.38 \u2212\u2212 fa 92.61 92.07 \u2212 92.44 \u2212 90.45 \u2212\u2212 87.09 \u2212\u2212 ro 90.49 89.68 \u2212 89.63 \u2212\u2212 87.39 \u2212\u2212 86.38 \u2212\u2212 ru 91.23 90.1 \u2212\u2212 90.1 \u2212\u2212 88.19 \u2212\u2212 84.96 \u2212\u2212",
"content": "<table><tr><td>rp h</td><td>rx b</td><td>2p b</td><td>ah tb</td><td>c tb</td></tr><tr><td colspan=\"2\">grc 83.09 Avg 89.89 88.79</td><td>89.12</td><td>87.13</td><td>84.65</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"html": null,
"text": "77.61 \u2212\u2212 79.21 \u2212 76.49 \u2212\u2212 77.71 \u2212 lzh 79.93 79.8 \u2212 79.42 \u2212 79.41 \u2212 79.54 \u2212 cs 90.04 88.93 \u2212\u2212 89.34 \u2212\u2212 86.67 \u2212\u2212 84.25 \u2212\u2212 et 81.07 80.36 \u2212 80.34 \u2212 77.71 \u2212\u2212 76.95 \u2212\u2212 de 95.85 95.14 \u2212\u2212 95.54 \u2212\u2212 94.34 \u2212\u2212 85.79 \u2212\u2212 hi 92.22 91.76 \u2212 92.21 \u2212 90.72 \u2212\u2212 83.24 \u2212\u2212 ko 84.25 83.44 \u2212 83.42 \u2212 82.98 \u2212\u2212 81.25 \u2212\u2212 lv 70.65 71.98 ++ 71.08 + 68.9 \u2212 68.97 \u2212 fa 90.39 89.8 \u2212 90.32 \u2212 88.27 \u2212\u2212 85.28 \u2212\u2212 ro 87.32 86.64 \u2212 86.49 \u2212 84.44 \u2212\u2212 83.5 \u2212\u2212 ru 88.71 88.13 \u2212 88.24 \u2212 85.93 \u2212\u2212 82.96 \u2212\u2212",
"content": "<table><tr><td>rp h</td><td>rx b</td><td>2p b</td><td>ah tb</td><td>c tb</td></tr><tr><td colspan=\"2\">grc 80.2 Avg 85.51 84.87</td><td>85.06</td><td>83.26</td><td>80.86</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"num": null,
"html": null,
"text": "grc 77.84 77.41 \u2212 79.16 + 75.64 \u2212 76.99 \u2212 lzh 79.99 81.02 + 80.75 + 81.11 ++ 81.42 ++ cs 88.67 88.2 \u2212 88.64 \u2212 85.8 \u2212\u2212 84.23 \u2212\u2212 et 77.85 79.69 ++ 79.99 ++ 77.15 \u2212 76.27 \u2212 de 94.51 95.09 ++ 95.41 ++ 94.18 \u2212 83.54 \u2212\u2212 hi 89.43 91.7 ++ 91.98 ++ 90.72 ++ 82.86 \u2212\u2212 ko 79.39 82.18 ++ 82.15 ++ 81.88 ++ 80.3 ++ lv 62.56 71.17 ++ 72.38 ++ 66.78 ++ 69.38 ++ fa 89.14 90.39 ++ 90.48 ++ 88.49 \u2212 84.54 \u2212\u2212 ro 85.28 86.41 + 86.94 ++ 84.25 \u2212 83.04 \u2212\u2212 ru 83.35 83.98 ++ 84.5 ++ 83.42 ++ 80.26 \u2212\u2212",
"content": "<table><tr><td>rp h</td><td>rx b</td><td>2p b</td><td>ah tb</td><td>c tb</td></tr><tr><td colspan=\"2\">Avg 82.55 84.29</td><td>84.76</td><td>82.67</td><td>80.26</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"num": null,
"html": null,
"text": "UAS for the rich-resource treebanks, using the whole training set and the no PoS tags setup. \u2212\u2212 85.48 \u2212\u2212 81.84 \u2212\u2212 78.6 \u2212\u2212 cop 88.73 88.43 \u2212 88.72 \u2212 85.5 \u2212\u2212 84.35 \u2212\u2212 fo 84.04 83.76 \u2212 84.09 + 81.78 \u2212\u2212 79.53 \u2212\u2212 hu 79.75 76.14 \u2212\u2212 76.13 \u2212\u2212 71.66 \u2212\u2212 64.27 \u2212\u2212 lt 51.98 50.28 \u2212 50.19 \u2212 45.0 \u2212 46.6 \u2212 mt 81.81 81.05 \u2212 80.82 \u2212 76.78 \u2212\u2212 74.98 \u2212\u2212 mr 77.43 76.46 \u2212 75.97 \u2212 76.94 \u2212 73.54 \u2212 ta 74.96 73.1 \u2212 71.9 \u2212 71.74 \u2212 66.01 \u2212\u2212 te 90.01 91.26 + 90.43 + 90.01 + 89.46 \u2212 wo 86.19 84.64 \u2212\u2212 84.51 \u2212\u2212 80.65 \u2212\u2212 77.43 \u2212\u2212",
"content": "<table><tr><td>rp h</td><td>rx b</td><td>2p b</td><td>ah tb</td><td>c tb</td></tr><tr><td colspan=\"2\">af 88.02 85.7 Avg 80.29 79.08</td><td>78.82</td><td>76.19</td><td>73.48</td></tr></table>"
},
"TABREF9": {
"type_str": "table",
"num": null,
"html": null,
"text": "UAS for the low-resource treebanks for the gold PoS tags setup.",
"content": "<table><tr><td colspan=\"3\">4.2 Experiment 2: Encodings performance</td></tr><tr><td colspan=\"2\">on truly low-resource languages</td><td/></tr><tr><td colspan=\"3\">Data We choose the 10 smallest treebanks 6</td></tr><tr><td colspan=\"3\">(in terms of training sentences) that had</td></tr><tr><td>a dev set:</td><td colspan=\"2\">Lithuanian HSE , Marathi UFAL ,</td></tr><tr><td>Hungarian Szeged ,</td><td>Telugu MTG ,</td><td>Tamil TTB ,</td></tr><tr><td colspan=\"3\">Faroese FarPaHC , Coptic Scriptorium , Maltese MUDT ,</td></tr><tr><td colspan=\"2\">Wolof WTB and Afrikaans AfriBooms</td><td/></tr></table>"
},
"TABREF10": {
"type_str": "table",
"num": null,
"html": null,
"text": "af 81.84 80.29 \u2212 79.9 \u2212 77.3 \u2212\u2212 73.61 \u2212\u2212 cop 85.77 86.25 + 85.92 + 83.14 \u2212\u2212 81.84 \u2212\u2212 fo 77.04 76.97 \u2212 77.52 + 75.23 \u2212 74.24 \u2212 hu 70.52 68.51 \u2212 68.77 \u2212 64.98 \u2212\u2212 58.37 \u2212\u2212 lt 30.28 34.53 + 33.11 + 31.23 + 29.91 \u2212 mt 74.6 75.64 + 75.07 + 71.17 \u2212\u2212 70.35 \u2212\u2212 mr 66.99 67.96 + 67.23 + 68.93 + 67.23 + ta 57.11 60.73 + 57.57 + 58.77 + 55.51 \u2212 te 86.41 87.93 + 87.93 + 86.96 + 86.69 + wo 76.88 76.4 \u2212 76.3 \u2212 73.24 \u2212\u2212 70.84 \u2212\u2212",
"content": "<table><tr><td>rp h</td><td>rx b</td><td>2p b</td><td>ah tb</td><td>c tb</td></tr><tr><td colspan=\"3\">Avg 70.74 71.52 70.93</td><td>69.10</td><td>66.86</td></tr><tr><td/><td/><td/><td/><td>and 9 show the UAS for each</td></tr><tr><td/><td/><td/><td/><td>encoding and treebank for the gold PoS tags setup,</td></tr><tr><td/><td/><td/><td/><td>the predicted PoS tags setup and the no PoS tags</td></tr><tr><td/><td/><td/><td/><td>setup, respectively. Again, under perfect condi-</td></tr><tr><td/><td/><td/><td/><td>tions, the relative PoS-based encoding performs</td></tr><tr><td/><td/><td/><td/><td>overall better, except for Telugu, which seems to</td></tr></table>"
},
"TABREF11": {
"type_str": "table",
"num": null,
"html": null,
"text": "UAS for the low-resource treebanks for the predicted PoS tags setup. 80.78 + 80.07 + 75.47 \u2212\u2212 73.76 \u2212\u2212 cop 84.36 85.76 + 85.13 + 83.07 \u2212 81.28 \u2212\u2212 fo 73.98 77.08 ++ 77.04 ++ 75.07 + 73.67 \u2212 hu 63.63 65.21 + 64.8 + 62.04 \u2212 56.17 \u2212\u2212 lt 26.89 34.62 ++ 35.38 ++ 34.06 ++ 32.92 + mt 70.95 75.5 ++ 75.3 ++ 71.69 + 70.32 \u2212 mr 64.08 66.75 + 67.96 + 69.66 + 64.56 + ta 52.79 60.03 ++ 56.61 + 59.58 ++ 54.95 + te 85.44 88.49 + 88.63 + 87.1 + 86.82 + wo 73.11 77.17 ++ 76.95 ++ 74.01 + 70.86 \u2212",
"content": "<table><tr><td>rp h</td><td>rx b</td><td>2p b</td><td>ah tb</td><td>c tb</td></tr><tr><td colspan=\"2\">af 79.86 Avg 67.51 71.14</td><td>70.79</td><td>69.18</td><td>66.53</td></tr></table>"
},
"TABREF12": {
"type_str": "table",
"num": null,
"html": null,
"text": "UAS for the low-resource treebanks for the no PoS tags setup.",
"content": "<table/>"
}
}
}
}