| { |
| "paper_id": "2022", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:21:14.100643Z" |
| }, |
| "title": "Word-order typology in Multilingual BERT: A case study in subordinate-clause detection", |
| "authors": [ |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Nikolaev", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "dnikolaev@fastmail.com" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Pad\u00f3", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "pado@ims.uni-stuttgart.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "The capabilities and limitations of BERT and similar models are still unclear when it comes to learning syntactic abstractions, in particular across languages. In this paper, we use the task of subordinate-clause detection within and across languages to probe these properties. We show that this task is deceptively simple, with easy gains offset by a long tail of harder cases, and that BERT's zero-shot performance is dominated by word-order effects, mirroring the SVO/VSO/SOV typology.", |
| "pdf_parse": { |
| "paper_id": "2022", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "The capabilities and limitations of BERT and similar models are still unclear when it comes to learning syntactic abstractions, in particular across languages. In this paper, we use the task of subordinate-clause detection within and across languages to probe these properties. We show that this task is deceptively simple, with easy gains offset by a long tail of harder cases, and that BERT's zero-shot performance is dominated by word-order effects, mirroring the SVO/VSO/SOV typology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Analysing the ability of pre-trained neural language models, such as BERT (Devlin et al., 2019) , to abstract grammatical patterns from raw texts has become a prominent research question (Jawahar et al., 2019; Rogers et al., 2020) . Results remain mixed. While BERT-based models have been shown to learn syntactic representations that are similarly structured across languages (Chi et al., 2020) , some grammatical patterns, such as discontinuous constituents, remain challenging for them even when training data is plentiful (Kogkalidis and Winholds, 2022) . In practical terms, zero-shot performance of BERT-based models is lower for typologically distant languages (Pires et al., 2019) , and they can profit from direct exposure to typological features during fine-tuning (Bjerva and Augenstein, 2021) .", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 95, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 187, |
| "end": 209, |
| "text": "(Jawahar et al., 2019;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 210, |
| "end": 230, |
| "text": "Rogers et al., 2020)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 377, |
| "end": 395, |
| "text": "(Chi et al., 2020)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 526, |
| "end": 557, |
| "text": "(Kogkalidis and Winholds, 2022)", |
| "ref_id": null |
| }, |
| { |
| "start": 668, |
| "end": 688, |
| "text": "(Pires et al., 2019)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 775, |
| "end": 804, |
| "text": "(Bjerva and Augenstein, 2021)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this study, we add another datapoint to the conversation by analysing the ability of BERT-based models to capture the distinction between main and subordinate clauses across languages. This task is promising for two reasons. First, it highlights variability in the way main and subordinate clauses are structured across languages, thus acting as an informative probe into the relationship between BERT and typological categories. Second, the task is arguably relevant for downstream performance on natural-language understanding, where (some notion of) syntactic scope and compositionality should support tasks such as analysing commitment (Jiang and de Marneffe, 2019; Zhang and de Marneffe, 2021) or factuality (Lotan et al., 2013) , text simplification (Sikka and Mago, 2020) , or paraphrase detection (Timmer et al., 2021) . In order to operationalise it in a cross-lingual fashion, we use the Universal Dependencies framework (UD; Nivre et al., 2020) with its large multilingual collection of corpora.", |
| "cite_spans": [ |
| { |
| "start": 643, |
| "end": 672, |
| "text": "(Jiang and de Marneffe, 2019;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 673, |
| "end": 701, |
| "text": "Zhang and de Marneffe, 2021)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 716, |
| "end": 736, |
| "text": "(Lotan et al., 2013)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 759, |
| "end": 781, |
| "text": "(Sikka and Mago, 2020)", |
| "ref_id": null |
| }, |
| { |
| "start": 808, |
| "end": 829, |
| "text": "(Timmer et al., 2021)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 939, |
| "end": 958, |
| "text": "Nivre et al., 2020)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our analysis proceeds in two stages. First, we survey the performance of BERT models fine-tuned and tested on the same language across 20 typologically diverse languages ( \u00a7 3). For the majority of languages, distinguishing main and subordinate clauses is easily solved with base-size models and relatively small training sets. However, some languages demonstrate a non-negligible number of errors, which we analyse.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Then we study the performance of Multilingual BERT (mBERT) in a zero-shot setting ( \u00a7 4), where we fine-tune the model on labeled data in 10 different languages and then test its performance on 31 datasets representing 27 different languages. We find that the performance of mBERT is dominated by word-order effects well known from the typological literature (Comrie, 1981): the Arabic model shows best-in-class performance on Irish and the Japanese model on Korean, while both perform poorly overall. European languages with large training sets provide a good inductive bias for typologically diverse languages but fail on SOV languages.", |
| "cite_spans": [ |
| { |
| "start": 359, |
| "end": 372, |
| "text": "(Comrie, 1981", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Data To make our analysis maximally comparable across languages, we start from the Parallel Universal Dependencies (PUD) collection (Zeman et al., 2017) , which contains translations for a set of 1000 English sentences. PUD only contains test corpora. As these are too small to be further split into train/test subsets, we use other corpora to fine-tune the models. We also add corpora for languages not covered by PUD for better typological coverage. See Appendix \u00a7 A.2 for the full list.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 152, |
| "text": "(Zeman et al., 2017)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Model The experimental setup is identical in the single-language and zero-shot settings. A pretrained mBERT model (a variant of bert-base) and several pre-trained single-language BERT models, all provided by HuggingFace (Wolf et al., 2020) , are fine-tuned on the binary classification of predicates into main vs. subordinate clauses. We operationalise main clauses as those headed by predicates with the UD label root and subordinate clauses through the UD labels acl, ccomp, advcl, csubj, and xcomp. The last hidden state of the embedding model for the first subword of each predicate is fed to a two-layer MLP with a tanh activation after the first layer, and the model is fine-tuned using cross-entropy loss. For the single-language setup, the model is fine-tuned for five epochs, and we report the best result on the validation set. Most models begin overfitting after the second epoch, so in the zero-shot setting all models are fine-tuned for two epochs.", |
| "cite_spans": [ |
| { |
| "start": 220, |
| "end": 239, |
| "text": "(Wolf et al., 2020)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The main results obtained by the models fine-tuned and tested on the same language are shown in Table 1. Results are above 90% for almost all languages, while a majority baseline (always assign subordinate clause) attains an accuracy of 50-70% depending on the language. ( Table 3 in the Appendix provides more details about the models and corpora, including exact baseline results.) At first glance, neither the size of the training set nor the size of the model seems to be a major factor: mBERT demonstrates better performance when fine-tuned on the small Afrikaans and Hebrew datasets than when trained on a bigger Chinese dataset. When fine-tuned on the English data, it attains the same performance as an English-only bert-large. 1", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 273, |
| "end": 280, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Single-Language Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "1 The mBERT result is reported in Table 2. A more fundamental distinction seems to exist between major European languages, the results on which are generally at > 97% accuracy (except for German), and Mandarin Chinese, Vietnamese, and Korean, where results are around 90%. Our analysis indicates that these differences are partly due to discrepancies in UD annotations across corpora but also due to genuine syntactic differences. An example of an annotation-related confound is the treatment of quotations. The PUD corpora that we use preferentially as test sets treat quotations as sentential complements of communication verbs. Some of the corpora we use for fine-tuning, however, analyse the cases where the quotation precedes the verb of speech as parataxis. The head predicate of the quotation therefore receives the label root and becomes the main predicate of the whole sentence, leading to spurious mistakes in the analysis of PUD corpora, where they are annotated as ccomp's. This discrepancy accounts for the lion's share of classification mistakes in German and some mistakes in Mandarin.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 34, |
| "end": 42, |
| "text": "Table 2.", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Single-Language Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In contrast, an example of genuine ambiguity is provided by the Mandarin g\u0113nj\u00f9 construction. This construction means 'according to' and can incorporate both nominal and verbal constituents. Thus, g\u0113nj\u00f9 sh\u00e0ng bi\u01ceog\u00e9 zh\u014dng q\u012b g\u00e8 yu\u00e1ns\u00f9 de gu\u0101nx\u00ec from the Mandarin GSD corpus, which we used for fine-tuning, means 'based on the relationship of the seven elements in the above table', and the annotation treats this construction as an oblique prepositional phrase. Cf. the following example from the Mandarin PUD corpus: g\u0113nj\u00f9 k\u011bx\u00edng x\u00ecng y\u00e1nji\u016b g\u016bj\u00ec 'according to the feasibility study / the feasibility study estimates that / as the feasibility study estimates'. The analysis of this sentence in PUD makes g\u016bj\u00ec 'estimate' the main predicate of the sentence, while an alternative analysis would make it the head of an adverbial clause, and yet another analysis would label it as a nominal element. The ability of Mandarin words to act as different parts of speech in different contexts (especially in the case of verbs, which can act as clause heads, auxiliaries, complementisers, and compound elements) makes this kind of disambiguation difficult even for human annotators, which in turn makes it hard to formulate the exact rule that language models are supposed to extract from the data. A similar situation holds for Vietnamese. 2 A different type of systematic ambiguity is presented by Korean, which also demonstrates poorer performance. Korean has about sixty markers connecting two clauses, and many of those allow for both coordinative and subordinative readings, which make either the first or the second clause the main one, respectively (Cho and Whitman, 2020, 220-227) . Examples of this type are responsible for a large share of mistakes in Korean.", |
| "cite_spans": [ |
| { |
| "start": 1325, |
| "end": 1326, |
| "text": "2", |
| "ref_id": null |
| }, |
| { |
| "start": 1640, |
| "end": 1672, |
| "text": "(Cho and Whitman, 2020, 220-227)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Single-Language Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Overall, these results indicate that subordinate-clause detection is a long-tail task: major, easily learnable patterns account for more than 90% of test cases for all languages, but in some languages there is an assortment of harder cases that prevents language models from efficiently generalising.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Single-Language Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "4 Zero-Shot Setting", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Single-Language Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We now turn to the analysis of the performance of the models in the zero-shot setting. The model described in \u00a7 2 is fine-tuned for two epochs on five European languages (English, Russian, Czech, French, and German) and five Eurasian languages (Standard Arabic, Mandarin Chinese, Turkish, Korean, and Japanese) with larger training corpora (the ones shown in Table 3 ). Each of the fine-tuned models is then applied in a zero-shot way to a range of test corpora from the UD collection. 3 Based on the results in Table 2 , several observations can be made. First, there is a set of European languages with large training corpora that can act as 'general approximators': they demonstrate high performance across the board. The best overall performance is attained by Russian, which has the second-largest training corpus (nearly 33k sentences). German, with the largest training corpus (nearly 56k sentences) performs worse than both Russian and English (the second best, with only 2 'Syntactic category classification for Vietnamese is still in debate. That lack of consensus is due to the unclear limit between the grammatical roles of many words as well as the frequent phenomenon of syntactic category mutation' (Nguyen et al., 2004) .", |
| "cite_spans": [ |
| { |
| "start": 486, |
| "end": 487, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 1214, |
| "end": 1235, |
| "text": "(Nguyen et al., 2004)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 359, |
| "end": 366, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 512, |
| "end": 519, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Quantitative Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "3 Where available, we experiment with two test sets for the same language to assess domain-induced variance. As Table 2 shows, the difference in scores between different testing corpora for the same language can reach 5-6%, but it does not change the overall pattern. circa 6k training sentences). While this good result for English may be attributed to more informative pre-training (English Wikipedia is much larger than the German one), such a bias would also have favoured German compared to Russian. An alternative explanation is provided by the more idiosyncratic German word-order patterns (V2 in main clauses vs. V-last in subordinate clauses), which help it achieve best-in-class performance on the similar Afrikaans. Notably, Russian beats English even though PUD corpora were translated from English and therefore should contain some traces of its morphosyntactic patterns (Rabinovich et al., 2017; Nikolaev et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 884, |
| "end": 909, |
| "text": "(Rabinovich et al., 2017;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 910, |
| "end": 932, |
| "text": "Nikolaev et al., 2020)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 112, |
| "end": 119, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Quantitative Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "At the other end of the spectrum, we find mediocre general approximators (Arabic, Turkish) and outright bad ones (Japanese and Korean). At first glance, their performance could be an artefact of lower-quality annotations or suboptimal tokenisation (Mielke et al., 2021) . This, however, does not explain a remarkable set of results that is clearly due to word-order patterns. While the fine-tuned model for Arabic, a VSO language, performs worse on its own test corpus than models fine-tuned on European languages, it provides best-in-class performance on Irish, another VSO language (96% accuracy). The English-based model is not far behind (95%), but given the overall large gap in performance between them across the board, it seems that congruent word-order patterns provide a strong inductive bias for subordinate-clause identification.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 269, |
| "text": "(Mielke et al., 2021)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantitative Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Unfortunately, VSO languages are rare, 4 and it is impossible to check if this pattern generalises to other language pairs. However, our test-corpus suite includes data on strict SOV languages (Japanese, Korean) and languages where SOV is the dominant (Hindi, Turkish) or a common (Mandarin, Basque) pattern. These provide us with a large number of language pairs with different degrees of word-order congruence and fairly clear patterns of model performance. First, the general approximators, despite good performance on VSO languages, struggle on strict SOV languages, especially Japanese, while SOV languages demonstrate consistently good performance among themselves. E.g., Korean demonstrates best-in-class performance on Turkish, tied with Turkish itself, while Japanese has best-in-class performance on Korean. Turkish, in turn, attains best-in-class performance on Hindi, with which it shares a relatively flexible SOV order. Another language with strong SOV tendencies is Mandarin Chinese, which has been argued to be in transition from SVO to SOV order (Sun and Giv\u00f3n, 1985) . Mandarin, which we already found difficult to model in \u00a7 3, is very hard to generalise to, with no source languages attaining accuracy above 71-72%. Tellingly, Turkish is the only other language with decent results on both Mandarin test sets. Mandarin is also the only language to always beat the majority-class baseline.", |
| "cite_spans": [ |
| { |
| "start": 1014, |
| "end": 1035, |
| "text": "(Sun and Giv\u00f3n, 1985)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantitative Results", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In order to get a better understanding of the difficulties that models face in the zero-shot setting, we analysed the mistakes that the English-based fine-tuned model made when making predictions on Mandarin data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Case Study: English-Mandarin", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Setting aside errors stemming from annotation discrepancies, 5 the major source of model mistakes seems to be the fact that Mandarin complex sentences are predominantly right-headed: 99% of 5 E.g., as discussed in \u00a73, the model expects direct quotes to have the form ccomp (quote) + root (verb of speech) and not root (quote) + parataxis (verb of speech).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Case Study: English-Mandarin", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "advcl, 100% of acl, and 96% of dep 6 have their parent node to the right. In contrast, 75% of English advcl and 98% of English acl are left-headed in PUD. This makes an English-based zero-shot model prejudiced against finding root nodes in the final clause of the sentence, and it incorrectly analyses a wide range of right-headed Mandarin complex clauses. Statistically, there are 142 sentence-initial subordinate clauses mistakenly analysed as main clauses and only 6 reverse errors. By contrast, there are 278 sentence-final main clauses mistakenly analysed as subordinate ones and 82 reverse errors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Case Study: English-Mandarin", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Sometimes this divergence further interacts with the ways in which English and Mandarin alternate between clause coordination and subordination. Thus, Mandarin tends to describe sequences of events as a pair of an adverbial clause and a main clause (after having taken a shower, he dried himself) instead of as two coordinated clauses (he took a shower and dried himself). English UD treats the first conjoined clause as the matrix one, while it is often advcl in Mandarin, and the absence of overt unambiguous complementisers makes it hard for the model to see beyond mere frequencies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Case Study: English-Mandarin", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "A similar situation obtains with some English postposed descriptive subordinate clauses, such as 'it's X that Y' constructions 7 and non-restrictive relative clauses. 8 In these cases, Mandarin uses a coordinative construction, in which the head, according to the UD analysis, is on the right conjunct, corresponding to the English acl, and the first conjunct is attached to it using the dep label. Again, the English-based model expects to find the root in the first of the two clauses, and there is no overt complementiser to suggest otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Case Study: English-Mandarin", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "An analysis of the properties of the models underlying these findings is beyond the scope of this paper, but preliminary checks of the attention patterns show that successful models strongly attend to complementisers in the last two layers. As SVO and VSO languages tend to have complementisers before subordinate clauses and SOV languages after (Hawkins, 1990) , fine-tuning biases models towards looking for them in only one direction. The attention of subordinate-clause heads to main-clause heads is weaker, presumably due to higher lexical variety in that position.", |
| "cite_spans": [ |
| { |
| "start": 346, |
| "end": 361, |
| "text": "(Hawkins, 1990)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Attention Patterns", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Both aspects of our analysis -subordinate-clause detection and the study of word-order effects -have been addressed before, but not in conjunction and not in a multiple-source-language setting. Our study extends previous approaches by providing a zero-shot 'upper baseline' derived from the performance of several monolingual models and then conducting a novel many-sources-to-many-targets analysis of zero-shot performance. Lin et al. (2019) test BERT on the auxiliary-classification task (main vs. subordinate clause) as part of their investigation of BERT's linguistic knowledge. R\u00f6nnqvist et al. (2019) extend this analysis to the multilingual setting with a focus on Nordic languages.", |
| "cite_spans": [ |
| { |
| "start": 422, |
| "end": 439, |
| "text": "Lin et al. (2019)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 579, |
| "end": 602, |
| "text": "R\u00f6nnqvist et al. (2019)", |
| "ref_id": "BIBREF33" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Word-order differences have been shown to impact the performance of English-based crosslingual models, especially in the domain of syntactic parsing (Ahmad et al., 2019) and with tasks that rely on syntactic information (Liu et al., 2020; Arviv et al., 2021) , while reordering has been long known to be an efficient preprocessing step in syntactic transfer (Rasooli and Collins, 2019) and machine translation, both statistical (Wang et al., 2007) and neural (Chen et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 149, |
| "end": 169, |
| "text": "(Ahmad et al., 2019)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 220, |
| "end": 238, |
| "text": "(Liu et al., 2020;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 239, |
| "end": 258, |
| "text": "Arviv et al., 2021)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 428, |
| "end": 447, |
| "text": "(Wang et al., 2007)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 459, |
| "end": 478, |
| "text": "(Chen et al., 2019)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We extend previous work on the syntactic capabilities of BERT, which mostly focuses on English, by providing a more comprehensive analysis of its performance on the task of subordinate-clause detection in multiple languages and language pairs in the zero-shot setting. We show that the performance of single-language models is uneven across languages: East and Southeast Asian languages, with their less rigid boundaries between POS categories and between coordination and subordination, prove harder to model. We also show that mBERT's performance in the zero-shot setting, while largely correlated with the size of the pre-training and fine-tuning corpora (Russian is the best source language across the board), is well aligned with word-order typology: language pairs with congruent word orders demonstrate better results, with both SVO and SOV orders having higher in-group than across-group accuracies. A single pair of VSO languages in the data further corroborates this finding, showing that the verb-final order is not important per se.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The clause-initial position of complementisers in VSO languages partly blurs this effect and helps SVO languages with large training corpora serve as good sources for fine-tuning, but even Russian and English fail on SOV languages, where complementisers tend to be postposed and dependent-clause predicates never appear in sentence-final position. This shows that, at least for some tasks, training on a single source language is not enough. Moreover, our results from single-language modelling seem to indicate that even superficially simple syntactic tasks vary in difficulty across languages, which imposes a hard limit on how well cross-lingual projection can perform.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Out of 1376 languages in WALS (Dryer and Haspelmath, 2013), 95 are VSO, 564 are SOV, and 488 are SVO.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "dep labels different kinds of hard-to-analyse relations and is frequent in Mandarin PUD (397 occurrences).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "It's fantastic that they got the Paris Agreement, but... 8 However, they could not find this same pattern in tissues such as the bladder, which are not directly exposed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank Lilja Maria Saeb\u00f8 and Chih-Yi Lin for their help with the analysis of Mandarin Chinese data and Dojun Park for his help with the analysis of Korean.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| }, |
| { |
| "text": "In addition to the Parallel Universal Dependencies collection (Zeman et al., 2017) , the following corpora were used to train and/or validate models: \u2022 Afribooms: UD Afrikaans-AfriBooms, https://github.com/UniversalDependencies/UD_Afrikaans-AfriBooms (Haji\u010d et al., 2009) \u2022 PDT: UD version of the Prague Dependency Treebank, https://github.com/UniversalDependencies/UD_Czech-PDT (Bej\u010dek et al., 2013) \u2022 Syntagrus: SynTagRus Dependency Treebank, https://github.com/UniversalDependencies/UD_Russian-SynTagRus \u2022 VTB: UD version of the VLSP constituency treebank, https://github.com/UniversalDependencies/UD_Vietnamese-VTB (Nguyen et al., 2009) A.3 Single-language model results The results attained by the models fine-tuned and tested on the same language are shown in Table 3. Table 3: Performance of single-language models across languages. #Train and #Test denote the number of sentences in the train and test corpus, respectively. In the 'Main-Main', 'Main-Sub', 'Sub-Main', and 'Sub-Sub' columns, the part before the hyphen is the gold label of a predicate (main/subordinate clause) and the second part is the guessed label. Acc: Accuracy.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 82, |
| "text": "(Zeman et al., 2017)", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 252, |
| "end": 272, |
| "text": "(Haji\u010d et al., 2009)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 382, |
| "end": 403, |
| "text": "(Bej\u010dek et al., 2013)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 625, |
| "end": 646, |
| "text": "(Nguyen et al., 2009)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 771, |
| "end": 778, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 779, |
| "end": 786, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Appendix", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing", |
| "authors": [ |
| { |
| "first": "Wasi", |
| "middle": [], |
| "last": "Ahmad", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhisong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuezhe", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nanyun", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "2440--2452", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1253" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order dif- ferences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2440-2452, Minneapolis, Minnesota. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "On the relation between syntactic divergence and zero-shot performance", |
| "authors": [ |
| { |
| "first": "Ofir", |
| "middle": [], |
| "last": "Arviv", |
| "suffix": "" |
| }, |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Nikolaev", |
| "suffix": "" |
| }, |
| { |
| "first": "Taelin", |
| "middle": [], |
| "last": "Karidi", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "4803--4817", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2021.emnlp-main.394" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ofir Arviv, Dmitry Nikolaev, Taelin Karidi, and Omri Abend. 2021. On the relation between syntactic di- vergence and zero-shot performance. In Proceedings of the 2021 Conference on Empirical Methods in Nat- ural Language Processing, pages 4803-4817, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Prague dependency treebank 3.0. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL", |
| "authors": [ |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Bej\u010dek", |
| "suffix": "" |
| }, |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Haji\u010dov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavl\u00edna", |
| "middle": [], |
| "last": "J\u00ednov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "V\u00e1clava", |
| "middle": [], |
| "last": "Kettnerov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronika", |
| "middle": [], |
| "last": "Kol\u00e1\u0159ov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie", |
| "middle": [], |
| "last": "Mikulov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Ji\u0159\u00ed", |
| "middle": [], |
| "last": "M\u00edrovsk\u00fd", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Nedoluzhko", |
| "suffix": "" |
| }, |
| { |
| "first": "Jarmila", |
| "middle": [], |
| "last": "Panevov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucie", |
| "middle": [], |
| "last": "Pol\u00e1kov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Magda", |
| "middle": [], |
| "last": "\u0160ev\u010d\u00edkov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "\u0160t\u011bp\u00e1nek", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0160\u00e1rka", |
| "middle": [], |
| "last": "Zik\u00e1nov\u00e1", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Faculty of Mathematics and Physics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eduard Bej\u010dek, Eva Haji\u010dov\u00e1, Jan Haji\u010d, Pavl\u00edna J\u00ednov\u00e1, V\u00e1clava Kettnerov\u00e1, Veronika Kol\u00e1\u0159ov\u00e1, Marie Mikulov\u00e1, Ji\u0159\u00ed M\u00edrovsk\u00fd, Anna Nedoluzhko, Jarmila Panevov\u00e1, Lucie Pol\u00e1kov\u00e1, Magda \u0160ev\u010d\u00edkov\u00e1, Jan \u0160t\u011bp\u00e1nek, and \u0160\u00e1rka Zik\u00e1nov\u00e1. 2013. Prague de- pendency treebank 3.0. LINDAT/CLARIAH-CZ dig- ital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The Hindi/Urdu treebank project", |
| "authors": [ |
| { |
| "first": "Riyaz", |
| "middle": [ |
| "Ahmad" |
| ], |
| "last": "Bhat", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajesh", |
| "middle": [], |
| "last": "Bhatt", |
| "suffix": "" |
| }, |
| { |
| "first": "Annahita", |
| "middle": [], |
| "last": "Farudi", |
| "suffix": "" |
| }, |
| { |
| "first": "Prescott", |
| "middle": [], |
| "last": "Klassen", |
| "suffix": "" |
| }, |
| { |
| "first": "Bhuvana", |
| "middle": [], |
| "last": "Narasimhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Dipti", |
| "middle": [ |
| "Misra" |
| ], |
| "last": "Sharma", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashwini", |
| "middle": [], |
| "last": "Vaidya", |
| "suffix": "" |
| }, |
| { |
| "first": "Sri", |
| "middle": [ |
| "Ramagurumurthy" |
| ], |
| "last": "Vishnu", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Handbook of Linguistic Annotation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riyaz Ahmad Bhat, Rajesh Bhatt, Annahita Farudi, Prescott Klassen, Bhuvana Narasimhan, Martha Palmer, Owen Rambow, Dipti Misra Sharma, Ash- wini Vaidya, Sri Ramagurumurthy Vishnu, et al. 2017. The Hindi/Urdu treebank project. In Handbook of Linguistic Annotation. Springer Press.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Does typological blinding impede cross-lingual sharing?", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Bjerva", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabelle", |
| "middle": [], |
| "last": "Augenstein", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume", |
| "volume": "", |
| "issue": "", |
| "pages": "480--486", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2021.eacl-main.38" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Bjerva and Isabelle Augenstein. 2021. Does typological blinding impede cross-lingual sharing? In Proceedings of the 16th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Main Volume, pages 480-486, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "HDT-UD: A very large Universal Dependencies treebank for German", |
| "authors": [ |
| { |
| "first": "Emanuel", |
| "middle": [], |
| "last": "Borges V\u00f6lker", |
| "suffix": "" |
| }, |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "Wendt", |
| "suffix": "" |
| }, |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hennig", |
| "suffix": "" |
| }, |
| { |
| "first": "Arne", |
| "middle": [], |
| "last": "K\u00f6hn", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Third Workshop on Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "46--57", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-8006" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Emanuel Borges V\u00f6lker, Maximilian Wendt, Felix Hen- nig, and Arne K\u00f6hn. 2019. HDT-UD: A very large Universal Dependencies treebank for German. In Proceedings of the Third Workshop on Universal De- pendencies (UDW, SyntaxFest 2019), pages 46-57, Paris, France. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Neural machine translation with reordering embeddings", |
| "authors": [ |
| { |
| "first": "Kehai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Masao", |
| "middle": [], |
| "last": "Utiyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Eiichiro", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1174" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kehai Chen, Rui Wang, Masao Utiyama, and Eiichiro Sumita. 2019. Neural machine translation with re- ordering embeddings. In Proceedings of the 57th", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Annual Meeting of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1787--1799", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 1787-1799, Florence, Italy. Asso- ciation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Finding universal grammatical relations in multilingual BERT", |
| "authors": [ |
| { |
| "first": "Ethan", |
| "middle": [ |
| "A" |
| ], |
| "last": "Chi", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Hewitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 58th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.acl-main.493" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ethan A. Chi, John Hewitt, and Christopher D. Man- ning. 2020. Finding universal grammatical relations in multilingual BERT. In Proceedings of the 58th", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Annual Meeting of the Association for Computational Linguistics", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "5564--5577", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5564-5577, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Korean: A linguistic introduction", |
| "authors": [ |
| { |
| "first": "Sungdai", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Whitman", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/9781139048842" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sungdai Cho and John Whitman. 2020. Korean: A linguistic introduction. Cambridge University Press, Cambridge; New York.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Building Universal Dependency treebanks in Korean", |
| "authors": [ |
| { |
| "first": "Jayeol", |
| "middle": [], |
| "last": "Chun", |
| "suffix": "" |
| }, |
| { |
| "first": "Na-Rae", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Jena", |
| "middle": [ |
| "D" |
| ], |
| "last": "Hwang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinho", |
| "middle": [ |
| "D" |
| ], |
| "last": "Choi", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jayeol Chun, Na-Rae Han, Jena D. Hwang, and Jinho D. Choi. 2018. Building Universal Dependency tree- banks in Korean. In Proceedings of the Eleventh In- ternational Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Language universals and linguistic typology", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bernard Comrie", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernard Comrie. 1981. Language universals and linguistic typology. University of Chicago Press, Chicago, IL.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1423" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Conversion et am\u00e9liorations de corpus du fran\u00e7ais annot\u00e9s en universal dependencies", |
| "authors": [ |
| { |
| "first": "Bruno", |
| "middle": [], |
| "last": "Guillaume", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Perrier", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Revue TAL", |
| "volume": "60", |
| "issue": "2", |
| "pages": "71--95", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bruno Guillaume, Marie-Catherine de Marneffe, and Guy Perrier. 2019. Conversion et am\u00e9liorations de corpus du fran\u00e7ais annot\u00e9s en universal dependencies. Revue TAL, 60(2):71-95.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Prague arabic dependency treebank 1.0. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (\u00daFAL), Faculty of Mathematics and Physics", |
| "authors": [ |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Otakar", |
| "middle": [], |
| "last": "Smr\u017e", |
| "suffix": "" |
| }, |
| { |
| "first": "Petr", |
| "middle": [], |
| "last": "Zem\u00e1nek", |
| "suffix": "" |
| }, |
| { |
| "first": "Petr", |
| "middle": [], |
| "last": "Pajas", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "\u0160naidauf", |
| "suffix": "" |
| }, |
| { |
| "first": "Emanuel", |
| "middle": [], |
| "last": "Be\u0161ka", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Kracmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Kamila", |
| "middle": [], |
| "last": "Hassanov\u00e1", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jan Haji\u010d, Otakar Smr\u017e, Petr Zem\u00e1nek, Petr Pajas, Jan \u0160naidauf, Emanuel Be\u0161ka, Jakub Kracmar, and Kamila Hassanov\u00e1. 2009. Prague arabic dependency treebank 1.0. LINDAT/CLARIAH-CZ digital li- brary at the Institute of Formal and Applied Linguis- tics (\u00daFAL), Faculty of Mathematics and Physics, Charles University.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A parsing theory of word order universals", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [ |
| "A" |
| ], |
| "last": "Hawkins", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Linguistic inquiry", |
| "volume": "21", |
| "issue": "2", |
| "pages": "223--261", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John A Hawkins. 1990. A parsing theory of word order universals. Linguistic inquiry, 21(2):223-261.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "What does BERT learn about the structure of language", |
| "authors": [ |
| { |
| "first": "Ganesh", |
| "middle": [], |
| "last": "Jawahar", |
| "suffix": "" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "Djam\u00e9", |
| "middle": [], |
| "last": "Seddah", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "3651--3657", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1356" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ganesh Jawahar, Beno\u00eet Sagot, and Djam\u00e9 Seddah. 2019. What does BERT learn about the structure of language? In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 3651-3657, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Evaluating BERT for natural language inference: A case study on the CommitmentBank", |
| "authors": [ |
| { |
| "first": "Nanjiang", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "6086--6091", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-1630" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nanjiang Jiang and Marie-Catherine de Marneffe. 2019. Evaluating BERT for natural language inference: A case study on the CommitmentBank. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6086-6091, Hong Kong, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Konstantinos Kogkalidis and Gijs Winholds. 2022. Discontinuous constituency and BERT: A case study of Dutch", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2203.01063" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Konstantinos Kogkalidis and Gijs Winholds. 2022. Dis- continuous constituency and BERT: A case study of Dutch. arXiv preprint arXiv:2203.01063.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Open sesame: Getting inside BERT's linguistic knowledge", |
| "authors": [ |
| { |
| "first": "Yongjie", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Chern Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Frank", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "241--253", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-4825" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "On the importance of word order information in cross-lingual sequence labeling", |
| "authors": [ |
| { |
| "first": "Zihan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Genta", |
| "middle": [ |
| "Indra" |
| ], |
| "last": "Winata", |
| "suffix": "" |
| }, |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Cahyawijaya", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Madotto", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaojiang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Pascale", |
| "middle": [], |
| "last": "Fung", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2001.11164" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zihan Liu, Genta Indra Winata, Samuel Cahyawijaya, Andrea Madotto, Zhaojiang Lin, and Pascale Fung. 2020. On the importance of word order information in cross-lingual sequence labeling. arXiv preprint arXiv:2001.11164.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "TruthTeller: Annotating predicate truth", |
| "authors": [ |
| { |
| "first": "Amnon", |
| "middle": [], |
| "last": "Lotan", |
| "suffix": "" |
| }, |
| { |
| "first": "Asher", |
| "middle": [], |
| "last": "Stern", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "752--757", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amnon Lotan, Asher Stern, and Ido Dagan. 2013. TruthTeller: Annotating predicate truth. In Proceed- ings of the 2013 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, pages 752- 757, Atlanta, Georgia. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Between words and characters: A brief history of open-vocabulary modeling and tokenization in NLP", |
| "authors": [ |
| { |
| "first": "Sabrina", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mielke", |
| "suffix": "" |
| }, |
| { |
| "first": "Zaid", |
| "middle": [], |
| "last": "Alyafeai", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizabeth", |
| "middle": [], |
| "last": "Salesky", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Manan", |
| "middle": [], |
| "last": "Dey", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Gall\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Arun", |
| "middle": [], |
| "last": "Raja", |
| "suffix": "" |
| }, |
| { |
| "first": "Chenglei", |
| "middle": [], |
| "last": "Si", |
| "suffix": "" |
| }, |
| { |
| "first": "Wilson", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "Samson", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2112.10508" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sabrina J Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Gall\u00e9, Arun Raja, Chenglei Si, Wilson Y Lee, Beno\u00eet Sagot, and Sam- son Tan. 2021. Between words and characters: A brief history of open-vocabulary modeling and tok- enization in NLP. arXiv preprint arXiv:2112.10508.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Building a large syntacticallyannotated corpus of Vietnamese", |
| "authors": [ |
| { |
| "first": "Phuong-Thai", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan-Luong", |
| "middle": [], |
| "last": "Vu", |
| "suffix": "" |
| }, |
| { |
| "first": "Thi-Minh-Huyen", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Van-Hiep", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong-Phuong", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Third Linguistic Annotation Workshop (LAW III)", |
| "volume": "", |
| "issue": "", |
| "pages": "182--185", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Phuong-Thai Nguyen, Xuan-Luong Vu, Thi-Minh- Huyen Nguyen, Van-Hiep Nguyen, and Hong- Phuong Le. 2009. Building a large syntactically- annotated corpus of Vietnamese. In Proceedings of the Third Linguistic Annotation Workshop (LAW III), pages 182-185, Suntec, Singapore. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Lexical descriptions for Vietnamese language processing", |
| "authors": [ |
| { |
| "first": "Thanh", |
| "middle": [ |
| "Bon" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Thi", |
| "middle": [ |
| "Minh", |
| "Huyen" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Romary", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuan", |
| "middle": [ |
| "Luong" |
| ], |
| "last": "Vu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "The 1st International Joint Conference on Natural Language Processing -IJCNLP'04 / Workshop on Asian Language Resources", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thanh Bon Nguyen, Thi Minh Huyen Nguyen, Lau- rent Romary, and Xuan Luong Vu. 2004. Lexical descriptions for Vietnamese language processing. In The 1st International Joint Conference on Natural Language Processing -IJCNLP'04 / Workshop on Asian Language Resources, Sanya, Hainan Island, China. Colloque avec actes et comit\u00e9 de lecture. in- ternationale.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Morphosyntactic predictability of translationese", |
| "authors": [ |
| { |
| "first": "Dmitry", |
| "middle": [], |
| "last": "Nikolaev", |
| "suffix": "" |
| }, |
| { |
| "first": "Taelin", |
| "middle": [], |
| "last": "Karidi", |
| "suffix": "" |
| }, |
| { |
| "first": "Neta", |
| "middle": [], |
| "last": "Kenneth", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronika", |
| "middle": [], |
| "last": "Mitnik", |
| "suffix": "" |
| }, |
| { |
| "first": "Lilja", |
| "middle": [], |
| "last": "Saeboe", |
| "suffix": "" |
| }, |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Linguistics Vanguard", |
| "volume": "6", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dmitry Nikolaev, Taelin Karidi, Neta Kenneth, Veronika Mitnik, Lilja Saeboe, and Omri Abend. 2020. Mor- phosyntactic predictability of translationese. Linguis- tics Vanguard, 6(1).", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Universal Dependencies v2: An evergrowing multilingual treebank collection", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Sampo", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Tyers", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "4034--4043", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Jan Haji\u010d, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "How multilingual is multilingual BERT?", |
| "authors": [ |
| { |
| "first": "Telmo", |
| "middle": [], |
| "last": "Pires", |
| "suffix": "" |
| }, |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Schlinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Garrette", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "4996--5001", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1493" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Flo- rence, Italy. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Found in translation: Reconstructing phylogenetic language trees from translations", |
| "authors": [ |
| { |
| "first": "Ella", |
| "middle": [], |
| "last": "Rabinovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Ordan", |
| "suffix": "" |
| }, |
| { |
| "first": "Shuly", |
| "middle": [], |
| "last": "Wintner", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "530--540", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P17-1049" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ella Rabinovich, Noam Ordan, and Shuly Wintner. 2017. Found in translation: Reconstructing phylogenetic language trees from translations. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 530-540, Vancouver, Canada. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Low-resource syntactic transfer with unsupervised source reordering", |
| "authors": [ |
| { |
| "first": "Mohammad", |
| "middle": [ |
| "Sadegh" |
| ], |
| "last": "Rasooli", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "3845--3856", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1385" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohammad Sadegh Rasooli and Michael Collins. 2019. Low-resource syntactic transfer with unsupervised source reordering. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3845-3856, Minneapolis, Minnesota. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "A primer in BERTology: What we know about how BERT works", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Rogers", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Kovaleva", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Rumshisky", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "8", |
| "issue": "", |
| "pages": "842--866", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/tacl_a_00349" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Is multilingual BERT fluent in language generation?", |
| "authors": [ |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "R\u00f6nnqvist", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenna", |
| "middle": [], |
| "last": "Kanerva", |
| "suffix": "" |
| }, |
| { |
| "first": "Tapio", |
| "middle": [], |
| "last": "Salakoski", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "29--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Samuel R\u00f6nnqvist, Jenna Kanerva, Tapio Salakoski, and Filip Ginter. 2019. Is multilingual BERT fluent in lan- guage generation? In Proceedings of the First NLPL Workshop on Deep Learning for Natural Language Processing, pages 29-36, Turku, Finland. Link\u00f6ping University Electronic Press.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A survey on text simplification", |
| "authors": [ |
| { |
| "first": "Punardeep", |
| "middle": [], |
| "last": "Sikka", |
| "suffix": "" |
| }, |
| { |
| "first": "Vijay", |
| "middle": [], |
| "last": "Mago", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2008.08612" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Punardeep Sikka and Vijay Mago. 2020. A survey on text simplification. arXiv preprint arXiv:2008.08612.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "On the so-called SOV word order in Mandarin Chinese: A quantified text study and its implications", |
| "authors": [ |
| { |
| "first": "Chao-Fen", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Talmy", |
| "middle": [], |
| "last": "Giv\u00f3n", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "Language", |
| "volume": "", |
| "issue": "", |
| "pages": "329--351", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chao-Fen Sun and Talmy Giv\u00f3n. 1985. On the so-called SOV word order in Mandarin Chinese: A quantified text study and its implications. Language, pages 329-351.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Can pre-trained transformers be used in detecting complex sensitive sentences? -A Monsanto case study", |
| "authors": [ |
| { |
| "first": "Roelien", |
| "middle": [ |
| "C" |
| ], |
| "last": "Timmer", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Liebowitz", |
| "suffix": "" |
| }, |
| { |
| "first": "Surya", |
| "middle": [], |
| "last": "Nepal", |
| "suffix": "" |
| }, |
| { |
| "first": "Salil", |
| "middle": [ |
| "S" |
| ], |
| "last": "Kanhere", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA)", |
| "volume": "", |
| "issue": "", |
| "pages": "90--97", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roelien C Timmer, David Liebowitz, Surya Nepal, and Salil S Kanhere. 2021. Can pre-trained transformers be used in detecting complex sensitive sentences? - A Monsanto case study. In 2021 Third IEEE Inter- national Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA), pages 90-97. IEEE.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "A unified morpho-syntactic scheme of Stanford dependencies", |
| "authors": [ |
| { |
| "first": "Reut", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "578--584", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reut Tsarfaty. 2013. A unified morpho-syntactic scheme of Stanford dependencies. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 578-584, Sofia, Bulgaria. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Chinese syntactic reordering for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Chao", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "737--745", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of the 2007 Joint Con- ference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 737-745, Prague, Czech Republic. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Transformers: State-of-the-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Davison", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Shleifer", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "von Platen", |
| "suffix": "" |
| }, |
| { |
| "first": "Clara", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Plu", |
| "suffix": "" |
| }, |
| { |
| "first": "Canwen", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Teven", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Scao", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Gugger", |
| "suffix": "" |
| }, |
| { |
| "first": "Mariama", |
| "middle": [], |
| "last": "Drame", |
| "suffix": "" |
| }, |
| { |
| "first": "Quentin", |
| "middle": [], |
| "last": "Lhoest", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "38--45", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.emnlp-demos.6" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Popel", |
| "suffix": "" |
| }, |
| { |
| "first": "Milan", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "Juhani", |
| "middle": [], |
| "last": "Luotolahti", |
| "suffix": "" |
| }, |
| { |
| "first": "Sampo", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| }, |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Potthast", |
| "suffix": "" |
| }, |
| { |
| "first": "Francis", |
| "middle": [], |
| "last": "Tyers", |
| "suffix": "" |
| }, |
| { |
| "first": "Elena", |
| "middle": [], |
| "last": "Badmaeva", |
| "suffix": "" |
| }, |
| { |
| "first": "Memduh", |
| "middle": [], |
| "last": "Gokirmak", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Nedoluzhko", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvie", |
| "middle": [], |
| "last": "Cinkov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Haji\u010d Jr", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaroslava", |
| "middle": [], |
| "last": "Hlav\u00e1\u010dov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "V\u00e1clava", |
| "middle": [], |
| "last": "Kettnerov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Zde\u0148ka", |
| "middle": [], |
| "last": "Ure\u0161ov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenna", |
| "middle": [], |
| "last": "Kanerva", |
| "suffix": "" |
| }, |
| { |
| "first": "Stina", |
| "middle": [], |
| "last": "Ojala", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Missil\u00e4", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Siva", |
| "middle": [], |
| "last": "Reddy", |
| "suffix": "" |
| }, |
| { |
| "first": "Dima", |
| "middle": [], |
| "last": "Taji", |
| "suffix": "" |
| }, |
| { |
| "first": "Nizar", |
| "middle": [], |
| "last": "Habash", |
| "suffix": "" |
| }, |
| { |
| "first": "Herman", |
| "middle": [], |
| "last": "Leung", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Manuela", |
| "middle": [], |
| "last": "Sanguinetti", |
| "suffix": "" |
| }, |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Simi", |
| "suffix": "" |
| }, |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Kanayama", |
| "suffix": "" |
| }, |
| { |
| "first": "Valeria", |
| "middle": [], |
| "last": "De Paiva", |
| "suffix": "" |
| }, |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Droganova", |
| "suffix": "" |
| }, |
| { |
| "first": "H\u00e9ctor", |
| "middle": [], |
| "last": "Mart\u00ednez Alonso", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c7agr\u0131", |
| "middle": [], |
| "last": "\u00c7\u00f6ltekin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "1--19", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K17-3001" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Zeman, Martin Popel, Milan Straka, Jan Ha- ji\u010d, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkov\u00e1, Jan Haji\u010d jr., Jaroslava Hlav\u00e1\u010dov\u00e1, V\u00e1clava Kettnerov\u00e1, Zde\u0148ka Ure\u0161ov\u00e1, Jenna Kanerva, Stina Ojala, Anna Missil\u00e4, Christo- pher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie- Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, H\u00e9ctor Mart\u00ednez Alonso, \u00c7agr\u0131 \u00c7\u00f6ltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpra- dit, Michael Mandl, Jesse Kirchner, Hector Fernan- dez Alcalde, Jana Strnadov\u00e1, Esha Banerjee, Ruli Ma- nurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendon\u00e7a, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19, Vancouver, Canada. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Identifying inherent disagreement in natural language inference", |
| "authors": [ |
| { |
| "first": "Xinliang", |
| "middle": [ |
| "Frederick" |
| ], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Catherine", |
| "middle": [], |
| "last": "de Marneffe", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "4908--4915", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2021.naacl-main.390" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xinliang Frederick Zhang and Marie-Catherine de Marneffe. 2021. Identifying inherent disagree- ment in natural language inference. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4908-4915, Online. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Language Mandarin Vietnamese Korean Arabic Hindi German Armenian Turkish Welsh Indonesian", |
| "content": "<table><tr><td>Accuracy 88.7</td><td>90</td><td>90.4</td><td>91.2</td><td>93.6</td><td>94.1</td><td>94.3</td><td>95.1</td><td>95.6 96</td></tr><tr><td colspan=\"2\">Language Basque Spanish Accuracy 96.9 97.1</td><td colspan=\"5\">Irish English Hebrew Afrikaans French 97.4 97.9 98.2 98.8 99</td><td colspan=\"2\">Japanese Czech Russian 99.1 99.6 99.7</td></tr><tr><td/><td/><td colspan=\"5\">Table 1: Performance of single-language models.</td><td/><td/></tr></table>" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Turkish also demonstrates decent perfor-", |
| "content": "<table><tr><td/><td>ar padt</td><td>ga idt</td><td>af booms</td><td>de pud</td><td>cs pud</td><td>cy ccg</td><td>en ewt</td><td>en pud</td><td>es pud</td><td>fi pud</td><td>fr pud</td><td>he hdt</td><td>hy arm</td><td>id pud</td><td>is pud</td><td>it pud</td></tr><tr><td/><td>pl pud</td><td>pt pud</td><td>ru pud</td><td>ru syntag</td><td>sv pud</td><td>eu bdt</td><td>hi pud</td><td>tr pud</td><td>ja gsd</td><td>ja pud</td><td>ko pud</td><td>vi vtb</td><td>th pud</td><td>zh gsd</td><td>zh pud</td><td>mean</td></tr><tr><td>English Russian Czech</td><td>95 98 97</td><td>97 95 93</td><td>93 100 94</td><td>93 99 96</td><td>96 95 92</td><td>88 88 87</td><td>87 90 88</td><td>83 90 88</td><td>66 68 64</td><td>70 72 66</td><td>67 70 68</td><td>82 79 78</td><td>84 80 79</td><td/><td>71 69 71</td><td>88.9 89.2 87.5</td></tr><tr><td>French German Arabic</td><td>98 89 85</td><td>96 94 86</td><td>96 93 88</td><td>97 88 85</td><td>94 97 89</td><td>85 81 71</td><td>89 86 70</td><td>86 78 65</td><td>54 59 63</td><td>61 62 66</td><td>66 57 59</td><td>77 78 74</td><td>76 78 79</td><td/><td>69 68 65</td><td>87.0 85.2 80.3</td></tr><tr><td>Mandarin Turkish Korean Japanese</td><td>85 76 66 54</td><td>85 73 58 52</td><td>86 71 59 55</td><td>85 71 59 55</td><td>86 69 53 50</td><td>82 79 74 57</td><td>87 83 76 63</td><td>89 94 94 88</td><td>80 82 87 99</td><td>78 83 88 98</td><td>77 88 88 95</td><td>74 63 52 54</td><td>80 68 61 70</td><td/><td>86 71 66 66</td><td>84.6 72.3 63.1 60.3</td></tr></table>" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Performance of zero-shot models. Rows: source languages; columns: target languages and corpora. Underlined values fail to beat the majority-class baseline (always predict subordinate clause). See \u00a7 A.1 for language abbreviations and \u00a7 A.2 for details about corpora.", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |