| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:12:58.217048Z" |
| }, |
| "title": "UDPipe at EvaLatin 2020: Contextualized Embeddings and Treebank Embeddings", |
| "authors": [ |
| { |
| "first": "Milan", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": {} |
| }, |
| "email": "straka@ufal.mff.cuni.cz" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Strakov\u00e1", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Charles University", |
| "location": {} |
| }, |
| "email": "strakova@ufal.mff.cuni.cz" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "We present our contribution to the EvaLatin shared task, the first evaluation campaign devoted to NLP tools for Latin. We submitted a system based on UDPipe 2.0, one of the winners of the CoNLL 2018 Shared Task, the 2018 Shared Task on Extrinsic Parser Evaluation and the SIGMORPHON 2019 Shared Task. Our system places first by a wide margin in both lemmatization and POS tagging in the open modality, where additional supervised data is allowed, in which case we utilize all Universal Dependencies Latin treebanks. In the closed modality, where only the EvaLatin training data is allowed, our system achieves the best performance in lemmatization and in the classical subtask of POS tagging, while reaching second place in the cross-genre and cross-time settings. In ablation experiments, we also evaluate the influence of BERT and XLM-RoBERTa contextualized embeddings, and of the treebank embeddings for the different flavors of Latin treebanks.",
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We present our contribution to the EvaLatin shared task, the first evaluation campaign devoted to NLP tools for Latin. We submitted a system based on UDPipe 2.0, one of the winners of the CoNLL 2018 Shared Task, the 2018 Shared Task on Extrinsic Parser Evaluation and the SIGMORPHON 2019 Shared Task. Our system places first by a wide margin in both lemmatization and POS tagging in the open modality, where additional supervised data is allowed, in which case we utilize all Universal Dependencies Latin treebanks. In the closed modality, where only the EvaLatin training data is allowed, our system achieves the best performance in lemmatization and in the classical subtask of POS tagging, while reaching second place in the cross-genre and cross-time settings. In ablation experiments, we also evaluate the influence of BERT and XLM-RoBERTa contextualized embeddings, and of the treebank embeddings for the different flavors of Latin treebanks.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "This paper describes our participating system in the EvaLatin 2020 shared task (Sprugnoli et al., 2020), in which a segmented and tokenized text in CoNLL-U format with surface forms is to be annotated with lemmas and POS tags. The EvaLatin 2020 training data consists of 260k words of annotated texts from five authors. In the closed modality, only the given training data may be used, while in the open modality any additional resources can be utilized. We submitted a system based on UDPipe 2.0 (Straka et al., 2019a). In the open modality, our system also uses all three UD 2.5 (Zeman et al., 2019) Latin treebanks as additional training data and places first by a wide margin in both lemmatization and POS tagging. In the closed modality, our system achieves the best performance in lemmatization and in the classical subtask of POS tagging (consisting of texts by the same five authors as the training data), while reaching second place in the cross-genre and cross-time settings. Additionally, we evaluated the effect of:",
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 101, |
| "text": "(Sprugnoli et al., 2020)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 450, |
| "end": 472, |
| "text": "(Straka et al., 2019a)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 535, |
| "end": 555, |
| "text": "(Zeman et al., 2019)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "\u2022 BERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2019) contextualized embeddings; \u2022 various granularity levels of treebank embeddings (Stymne et al., 2018).",
| "cite_spans": [ |
| { |
| "start": 7, |
| "end": 28, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 147, |
| "end": 168, |
| "text": "(Stymne et al., 2018)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "The EvaLatin 2020 shared task (Sprugnoli et al., 2020) is reminiscent of the SIGMORPHON 2019 Shared Task (McCarthy et al., 2019), where the goal was also to perform lemmatization and POS tagging, but on 107 corpora in 66 languages. It is also related to the CoNLL 2017 and 2018 Multilingual Parsing from Raw Texts to Universal Dependencies shared tasks (Zeman et al., 2017; Zeman et al., 2018), in which the goal was to process raw texts into tokenized sentences with POS tags, lemmas, morphological features and dependency trees of the Universal Dependencies project (Nivre et al., 2016), which seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for many languages. UDPipe 2.0 (Straka et al., 2016; Straka, 2018) was one of the winning systems of the CoNLL 2018 shared task, performing POS tagging, lemmatization and dependency parsing jointly. Its modification (Straka et al., 2019a), whose network architecture of the joint tagger and lemmatizer is shown in Figure 1, forms the basis of our system. BERT (Devlin et al., 2019) is based on the Transformer (Vaswani et al., 2017) architecture. A multilingual BERT model trained on 102 languages can significantly improve performance in many NLP tasks across many languages. Recently, XLM-RoBERTa, an improved multilingual model based on BERT, was proposed by Conneau et al. (2019) and appears to offer stronger performance in multilingual representation (Conneau et al., 2019; Lewis et al., 2019).",
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 54, |
| "text": "(Sprugnoli et al., 2020)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 104, |
| "end": 128, |
"text": "(McCarthy et al., 2019)",
| "ref_id": null |
| }, |
| { |
| "start": 350, |
| "end": 370, |
| "text": "(Zeman et al., 2017;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 371, |
| "end": 390, |
| "text": "Zeman et al., 2018)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 566, |
| "end": 586, |
| "text": "(Nivre et al., 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 720, |
| "end": 741, |
| "text": "(Straka et al., 2016;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 742, |
| "end": 755, |
| "text": "Straka, 2018)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 909, |
| "end": 931, |
| "text": "(Straka et al., 2019a)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1254, |
| "end": 1275, |
| "text": "Conneau et al. (2019)", |
| "ref_id": null |
| }, |
| { |
| "start": 1353, |
| "end": 1375, |
| "text": "(Conneau et al., 2019;", |
| "ref_id": null |
| }, |
| { |
| "start": 1376, |
| "end": 1395, |
| "text": "Lewis et al., 2019)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 932, |
| "end": 940, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
"text": "Our architecture is based on the UDPipe entry to the SIGMORPHON 2019 Shared Task (Straka et al., 2019a), which is available at https://github.com/ufal/sigmorphon2019. The resulting model is presented in Figure 1. In short, the architecture is a multi-task model predicting lemmas and POS tags jointly. After embedding the input words, three shared bidirectional LSTM (Hochreiter and Schmidhuber, 1997) layers are applied. Then, softmax classifiers process their output and generate the lemmas and POS tags. The lemmas are generated by classifying into a set of edit scripts, which process the input word form and produce the lemma by performing character-level edits on the word prefix and suffix. The lemma classifier additionally takes the character-level word embeddings as input. The lemmatization is further described in Section 3.2. The input word embeddings are the same as in previous versions of UDPipe 2.0:",
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 96, |
| "text": "(Straka et al., 2019a)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 359, |
| "end": 393, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 198, |
| "end": 206, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Architecture Overview", |
| "sec_num": "3.1." |
| }, |
| { |
"text": "\u2022 end-to-end word embeddings, \u2022 character-level word embeddings: We employ bidirectional GRUs (Cho et al., 2014; Graves and Schmidhuber, 2005) of dimension 256 in line with (Ling et al., 2015): we represent every Unicode character with a vector of dimension 256, and concatenate the GRU outputs for the forward and reversed word characters. The character-level word embeddings are trained together with the UDPipe network. \u2022 pretrained word embeddings: We use FastText word embeddings (Bojanowski et al., 2017) of dimension 300, which we pretrain on the plain texts provided by the CoNLL 2017 UD Shared Task (Ginter et al., 2017), using segmentation and tokenization trained from the UD data. 1 \u2022 pretrained contextualized word embeddings: We use the Multilingual Base Uncased BERT (Devlin et al., 2019) model to provide contextualized embeddings of dimension 768, averaging the last layer over the subwords belonging to the same word.",
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 112, |
| "text": "(Cho et al., 2014;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 113, |
| "end": 141, |
| "text": "Graves and Schmidhuber, 2005", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 174, |
| "end": 193, |
| "text": "(Ling et al., 2015)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 474, |
| "end": 499, |
| "text": "(Bojanowski et al., 2017)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 589, |
| "end": 610, |
| "text": "(Ginter et al., 2017)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architecture Overview", |
| "sec_num": "3.1." |
| }, |
| { |
"text": "We refer the reader to Straka et al. (2019a) for a detailed description of the architecture and the training procedure.",
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 117, |
| "text": "Straka et al. (2019a)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Architecture Overview", |
| "sec_num": "3.1." |
| }, |
| { |
"text": "The lemmatization is modeled as multi-class classification, in which the classes are complete rules that lead from the input form to the lemma. We call each such class, encoding a transition from input form to lemma, a lemma rule. We create a lemma rule by first encoding the correct casing as a casing script and then creating a sequence of character edits, an edit script.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lemmatization", |
| "sec_num": "3.2." |
| }, |
| { |
"text": "First, we deal with the casing by creating a casing script. By default, word form and lemma characters are treated as lowercased. If the lemma, however, contains upper-cased characters, a rule is added to the casing script to uppercase the corresponding characters in the resulting lemma. For example, the most frequent casing script is \"keep the lemma lowercased (don't do anything)\" and the second most frequent casing script is \"uppercase the first character and keep the rest lowercased\". As a second step, an edit script is created to convert the lowercased input form to the lowercased lemma. To ensure meaningful editing, the form is split into three parts, which are then processed separately: a prefix, a root (stem) and a suffix. The root is discovered by matching the longest substring shared between the form and the lemma; if no matching substring is found (e.g., form eum and lemma is), we consider the word irregular, do not process it with any edits and directly replace the word form with the lemma. Otherwise, we proceed with the edit scripts, which process the prefix and the suffix separately and keep the root unchanged. The allowed character-wise operations are character copy, addition and deletion. The resulting lemma rule is a concatenation of a casing script and an edit script. The most common lemma rules present in the EvaLatin training data are shown in Table 1. Using the generated lemma rules, the task of lemmatization is then reduced to a multi-class classification task, in which the artificial neural network predicts the correct lemma rule.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1375, |
| "end": 1382, |
| "text": "Table 1", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Lemmatization", |
| "sec_num": "3.2." |
| }, |
| { |
"text": "In the open modality, we additionally train on all three UD 2.5 Latin treebanks. In order to recognize and handle possible differences in the treebank annotations, we employ treebank embeddings following (Stymne et al., 2018). Furthermore, given that the author name is known both at training and at prediction time, we train a second model with author-specific embeddings for the individual authors. We employ the model with author-specific embeddings whenever the predicted text comes from one of the training-data authors (the in-domain setting) and a generic model otherwise (the out-of-domain setting).",
| "cite_spans": [ |
| { |
| "start": 204, |
| "end": 225, |
| "text": "(Stymne et al., 2018)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Treebank Embedding", |
| "sec_num": "3.3." |
| }, |
| { |
"text": "The official overall results are presented in Table 2 for lemmatization and in Table 3 for POS tagging. In the open modality, our system places first by a wide margin in both lemmatization and POS tagging. In the closed modality, our system achieves the best performance in lemmatization and in the classical subtask of POS tagging (where texts from the training-data authors are annotated), and second place in the cross-genre and cross-time settings.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 46, |
| "end": 53, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 79, |
| "end": 86, |
| "text": "Table 3", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4." |
| }, |
| { |
"text": "The effect of various kinds of contextualized embeddings is evaluated in Table 4. While BERT embeddings yield only a minor accuracy increase, which is consistent with (Straka et al., 2019b) for Latin, using XLM-RoBERTa leads to larger improvements (cf. the overall results in Tables 2 and 3).",
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 187, |
| "text": "(Straka et al., 2019b)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 70, |
| "end": 77, |
| "text": "Table 4", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 233, |
| "end": 247, |
| "text": "Tables 2 and 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ablation Experiments", |
| "sec_num": "5." |
| }, |
| { |
"text": "To quantify the boost from the additional training data in the open modality, we considered all models from Table 4, arriving at the average improvement presented in Table 5. While the performance on the in-domain test set (classical subtask) improves only slightly, the out-of-domain test sets (cross-genre and cross-time subtasks) show more substantial improvement with the additional training data. The effect of different granularities of treebank embeddings in the open modality is investigated in Table 6. When treebank embeddings are removed from our competition system, the performance deteriorates the most, even if only a little in absolute terms. This indicates that the UD and EvaLatin annotations are very consistent. Providing one embedding for the EvaLatin data and another for all UD treebanks improves the performance, and more so if three UD-treebank-specific embeddings are used (see also Table 4).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 126, |
| "end": 133, |
| "text": "Table 4", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 185, |
| "end": 192, |
| "text": "Table 5", |
| "ref_id": "TABREF9" |
| }, |
| { |
| "start": 516, |
| "end": 523, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 888, |
| "end": 895, |
| "text": "Table 4", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ablation Experiments", |
| "sec_num": "5." |
| }, |
| { |
"text": "Lastly, we evaluate the effect of the per-author embeddings. While the improvement was larger on the development set, the results on the test sets are nearly identical. To get a more accurate estimate, we computed the average improvement for all models in Table 4, arriving at the marginal improvements in Table 7, which indicates that per-author embeddings have nearly no effect on the final system performance (compared to EvaLatin- and UD-specific embeddings).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 275, |
| "end": 282, |
| "text": "Table 4", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 322, |
| "end": 329, |
| "text": "Table 7", |
| "ref_id": "TABREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ablation Experiments", |
| "sec_num": "5." |
| }, |
| { |
"text": "We described our entry to the EvaLatin 2020 shared task, which placed first in the open modality and delivered strong performance in the closed modality. For a future shared task, we think it might be interesting to also include segmentation and tokenization, or to extend the shared task with an extrinsic evaluation.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6." |
| }, |
| { |
| "text": "This work was supported by the grant no. GX20-16819X of the Grant Agency of the Czech Republic, and has been using language resources stored and distributed by the LINDAT/CLARIAH-CZ project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2018101).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": "7." |
| }, |
| { |
"text": "We use the -minCount 5 -epoch 10 -neg 10 options.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Enriching Word Vectors with Subword Information", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "135--146", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching Word Vectors with Subword Informa- tion. Transactions of the Association for Computational Linguistics, 5:135-146.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "On the Properties of Neural Machine Translation: Encoder-Decoder Approaches", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Van Merrienboer", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cho, K., van Merrienboer, B., Bahdanau, D., and Bengio, Y. (2014). On the Properties of Neural Machine Trans- lation: Encoder-Decoder Approaches. CoRR.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "M.-W", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional trans- formers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Pa- pers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures",
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Neural Networks", |
| "volume": "", |
| "issue": "", |
| "pages": "5--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graves, A. and Schmidhuber, J. (2005). Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, pages 5- 6.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Long Short-Term Memory", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Comput", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hochreiter, S. and Schmidhuber, J. (1997). Long Short-Term Memory. Neural Comput., 9(8):1735-1780, November.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
"title": "MLQA: Evaluating cross-lingual extractive question answering",
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Oguz", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Rinott", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ArXiv", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lewis, P., Oguz, B., Rinott, R., Riedel, S., and Schwenk, H. (2019). Mlqa: Evaluating cross-lingual extractive ques- tion answering. ArXiv, abs/1910.07475.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Lu\u00eds", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Marujo", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "F" |
| ], |
| "last": "Astudillo", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Amir", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "W" |
| ], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Trancoso", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ling, W., Lu\u00eds, T., Marujo, L., Astudillo, R. F., Amir, S., Dyer, C., Black, A. W., and Trancoso, I. (2015). Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation. CoRR.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "D" |
| ], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Vylomova", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Malaviya", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Wolf-Sonkin", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Kirov", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mielke", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Heinz", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Cotterell", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hulden", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "229--244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McCarthy, A. D., Vylomova, E., Wu, S., Malaviya, C., Wolf-Sonkin, L., Nicolai, G., Kirov, C., Silfverberg, M., Mielke, S. J., Heinz, J., Cotterell, R., and Hulden, M. (2019). The SIGMORPHON 2019 Shared Task: Mor- phological Analysis in Context and Cross-Lingual Trans- fer for Inflection. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 229-244, Florence, Italy, Au- gust. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Universal Dependencies v1: A multilingual treebank collection", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "M.-C", |
| "middle": [], |
| "last": "De Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Silveira", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Tsarfaty", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "1659--1666", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nivre, J., de Marneffe, M.-C., Ginter, F., Goldberg, Y., Haji\u010d, J., Manning, C., McDonald, R., Petrov, S., Pyysalo, S., Silveira, N., Tsarfaty, R., and Zeman, D. (2016). Universal Dependencies v1: A multilingual tree- bank collection. In Proceedings of the 10th Interna- tional Conference on Language Resources and Evalua- tion (LREC 2016), pages 1659-1666, Portoro\u017e, Slovenia. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Deep Contextualized Word Representations", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Neumann", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Gardner", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "2227--2237", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peters, M., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep Con- textualized Word Representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Overview of the evalatin 2020 evaluation campaign", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sprugnoli", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Passarotti", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "M" |
| ], |
| "last": "Cecchini", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pellegrini", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the LT4HALA 2020 Workshop -1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sprugnoli, R., Passarotti, M., Cecchini, F. M., and Pellegrini, M. (2020). Overview of the EvaLatin 2020 Evaluation Campaign. In Rachele Sprugnoli et al., editors, Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Paris, France, May. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "UDPipe: Trainable Pipeline for Processing CoNLL-U Files Performing Tokenization, Morphological Analysis, POS Tagging and Parsing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Strakov\u00e1", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Straka, M., Haji\u010d, J., and Strakov\u00e1, J. (2016). UDPipe: Trainable Pipeline for Processing CoNLL-U Files Performing Tokenization, Morphological Analysis, POS Tagging and Parsing. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), Portoro\u017e, Slovenia. European Language Resources Association.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "UDPipe at SIGMORPHON 2019: Contextualized Embeddings, Regularization with Morphological Categories, Corpora Merging", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Strakov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hajic", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "95--103", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Straka, M., Strakov\u00e1, J., and Hajic, J. (2019a). UDPipe at SIGMORPHON 2019: Contextualized Embeddings, Regularization with Morphological Categories, Corpora Merging. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 95-103, Florence, Italy, August. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Evaluating Contextualized Embeddings on 54 Languages in POS Tagging, Lemmatization and Dependency Parsing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Strakov\u00e1", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1908.07448" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Straka, M., Strakov\u00e1, J., and Haji\u010d, J. (2019b). Evaluating Contextualized Embeddings on 54 Languages in POS Tagging, Lemmatization and Dependency Parsing. arXiv e-prints, page arXiv:1908.07448, August.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of CoNLL 2018: The SIGNLL Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "197--207", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Straka, M. (2018). UDPipe 2.0 Prototype at CoNLL 2018 UD Shared Task. In Proceedings of CoNLL 2018: The SIGNLL Conference on Computational Natural Language Learning, pages 197-207, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Parser training with heterogeneous treebanks", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stymne", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "De Lhoneux", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "619--625", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stymne, S., de Lhoneux, M., Smith, A., and Nivre, J. (2018). Parser training with heterogeneous treebanks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 619-625, Melbourne, Australia, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Popel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "1--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeman, D., Popel, M., Straka, M., Haji\u010d, J., Nivre, J., Ginter, F., et al. (2017). CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19, Vancouver, Canada. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Popel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Potthast", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", |
| "volume": "", |
| "issue": "", |
| "pages": "1--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeman, D., Haji\u010d, J., Popel, M., Potthast, M., Straka, M., Ginter, F., Nivre, J., and Petrov, S. (2018). CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-20, Brussels, Belgium, October. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Haji\u010d", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Luotolahti", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Straka", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ginter, F., Haji\u010d, J., Luotolahti, J., Straka, M., and Zeman, D. (2017). CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings. Institute of Formal and Applied Linguistics, LINDAT/CLARIN, Charles University, Prague, Czech Republic, LINDAT/CLARIN PID: http://hdl.handle.net/11234/1-1989.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Universal Dependencies 2.5", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zeman", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zeman, D., Nivre, J., et al. (2019). Universal Dependencies 2.5. Institute of Formal and Applied Linguistics, LINDAT/CLARIN, Charles University, Prague, Czech Republic, LINDAT/CLARIN PID: http://hdl.handle.net/11234/1-3105.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>[Figure: UDPipe architecture. For each input word (e.g. cat), pretrained regular embeddings, pretrained contextualized embeddings, trained embeddings, and character-level word embeddings computed by a GRU over characters are concatenated; the per-word embeddings are processed by stacked bidirectional LSTM layers, followed by the tagger & lemmatizer.] took part in the SIGMORPHON 2019 shared task, delivering best performance in lemmatization and comparable to best performance in POS tagging. A new type of deep contextualized word representation was introduced by Peters et al. (2018). The proposed embeddings, called ELMo, were obtained from internal states of a deep bidirectional language model, pretrained on a large text corpus. The idea of ELMo was extended to BERT by Devlin et al. (2019), who instead of a bidirectional recurrent language model employ a Transformer (Vaswani</td></tr></table>", |
| "text": "..." |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>System</td><td colspan=\"3\">Lemmatization classical cross-genre cross-time</td></tr><tr><td>UDPipe -open</td><td>96.19 (1)</td><td>87.13 (1)</td><td>91.01 (1)</td></tr><tr><td colspan=\"2\">UDPipe -closed 95.90 (2)</td><td>85.47 (3)</td><td>87.69 (2)</td></tr><tr><td>P2 -closed 1</td><td>94.76 (3)</td><td>85.49 (2)</td><td>85.75 (3)</td></tr><tr><td>P3 -closed 1</td><td>94.60 (4)</td><td>81.69 (5)</td><td>83.92 (4)</td></tr><tr><td>P2 -closed 2</td><td>94.22 (5)</td><td>82.69 (4)</td><td>83.76 (5)</td></tr><tr><td>Post ST -open</td><td>96.35</td><td>87.48</td><td>91.07</td></tr><tr><td>Post ST -closed</td><td>95.93</td><td>85.94</td><td>87.88</td></tr></table>", |
| "text": "Official ranking of EvaLatin lemmatization. Additionally, we include our best post-competition model in italics." |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Official ranking of EvaLatin lemmatization. Additionally, we include our best post-competition model in italic." |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Official ranking of EvaLatin lemmatization. Additionally, we include our best post-competition model in italics.</td></tr><tr><td>accuracy improvement. For comparison, we include the post-competition system with XLM-RoBERTa embeddings in</td></tr></table>", |
| "text": "" |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td/><td>Lemmatization</td><td/><td/><td>Tagging</td><td/></tr><tr><td/><td colspan=\"6\">classical cross-genre cross-time classical cross-genre cross-time</td></tr><tr><td>The improvement of open modality, i.e., using all three UD Latin treebanks</td><td>+0.430</td><td>+1.795</td><td>+2.975</td><td>+0.177</td><td>+1.100</td><td>+3.315</td></tr></table>", |
| "text": "The evaluation of various pretrained embeddings (FastText word embeddings, Multilingual BERT embeddings, XLM-RoBERTa embeddings) on the lemmatization and POS tagging." |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Lemmatization</td><td>Tagging</td></tr><tr><td colspan=\"2\">classical cross-genre cross-time classical cross-genre cross-time</td></tr></table>", |
| "text": "The average percentage point improvement in the open modality settings compared to the closed modality. The results are averaged over all models in Table 4." |
| }, |
| "TABREF10": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td>Lemmatization</td><td>Tagging</td></tr><tr><td/><td>classical</td><td>classical</td></tr><tr><td>The improvement of</td><td/><td/></tr><tr><td>using per-author</td><td>0.027</td><td>0.043</td></tr><tr><td>treebank embeddings</td><td/><td/></tr></table>", |
| "text": "The effect of various kinds of treebank embeddings in open modality -whether the individual authors in EvaLatin get a different or the same treebank embedding, and whether the UD treebanks get a different treebank embedding, same treebank embedding but different from the EvaLatin data, or the same treebank embedding as EvaLatin data." |
| }, |
| "TABREF11": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "The average percentage point improvement of using per-author treebank embedding compared to not distinguishing among authors of EvaLatin data, averaged over all models in" |
| } |
| } |
| } |
| } |