{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:12:55.680591Z"
},
"title": "Overview of the EvaLatin 2020 Evaluation Campaign",
"authors": [
{
"first": "Rachele",
"middle": [],
"last": "Sprugnoli",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e0 Cattolica del Sacro Cuore Largo Agostino Gemelli 1",
"location": {
"postCode": "20123",
"settlement": "Milano"
}
},
"email": "rachele.sprugnoli@unicatt.it"
},
{
"first": "Marco",
"middle": [],
"last": "Passarotti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e0 Cattolica del Sacro Cuore Largo Agostino Gemelli 1",
"location": {
"postCode": "20123",
"settlement": "Milano"
}
},
"email": "marco.passarotti@unicatt.it"
},
{
"first": "Flavio",
"middle": [
"M"
],
"last": "Cecchini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e0 Cattolica del Sacro Cuore Largo Agostino Gemelli 1",
"location": {
"postCode": "20123",
"settlement": "Milano"
}
},
"email": "flavio.cecchini@unicatt.it"
},
{
"first": "Matteo",
"middle": [],
"last": "Pellegrini",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universit\u00e0 Cattolica del Sacro Cuore Largo Agostino Gemelli 1",
"location": {
"postCode": "20123",
"settlement": "Milano"
}
},
"email": "matteo.pellegrini@unibg.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the first edition of EvaLatin, a campaign totally devoted to the evaluation of NLP tools for Latin. The two shared tasks proposed in EvaLatin 2020, i. e. Lemmatization and Part-of-Speech tagging, are aimed at fostering research in the field of language technologies for Classical languages. The shared dataset consists of texts taken from the Perseus Digital Library, processed with UDPipe models and then manually corrected by Latin experts. The training set includes only prose texts by Classical authors. The test set, alongside with prose texts by the same authors represented in the training set, also includes data relative to poetry and to the Medieval period. This also allows us to propose the Cross-genre and Cross-time subtasks for each task, in order to evaluate the portability of NLP tools for Latin across different genres and time periods. The results obtained by the participants for each task and subtask are presented and discussed.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the first edition of EvaLatin, a campaign totally devoted to the evaluation of NLP tools for Latin. The two shared tasks proposed in EvaLatin 2020, i. e. Lemmatization and Part-of-Speech tagging, are aimed at fostering research in the field of language technologies for Classical languages. The shared dataset consists of texts taken from the Perseus Digital Library, processed with UDPipe models and then manually corrected by Latin experts. The training set includes only prose texts by Classical authors. The test set, alongside with prose texts by the same authors represented in the training set, also includes data relative to poetry and to the Medieval period. This also allows us to propose the Cross-genre and Cross-time subtasks for each task, in order to evaluate the portability of NLP tools for Latin across different genres and time periods. The results obtained by the participants for each task and subtask are presented and discussed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "EvaLatin 2020 is the first campaign being totally devoted to the evaluation of Natural Language Processing (NLP) tools for the Latin language. 1 The campaign is designed following a long tradition in NLP, 2 with the aim of answering two main questions:",
"cite_spans": [
{
"start": 143,
"end": 144,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 How can we promote the development of resources and language technologies for the Latin language?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 How can we foster collaboration among scholars working on Latin and attract researchers from different disciplines?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EvaLatin is proposed as part of the Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA), co-located with LREC 2020. 3 EvaLatin is an initiative endorsed by the Italian association of Computational Linguistics 4 (AILC), and is organized by the CIRCSE research centre 5 at the Universit\u00e0 Cattolica del Sacro Cuore in Milan, Italy, with the support of the LiLa: Linking Latin ERC project. 6 Data, scorer and detailed guidelines are all available in a dedicated GitHub repository. 7",
"cite_spans": [
{
"start": 145,
"end": 146,
"text": "3",
"ref_id": null
},
{
"start": 415,
"end": 416,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "1 https://circse.github.io/LT4HALA/ 2 See for example other campaigns such as MUC (Message Understanding Conference), a competition dedicated to tools and methods for information extraction, SemEval (Semantic Evaluation), which is focused on the evaluation of systems for semantic analysis, CoNLL (Conference on Natural Language Learning), which since 1999 has been including a different NLP shared task in every edition, and EVALITA, a periodic evaluation campaign of NLP tools for the Italian language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "3 https://lrec2020.lrec-conf.org/en/ 4 http://www.ai-lc.it/ 5 https://centridiricerca.unicatt.it/ circse_index.html 6 https://lila-erc.eu/ 7 https://github.com/CIRCSE/LT4HALA/tree/ master/data_and_doc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "EvaLatin 2020 has two tasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Subtasks",
"sec_num": "2."
},
{
"text": "1. Lemmatization, i. e. the process of transforming any word form into a corresponding, conventionally defined \"base\" form, i. e. its lemma, applied to each token;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Subtasks",
"sec_num": "2."
},
{
"text": "2. Part-of-Speech tagging, in which systems are required to assign a lexical category, i. e. a Part-of-Speech (PoS) tag, to each token, according to the Universal Dependencies (UD) PoS tagset (Petrov et al., 2011). 8 Each task has three subtasks:",
"cite_spans": [
{
"start": 192,
"end": 216,
"text": "(Petrov et al., 2011). 8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Subtasks",
"sec_num": "2."
},
{
"text": "1. Classical: the test data belong to the same genre and time period of the training data;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Subtasks",
"sec_num": "2."
},
{
"text": "2. Cross-genre: the test data belong to a different genre, namely lyric poems, but to the same time period compared to the ones included in the training data;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Subtasks",
"sec_num": "2."
},
{
"text": "3. Cross-time: the test data belong to a different time period, namely the Medieval era, compared to the ones included in the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Subtasks",
"sec_num": "2."
},
{
"text": "Through these subtasks, we aim to enhance the study of the portability of NLP tools for Latin across different genres and time periods by analyzing the impact of genre-specific and diachronic features. Shared data and a scorer are provided to the participants, who can choose to take part in either a single task, or in all tasks and subtasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tasks and Subtasks",
"sec_num": "2."
},
{
"text": "The EvaLatin 2020 dataset consists of texts taken from the Perseus Digital Library (Smith et al., 2000) . 9 These texts are first processed by means of UDPipe models (Straka and Strakov\u00e1, 2017) trained on texts by the same author, and then manually corrected by Latin language experts. Our author-specific models are trained on Opera Latina (Denooz, 2004) , a corpus which has been manually annotated at the Laboratoire d'Analyse Statistique des Langues Anciennes (LASLA) of the University of Li\u00e8ge since 1961. 10 Based on an agreement with LASLA, the Opera Latina corpus cannot be released to the public, but we are allowed to use it to create models for NLP tasks. Thus, we convert the original space-separated format of the Opera Latina into the field-based CoNLL-U format, 11 on which we train annotation models using the UDPipe pipeline. 12 These models are then run on the raw texts extracted from the Perseus files, 13 which are originally in XML format, after removing punctuation. Finally, the outputs of our automatic annotation are manually checked and corrected by two annotators; any doubts are resolved by a third Latin language expert. Figure 1 and Figure 2 show examples of our CoNLL-Uformatted training and test data respectively. Please note that our training and test data lack any tagging of syntactic dependencies or morphological features, since EvaLatin 2020 does not focus on the corresponding tasks; besides, tree-structured syntactic data are not available from the Opera Latina corpus.",
"cite_spans": [
{
"start": 83,
"end": 103,
"text": "(Smith et al., 2000)",
"ref_id": "BIBREF10"
},
{
"start": 106,
"end": 107,
"text": "9",
"ref_id": null
},
{
"start": 166,
"end": 193,
"text": "(Straka and Strakov\u00e1, 2017)",
"ref_id": "BIBREF12"
},
{
"start": 341,
"end": 355,
"text": "(Denooz, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 511,
"end": 513,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1151,
"end": 1159,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 1164,
"end": 1172,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3."
},
{
"text": "The texts provided as training data are by five Classical authors: Caesar, Cicero, Seneca, Pliny the Younger and Tacitus. For each author we release around 50,000 annotated tokens, for a total of almost 260,000 tokens. Each author is represented by prose texts: treatises in the case of Caesar, Seneca and Tacitus, public speeches for Cicero, and letters for Pliny the Younger. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training data",
"sec_num": "3.1."
},
{
"text": "Tokenization is a central issue in the evaluation of Lemmatization and PoS tagging: as each annotation system possibly applies different tokenization rules, these might lead to outputs which are difficult to compare to each other. In order to avoid such problem, we provide our test data in an already tokenized format, one token per line, with a white line separating each sentence. Our test data consist only of tokenized words, but neither lemmas nor PoS tags, as these have to be added by the participating systems submitted for the evaluation. The composition of the test dataset for the Classical subtask is given in Table 2 . Details for the data distributed in the Crossgenre and Cross-time subtasks are reported in Tables 3 and 4 ",
"cite_spans": [],
"ref_spans": [
{
"start": 623,
"end": 630,
"text": "Table 2",
"ref_id": "TABREF4"
},
{
"start": 724,
"end": 739,
"text": "Tables 3 and 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Test data",
"sec_num": "3.2."
},
{
"text": "The scorer employed for EvaLatin 2020 is a modified version of that developed for the CoNLL18 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. 14 The evaluation starts by aligning the outputs of the participating systems to the gold standard: given that our test data are already tokenized and split by sentences, the alignment at the token and sentence levels is always perfect (i. e. 100.00%). Then, PoS tags and lemmas are evaluated and the final ranking is based on accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4."
},
{
"text": "Each participant was permitted to submit runs for either one or all tasks and subtasks. It was mandatory to produce one run according to the socalled \"closed modality\": the only annotated resources that could be used to train and tune the system were those distributed by the organizers. Also external non-annotated resources, like word embeddings, were allowed. The second run could be produced according to the \"open modality\", for which the use of annotated external data, like the Latin datasets present in the UD project, was allowed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4."
},
{
"text": "As for the baseline, we provided the participants with the scores obtained on our test data by UDPipe, using the ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4."
},
{
"text": "A total of five teams are taking part in the PoS tagging task; three of them are also taking part in the Lemmatization task. All the teams have submitted runs for all three subtasks. Only one team (namely, UDPipe) has submitted a run following the open modality for each task and subtask, whereas the others have submitted runs in the closed modality, thus eschewing additional training data. In total, we have received five runs for the Lemmatization task and nine runs for the PoS tagging task. Details on the participating teams and their systems are given below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participants and Results",
"sec_num": "5."
},
{
"text": "\u2022 UDPipe, Charles University, Prague, Czech Republic. This team proposes a multi-task model jointly predicting both lemmas and PoS tags. The architecture is a bidirectional long short-term memory (BiLSTM) softmax classifier fed by end-to-end, character-level, pre-trained and contextualized word embeddings. In the run submitted for the open modality, they use all UD Latin treebanks as additional training data (Straka and Strakov\u00e1, 2020) . \u2022 Leipzig, Leipzig University, Germany. PoS tags are predicted with a gradient boosting framework fed with word embeddings pre-computed on a corpus of Latin texts of different genres and time periods. Lemmatization is instead based on a character-level translation performed by a long short-term memory (LSTM) sequence-to-sequence model (Celano, 2020 \u2022 JHUBC, Johns Hopkins University and University of British Columbia, Canada. This team tests two systems for both Lemmatization and PoS tagging. The first one is an off-the-shelf neural machine translation toolkit, whereas the second puts together two different learning algorithms in an ensemble classifier: the aforementioned machine translation system and a BiLSTM sequence-to-sequence model (Wu and Nicolai, 2020 ).",
"cite_spans": [
{
"start": 412,
"end": 439,
"text": "(Straka and Strakov\u00e1, 2020)",
"ref_id": "BIBREF14"
},
{
"start": 779,
"end": 792,
"text": "(Celano, 2020",
"ref_id": "BIBREF3"
},
{
"start": 1189,
"end": 1210,
"text": "(Wu and Nicolai, 2020",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participants and Results",
"sec_num": "5."
},
{
"text": "\u2022 Berkeley, University of California, Berkeley, USA. The proposed model for the PoS tagging task consists in a grapheme-level LSTM network whose output is the input of a word-level BiLSTM network. This model is fed by a set of grapheme and word embeddings pretrained on a corpus of over 23 million words (Bacon, 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participants and Results",
"sec_num": "5."
},
{
"text": "\u2022 TTLab, Goethe University, Frankfurt, Germany. This team tests three approaches to the PoS tagging task (Stoeckel et al., 2020) : 1) an ensemble classifier based on a two-stage recurrent neural network combining the taggers MarMoT (M\u00fcller et al., 2013 ) and anaGo; 17 2) a BiLSTM-CRF (conditional random fields) sequence tagger using pooled contextualized embeddings and a FLAIR character language model (Akbik et al., 2019) ; 3) another ensemble classifier combining the taggers MarMoT, anaGo, UDify (Kondratyuk and Straka, 2019) and UDPipe.",
"cite_spans": [
{
"start": 105,
"end": 128,
"text": "(Stoeckel et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 232,
"end": 252,
"text": "(M\u00fcller et al., 2013",
"ref_id": "BIBREF6"
},
{
"start": 405,
"end": 425,
"text": "(Akbik et al., 2019)",
"ref_id": null
},
{
"start": 502,
"end": 531,
"text": "(Kondratyuk and Straka, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Participants and Results",
"sec_num": "5."
},
{
"text": "Tables 5 and 6 report the final rankings, showing the results in terms of accuracy, including our baseline. For each run, the team name, the modality and the run number are specified. Please note that for the Classical subtask the score corresponds to the macro-average accuracy obtained on the single text. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Participants and Results",
"sec_num": "5."
},
{
"text": "All the participating teams employ deep learning, and largely overcome the baseline. Systems mainly adopt LTSM networks, often in a bidirectional variant. Two teams also test the efficiency of ensemble classifiers, and one team a neural machine translation approach. Different types of embeddings are adopted: for example, grapheme embeddings, word embeddings, contextualized embeddings. In many cases, these embeddings are trained specifically for EvaLatin 2020 starting from large collections of Latin texts available online. Not surprisingly, the addition of annotated data to the training set proves to be beneficial: in particular, an increase in accuracy is registered in the Cross-genre (+1.64 points of accuracy with respect to the best system in the closed modality) and Cross-time (+3.32 points of accuracy with respect to the best system in the closed modality) subtasks of the Lemmatization task. The standard deviation among the texts of the test set in the Classical subtask fluctuates between 0.83 and 1.30 in the Lemmatization task, and between 0.60 and 1.98 in the PoS tagging task. With regard to the Lemmatization task, the easiest text to tackle for all the systems is In Catilinam by Cicero (accuracy ranging from 95.94 to 97.61), followed by the first book of the De Bello Civili by Caesar (accuracy ranging from 95.66 to 96.94). In the PoS tagging task, the situation is reversed: all the systems obtain better scores on the De Bello Civili (accuracy ranging from 93.08 to 97.91) than on In Catilinam (accuracy ranging from 93.02 to 97.44). All the systems suffer from the shift to a different genre or to a different time period with a drop in the performances which, in some cases, exceeds 10 points. 
Taking a more in-depth look at the results, we can notice that, in general, the participating systems perform better on the Medieval text by Thomas Aquinas than on the Classical poems by Horace in the Lemmatization task, whereas the opposite is true for the PoS tagging task. As for Lemmatization, Thomas Aquinas presents a less rich and varied vocabulary with respect to Horace: the lemma/token ratio is 0.09 and the percentage of out-ofvocabulary lemmas (i. e. lemmas not present in the training data) is 26%, while in the Carmina the lemma/token ratio is 0.26 and the percentage of out-of-vocabulary lem-mas is 29%. As for PoS tagging, Thomas Aquinas proves to be more challenging than Horace. This is probably due to the higher percentage and different distribution of tokens belonging to the categories of prepositions (ADP), conjunctions (CCONJ and SCONJ), auxiliaries (AUX) and numerals (NUM), as a consequence of a different textual and syntactic structure (with respect to the training set) that is more similar to that of modern Romance languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "In particular, in Thomas Aquinas we observe a more frequent use of prepositional phrases: in Classical Latin, case inflection alone often suffices to convey the syntactic role of a noun phrase, whereas in the same context Medieval Latin might prefer that same phrase to be introduced by a preposition, extending a trend that is already present in Classical Latin (Palmer, 1988) . We also find a greater number of subordinate clauses introduced by subordinating conjunctions (for example, the Classical construction of Accusativus cum infinitivo tends to be replaced by subordinate clauses introduced by subordinating conjunctions like quia/quod/ut 'that' (Bamman et al., 2008) ), as well as of coordinated structures with coordinating conjunctions, the latter fact being possibly due to the very infrequent use of the enclitic particle -que 'and'. As for auxiliaries, their high number in the text of Thomas Aquinas is due to the fact that its annotation, carried out in the context of the Index Thomisticus Treebank (IT-TB) project (Passarotti, 2019) , strictly follows the UD guidelines, so that the AUX tag is applied also to verbal copulas. This rule does not apply to the other texts employed in EvaLatin 2020, thus causing a discrepancy in the annotation criteria. Finally, the high occurrence of numerals is caused by the frequent use of biblical quotations (e. g. Iob 26 14 'Book of Job, chapter 26, verse 14', from Summa contra Gentiles, book 4, chapter 1, number 1).",
"cite_spans": [
{
"start": 363,
"end": 377,
"text": "(Palmer, 1988)",
"ref_id": "BIBREF7"
},
{
"start": 655,
"end": 676,
"text": "(Bamman et al., 2008)",
"ref_id": "BIBREF1"
},
{
"start": 1033,
"end": 1051,
"text": "(Passarotti, 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6."
},
{
"text": "This paper describes the first edition of EvaLatin, an evaluation campaign dedicated to NLP tools and methods for the Lemmatization and PoS tagging of the Latin language. The call for EvaLatin 2020 has been spurred by the realization that times are mature enough for such an initiative. Indeed, despite the growing amount of linguistically annotated Latin texts which have become available over the last decades, today large collections of Latin texts are still lacking any layer of linguistic annotation, a state of affairs that prevents users from taking full advantage of digital corpora for Latin. One aspect that heavily impacts on any NLP task for Latin is the high degree of variability of the texts written in this language, due to its wide diachronic and diatopic diversity, which spans across several literary genres all over Europe in the course of more than two millennia. Just because we need to understand how much this aspect of Latin affects NLP, two subtasks dedicated respectively to the crossgenre and cross-time evaluation of data have been included in EvaLatin 2020. If it holds true that variation is a challenging issue that affects NLP applications for Latin, one advantage of dealing with Latin data is that Latin is a dead language, thus providing a substantially closed corpus of texts (contemporary additions are just a few, like for instance the documents of the Vatican City or song lyrics (Cecchini et al., forthcoming) ). This warrants us to speak of a possible complete linguistic annotation of all known Latin documents in the future.",
"cite_spans": [
{
"start": 1420,
"end": 1450,
"text": "(Cecchini et al., forthcoming)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "In the light of such considerations, we have decided to devote the first edition of EvaLatin to Lemmatization and PoS tagging, as we feel the need to understand the state of the art of these two fundamental annotation layers for what concerns Latin. We hope that the results of our evaluation campaign will help the community move towards the enhancement of an ever-increasing number of Latin texts by means of Lemmatization and PoS tagging as a first step towards a full linguistic annotation that includes also morphological features and syntactic dependencies, and that it will also help foster interest for Latin among the NLP community, confronting the challenge of portability of NLP tools for Latin across time, place and genres.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
},
{
"text": "This work is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme via the LiLa: Linking Latin project -Grant Agreement No. 769994. The authors also wish to thank Giovanni Moretti for his technical assistance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "8."
},
{
"text": "Akbik, A., Bergmann, T., Blythe, D., Rasul, K., Schweter, S., and Vollgraf, R. (2019). FLAIR: An easy-to-use framework for state-of-the-art NLP. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bibliographical References",
"sec_num": "9."
},
{
"text": "https://universaldependencies.org/u/pos/ index.html 9 http://www.perseus.tufts.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://universaldependencies.org/ conll18/evaluation.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/vunb/anago-tagger",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Ancient Greek and Latin Dependency Treebanks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Crane",
"suffix": ""
}
],
"year": 2011,
"venue": "Language technology for cultural heritage, Theory and Applications of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "79--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bamman, D. and Crane, G. (2011). The Ancient Greek and Latin Dependency Treebanks. In Caroline Sporleder, et al., editors, Language technology for cultural heritage, Theory and Applications of Natural Language Process- ing, pages 79-98. Springer, Berlin -Heidelberg, Ger- many.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Case Study in Treebank Collaboration and Comparison: Accusativus cum Infinitivo and Subordination in Latin",
"authors": [
{
"first": "D",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Passarotti",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Crane",
"suffix": ""
}
],
"year": 2008,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "90",
"issue": "1",
"pages": "109--122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bamman, D., Passarotti, M., and Crane, G. (2008). A Case Study in Treebank Collaboration and Comparison: Ac- cusativus cum Infinitivo and Subordination in Latin. The Prague Bulletin of Mathematical Linguistics, 90(1):109- 122, December.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Verba Bestiae: How Latin Conquered Heavy Metal",
"authors": [
{
"first": "F",
"middle": [
"M"
],
"last": "Cecchini",
"suffix": ""
},
{
"first": "G",
"middle": [
"H"
],
"last": "Franzini",
"suffix": ""
},
{
"first": "M",
"middle": [
"C"
],
"last": "Passarotti",
"suffix": ""
}
],
"year": null,
"venue": "Multilingual Metal: Sociocultural, Linguistic and Literary Perspectives on Heavy Metal Lyrics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecchini, F. M., Franzini, G. H., and Passarotti, M. C. (forthcoming). Verba Bestiae: How Latin Conquered Heavy Metal. In Riitta Valij\u00e4rvi, et al., editors, Mul- tilingual Metal: Sociocultural, Linguistic and Literary Perspectives on Heavy Metal Lyrics, Emerald Studies in Metal Music and Culture. Emerald group publishing, Bingley, UK.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A Gradient Boosting-Seq2Seq System for Latin PoS Tagging and Lemmatization",
"authors": [
{
"first": "G",
"middle": [],
"last": "Celano",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the LT4HALA 2020 Workshop -1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Celano, G. (2020). A Gradient Boosting-Seq2Seq System for Latin PoS Tagging and Lemmatization. In Rachele Sprugnoli et al., editors, Proceedings of the LT4HALA 2020 Workshop -1st Workshop on Language Technolo- gies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Lan- guage Resources and Evaluation (LREC 2020), Paris, France, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Opera Latina: une base de donn\u00e9es sur internet",
"authors": [
{
"first": "J",
"middle": [],
"last": "Denooz",
"suffix": ""
}
],
"year": 2004,
"venue": "Euphrosyne",
"volume": "32",
"issue": "",
"pages": "79--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denooz, J. (2004). Opera Latina: une base de donn\u00e9es sur internet. Euphrosyne, 32:79-88.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "75 Languages, 1 Model: Parsing Universal Dependencies Universally",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kondratyuk",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Straka",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "2779--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kondratyuk, D. and Straka, M. (2019). 75 Languages, 1 Model: Parsing Universal Dependencies Universally. In Kentaro Inui, et al., editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779-2795, Hong Kong, China, November. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Efficient Higher-Order CRFs for Morphological Tagging",
"authors": [
{
"first": "T",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "322--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u00fcller, T., Schmid, H., and Sch\u00fctze, H. (2013). Effi- cient Higher-Order CRFs for Morphological Tagging. In David Yarowsky, et al., editors, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 322-332, Seattle, WA, USA, October. Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Latin language",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Palmer",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmer, L. R. (1988). The Latin language. Oklahoma University Press, Norman, OK, USA. Reprint.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Project of the Index Thomisticus Treebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Passarotti",
"suffix": ""
}
],
"year": 2019,
"venue": "Age of Access? Grundfragen der Informationsgesellschaft",
"volume": "10",
"issue": "",
"pages": "299--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Passarotti, M. (2019). The Project of the Index Thomisticus Treebank. In Monica Berti, editor, Digital Classical Philology, volume 10 of Age of Access? Grundfragen der Informationsgesellschaft, pages 299-320. De Gruyter Saur, Munich, Germany, August.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Universal Part-of-Speech Tagset",
"authors": [
{
"first": "S",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "McDonald",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1104.2086"
]
},
"num": null,
"urls": [],
"raw_text": "Petrov, S., Das, D., and McDonald, R. (2011). A Universal Part-of-Speech Tagset. ArXiv e-prints. arXiv:1104.2086 at https://arxiv.org/abs/1104.2086.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The Perseus Project: A digital library for the humanities",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [
"A"
],
"last": "Rydberg-Cox",
"suffix": ""
},
{
"first": "G",
"middle": [
"R"
],
"last": "Crane",
"suffix": ""
}
],
"year": 2000,
"venue": "Literary and Linguistic Computing",
"volume": "15",
"issue": "1",
"pages": "15--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smith, D. A., Rydberg-Cox, J. A., and Crane, G. R. (2000). The Perseus Project: A digital library for the humanities. Literary and Linguistic Computing, 15(1):15-25.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Voting for PoS tagging of Latin texts: Using the flair of FLAIR to better Ensemble Classifiers by Example of Latin",
"authors": [
{
"first": "M",
"middle": [],
"last": "Stoeckel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Henlein",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hemati",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mehler",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stoeckel, M., Henlein, A., Hemati, W., and Mehler, A. (2020). Voting for PoS tagging of Latin texts: Using the flair of FLAIR to better Ensemble Classifiers by Example of Latin. In Rachele Sprugnoli et al., editors, Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Paris, France, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tokenizing, PoS Tagging, Lemmatizing and Parsing UD 2.0 with UDPipe",
"authors": [
{
"first": "M",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Straka, M. and Strakov\u00e1, J. (2017). Tokenizing, PoS Tagging, Lemmatizing and Parsing UD 2.0 with UDPipe.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "88--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In Jan Haji\u010d et al., editors, Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada, August. Association for Computational Linguistics (ACL). Available at http://www.aclweb.org/anthology/K/K17/K17-3009.pdf.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "UDPipe at EvaLatin 2020: Contextualized Embeddings and Treebank Embeddings",
"authors": [
{
"first": "M",
"middle": [],
"last": "Straka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Strakov\u00e1",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Straka, M. and Strakov\u00e1, J. (2020). UDPipe at EvaLatin 2020: Contextualized Embeddings and Treebank Embeddings. In Rachele Sprugnoli et al., editors, Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Paris, France, May. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "JHUBC's Submission to LT4HALA EvaLatin 2020",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Nicolai",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, W. and Nicolai, G. (2020). JHUBC's Submission to LT4HALA EvaLatin 2020. In Rachele Sprugnoli et al., editors, Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Paris, France, May. European Language Resources Association (ELRA).",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Format of training data. Figure 2: Format of test data."
},
"TABREF0": {
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">presents details about the</td></tr><tr><td colspan=\"2\">training dataset of EvaLatin 2020.</td><td/></tr><tr><td>AUTHORS</td><td>TEXTS</td><td># TOKENS</td></tr><tr><td>Caesar</td><td>De Bello Gallico</td><td>44,818</td></tr><tr><td>Caesar</td><td>De Bello Civili (book II)</td><td>6,389</td></tr><tr><td>Cicero</td><td colspan=\"2\">Philippicae (books I-XIV) 52,563</td></tr><tr><td>Seneca</td><td>De Beneficiis</td><td>45,457</td></tr><tr><td>Seneca</td><td>De Clementia</td><td>8,172</td></tr><tr><td colspan=\"2\">Pliny the Younger Epistulae (books I-VIII)</td><td>50,827</td></tr><tr><td>Tacitus</td><td>Historiae</td><td>51,420</td></tr><tr><td>TOTAL</td><td/><td>259,646</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF1": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Texts distributed as training data."
},
"TABREF3": {
"num": null,
"content": "<table><tr><td>AUTHORS</td><td>TEXTS</td><td># TOKENS</td></tr><tr><td>Caesar</td><td colspan=\"2\">De Bello Civili (book I) 10,898</td></tr><tr><td>Cicero</td><td>In Catilinam</td><td>12,564</td></tr><tr><td>Seneca</td><td>De Vita Beata</td><td>7,270</td></tr><tr><td>Seneca</td><td>De Providentia</td><td>4,077</td></tr><tr><td colspan=\"2\">Pliny the Younger Epistulae (book X)</td><td>9,868</td></tr><tr><td>Tacitus</td><td>Agricola</td><td>6,737</td></tr><tr><td>Tacitus</td><td>Germania</td><td>5,513</td></tr><tr><td>TOTAL</td><td/><td>56,927</td></tr></table>",
"html": null,
"type_str": "table",
"text": "respectively."
},
"TABREF4": {
"num": null,
"content": "<table><tr><td colspan=\"2\">AUTHORS TEXTS</td><td># TOKENS</td></tr><tr><td>Horatius</td><td colspan=\"2\">Carmina 13,290</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Test data for the Classical subtask."
},
"TABREF5": {
"num": null,
"content": "<table><tr><td>AUTHORS</td><td>TEXTS</td><td># TOKENS</td></tr><tr><td>Thomas Aquinas</td><td>Summa Contra Gentiles (part of Book IV)</td><td>11,556</td></tr></table>",
"html": null,
"type_str": "table",
"text": "Test data for the Cross-genre subtask."
},
"TABREF6": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF8": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Results of the Lemmatization task for the three subtasks in terms of accuracy. The number in brackets indicates standard deviation calculated among the seven documents of the test set for the Classical subtask."
},
"TABREF11": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Results of the PoS tagging task for the three subtasks in terms of accuracy. The number in brackets indicates standard deviation calculated among the seven documents of the test set for the Classical subtask."
},
"TABREF12": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "In Waleed Ammar, et al., editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, Minneapolis, MN, USA, June. Association for Computational Linguistics (ACL). Bacon, G. (2020). Data-driven Choices in Neural Part-of-Speech Tagging for Latin. In Rachele Sprugnoli et al., editors, Proceedings of the LT4HALA 2020 Workshop - 1st Workshop on Language Technologies for Historical and Ancient Languages, satellite event to the Twelfth International Conference on Language Resources and Evaluation (LREC 2020), Paris, France, May. European Language Resources Association (ELRA)."
}
}
}
}