| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:12:56.427934Z" |
| }, |
| "title": "Latin-Spanish Neural Machine Translation: from the Bible to Saint Augustine", |
| "authors": [ |
| { |
| "first": "Eva", |
| "middle": [ |
| "Mart\u00ednez" |
| ], |
| "last": "Garcia", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Garc\u00eda", |
| "middle": [], |
| "last": "Tejedor", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "a.gtejedor@ceiec.es" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Although there are several sources where to find historical texts, they usually are available in the original language that makes them generally inaccessible. This paper presents the development of state-of-the-art Neural Machine Systems for the low-resourced Latin-Spanish language pair. First, we build a Transformer-based Machine Translation system on the Bible parallel corpus. Then, we build a comparable corpus from Saint Augustine texts and their translations. We use this corpus to study the domain adaptation case from the Bible texts to Saint Augustine's works. Results show the difficulties of handling a low-resourced language as Latin. First, we noticed the importance of having enough data, since the systems do not achieve high BLEU scores. Regarding domain adaptation, results show how using in-domain data helps systems to achieve a better quality translation. Also, we observed that it is needed a higher amount of data to perform an effective vocabulary extension that includes in-domain vocabulary.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Although there are several sources where to find historical texts, they usually are available in the original language that makes them generally inaccessible. This paper presents the development of state-of-the-art Neural Machine Systems for the low-resourced Latin-Spanish language pair. First, we build a Transformer-based Machine Translation system on the Bible parallel corpus. Then, we build a comparable corpus from Saint Augustine texts and their translations. We use this corpus to study the domain adaptation case from the Bible texts to Saint Augustine's works. Results show the difficulties of handling a low-resourced language as Latin. First, we noticed the importance of having enough data, since the systems do not achieve high BLEU scores. Regarding domain adaptation, results show how using in-domain data helps systems to achieve a better quality translation. Also, we observed that it is needed a higher amount of data to perform an effective vocabulary extension that includes in-domain vocabulary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "There exist several digital libraries that store large collection of digitalized historical documents. However, most of these documents are usually written in Latin, Greek or other ancient languages, resulting in them being inaccessible to general public. Natural Language Processing (NLP) offers different tools that can help to save this language barrier to bring the content of these historical documents to people. In particular, Machine Translation (MT) approaches can reproduce these historical documents in modern languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We present a set of experiments in machine translation for the Latin-Spanish language pair. We build a baseline Transformer-based (Vaswani et al., 2017) system trained on the Bible parallel corpus (Christodoulopoulos and Steedman, 2015) to study the associated difficulties of handling morphologically rich low-resourced languages like Latin. Latin is a low-resourced language, with few publicly available parallel data (Gonz\u00e1lez-Rubio et al., 2010a; Resnik et al., 1999) . This is a challenge for data-driven approaches in general, and state-of-the-art Neural Machine Translation (NMT) approaches in particular since these systems usually require a high amount of data (Zoph et al., 2016) . We create a comparable corpus from Saint Augustine's works and we study the impact of adapting the baseline Bible translation system towards the Saint Augustine writings.", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 152, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 197, |
| "end": 236, |
| "text": "(Christodoulopoulos and Steedman, 2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 420, |
| "end": 450, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2010a;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 451, |
| "end": 471, |
| "text": "Resnik et al., 1999)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 670, |
| "end": 689, |
| "text": "(Zoph et al., 2016)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The paper is organized as follows. In Section 2., we revisit the state-of-the-art MT approaches and their application to Latin. Then, in Section 3. we describe both the parallel and the comparable data that we use in our experiments, explaining how we compiled the comparable corpus. Section 4. gives details on the set of experiments that we carried out to evaluate a baseline NMT trained on the Bible and its adaptation towards the Saint Augustine work. Finally, Section 5. discusses the conclusions and future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "There is a growing interest in the computational linguistic analysis of historical texts (Bouma and Adesam, 2017; Tjong Kim Sang et al., 2017) . However, there are only a few works related to MT for ancient or historical languages. In (Schneider et al., 2017) , the authors treat the spelling normalization as a translation task and use a Statistical Machine Translation (SMT) system trained on sequences of characters instead of word sequences. There exist shared tasks like the CLIN27 (Tjong Kim Sang et al., 2017), a translation shared task for medieval Dutch. In the particular case of Latin, there exist several NLP tools, for instance, the LEMLAT morphological analyzer for Latin (Passarotti et al., 2017) . However, there are only a few works involving MT for Latin. In particular, (Gonz\u00e1lez-Rubio et al., 2010b) describe the development of a Latin-Catalan Statistical Machine Translation System and the collection of a Latin-Catalan parallel corpus. However, to the best of our knowledge, the present work describes the first experiments in neural machine translation for the Latin-Spanish language pair. Neural Machine Translation systems represent the current state-of-the-art for machine translation technologies and even some evaluations claim that they have reached human performance (Hassan et al., 2018) . The first successful NMT systems were attentional encoder-decoder approaches based on recurrent neural networks (Bahdanau et al., 2015) , but the current NMT state-of-the-art architecture is the Transformer (Vaswani et al., 2017) . This sequence-to-sequence neural model is based solely on attention mechanisms, without any recurrence nor convolution. Although RNN-based architectures can be more robust in low-resourced scenarios, Transformer-based models usually perform better according to automatic evaluation metrics (Rikters et al., 2018) . All the NMT systems built for our experiments follow the Transformer architecture.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 113, |
| "text": "(Bouma and Adesam, 2017;", |
| "ref_id": null |
| }, |
| { |
| "start": 114, |
| "end": 142, |
| "text": "Tjong Kim Sang et al., 2017)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 235, |
| "end": 259, |
| "text": "(Schneider et al., 2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 686, |
| "end": 711, |
| "text": "(Passarotti et al., 2017)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 789, |
| "end": 819, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2010b)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 1297, |
| "end": 1318, |
| "text": "(Hassan et al., 2018)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1433, |
| "end": 1456, |
| "text": "(Bahdanau et al., 2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 1528, |
| "end": 1550, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1843, |
| "end": 1865, |
| "text": "(Rikters et al., 2018)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Latin and Spanish can be considered closely-related languages. There are several works that study the benefits of using NMT systems in contrast to using Phrase-Based Statistical MT (PBSMT) systems (Costa-juss\u00e0, 2017), observing how NMT systems are better for in-domain translations. (Alvarez et al., 2019) pursue a similar study from the post-editing point of view, showing how NMT systems solve typical problems of PBSMT systems achieving better results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "In this section, we describe the parallel and comparable data we use to train our NMT models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Latin is a low-resourced language in general, and parallel data for Latin-Spanish are scarce in particular. In the", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parallel Data", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "Description sent. align.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus", |
| "sec_num": null |
| }, |
| { |
| "text": "A collection of translated sentences from Tatoeba 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tatoeba", |
| "sec_num": null |
| }, |
| { |
| "text": "Bible A multilingual parallel corpus created from translations of the Bible", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.9k", |
| "sec_num": null |
| }, |
| { |
| "text": "wikimedia Wikipedia translations published by the wikimedia foundation and their article translation system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "30.3k", |
| "sec_num": null |
| }, |
| { |
| "text": "A parallel corpus of GNOME localization files.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GNOME", |
| "sec_num": null |
| }, |
| { |
| "text": "Open multilingual collection of subtitles for educational videos and lectures collaboratively transcribed and translated over the AMARA 2 web-based platform.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QED", |
| "sec_num": null |
| }, |
| { |
| "text": "Ubuntu A parallel corpus of Ubuntu localization files.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "6.1k", |
| "sec_num": null |
| }, |
| { |
| "text": "Total: 41.8k OPUS (Tiedemann, 2012) repository there are only 6 Latin-Spanish parallel corpora of different domains. Table 1 shows the statistics of these corpora, with a total of only 41.8k aligned sentences available. For our work, we choose the Bible corpus (Christodoulopoulos and Steedman, 2015) since it is the largest corpus and the only one containing historical texts which are closer to the Saint Augustine texts domain.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 35, |
| "text": "(Tiedemann, 2012)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 117, |
| "end": 124, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "0.6k", |
| "sec_num": null |
| }, |
| { |
| "text": "NMT systems usually need a considerable amount of data to achieve good quality translations (Zoph et al., 2016) . We built a comparable Latin-Spanish corpus by collecting several texts from Saint Augustine of Hippo, one of the most prolific Latin authors. The Federaci\u00f3n Agustiniana Espa\u00f1ola (FAE) promoted the translation into Spanish of the Saint Augustine works and make them available online. We used most of the texts from the Biblioteca de Autores Cristianos (BAC), published under the auspices of the FAE, one of the most complete collections of the Augustinian works in Spanish 3 4 . After gathering the texts in Spanish and Latin, we processed the corpus. First, we split the text into sentences using the Moses (Koehn et al., 2007) sentence splitter and we tokenize the text using the Moses tokenizer. Then, we use Hunalign (Varga et al., 2007) to automatically align the data sentence by sentence. We filter out those sentence alignments that have assigned an alignment score below 0. Notice that since we are using automatically aligned data, the resulting corpus is comparable and not a parallel one. ", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 111, |
| "text": "(Zoph et al., 2016)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 721, |
| "end": 741, |
| "text": "(Koehn et al., 2007)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 834, |
| "end": 854, |
| "text": "(Varga et al., 2007)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparable Data", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "We want to study, first, the aplicability of the state-of-theart NMT systems to the Latin-Spanish language pair. Once we have created the comparable corpus on the Saint Augustine writings, we analyze the impact of applying several domain-adaptation techniques to adapt our models from the Bible domain to the Saint Augustine domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Our NMT systems follow the Transformer architecture (Vaswani et al., 2017) and they are built using the OpenNMT-tf toolkit (Klein et al., 2018; Klein et al., 2017) . In particular, we use the Transformer small configuration described in (Vaswani et al., 2017) , mostly using the available OpenNMT-tf default settings: 6 layers of 2,048 innerunits with 8 attention heads. Word embeddings are set to 512 dimensions both for source and target vocabularies. Adam (Kingma and Ba, 2015) optimizer was used for training, using Noam learning rate decay and 4,000 warmup steps. We followed an early-stopping strategy to stop the training process when the BLEU (Papineni et al., 2002) on the development set did not improve more than 0.01 in the last 10 evaluations, evaluating the model each 500 steps.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 74, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 123, |
| "end": 143, |
| "text": "(Klein et al., 2018;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 144, |
| "end": 163, |
| "text": "Klein et al., 2017)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 237, |
| "end": 259, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 459, |
| "end": 480, |
| "text": "(Kingma and Ba, 2015)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 651, |
| "end": 674, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "Training data was distributed on batches of 3,072 tokens and we used a 0.1 dropout probability. Finally, a maximum sentence length of 100 tokens is used for both source and target sides and the vocabulary size is 30,000 for both target and source languages. Vocabularies are set at the subword level to overcome the vocabulary limitation. We segmented the data using Sentencepiece (Kudo and Richardson, 2018) trained jointly on the source and target training data used for building each model, following the unigram language model (Kud, 2018). The Sentencepiece models were trained to produce a final vocabulary size of 30,000 subword units.", |
| "cite_spans": [ |
| { |
| "start": 381, |
| "end": 408, |
| "text": "(Kudo and Richardson, 2018)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "We evaluate the quality of the outputs by calculating BLEU, TER (Snover et al., 2006) and METEOR (Denkowski and Lavie, 2011) metrics. We used multeval (Clark et al., 2011) to compute these scores on the truecased and tokenized evaluation sets.", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 85, |
| "text": "(Snover et al., 2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "First, we trained a baseline model on the Bible parallel corpus. Table 3 shows the results of the automatic evaluation of this system in its in-domain development and test sets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 72, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "The checkpoint-30000 is the model that achieved the best BLEU score on the development data. Following a usual technique to improve the translation quality, we averaged the 8 checkpoints with the best BLEU on the development set resulting in the avg-8 model. In this particular case, the average model is able to improve +0.47 on the development set and +0.78 on the test set with respect to the ckpt-30000 model. Also, the avg-8 system improves the TER metric both on the development and the test set by 1.4 and 1.5 points respectively. Table 3 : Automatic evaluation of the Bible NMT models on the development (dev) and test sets extracted from the Bible corpus. ckpt-30000 is the model resulting from the training step 30000, and the avg-8 is the average of 8 checkpoints.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 538, |
| "end": 545, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "We selected the avg-8 for adapting it to the Saint Augustine text via fine-tuning (Crego et al., 2016; Freitag and Al-Onaizan, 2016), that is, by further training the avg-8 on the in-domain data (hereafter the Bible model). We created two systems adapted by fine-tuning, the first one uses the Bible vocabulary (Bible-ft), and the second one updates the Bible vocabulary by adding those missing elements from the Saint Augustine texts vocabulary (Bible-ft-vocabExt.). Furthermore, we also built a model trained only using the comparable corpus (SAugustine) and a model trained on the concatenation of the data from the Bible and the Saint Augustine comparable data (Bible+SAugustine) 5 . For all the systems, we selected those models that achieved the best BLEU scores on the development sets, considering also the models resulting from averaging 8 checkpoints with higher Table 4 shows the results of the automatic evaluation of the different systems on the ValTest from the Saint Augustine texts.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 873, |
| "end": 880, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "The best system is Bible+SAugustine, the one trained on the concatenated data, improving +0.7 points on BLEU regarding the best-adapted model Bible-ft. Also, it outperforms the model trained only on the in-domain data. These results show the importance of having enough data to train an NMT system as well as having an important percentage of data from the working domain. The impact of using in-domain data to tune or train the translation models is remarkable. All the fine-tuned models outperform significantly the Bible model performance, gaining up to 8.5 points of BLEU. Notice that the fine-tuned model (Bible-ft) uses the same vocabulary as the Bible model. These numbers support the importance of having in-domain data for developing MT systems. Since many of the Saint Augustine writings discuss texts from the Bible, these results also evidence the sensitivity of MT systems to capture characteristics from different writing styles. These features can come from different authors or different time periods, which can be very important when studying historical texts, giving a wider sense to the domain definition. Extending the vocabulary when fine-tuning the Bible model does not result in improvements regarding any of the automatic metrics. In fact, the Bible-ft-vocabExt. model is 2.3 BLEU poins below the Bible-ft model. Although the model with the extended vocabulary can have wider coverage, it does not have enough data to learn a good representation for the new elements in the vocabulary. We observe also that the SAugustine model obtains better scores than the Bible model since its training data is larger and belongs to the test domain, although it was trained on comparable data. However, the results of the adapted model Bible-ft are slightly better than the SAugustine. This evidences the importance of having data of quality to model the translation from Latin to Spanish.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2." |
| }, |
| { |
| "text": "We built NMT systems for translating from Latin to Spanish. We identified the typical issues for low-resourced languages for the particular case of Latin-Spanish. Since we only found few parallel corpora available for this particular language pair, we collected the work of Saint Augustine of Hippo in Spanish and Latin and built a comparable corpus of 93,544 aligned sentences. Furthermore, we created a manually validated test set to better evaluate the translation quality of our systems. We built 5 NMT models trained on different data. First, we built a baseline system trained on the Bible parallel corpus. Then, we adapted the Bible model towards the Saint Augustine domain by fine-tuning it in two ways: maintaining the Bible vocabulary and extending this vocabulary by including new elements from the Saint Augustine data. Finally, we trained two models using directly the in-domain data. We built a model trained only on the comparable Saint Augustine corpus and, finally, we trained an NMT on the concatenation of the Bible and the Saint Augustine writings corpora. The automatic evaluation results show significant differences among the Bible model and the rest of the models that somehow include information from the in-domain data when translating the manually validated Saint Augustine test set, showing the importance of the in-domain data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "5." |
| }, |
| { |
| "text": "The best system was the one trained on the concatenated data Bible+SAugustine, showing the importance of having enough data to train an NMT model. As future work, we want to study the behavior of training NMT systems in the other direction: from Spanish to Latin. We find interesting to analyze if the issues observed when trying to translate into other morphologically rich languages like Basque (Etchegoyhen et al., 2018) or Turkish (Ataman et al., 2020) can be observed when dealing with Latin. In this line, we want to study the impact of using morphologically motivated subword tokenization like the ones proposed by (Alegria et al., 1996) for Basque and by (Ataman et al., 2020; Ataman et al., 2017) for Turkish. Also, we want to include a more in depht analysis of the linguistic related issues that can appear for these closeslyrelated languages (Popovi\u0107 et al., 2016) . In order to deal with the low resource feature of the Latin-Spanish language pair, we want to continue with our work by applying data augmentation techniques like backtranslation (Sennrich et al., 2016) to artificially extend the training data. The Latin-Spanish scenario seems to apply the unsupervised NMT approaches (Artetxe et al., 2018; Artetxe et al., 2019; Lample et al., 2018) , since there are available resources in both languages but only a few parallel data. Also, we want to explore how a Latin-Spanish MT system can benefit from other languages in a multilingual scenario (Johnson et al., 2017; Lakew et al., 2018) , i.e. romance languages, to improve the final translation quality.", |
| "cite_spans": [ |
| { |
| "start": 390, |
| "end": 423, |
| "text": "Basque (Etchegoyhen et al., 2018)", |
| "ref_id": null |
| }, |
| { |
| "start": 622, |
| "end": 644, |
| "text": "(Alegria et al., 1996)", |
| "ref_id": null |
| }, |
| { |
| "start": 663, |
| "end": 684, |
| "text": "(Ataman et al., 2020;", |
| "ref_id": null |
| }, |
| { |
| "start": 685, |
| "end": 705, |
| "text": "Ataman et al., 2017)", |
| "ref_id": null |
| }, |
| { |
| "start": 854, |
| "end": 876, |
| "text": "(Popovi\u0107 et al., 2016)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1058, |
| "end": 1081, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1198, |
| "end": 1220, |
| "text": "(Artetxe et al., 2018;", |
| "ref_id": null |
| }, |
| { |
| "start": 1221, |
| "end": 1242, |
| "text": "Artetxe et al., 2019;", |
| "ref_id": null |
| }, |
| { |
| "start": 1243, |
| "end": 1263, |
| "text": "Lample et al., 2018)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1465, |
| "end": 1487, |
| "text": "(Johnson et al., 2017;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1488, |
| "end": 1507, |
| "text": "Lakew et al., 2018)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and Future Work", |
| "sec_num": "5." |
| }, |
| { |
| "text": "We would like to thank Beatriz Mag\u00e1n and Miguel Pajares for their assistance during the development of this research work. We would also like to thank the Federaci\u00f3n Agustiniana Espa\u00f1ola (FAE) and the Biblioteca de Autores Cristianos (BAC) for making available online the Spanish translations of the Saint Augustine's works.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": "6." |
| }, |
| { |
| "text": "Alegria, I., Artola, X., Sarasola, K., and Urkia, M. (1996) . Automatic morphological analysis of basque. Literary and Linguistic Computing, 11(4):193-203.", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 59, |
| "text": "Sarasola, K., and Urkia, M. (1996)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bibliographical References", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Alvarez, S., Oliver, A., and Badia, T. (2019). Does NMT make a difference when post-editing closely related languages? the case of Spanish-Catalan. In Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, pages 49-56, Dublin, Ireland, August. European Association for Machine Translation. Artetxe, M., Labaka, G., Agirre, E., and Cho, K. (2018) .", |
| "cite_spans": [ |
| { |
| "start": 342, |
| "end": 384, |
| "text": "Labaka, G., Agirre, E., and Cho, K. (2018)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bibliographical References", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Unsupervised neural machine translation. Proceedings of the ICLR2018. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 176-181. Association for Computational Linguistics. Costa-juss\u00e0, M. R. (2017). Why Catalan-Spanish neural machine translation? analysis, comparison and combination with standard rule and phrase-based technologies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bibliographical References", |
| "sec_num": "7." |
| }, |
| { |
| "text": "In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 55-62, Valencia, Spain, April. Association for Computational Linguistics. Crego, J., Kim, J., Klein, G., Rebollo, A., Yang, K., Senellart, J., Akhanov, E., Brunelle, P., Coquard, A., Deng, Y., et al. (2016) . Systran's pure neural machine translation systems. arXiv preprint arXiv:1610.05540. Denkowski, M. and Lavie, A. (2011). Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the sixth workshop on statistical machine translation, pages 85-91. Association for Computational Linguistics. Etchegoyhen, T., Mart\u00ednez Garcia, E., Azpeitia, A., Labaka, G., Alegria, I., Cortes Etxabe, I., Jauregi Carrera, A., Ellakuria Santos, I., Martin, M., and Calonge, E. (2018). Neural machine translation of basque.", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 314, |
| "text": "Klein, G., Rebollo, A., Yang, K., Senellart, J., Akhanov, E., Brunelle, P., Coquard, A., Deng, Y., et al. (2016)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bibliographical References", |
| "sec_num": "7." |
| }, |
| { |
| "text": "Saint Augustine texts are available in https://www.augustinus.it4 We use all the texts except the Tractates on the Gospel of John and Sermons from Sermon 100th onward.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Fast domain adaptation for neural machine translation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Al-Onaizan", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1612.06897" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Freitag, M. and Al-Onaizan, Y. (2016). Fast domain adap- tation for neural machine translation. arXiv preprint arXiv:1612.06897.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Saturnalia: A latin-catalan parallel corpus for statistical mt", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gonz\u00e1lez-Rubio", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Civera", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Juan", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gonz\u00e1lez-Rubio, J., Civera, J., Juan, A., and Casacuberta, F. (2010a). Saturnalia: A latin-catalan parallel corpus for statistical mt. In LREC.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Saturnalia: A latin-catalan parallel corpus for statistical mt", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Gonz\u00e1lez-Rubio", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Civera", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Juan", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gonz\u00e1lez-Rubio, J., Civera, J., Juan, A., and Casacuberta, F. (2010b). Saturnalia: A Latin-Catalan parallel corpus for statistical MT. In LREC.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Achieving human parity on automatic Chinese to English news translation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Hassan", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Aue", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Chowdhary", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Federmann", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Junczys-Dowmunt", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Menezes", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Seide", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hassan, H., Aue, A., Chen, C., Chowdhary, V., Clark, J., Federmann, C., Huang, X., Junczys-Dowmunt, M., Lewis, W., Li, M., Liu, S., Liu, T., Luo, R., Menezes, A., Qin, T., Seide, F., Tan, X., Tian, F., Wu, L., Wu, S., Xia, Y., Zhang, D., Zhang, Z., and Zhou, M. (2018). Achieving human parity on automatic Chinese to English news translation. CoRR, abs/1803.05567.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Krikun", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Thorat", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Vi\u00e9gas", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Wattenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hughes", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "5", |
| "issue": "", |
| "pages": "339--351", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Vi\u00e9gas, F., Wattenberg, M., Corrado, G., Hughes, M., and Dean, J. (2017). Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the ICLR2015", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Proceedings of the ICLR2015.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "OpenNMT: Open-source toolkit for neural machine translation", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Senellart", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of ACL 2017, System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "67--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klein, G., Kim, Y., Deng, Y., Senellart, J., and Rush, A. (2017). OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "OpenNMT: Neural machine translation toolkit", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Senellart", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas", |
| "volume": "1", |
| "issue": "", |
| "pages": "177--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Klein, G., Kim, Y., Deng, Y., Nguyen, V., Senellart, J., and Rush, A. (2018). OpenNMT: Neural machine translation toolkit. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), pages 177-184, Boston, MA, March. Association for Machine Translation in the Americas.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Moses: Open source toolkit for statistical machine translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Bertoldi", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Cowan", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Moran", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Constantin", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Herbst", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics on Interactive Poster and Demonstration Sessions (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "177--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics on Interactive Poster and Demonstration Sessions (ACL), pages 177-180.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "66--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kudo, T. and Richardson, J. (2018). SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 66-71.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Multilingual neural machine translation for zeroresource languages", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "M" |
| ], |
| "last": "Lakew", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Negri", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Turchi", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Italian Journal of Computational Linguistics", |
| "volume": "1", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lakew, S. M., Federico, M., Negri, M., and Turchi, M. (2018). Multilingual neural machine translation for zero-resource languages. Italian Journal of Computational Linguistics. Volume 1, Number 1.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Phrase-based & neural unsupervised machine translation", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ott", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Denoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lample, G., Ott, M., Conneau, A., Denoyer, L., and Ranzato, M. (2018). Phrase-based & neural unsupervised machine translation. Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics (ACL), pages 311-318.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The lemlat 3.0 package for morphological analysis of Latin", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Passarotti", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Budassi", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Litta", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Ruffolo", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "24--31", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Passarotti, M., Budassi, M., Litta, E., and Ruffolo, P. (2017). The lemlat 3.0 package for morphological analysis of Latin. In Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language, pages 24-31, Gothenburg, May. Link\u00f6ping University Electronic Press.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Language related issues for machine translation between closely related South Slavic languages", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Popovi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Ar\u010dan", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Klubi\u010dka", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)", |
| "volume": "", |
| "issue": "", |
| "pages": "43--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Popovi\u0107, M., Ar\u010dan, M., and Klubi\u010dka, F. (2016). Language related issues for machine translation between closely related South Slavic languages. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3), pages 43-52, Osaka, Japan, December. The COLING 2016 Organizing Committee.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The Bible as a parallel corpus: Annotating the 'book of 2000 tongues'", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "B" |
| ], |
| "last": "Olsen", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Diab", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Computers and the Humanities", |
| "volume": "33", |
| "issue": "1-2", |
| "pages": "129--153", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Resnik, P., Olsen, M. B., and Diab, M. (1999). The Bible as a parallel corpus: Annotating the 'book of 2000 tongues'. Computers and the Humanities, 33(1-2):129-153.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Training and adapting multilingual NMT for less-resourced and morphologically rich languages", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Rikters", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pinnis", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kri\u0161lauks", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rikters, M., Pinnis, M., and Kri\u0161lauks, R. (2018). Training and adapting multilingual NMT for less-resourced and morphologically rich languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Comparing rule-based and SMT-based spelling normalisation for English historical texts", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Pettersson", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Percillier", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language", |
| "volume": "", |
| "issue": "", |
| "pages": "40--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schneider, G., Pettersson, E., and Percillier, M. (2017). Comparing rule-based and SMT-based spelling normalisation for English historical texts. In Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language, pages 40-46, Gothenburg, May. Link\u00f6ping University Electronic Press.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Improving neural machine translation models with monolingual data", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the ACL2016", |
| "volume": "1", |
| "issue": "", |
| "pages": "86--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sennrich, R., Haddow, B., and Birch, A. (2016). Improving neural machine translation models with monolingual data. In Proceedings of the ACL2016 (Volume 1: Long Papers), pages 86-96.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A study of translation edit rate with targeted human annotation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Snover", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Micciulla", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Makhoul", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA)", |
| "volume": "", |
| "issue": "", |
| "pages": "223--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Snover, M., Dorr, B., Schwartz, R., Micciulla, L., and Makhoul, J. (2006). A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas (AMTA), pages 223-231.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Parallel data, tools and interfaces in OPUS", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tiedemann, J. (2012). Parallel data, tools and interfaces in OPUS. In Nicoletta Calzolari (Conference Chair), et al., editors, Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "The CLIN27 shared task: Translating historical text to contemporary language for improving automatic linguistic annotation", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Tjong Kim Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bollman", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Boschker", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Dietz", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Dipper", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Domingo", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Van Der Goot", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Van Koppen", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ljube\u0161i\u0107", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Computational Linguistics in the Netherlands Journal", |
| "volume": "7", |
| "issue": "", |
| "pages": "53--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tjong Kim Sang, E., Bollman, M., Boschker, R., Casacuberta, F., Dietz, F., Dipper, S., Domingo, M., van der Goot, R., van Koppen, J., Ljube\u0161i\u0107, N., et al. (2017). The CLIN27 shared task: Translating historical text to contemporary language for improving automatic linguistic annotation. Computational Linguistics in the Netherlands Journal, 7:53-64.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Parallel corpora for medium density languages", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Varga", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Hal\u00e1csy", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kornai", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Nagy", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "N\u00e9meth", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Tr\u00f3n", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Amsterdam Studies In The Theory And History Of Linguistic Science Series", |
| "volume": "4", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Varga, D., Hal\u00e1csy, P., Kornai, A., Nagy, V., N\u00e9meth, L., and Tr\u00f3n, V. (2007). Parallel corpora for medium density languages. Amsterdam Studies In The Theory And History Of Linguistic Science Series 4, 292:247.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS)", |
| "volume": "", |
| "issue": "", |
| "pages": "6000--6010", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), pages 6000-6010.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Transfer learning for low-resource neural machine translation", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Zoph", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "May", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1568--1575", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zoph, B., Yuret, D., May, J., and Knight, K. (2016). Transfer learning for low-resource neural machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1568-1575.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Artetxe, M., Labaka, G., and Agirre, E. (2019). An effective approach to unsupervised machine translation. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Ataman, D., Negri, M., Turchi, M., and Federico, M. (2017). Linguistically motivated vocabulary reduction for neural machine translation from Turkish to English. The Prague Bulletin of Mathematical Linguistics, 108(1):331-342. Ataman, D., Aziz, W., and Birch, A. (2020). A latent morphology model for open-vocabulary neural machine translation. Proceedings of the ICLR2020. Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR). Gerlof Bouma et al., editors. (2017). Proceedings of the NoDaLiDa 2017 Workshop on Processing Historical Language, Gothenburg, May. Link\u00f6ping University Electronic Press. Christodoulopoulos, C. and Steedman, M. (2015). A massively parallel corpus: the Bible in 100 languages. In Language Resources and Evaluation. Clark, J. H., Dyer, C., Lavie, A., and Smith, N. A. (2011)." |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Figures for the comparable corpus on Saint Augustine works, showing the number of aligned sentences (#sents) and the number of tokens in Latin (#tokens la) and in Spanish (#tokens es). Train, Development and Test represent the slices used for building the MT systems. Total shows the total amount of data." |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "content": "<table><tr><td>System</td><td colspan=\"3\">BLEU \u2191 METEOR\u2191 TER\u2193</td></tr><tr><td>Bible</td><td>0.9</td><td>6.9</td><td>106.1</td></tr><tr><td>Bible-ft</td><td>9.4</td><td>25.3</td><td>79.2</td></tr><tr><td>Bible-ft-vocabExt.</td><td>7.1</td><td>21.9</td><td>84.4</td></tr><tr><td>SAugustine</td><td>9.1</td><td>25.2</td><td>79.7</td></tr><tr><td>Bible+SAugustine</td><td>10.1</td><td>26.6</td><td>78.5</td></tr><tr><td colspan=\"4\">Table 4: Automatic evaluation of the different MT systems</td></tr><tr><td colspan=\"4\">on the in-domain manually validated Saint Augustine test</td></tr><tr><td>set.</td><td/><td/><td/></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "scores on the development set like we did for the Bible model." |
| } |
| } |
| } |
| } |