{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T11:59:41.229562Z"
},
"title": "Using Multiple Subwords to Improve English-Esperanto Automated Literary Translation Quality",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": "",
"affiliation": {
"laboratory": "Trinity Centre for Literary and Cultural Translation",
"institution": "Trinity College Dublin",
"location": {
"country": "Ireland"
}
},
"email": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buts",
"suffix": "",
"affiliation": {
"laboratory": "Trinity Centre for Literary and Cultural Translation",
"institution": "Trinity College Dublin",
"location": {
"country": "Ireland"
}
},
"email": "butsj@tcd.ie"
},
{
"first": "James",
"middle": [],
"last": "Hadley",
"suffix": "",
"affiliation": {
"laboratory": "Trinity Centre for Literary and Cultural Translation",
"institution": "Trinity College Dublin",
"location": {
"country": "Ireland"
}
},
"email": "hadleyj@tcd.ie"
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Dublin City University",
"location": {
"country": "Ireland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Building Machine Translation (MT) systems for low-resource languages remains challenging. For many language pairs, parallel data are not widely available, and in such cases MT models do not achieve results comparable to those seen with high-resource languages. When data are scarce, it is of paramount importance to make optimal use of the limited material available. To that end, in this paper we propose employing the same parallel sentences multiple times, only changing the way the words are split each time. For this purpose we use several Byte Pair Encoding models, with various merge operations used in their configuration. In our experiments, we use this technique to expand the available data and improve an MT system involving a low-resource language pair, namely English-Esperanto. As an additional contribution, we made available a set of English-Esperanto parallel data in the literary domain.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Building Machine Translation (MT) systems for low-resource languages remains challenging. For many language pairs, parallel data are not widely available, and in such cases MT models do not achieve results comparable to those seen with high-resource languages. When data are scarce, it is of paramount importance to make optimal use of the limited material available. To that end, in this paper we propose employing the same parallel sentences multiple times, only changing the way the words are split each time. For this purpose we use several Byte Pair Encoding models, with various merge operations used in their configuration. In our experiments, we use this technique to expand the available data and improve an MT system involving a low-resource language pair, namely English-Esperanto. As an additional contribution, we made available a set of English-Esperanto parallel data in the literary domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper, we use the constructed language Esperanto to illustrate potential improvements in the automatic translation of material from low-resource languages. Languages are considered low-resource when there is little textual material available in the form of electronically stored corpora. They pose significant challenges in the field of Machine Translation (MT), since it is difficult to build models that perform adequately using small amounts of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Multiple techniques have been developed to improve MT in conditions of data scarcity. A popular approach is to translate indirectly via a pivot language (Utiyama and Isahara, 2007; Firat et al., 2017; Liu et al., 2018; Poncelas et al., 2020a) . Moreover, indirect translation can be used for creating additional training data. A further useful technique for expanding the dataset is backtranslation (Sennrich et al., 2016a) . This procedure consists of automatically translating a monolingual text from the target language into the selected source language, and then using the resulting parallel set as training data so the model benefits from this additional information. Although the quality of these sentence pairs is not as high as that of human-translated sentences (the source side contains mistakes produced by the MT system), the pairs are still useful when used as training data, because they do often improve the models (Poncelas et al., 2019a) .",
"cite_spans": [
{
"start": 153,
"end": 180,
"text": "(Utiyama and Isahara, 2007;",
"ref_id": "BIBREF29"
},
{
"start": 181,
"end": 200,
"text": "Firat et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 201,
"end": 218,
"text": "Liu et al., 2018;",
"ref_id": "BIBREF13"
},
{
"start": 219,
"end": 242,
"text": "Poncelas et al., 2020a)",
"ref_id": "BIBREF18"
},
{
"start": 399,
"end": 423,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF23"
},
{
"start": 930,
"end": 954,
"text": "(Poncelas et al., 2019a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Nonetheless, for some languages, the available data are in such short supply that MT models used for generating back-translated sentences may produce a high proportion of noisy sentences. The use of noisy sentences for building MT models could ultimately have a negative impact on the quality of the MT system's outputs (Goutte et al., 2012) , and therefore they are often removed (Khadivi and Ney, 2005; Taghipour et al., 2010; Popovi\u0107 and Poncelas, 2020) .",
"cite_spans": [
{
"start": 320,
"end": 341,
"text": "(Goutte et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 381,
"end": 404,
"text": "(Khadivi and Ney, 2005;",
"ref_id": "BIBREF8"
},
{
"start": 405,
"end": 428,
"text": "Taghipour et al., 2010;",
"ref_id": "BIBREF26"
},
{
"start": 429,
"end": 456,
"text": "Popovi\u0107 and Poncelas, 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose employing another technique to augment datasets: using the same set of sentences multiple times, but in slightly altered form each time. Specifically, we modify the sentences by using different Byte Pair Encoding (BPE) (Sennrich et al., 2016b) merge operations. We perform a fine-grained analysis, exploring the use of different splitting options on the source side, on the target side, and on both sides.",
"cite_spans": [
{
"start": 230,
"end": 254,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This research is inspired by techniques for augmenting the training set artificially. One of these techniques is back-translation (Sennrich et al., 2016a) , which involves creating artificial source-side sen-tences by translating a monolingual set in the target language. Similar techniques include the use of several models to generate sentences (Poncelas et al., 2019b; Soto et al., 2020) , or the use of synthetic data on the target side (Chinea-Rios et al., 2017; Li et al., 2020) .",
"cite_spans": [
{
"start": 130,
"end": 154,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF23"
},
{
"start": 347,
"end": 371,
"text": "(Poncelas et al., 2019b;",
"ref_id": "BIBREF20"
},
{
"start": 372,
"end": 390,
"text": "Soto et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 441,
"end": 467,
"text": "(Chinea-Rios et al., 2017;",
"ref_id": "BIBREF2"
},
{
"start": 468,
"end": 484,
"text": "Li et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "A technique that involves multiple segmentation is subword regularization (Kudo, 2018) , in which candidate sentences with different splits are sampled, either probabilistically or using a language model for training.",
"cite_spans": [
{
"start": 74,
"end": 86,
"text": "(Kudo, 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "In the work of Poncelas et al. (2020b) , different splits are used to build an English-Thai MT model. As the Thai language does not use whitespace separation between words, different splits can be applied, to address the fact that all the words and sub-words are joined together in the final output.",
"cite_spans": [
{
"start": 15,
"end": 38,
"text": "Poncelas et al. (2020b)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "More recently, Provilkov et al. (2020) introduced BPE-dropout, an improvement on standard BPE consisting of randomly dropping merges when training the model, such that a single word can have several segmentations.",
"cite_spans": [
{
"start": 15,
"end": 38,
"text": "Provilkov et al. (2020)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous work",
"sec_num": "2"
},
{
"text": "This article is concerned with improving MT models for Esperanto, the most successful constructed international language (Blanke, 2009) . It was created in the late nineteenth century, and is said to be currently spoken by over 2 million people, spread across more than 100 countries (Eberhard et al., 2020) . During its first century of development, Esperanto was principally maintained by means of membership-based organisations. Currently, internet applications such as Duolingo are supporting the wider spread of the language among new enthusiasts. While many Esperanto speakers have sought to develop the language through translation, the body of work available -particularly in digital formats -remains relatively small, making Esperanto a clear example of a low-resource language.",
"cite_spans": [
{
"start": 121,
"end": 135,
"text": "(Blanke, 2009)",
"ref_id": "BIBREF0"
},
{
"start": 284,
"end": 307,
"text": "(Eberhard et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Esperanto language",
"sec_num": "3"
},
{
"text": "Esperanto loosely derives its lexicon from several Indo-European languages, and shares some typological characteristics with, among others, Russian, English, and French (Parkvall, 2010) . In contrast to most natural languages, Esperanto's most distinctive characteristic is its regularity. The grammar consists of a very limited set of operations, to which there are, in principle, no exceptions. Furthermore, the language is agglutinative, and its suffixes are independently meaningful and invariable. For instance, virino, the word for \"woman\", con-sists of the compound parts vir [adult human], in [female], and o [entity] (as the 'o' ending is used for all nouns). The word for \"mother\", patrino, largely refers to the same semantic categories, and is therefore structurally highly similar.",
"cite_spans": [
{
"start": 169,
"end": 185,
"text": "(Parkvall, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Esperanto language",
"sec_num": "3"
},
{
"text": "As a consequence of this internal consistency, Esperanto learners can quickly expand their vocabulary by learning to segment words into their various parts, which can then be used to construct new words by morphological analogy. Because of its affinity with many other languages, and because of the thoroughly logical composition of its vocabulary, Esperanto has historically been central to several experiments in MT, most notably regarding its potential function as a pivot language between European languages (Gobbo, 2015) . In this study, however, we focus on automatic translation into Esperanto for its own sake.",
"cite_spans": [
{
"start": 512,
"end": 525,
"text": "(Gobbo, 2015)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Esperanto language",
"sec_num": "3"
},
{
"text": "We propose building MT models using training data composed of a dataset split into multiple variants with a different configuration of BPE, as presented in Figure 1 . At the top of the figure, one can see that the same parallel set has been processed using BPE with 89,500, 50,000 and 10,000 operations (trained separately for each language). The MT model represented on the left has been built using the same dataset replicated three times, the only difference being that on the target side, different splits were implemented. Similarly, the MT model in the centre is built with different splits on the source side. The last model, represented on the right, combines different splits both on the source and the target side.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "In order to evaluate the models, we use a test set that is split with a single BPE strategy (i.e. using 89,500 merge operations, the default proposed in the work of Sennrich et al. (2016b)). Therefore, using different merge operations on the source side of the training data may not have as big an impact as when they are applied to the target side (not all the words will match those in the test set). However, the addition of other BPE configurations could in principle still be useful to improve modeling for the source language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "In Section 5 we describe the settings of the MT and the data used for training. In Section 6 we analyze the results achieved by the baseline system. This paper's experiments are divided into three sections. Each of these sections describes and also provides the evaluation of a model. The sections are the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "\u2022 Combination of dataset with different merge operations on the target side (Section 7.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "\u2022 Combination of dataset with different merge operations on the source side (Section 7.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "\u2022 Combination of dataset with different merge operations on both the source and target side (Section 7.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "In Section 8, we compare translation examples from the different models and analyze the different outcomes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "Finally, in Section 9 we conclude and propose how these experiments could be expanded in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Research Questions",
"sec_num": "4"
},
{
"text": "The NMT systems we build are Transformer (Vaswani et al., 2017) models, based on OpenNMT (Klein et al., 2017) . Models are trained for a maximum of 30K steps using the recommended parameters. 1 We have selected the model with the lowest perplexity on the development set.",
"cite_spans": [
{
"start": 41,
"end": 63,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF30"
},
{
"start": 89,
"end": 109,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 192,
"end": 193,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5"
},
{
"text": "For training the models we use the Tatoeba, Glob-alVoices and bible-uedin (Christodouloupoulos and Steedman, 2015) datasets from OPUS project. 2 Our dataset thus contains material from the Bible, from news sources, and from less domain-specific multilingual translation examples. The sentences are randomly shuffled, after which 302,768 sentences are used as a training set and the other 1,000 as our dev set. All the sentences are tokenized and truecased.",
"cite_spans": [
{
"start": 74,
"end": 114,
"text": "(Christodouloupoulos and Steedman, 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "BPE is applied using several merge operations. We use 89,500 operations as a starting point and explore other splits that produce smaller subword units (by using a lower number of merge operations). In our experiments we work with 50,000, 20,000 and 10,000 operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "We also concatenate the dev set using the same configuration of BPE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "1 https://opennmt.net/OpenNMT-py/FAQ. html 2 http://opus.nlpl.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "5.1"
},
{
"text": "In order to evaluate the quality of the models, two test sets are translated. The test sets are the same for all models. In addition to tokenization and truecase, we also use BPE with 89,500 merge operations. We do not use (or combine) other BPE configurations. The translations are evaluated using the BLEU (Papineni et al., 2002) metric.",
"cite_spans": [
{
"start": 308,
"end": 331,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set",
"sec_num": "5.2"
},
{
"text": "The first test set is taken from the OPUS (Books) dataset (Tiedemann, 2012 ) (1562 sentences). Specifically, the test set consists of material from two texts available in English and in Esperanto translation, namely Carroll's Alice's Adventures in Wonderland (Carroll and Kearney, 1865 (1910) and Poe's The Fall of the House of Usher (Poe and Grobe, 1839 Grobe, (2000 . 3 The second test set (which contains 1256 sentences) consists of an English and an Esperanto version of Oscar Wilde's Salom\u00e9 (Wilde et al., 1891 (Wilde et al., (1894 (Wilde et al., , 1910 , 4 a play originally written in French. As an additional contribution to this paper, we have made a set of aligned sentences from the texts available via OPUS. 5 Both test sets are in the literary domain, which is especially challenging (Toral and Way, 2018) for MT models. Not only do the test sets contain numerous personal names and uncommon vocabulary, they are also highly creative and, at times, experimental. For instance, in Alice's Adventures in Wonderland, grammatical and lexical principles are often challenged on purpose to portray a character's individual traits (i.e. the Mock Turtle sings of Beau-ootiful soo-oop!. In Salome, characters regularly produce complex similes and metaphors to describe one another. The text is a variation on a religious theme, and heavily draws on Biblical imagery. While such material is highly challenging, the inclusion of Biblical matter in the training data may have a positive impact on the overall results.",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "(Tiedemann, 2012",
"ref_id": "BIBREF27"
},
{
"start": 259,
"end": 271,
"text": "(Carroll and",
"ref_id": "BIBREF1"
},
{
"start": 272,
"end": 292,
"text": "Kearney, 1865 (1910)",
"ref_id": null
},
{
"start": 343,
"end": 354,
"text": "Grobe, 1839",
"ref_id": null
},
{
"start": 355,
"end": 367,
"text": "Grobe, (2000",
"ref_id": "BIBREF16"
},
{
"start": 370,
"end": 371,
"text": "3",
"ref_id": null
},
{
"start": 496,
"end": 515,
"text": "(Wilde et al., 1891",
"ref_id": null
},
{
"start": 516,
"end": 536,
"text": "(Wilde et al., (1894",
"ref_id": null
},
{
"start": 537,
"end": 558,
"text": "(Wilde et al., , 1910",
"ref_id": "BIBREF31"
},
{
"start": 720,
"end": 721,
"text": "5",
"ref_id": null
},
{
"start": 797,
"end": 818,
"text": "(Toral and Way, 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test Set",
"sec_num": "5.2"
},
{
"text": "In Table 1 we present the models trained with the training data using different merge operations on the target side.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Baseline MT",
"sec_num": "6"
},
{
"text": "The rows of the table correspond to the evaluation of the model, using the same data. The only difference is the number of BPE merge operations that have been used on the target side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline MT",
"sec_num": "6"
},
{
"text": "As the test set is split using 89,500 merge operations, it would not be beneficial to apply BPE with merge operations other than 89,500 on the source side. In fact, when using BPE with 50,000, 20,000 and 10,000 operations on the source side, the BLEU score for the translation of the Books data is only 5.75, 5.70 and 5.76, respectively, and 14.30, 13.24, and 14.53 for Salome Table 1 shows that the four models achieve similar results. As mentioned before, the Books set contains complex grammatical and lexical constructions, which makes it more difficult to translate. This is also evidenced in the table as BLEU scores of the Books set are lower than those of the Salome set. Moreover, there is no correlation between the number of merge operations and the performance. For example, we observe a small drop in the performance when decreasing the number of merge operations from 89,500 to 50,000, but the performance improves slightly when the number of operations is further decreased to 20,000.",
"cite_spans": [],
"ref_spans": [
{
"start": 377,
"end": 384,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Baseline MT",
"sec_num": "6"
},
{
"text": "In the first set of experiments we explore the models when the sentences in the parallel set are replicated by changing only the number of BPE merge operations used on the target side. We perform two sets of experiments: one where we keep the duplicates (sentences that remain the same after being split with different BPE configurations), and another where duplicates are removed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Merge Operations on the Target Side",
"sec_num": "7.1"
},
{
"text": "In Table 2 we present the results of the models when trained with a different concatenation of datasets. The first column specifies the datasets used in the training. For example, the row TRG89500 & TRG50000 indicates that the training set used for building the MT model consists of sentences split using 89,500 and 50,000 merge operations, respectively",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Different Merge Operations on the Target Side",
"sec_num": "7.1"
},
{
"text": "We mark in bold those scores that exceed 6.89 BLEU points, i.e. the maximum score achieved by the baseline models presented in Table 1 . The scores receive an asterisk when the improvements are statistically significant at p=0.01. Statistical significance has been computed using Bootstrap Resampling (Koehn, 2004) .",
"cite_spans": [
{
"start": 301,
"end": 314,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Different Merge Operations on the Target Side",
"sec_num": "7.1"
},
{
"text": "In the when duplicate sentence pairs are removed. By doing this the dataset is reduced by between 30% and 45%. In the second subtable, all the BLEU scores indicate improvements over the baseline, whereas in the first subtable some models, such as TRG89500 & TRG50000, have a lower score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Merge Operations on the Target Side",
"sec_num": "7.1"
},
{
"text": "The best performance is seen when for the multiple settings used, the number of merge operations differs greatly. For example, the highest scores are achieved when mixing 89,500 and 10,000 operations (i.e. the TRG89500 & TRG10000 rows in both subtables), the uppermost and the lowermost number of operations used in the experiments. The same principle holds true for those models built by combining three or four datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Merge Operations on the Target Side",
"sec_num": "7.1"
},
{
"text": "The next set of experiments explores the use of several merge operations on the source side. In this case, when combining the datasets, we ensure that the SRC89500 set is used, as the test set has been processed using 89,500 operations. We present the results in Table 3 . Those scores that are higher than the baselines of Table 1 are marked in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 3",
"ref_id": "TABREF5"
},
{
"start": 324,
"end": 331,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Different Merge Operations on the Source Side",
"sec_num": "7.2"
},
{
"text": "Our observations are similar to those obtained in Section 7.1. The best results are observed when the duplicate sentences are removed (between 25% and 40% of the sentences are removed) and the merge operation settings are the furthest apart (89,500 and 10,000).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Merge Operations on the Source Side",
"sec_num": "7.2"
},
{
"text": "Most of the models using several BPE configurations on the source side perform better than the baseline models. However, when compared to the experiments in the previous section (Table 2) , the performance is lower.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 187,
"text": "(Table 2)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Different Merge Operations on the Source Side",
"sec_num": "7.2"
},
{
"text": "The last set of experiments consists of building a model with data created using different splits both on the source and on the target side.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Different Merge Operations on both Source and Target Side",
"sec_num": "7.3"
},
{
"text": "We perform experiments based on the outcomes observed in the previous section. Thus, two models are built. One combines the datasets split using BPE with 89,500 and 10,000 merge operations (both source and target side) and the other model, All, combines the dataset with all the splits (i.e. 89,500, 50,000, 20,000 and 10,000). 6 The duplicates are removed, as this approach showed the best results. We present the translation quality of the test set using these models in Table 4 . We see that the use of different splits on both the source and target sides tends to achieve the best results when compared both to baselines and to the experiments in the previous sections 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 473,
"end": 480,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Different Merge Operations on both Source and Target Side",
"sec_num": "7.3"
},
{
"text": "In Table 5 , we show some translation examples of the models that, as discussed in the previous sections, achieved the best performance. We mark in bold some important differences across the translations.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Comparison of Outputs",
"sec_num": "8"
},
{
"text": "The first example, drawn from Alice in Wonderland, contains a joke. Alice, who is collecting her thoughts, aims to voice her opinion, and starts out by saying I don't think.... Before she can finish her sentence, however, the Mad Hatter interrupts her by stating that in that case, she should not speak. The human Esperanto translation makes this joke very explicit by repeating the emphasis on 'not thinking', whereas in English the transition is more subtle. Two of the systems, while differing in exact word order, succeed in reproducing the joke (TRG89500 and TRG89500 & TRG10000). In the other two models, either the crucial element do [so], which realises the inference, is omitted, or the meaning is mistakenly changed to a positive imperative: vi devus diri [you should say]. It can further be observed in the sentences that none of the systems translates the Hatter's name meaningfully. Either the name remains the same, or it is slightly altered from the original, in a seemingly random manner. Interestingly, Alice's name is adapted to Alico, which conforms to the rule that all Esperanto names end in -o (or, in some cases -a), but the adaptation does not equal the human choice for Alicio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Outputs",
"sec_num": "8"
},
{
"text": "The second example, also taken from Alice's Adventures in Wonderland, is concerned with a particular fixed expression in the English language: venture to say. The baseline system does not translate this mark of politeness, while the other models do provide varying translations (i.e. decidis, sukcesis and entrepenis, which correspond to the past tenses of the verbs to decide, to succeed and to undertake). While none of them is completely correct (when compared to the human translation), all of them are fairly transparent in context, and foreground different aspects of meaning contained in the English venture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Outputs",
"sec_num": "8"
},
{
"text": "Table 5 examples. (1) source: said Alice, very much confused, \"I don't think-\" \"Then you shouldn't talk,\" said the Hatter. reference: Alicio, tre konfuzite, respondis...: \"mi ne pensas-\" \"se vi ne pensas, vi ne rajtas paroli,\" diris la \u0108apelisto. TRG89500: Alico, tre konfuzita; mi ne pensas. \"do vi ne parolu,\" diris la Hater. SRC89500 & SRC10000: Alico... \"vi ne parolu,\" diris la Hatar. TRG89500 & TRG10000: \"vi do ne parolu,\" diris la Hatter. All: \"vi devus diri,\" diris la Hater. (2) source: but she did not venture to say it out loud. reference: sed tion \u015di ne kura\u011dis diri la\u016dte. TRG89500: sed \u015di ne diris tion la\u016dte. (3) TRG89500: kiel granato, kiu tran\u0109i\u011dis en du per tran\u0109ilo de eburo. TRG89500 & TRG10000: \u011di similas al granato, tran\u0109ita en du kun tran\u0109ilo de eburo. All: kiel granato eltran\u0109ita en du kun tran\u0109ilo ebura.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of Outputs",
"sec_num": null
},
{
"text": "With reference to the Salome test set, we find in the entire translated text numerous small and relatively inconsequential vocabulary differences across systems (e.g. veston or mantelon for referring to a piece of clothing), as well as varying preferences for orthographically similar verb tenses (e.g lacigis or lacigas, past and present tense of the verb to tire or wear out). At times, the systems differ in their translation of multi-word units such as sacred person, which is translated either as the literal sankta homo or as the more interpretative sanktulo [saint] . Overall, the systems perform well when translating the play's dense symbolism, as illustrated in Table 5 .",
"cite_spans": [
{
"start": 565,
"end": 572,
"text": "[saint]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 672,
"end": 679,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "TRG89500",
"sec_num": null
},
{
"text": "The examples in the table are similes, which start with the explicit comparative phrase it is like. In the first example, the baseline system does not manage to reproduce the reference to serpentoj [snakes], although the mention of turmentoj [afflictions] does offer an interesting metaphorical perspective. The system SRC89500 & SRC10000 does not produce a correct translation, but those systems trained with different splits on the target side (i.e. the SRC10000 & SRC89500 and All systems) provide a remarkably good translation of the source. Similarly, in the last example included in the table, the baseline system fails to reproduce the meaning of the original (the knife falls apart instead of cutting the fruit), whereas all systems with multiple segmentation are successful in conveying a variant of the poetic image presented in the source text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TRG89500",
"sec_num": null
},
{
"text": "In short, the examples in Table 5 indicate that a combination of different merge operations may improve results for translation into Esperanto, a language for which limited resources are available. In a number of cases, the systems succeed in translating highly uncommon constructions in the context of humorous and poetic literary discourse.",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "TRG89500",
"sec_num": null
},
{
"text": "In this work, we have aimed to improve an English-Esperanto MT system by using multiple instances of the same sentence pair, split with different configurations of BPE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "In our experiments, the best performance tends to be achieved when splitting strategies are applied both on the source and target side, duplicate parallel sentences are removed, and the number of merge operations used are very different from each other. In our experiments, the best results are achieved when all the split-combinations are used on both sides.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "Although the goal of these experiments is to find a technique to improve the MT models when the available data are very limited, this technique could also be applied in scenarios where data are abundant. It should be noted that Esperanto is perhaps a particularly suitable candidate for word-split methods, as the language's vocabulary consists of fixed chunks that are combined to form transparent compounds. However, the techniques applied here are in principle language-independent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "Finally, although we demonstrated that combining sentences with different merge operations improves the model, in this paper we could not determine the best configuration to use. Similarly, the test set that we used was processed using 89,500 merge operations. If the test set had been processed with a different BPE configuration the performance could have been different, especially when using models with different split configurations on the source side. Extensions of this work could involve finding an optimal configuration for achieving the best results, or testing the performance when combined with other word-splitting techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "9"
},
{
"text": "https://farkastranslations.com/ bilingual_books.php 4 https://en.wikisource.org/wiki/Salom% C3%A9 and http://www.gutenberg.org/ebooks/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://opus.nlpl.eu/Salome-v1.php",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that we use all possible combinations. For example, the training set of the All model is built combining 4 * 4 = 16 datasets.7 We observed that the output tends to be more similar to the splits following the TRG89500 configuration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106).The QuantiQual Project, generously funded by the Irish Research Council's COALESCE scheme (COALESCE/2019/117).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Causes of the relative success of esperanto. Language Problems and Language Planning",
"authors": [
{
"first": "Detlev",
"middle": [],
"last": "Blanke",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "33",
"issue": "",
"pages": "251--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Detlev Blanke. 2009. Causes of the relative success of esperanto. Language Problems and Language Plan- ning, 33(3):251-266.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Alice's adventures in wonderland (La aventuroj de Alicio en Mirlando)",
"authors": [
{
"first": "Lewis",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "E",
"middle": [
"L"
],
"last": "Kearney",
"suffix": ""
}
],
"year": 1910,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis Carroll and E.L. Kearney, trans. 1865 (1910). Al- ice's adventures in wonderland (La aventuroj de Ali- cio en Mirlando). Project Gutenberg.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Adapting neural machine translation with parallel synthetic data",
"authors": [
{
"first": "Mara",
"middle": [],
"last": "Chinea-Rios",
"suffix": ""
},
{
"first": "Alvaro",
"middle": [],
"last": "Peris",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Casacuberta",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "138--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mara Chinea-Rios, Alvaro Peris, and Francisco Casacuberta. 2017. Adapting neural machine trans- lation with parallel synthetic data. In Proceedings of the Second Conference on Machine Translation, pages 138-147, Copenhagen, Denmark.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A massively parallel corpus: the bible in 100 languages. Language resources and evaluation",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Christodouloupoulos",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "49",
"issue": "",
"pages": "375--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Christodouloupoulos and Mark Steedman. 2015. A massively parallel corpus: the bible in 100 languages. Language resources and evaluation, 49(2):375-395.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ethnologue: Languages of the World, twenty-third edition",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eberhard",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Gary",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"D"
],
"last": "Simons",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fennig",
"suffix": ""
}
],
"year": 2020,
"venue": "SIL International",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Eberhard, Gary F Simons, and Charles D Fen- nig. 2020. Ethnologue: Languages of the World, twenty-third edition. SIL International, Dallas, TX, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multi-way, multilingual neural machine translation",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fatos T Yarman",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Vural",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Speech & Language",
"volume": "45",
"issue": "",
"pages": "236--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, Baskaran Sankaran, Fatos T Yarman Vural, and Yoshua Bengio. 2017. Multi-way, multilingual neural machine translation. Computer Speech & Language, 45:236-252.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Machine translation as a complex system, and the phenomenon of esperanto. Interdisciplinary Description of Complex Systems",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Gobbo",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "13",
"issue": "",
"pages": "264--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federico Gobbo. 2015. Machine translation as a complex system, and the phenomenon of esperanto. Interdisciplinary Description of Complex Systems, 13(2):264-274.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The impact of sentence alignment errors on phrasebased machine translation performance",
"authors": [
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cyril Goutte, Marine Carpuat, and George Foster. 2012. The impact of sentence alignment errors on phrase- based machine translation performance. In Proceed- ings of Association for Machine Translation in the Americas, AMTA, San Diego, USA.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic filtering of bilingual corpora for statistical machine translation",
"authors": [
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2005,
"venue": "International Conference on Application of Natural Language to Information Systems",
"volume": "",
"issue": "",
"pages": "263--274",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shahram Khadivi and Hermann Ney. 2005. Automatic filtering of bilingual corpora for statistical machine translation. In International Conference on Appli- cation of Natural Language to Information Systems, pages 263-274.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics-System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics-System Demonstrations, pages 67-72, Vancouver, Canada.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388-395, Barcelona, Spain.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Subword regularization: Improving neural network translation models with multiple subword candidates",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "66--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo. 2018. Subword regularization: Improv- ing neural network translation models with multiple subword candidates. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75, Melbourne, Australia.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Revisiting Back-Translation for Low-Resource Machine Translation Between Chinese and Vietnamese",
"authors": [
{
"first": "Hongzheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiu",
"middle": [],
"last": "Sha",
"suffix": ""
},
{
"first": "Can",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2020,
"venue": "IEEE Access",
"volume": "8",
"issue": "",
"pages": "119931--119939",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongzheng Li, Jiu Sha, and Can Shi. 2020. Revisiting Back-Translation for Low-Resource Machine Trans- lation Between Chinese and Vietnamese. IEEE Ac- cess, 8:119931-119939.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Pivot machine translation using chinese as pivot language",
"authors": [
{
"first": "Chao-Hong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Catarina",
"middle": [
"Cruz"
],
"last": "Silva",
"suffix": ""
},
{
"first": "Longyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2018,
"venue": "China Workshop on Machine Translation",
"volume": "",
"issue": "",
"pages": "74--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao-Hong Liu, Catarina Cruz Silva, Longyue Wang, and Andy Way. 2018. Pivot machine translation us- ing chinese as pivot language. In China Workshop on Machine Translation, pages 74-85, Wuyishan, China.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "How european is esperanto?: A typological study. Language Problems and Language Planning",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Parkvall",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "34",
"issue": "",
"pages": "63--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikael Parkvall. 2010. How european is esperanto?: A typological study. Language Problems and Lan- guage Planning, 34(1):63-79.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The fall of the house of usher (La Falo De U\u015dero-Domo)",
"authors": [
{
"first": "Allan",
"middle": [],
"last": "Edgar",
"suffix": ""
},
{
"first": "Edwin",
"middle": [],
"last": "Poe",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grobe",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar Allan Poe and Edwin Grobe, trans. 1839 (2000). The fall of the house of usher (La Falo De U\u015dero- Domo). Project Gutenberg.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Adaptation of machine translation models with back-translated data using transductive data selection methods",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Maillette De Buy",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Wenniger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2019,
"venue": "20th International Conference on Computational Linguistics and Intelligent Text Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Poncelas, Gideon Maillette de Buy Wenniger, and Andy Way. 2019a. Adaptation of machine trans- lation models with back-translated data using trans- ductive data selection methods. In 20th Interna- tional Conference on Computational Linguistics and Intelligent Text Processing, La Rochelle, France.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Impact of Indirect Machine Translation on Sentiment Classification",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Pintu",
"middle": [],
"last": "Lohar",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hadley",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "78--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Poncelas, Pintu Lohar, Andy Way, and James Hadley. 2020a. The Impact of Indirect Machine Translation on Sentiment Classification. In Proceed- ings of Association for Machine Translation in the Americas, AMTA, pages 78-88, Orlando, Florida.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Multiple Segmentations of Thai Sentences for Neural Machine Translation",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Wichaya",
"middle": [],
"last": "Pidchamook",
"suffix": ""
},
{
"first": "Chao-Hong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hadley",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 1st Joint Spoken Language Technologies for Underresourced languages and Collaboration and Computing for Under-Resourced Languages Workshop, SLTU-CCURL",
"volume": "",
"issue": "",
"pages": "240--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Poncelas, Wichaya Pidchamook, Chao-Hong Liu, James Hadley, and Andy Way. 2020b. Mul- tiple Segmentations of Thai Sentences for Neu- ral Machine Translation. In Proceedings of The 1st Joint Spoken Language Technologies for Under- resourced languages and Collaboration and Com- puting for Under-Resourced Languages Workshop, SLTU-CCURL, pages 240-244, Marseille, France.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Combining SMT and NMT back-translated data for efficient NMT",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Maja",
"middle": [],
"last": "Popovic",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "922--931",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Poncelas, Maja Popovic, Dimitar Shterionov, Gideon Maillette de Buy Wenniger, and Andy Way. 2019b. Combining SMT and NMT back-translated data for efficient NMT. In Proceedings of Re- cent Advances in Natural Language Processing (RANLP), pages 922-931, Varna, Bulgaria.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Extracting correctly aligned segments from unclean parallel data using character n-gram matching",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
}
],
"year": 2020,
"venue": "Konferenca Jezikovne tehnologije in digitalna humanistika",
"volume": "",
"issue": "",
"pages": "74--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107 and Alberto Poncelas. 2020. Extract- ing correctly aligned segments from unclean paral- lel data using character n-gram matching. In Konfer- enca Jezikovne tehnologije in digitalna humanistika, pages 74-80, Ljubljana, Slovenia.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "BPE-Dropout: Simple and Effective Subword Regularization",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Provilkov",
"suffix": ""
},
{
"first": "Dmitrii",
"middle": [],
"last": "Emelianenko",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1882--1892",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-Dropout: Simple and Effective Subword Regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 1882-1892, Seattle, USA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improving Neural Machine Translation Models with Monolingual Data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)",
"volume": "",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 86- 96, Berlin, Germany.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation",
"authors": [
{
"first": "Xabier",
"middle": [],
"last": "Soto",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3898--3908",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xabier Soto, Dimitar Shterionov, Alberto Poncelas, and Andy Way. 2020. Selecting Backtranslated Data from Multiple Sources for Improved Neural Ma- chine Translation. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 3898-3908, Seattle, USA.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A discriminative approach to filter out noisy sentence pairs from bilingual corpora",
"authors": [
{
"first": "Kaveh",
"middle": [],
"last": "Taghipour",
"suffix": ""
},
{
"first": "Nasim",
"middle": [],
"last": "Afhami",
"suffix": ""
},
{
"first": "Shahram",
"middle": [],
"last": "Khadivi",
"suffix": ""
},
{
"first": "Saeed",
"middle": [],
"last": "Shiry",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of 5th International Symposium on Telecommunications (IST 2010)",
"volume": "",
"issue": "",
"pages": "537--541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaveh Taghipour, Nasim Afhami, Shahram Khadivi, and Saeed Shiry. 2010. A discriminative approach to filter out noisy sentence pairs from bilingual cor- pora. In Proceedings of 5th International Sympo- sium on Telecommunications (IST 2010), pages 537- 541, Tehran, Iran.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation, (LREC)",
"volume": "",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eighth Interna- tional Conference on Language Resources and Eval- uation, (LREC), pages 2214-2218, Istanbul, Turkey.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "What level of quality can neural machine translation attain on literary text?",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2018,
"venue": "Translation Quality Assessment",
"volume": "",
"issue": "",
"pages": "263--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Toral and Andy Way. 2018. What level of quality can neural machine translation attain on liter- ary text? In Translation Quality Assessment, pages 263-287. Springer.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A comparison of pivot methods for phrase-based statistical machine translation",
"authors": [
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "484--491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masao Utiyama and Hitoshi Isahara. 2007. A compari- son of pivot methods for phrase-based statistical ma- chine translation. In Human Language Technologies 2007: The Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 484-491, Rochester, USA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, Long Beach, USA.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Salom\u00e9 (Salome)",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "Wilde",
"suffix": ""
},
{
"first": "Alfred",
"middle": [],
"last": "Douglas",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "Bulthuis",
"suffix": ""
}
],
"year": 1910,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar Wilde, Alfred Douglas, trans, and Hendrik Bulthuis, trans. 1891 (1894, 1910). Salom\u00e9 (Sa- lome). Wikisource, Project Gutenberg.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Diagram with the experiments"
},
"TABREF1": {
"num": null,
"type_str": "table",
"text": "BLEU scores of the Books and Salome test sets when translated using the Baseline MT.",
"content": "<table/>",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"text": "Model performance using different merge operations on the target side.",
"content": "<table/>",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"text": "Model performance using different merge operations on the source side.",
"content": "<table><tr><td>Traindata</td><td>Books Salome</td></tr><tr><td>SRC89500 &amp; SRC10000 &amp;</td><td>7.99* 17.78</td></tr><tr><td>TRG89500 &amp; TRG50000</td><td/></tr><tr><td>All</td><td>8.11* 19.70*</td></tr></table>",
"html": null
},
"TABREF6": {
"num": null,
"type_str": "table",
"text": "Model performance using different merge operations both on the source and target side.",
"content": "<table/>",
"html": null
},
"TABREF8": {
"num": null,
"type_str": "table",
"text": "Translation examples from the test set.",
"content": "<table/>",
"html": null
}
}
}
}