{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:43:32.865570Z"
},
"title": "NRC Systems for Low Resource German-Upper Sorbian Machine Translation 2020: Transfer Learning with Lexical Modifications",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Larkin",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Darlene",
"middle": [],
"last": "Stewart",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": "",
"affiliation": {},
"email": "patrick.littell@nrc-cnrc.gc.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe the National Research Council of Canada (NRC) neural machine translation systems for the German-Upper Sorbian supervised track of the 2020 shared task on Unsupervised MT and Very Low Resource Supervised MT. Our models are ensembles of Transformer models, built using combinations of BPE-dropout, lexical modifications, and backtranslation.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe the National Research Council of Canada (NRC) neural machine translation systems for the German-Upper Sorbian supervised track of the 2020 shared task on Unsupervised MT and Very Low Resource Supervised MT. Our models are ensembles of Transformer models, built using combinations of BPE-dropout, lexical modifications, and backtranslation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We describe the National Research Council of Canada (NRC) neural machine translation systems for the shared task on Unsupervised MT and Very Low Resource Supervised MT. We participated in the supervised track of the low resource task, building Upper Sorbian-German neural machine translation (NMT) systems in both translation directions. Upper Sorbian is a minority language spoken in Germany. We built baseline systems (standard Transformer (Vaswani et al., 2017) with a byte-pair encoding vocabulary (BPE; Sennrich et al., 2016b)) trained on all available parallel data (60,000 lines), which resulted in unusually high BLEU scores for a language pair with such limited data.",
"cite_spans": [
{
"start": 442,
"end": 464,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to improve upon this baseline, we used transfer learning with modifications to the training lexicon. We did this in two ways: by experimenting with the application of BPE-dropout (Provilkov et al., 2020) to the transfer learning setting (Section 2.3), and by modifying Czech data used for training parent systems with word and character replacements in order to make it more \"Upper Sorbian-like\" (Section 2.4).",
"cite_spans": [
{
"start": 188,
"end": 212,
"text": "(Provilkov et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our final systems were ensembles of systems built using transfer learning and these two approaches to lexicon modification, along with iterative backtranslation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In both translation directions, our final systems consist of ensembles of multiple systems, built using transfer learning (Section 2.2), BPE-Dropout (Section 2.3), alternative preprocessing of Czech data (Section 2.4), and backtranslation (Section 2.5). We describe these approaches and related work in the following sections, providing implementation details for reproducibility in Sections 3, 4 and 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches 2.1 General System Notes",
"sec_num": "2"
},
{
"text": "Zoph et al. (2016) proposed a transfer learning approach for neural machine translation, using language pairs with larger amounts of data to pre-train a parent system, followed by finetuning a child system on the language pair of interest. Nguyen and Chiang (2017) expand on that, showing improved performance using BPE and shared vocabularies between the parent and child. We follow this approach: we build disjoint source and target BPE models and vocabularies, with one vocabulary for German (DE) and one for the combination of Czech (CS) and Upper Sorbian (HSB); see Section 4.",
"cite_spans": [
{
"start": 492,
"end": 496,
"text": "(DE)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transfer Learning",
"sec_num": "2.2"
},
{
"text": "We chose to use Czech-German data as the parent language pair due to the task suggestions, relative abundance of data, and the close relationship between Czech and Upper Sorbian (cf. Lin et al., 2019; Kocmi and Bojar, 2018). While Czech and Upper Sorbian cognates are often not identical at the character level (Table 1), there is a high level of character-level overlap; trying to take advantage of that overlap without assuming complete character-level identity is a motivation for the explorations in subsequent sections (Section 2.3, Section 2.4). Another relatively high-resource language related to Upper Sorbian is Polish, but while the Czech and Upper Sorbian orthographies are fairly similar, mostly using the same characters for the same sounds (with a few notable exceptions), Polish orthography is more distinct. This, combined with the lack of a direct Polish-German parallel dataset in the constrained condition, led us to choose Czech as our transfer language for these experiments.",
"cite_spans": [
{
"start": 183,
"end": 200,
"text": "Lin et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 201,
"end": 223,
"text": "Kocmi and Bojar, 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 312,
"end": 321,
"text": "(Table 1)",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Transfer Learning",
"sec_num": "2.2"
},
{
"text": "[Table 1: Czech / Upper Sorbian cognate and loanword pairs: analyzovat / analyzowa\u0107, donesl / donjes\u0142, extern\u00edch / eksternych, hospod\u00e1\u0159sk\u00e1 / hospodarsce, kreativn\u00ed / kreatiwne, okres / wokrjes, potom / potym, projekt / projekt, s\u00e9mantick\u00e1 / semantisku, velk\u00fdm / wulkim.] Other work on transfer learning for low-resource machine translation includes multilingual seed models (Neubig and Hu, 2018), dynamically adding to the vocabulary when adding languages (Lakew et al., 2018), and using a hierarchical architecture to use multiple language pairs (Luo et al., 2019).",
"cite_spans": [
{
"start": 297,
"end": 318,
"text": "(Neubig and Hu, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 380,
"end": 400,
"text": "(Lakew et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 472,
"end": 490,
"text": "(Luo et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Czech",
"sec_num": null
},
{
"text": "We apply the recently-proposed approach of performing BPE-dropout (Provilkov et al., 2020), which takes an existing BPE model and randomly drops some merges at each merge step when applying the model to text. The goal of this, besides leading to more robust subword representations in general, is to produce subword representations that are more likely to overlap between the pretraining (Czech-German) and finetuning (Upper Sorbian-German) stages. We hypothesized that, in the same way that BPE-dropout leads to robustness against accidental spelling errors and variant spellings (Provilkov et al., 2020), it could likewise lead to robustness to the kind of spelling variations we see between two related languages.",
"cite_spans": [
{
"start": 66,
"end": 90,
"text": "(Provilkov et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 581,
"end": 605,
"text": "(Provilkov et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BPE-Dropout",
"sec_num": "2.3"
},
{
"text": "For example, consider the putative Czech-Upper Sorbian cognates and shared loanwords presented in Table 1. Sometimes a fixed BPE segmentation happens to separate shared characters into shared subwords (e.g. CS analy@@ z@@ ovat vs. HSB analy@@ z@@ owa\u0107), such that the presence of the former during pre-training can initialize at least some of the subwords that the model will later see in Upper Sorbian. However, other times the character-level differences lead to segmentations where no subwords are shared (e.g. CS hospod\u00e1\u0159@@ sk\u00e1 vs. HSB hospodar@@ sce, or CS potom vs. HSB po@@ tym). Considering a wider variety of segmentations would, we hypothesized, mean that Upper Sorbian subwords would have more chance of being initialized during Czech pre-training (see Appendix C).",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "BPE-Dropout",
"sec_num": "2.3"
},
{
"text": "Rather than modifying the NMT system itself to reapply BPE-dropout during training, we treated BPE-dropout as a preprocessing step. Additionally, we experimented with BPE-dropout in the context of transfer learning, examining the effects of using source-side, both-sides, or no dropout in both parent and child systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BPE-Dropout",
"sec_num": "2.3"
},
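The BPE-dropout preprocessing described above can be sketched as follows. This is a minimal illustration of the Provilkov et al. (2020) idea, not the authors' pipeline (the paper applies it as a preprocessing pass over the training data); the toy merge table, function name, and dropout rate here are illustrative assumptions.

```python
import random

def bpe_dropout_segment(word, merges, dropout=0.1, rng=random):
    """Segment `word` with BPE, dropping each candidate merge
    with probability `dropout` (in the style of BPE-dropout).

    merges: ordered list of symbol pairs, highest priority first.
    """
    ranks = {pair: i for i, pair in enumerate(merges)}
    symbols = list(word)
    while True:
        # Adjacent pairs that appear in the merge table; each is
        # independently dropped with probability `dropout`.
        candidates = [
            (ranks[pair], i)
            for i, pair in enumerate(zip(symbols, symbols[1:]))
            if pair in ranks and rng.random() >= dropout
        ]
        if not candidates:
            break
        _, i = min(candidates)  # apply the highest-priority surviving merge
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
    return symbols

# Toy merge table for the Upper Sorbian word "potym" (illustrative only):
merges = [("p", "o"), ("po", "t"), ("y", "m"), ("pot", "ym")]
print(bpe_dropout_segment("potym", merges, dropout=0.0))  # ['potym']
print(bpe_dropout_segment("potym", merges, dropout=1.0))  # single characters
```

With an intermediate dropout rate, repeated calls yield varied segmentations of the same word; running such a segmenter several times over the corpus and concatenating the outputs (as in Section 4, with 5 passes at rate 0.1) produces the extended training set.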
{
"text": "For the Upper Sorbian-German direction, we also experimented with two techniques for modifying the Czech-German parallel data so that the Czech side is more like Upper Sorbian. In particular, we concentrated on modification methods that require neither large amounts of data, nor in-depth knowledge of the historical relationships between the languages, since both of these are often lacking for the lower-resourced language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Sorbian",
"sec_num": "2.4"
},
{
"text": "We considered two variations of this idea:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Sorbian",
"sec_num": "2.4"
},
{
"text": "\u2022 word-level modification, in which some frequent Czech words (e.g. prepositions) are replaced by likely Upper Sorbian equivalents, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Sorbian",
"sec_num": "2.4"
},
{
"text": "\u2022 character-level modification, where we attempt to convert Czech words at the character level to forms that may more closely resemble Upper Sorbian words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Sorbian",
"sec_num": "2.4"
},
{
"text": "Note that in neither case do we know what particular conversions are correct; we ourselves do not know enough about historical Western Slavic to predict the actual Upper Sorbian cognates of Czech words. Rather, we took inspiration from stochastic segmentation methods like BPE-dropout (Provilkov et al., 2020) and SentencePiece (Kudo and Richardson, 2018): when we have an idea of the possible solutions to the segmentation problem but do not know which one is the correct one, we can sample randomly from the possible segmentations as a sort of regularization, with the goal of discouraging the model from relying too heavily on a single segmentation scheme and giving it some exposure to a variety of possible segmentations. Whereas BPE-dropout and SentencePiece focus on possible segmentations of the word, our pseudo-Sorbian experiments focus on possible word- and character-level replacements. The goal was to discourage the parent Czech-German model from relying too heavily on regularities in Czech (e.g. the presence of particular frequent words, the presence of particular Czech character n-grams) and perhaps also gain some prior exposure to Upper Sorbian words and characters that will occur in the genuine Upper Sorbian data; we can also think of this as a form of low-resource data augmentation (Fadaee et al., 2017). See Appendix C for an analysis of increased subword overlap between pseudo-Sorbian and test data, as compared to BPE-dropout and the baseline approach.",
"cite_spans": [
{
"start": 285,
"end": 309,
"text": "(Provilkov et al., 2020)",
"ref_id": "BIBREF21"
},
{
"start": 328,
"end": 354,
"text": "(Kudo and Richardson, 2018",
"ref_id": "BIBREF12"
},
{
"start": 1309,
"end": 1330,
"text": "(Fadaee et al., 2017;",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo-Sorbian",
"sec_num": "2.4"
},
{
"text": "To generate the word-level pseudo-Sorbian, we ran fast_align (Dyer et al., 2013) on the Czech-German and German-Upper Sorbian parallel corpora, and took the product of the resulting word correspondences to generate candidate Czech-Upper Sorbian word correspondences. As this process produces many unlikely correspondences, particularly for words that occur only a few times in the corpora, we filtered this list so that any Czech-German word correspondence that occurred fewer than 500 times in the aligned corpus was ineligible, and likewise any German-Upper Sorbian correspondence that occurred fewer than 50 times. We then used these correspondences to randomly replace 10% of eligible Czech words in the Czech-German corpus with one of their putative equivalents in Upper Sorbian. The result is a language that is mostly still Czech, but in which some high-frequency words (especially prepositions) are Upper Sorbian.",
"cite_spans": [
{
"start": 61,
"end": 80,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level pseudo-Sorbian",
"sec_num": "2.4.1"
},
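The replacement step above can be sketched as follows: given a pivoted Czech-to-Upper-Sorbian correspondence table (built separately from the filtered fast_align alignments), each eligible Czech token is replaced with probability 0.1 by a sampled putative equivalent. The table contents and function name are illustrative, not the authors' code.

```python
import random

def word_level_pseudo_sorbian(sentence, table, rate=0.1, rng=random):
    """Replace eligible Czech tokens with putative Upper Sorbian
    equivalents, each independently with probability `rate`."""
    out = []
    for tok in sentence.split():
        if tok in table and rng.random() < rate:
            out.append(rng.choice(table[tok]))  # sample one candidate
        else:
            out.append(tok)
    return " ".join(out)

# Toy correspondence table (illustrative entries drawn from Table 1).
table = {"potom": ["potym"], "okres": ["wokrjes"]}
print(word_level_pseudo_sorbian("a potom ten okres", table, rate=1.0))
# -> "a potym ten wokrjes"
```

Applied over the whole Czech-German corpus at rate 0.1, this leaves the text mostly Czech while sprinkling in high-frequency Upper Sorbian forms.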
{
"text": "To generate the character-level pseudo-Sorbian, we began with the same list of putative Czech-Upper Sorbian word correspondences, calculated the Levenshtein distances (normalized by length) between them, and filtered out pairs that exceeded 0.5 distance. This gave a list of words that were likely cognates, from which we hand-selected a development set of about 200; a sample of these is seen in Table 1. Using this set to identify character-level correspondences (e.g. CS v to HSB w, CS d to HSB d\u017a before front vowels, etc.), we wrote a program to randomly replace the appropriate Czech character sequences with possible correspondences in Upper Sorbian. Again, as Czech-Upper Sorbian correspondences are not entirely predictable (CS e might happen to correspond, in a particular cognate, to HSB e or ej or i or a or o, etc.), we cannot expect that any given result is correct Upper Sorbian. Rather, we can think of this process as attempting to train a system that can respond to inputs from a variety of possible (but not necessarily actual) Western Slavic languages, rather than just a system that can respond to precisely-spelled Czech and only Czech.",
"cite_spans": [],
"ref_spans": [
{
"start": 397,
"end": 404,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Character-level pseudo-Sorbian",
"sec_num": "2.4.2"
},
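The cognate filter and the stochastic character rewriting can be sketched as below. The normalized-Levenshtein threshold of 0.5 comes from the text; the rule table and function names are illustrative assumptions (the real rules, e.g. CS v to HSB w, were hand-derived from the cognate development set).

```python
import random

def levenshtein(a, b):
    """Standard edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def is_likely_cognate(cs, hsb, threshold=0.5):
    """Keep pairs whose length-normalized edit distance is <= threshold."""
    return levenshtein(cs, hsb) / max(len(cs), len(hsb), 1) <= threshold

def char_level_pseudo_sorbian(word, rules, rate=0.5, rng=random):
    """Stochastically rewrite Czech characters using candidate
    Upper Sorbian correspondences; the result is pseudo-Sorbian,
    not guaranteed to be real Upper Sorbian."""
    out = []
    for ch in word:
        if ch in rules and rng.random() < rate:
            out.append(rng.choice(rules[ch]))
        else:
            out.append(ch)
    return "".join(out)

print(is_likely_cognate("potom", "potym"))  # True (distance 1/5 = 0.2)
rules = {"v": ["w"]}  # illustrative rule: CS v -> HSB w
print(char_level_pseudo_sorbian("vltava", rules, rate=1.0))  # "wltawa"
```

Sampling rule applications (rather than applying them deterministically) is what exposes the parent model to a variety of possible Western Slavic spellings.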
{
"text": "In initial testing, we determined that a combination of word-level and character-level modification performed best; we ran each process on the Czech-German corpus separately, then concatenated the resulting corpora and trained a parent model on it. Due to time constraints we did not run the full set of ablation experiments. Subsequent finetuning on genuine Upper Sorbian-German data proceeded as normal, without any modification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined pseudo-Sorbian",
"sec_num": "2.4.3"
},
{
"text": "For all pseudo-Sorbian systems, we used the BPE vocabulary trained on the original Czech and Upper Sorbian data, rather than the modified data, so that systems trained on pseudo-Sorbian data could still be ensembled with systems trained only on the original data (Section 2.6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined pseudo-Sorbian",
"sec_num": "2.4.3"
},
{
"text": "We used backtranslation (Sennrich et al., 2016a) to incorporate monolingual German and Upper Sorbian data into training. We backtranslated all Upper Sorbian monolingual data (after filtering as described in Section 3). We backtranslated the German monolingual news-commentary data and 1.2M randomly sampled lines of 2019 German news.",
"cite_spans": [
{
"start": 24,
"end": 48,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Backtranslation",
"sec_num": "2.5"
},
{
"text": "We experiment with iterative backtranslation: backtranslating data using systems without backtranslation, and then using the new systems built using the backtranslated text to perform a second iteration of backtranslation (Hoang et al., 2018; Niu et al., 2018). Like Caswell et al. (2019), we use source-side tags at the start of backtranslated sentences to indicate to the models which sentences are the product of backtranslation.",
"cite_spans": [
{
"start": 222,
"end": 242,
"text": "(Hoang et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 243,
"end": 260,
"text": "Niu et al., 2018;",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Backtranslation",
"sec_num": "2.5"
},
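The tagging scheme can be sketched as follows: backtranslated source sentences receive a "<BT>" prefix before being mixed with the genuine parallel data, so the model can learn to treat synthetic sources differently (Caswell et al., 2019). The corpus-building helper and placeholder sentence pairs are hypothetical.

```python
def build_training_corpus(parallel, backtranslated, bt_tag="<BT>"):
    """Combine genuine and backtranslated sentence pairs, prefixing
    backtranslated sources with a tag.

    parallel, backtranslated: lists of (source, target) pairs, where
    backtranslated sources were produced by a target-to-source model.
    """
    corpus = list(parallel)
    corpus += [(f"{bt_tag} {src}", tgt) for src, tgt in backtranslated]
    return corpus

corpus = build_training_corpus(
    parallel=[("hsb src", "de tgt")],
    backtranslated=[("hsb bt src", "de mono tgt")],
)
print(corpus[1][0])  # "<BT> hsb bt src"
```

Because the tag sits in the source vocabulary (Section 4 reserves it when training BPE), it never needs to be generated at decoding time.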
{
"text": "Our final systems are ensembles of several systems. Because all systems used the same vocabulary sets and same model sizes, we could decode using Sockeye's (Hieber et al., 2018) default ensembling mechanism.",
"cite_spans": [
{
"start": 156,
"end": 177,
"text": "(Hieber et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ensembling",
"sec_num": "2.6"
},
{
"text": "We used all provided parallel German-Upper Sorbian data and all monolingual Upper Sorbian data (after filtering), along with German-Czech parallel data from OpenSubtitles (Lison and Tiedemann, 2016), DGT (Tiedemann, 2012; Steinberger et al., 2012), JW300 (Agi\u0107 and Vuli\u0107, 2019), Europarl v10 (Koehn, 2005), News-Commentary v15, and WMT-News for building the BPE vocabularies. The monolingual Upper Sorbian Web and Witaj datasets were filtered to remove lines containing characters that had not been observed in the Upper Sorbian parallel data or in the Czech data; this removed sentences that contained text in other scripts and other languages. The Czech-German data was used for training parent models, while monolingual German and Upper Sorbian were used (along with parallel German-Upper Sorbian data) for training child models. A table of data sizes and how they were used is shown in Appendix A.",
"cite_spans": [
{
"start": 172,
"end": 199,
"text": "(Lison and Tiedemann, 2016)",
"ref_id": "BIBREF15"
},
{
"start": 208,
"end": 225,
"text": "(Tiedemann, 2012;",
"ref_id": "BIBREF27"
},
{
"start": 226,
"end": 251,
"text": "Steinberger et al., 2012)",
"ref_id": "BIBREF26"
},
{
"start": 260,
"end": 282,
"text": "(Agi\u0107 and Vuli\u0107, 2019)",
"ref_id": "BIBREF0"
},
{
"start": 298,
"end": 311,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
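The monolingual filtering step can be sketched as: collect the character inventory observed in the trusted corpora (parallel Upper Sorbian plus Czech), then drop any monolingual line containing characters outside that inventory. Function names and sample lines are illustrative.

```python
def observed_charset(corpora):
    """Union of all characters seen in the given corpora (lists of lines)."""
    allowed = set()
    for corpus in corpora:
        for line in corpus:
            allowed.update(line)
    return allowed

def filter_by_charset(lines, allowed):
    """Keep only lines whose characters were all observed before;
    this discards lines in other scripts or languages."""
    return [line for line in lines if set(line) <= allowed]

# Toy trusted corpora: one Upper Sorbian line, one Czech line.
allowed = observed_charset([["dobry wje\u010dor"], ["dobr\u00fd ve\u010der"]])
web_lines = ["dobry wje\u010dor", "\u0434\u043e\u0431\u0440\u044b \u0434\u0437\u0435\u043d\u044c"]  # second line is Cyrillic
print(filter_by_charset(web_lines, allowed))  # ['dobry wje\u010dor']
```

A character-set test is deliberately coarse: it cannot catch wrong-language lines written in the same script, but it cheaply removes text in other scripts.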
{
"text": "We build BPE vocabularies of size 2k, 5k, 10k, 15k, and 20k using subword-nmt (Sennrich et al., 2016b). After building the vocabulary, we add a set of 25 generic tags, plus a special backtranslation tag \"<BT>\", which we use in later experiments to indicate when training data has been backtranslated (Caswell et al., 2019). We also add all Moses and Sockeye special tags (ampersand, <unk>, etc.) to a glossary file used for applying BPE, which prevents them from being segmented.",
"cite_spans": [
{
"start": 80,
"end": 104,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF23"
},
{
"start": 307,
"end": 329,
"text": "(Caswell et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4"
},
{
"text": "[Table 2: Comparison of BPE-dropout use in both parent and child systems for 10k-vocabulary DE-HSB translation (measured on the devel test set), without backtranslation. All parent systems were trained on the German-Czech data, while child systems were trained on the parallel DE-HSB data. None uses no BPE-dropout, source applies BPE-dropout to the source side only, and both applies it to both the source and the target.] Because there is so much more Czech data than Upper Sorbian data, we duplicate the in-domain parallel hsb-de data and the monolingual HSB data 25 times when training BPE in order to make sure that HSB data is adequately represented (and not",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4"
},
{
"text": "overwhelmed by Czech data) in training the encoding. After training BPE, we extract (and fix for the remainder of our experiments) a single DE vocabulary and a single HSB-CS vocabulary, covering all the relevant data used to train BPE for that language pair. We ran BPE-dropout with a rate of 0.1 over the training data 5 times using the same BPE merge operations, vocabularies and glossaries as before, concatenating these variants to form an extended training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4"
},
{
"text": "We used Sockeye's (Hieber et al., 2018) implementation of the Transformer (Vaswani et al., 2017) with 6 layers, 8 attention heads, a network size of 512 units, and a feedforward size of 2048 units. We changed the default gradient clipping type to absolute, used the whole validation set during validation, set an initial learning rate of 0.0001, used batches of \u223c8192 tokens/words and a maximum sentence length of 200 tokens, and optimized for BLEU. Parent systems used checkpoint intervals of 2500 and 4000. Child system checkpoint intervals varied from 65 to 4000 to balance frequent checkpointing with efficiency. Decoding was performed with beam size 5.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Hieber et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 70,
"end": 92,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Software and Systems",
"sec_num": "5"
},
{
"text": "6 Results and Discussion",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Software and Systems",
"sec_num": "5"
},
{
"text": "Provilkov et al. (2020) examine BPE-dropout when building translation systems for individual language pairs. Here we apply it in a transfer learning setting, raising the question of whether BPE-dropout should be applied to the parent system, the child system, or both, as well as the question of using source-side BPE-dropout or both source- and target-side BPE-dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BPE-Dropout in Transfer Learning",
"sec_num": "6.1"
},
{
"text": "Our results for this are somewhat mixed, owing in part to the relatively small BLEU gains produced by BPE-dropout (as compared to backtranslation). In Table 2 we show BLEU scores for German-Upper Sorbian translation with a 10k vocabulary and no backtranslation. The most promising systems in that experiment are those with source-side BPE-dropout in the child system, with either both-sides or source-side dropout in the parent. In the 20k vocabulary DE-HSB setting with second iteration backtranslation, we saw a similar effect, with source-side BPE-dropout for both parent and child having a BLEU score of 58.4 on devel test, +1.1 above the second-best system (no BPE-dropout in parent or child). Results in the other translation direction were more ambiguous, leaving room for future analysis of BPE-dropout in the transfer learning setting.",
"cite_spans": [],
"ref_spans": [
{
"start": 151,
"end": 158,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "BPE-Dropout in Transfer Learning",
"sec_num": "6.1"
},
{
"text": "As a result of these experiments, many of the systems we used in our final ensembles were trained with source-side BPE-dropout, though when it appeared promising we also ensembled with systems without BPE-dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BPE-Dropout in Transfer Learning",
"sec_num": "6.1"
},
{
"text": "We performed two rounds of backtranslation of the Upper Sorbian and German monolingual data described in Section 2.5. The first round (BT1) used our strongest system without backtranslation, while the second round (BT2) used our strongest system including backtranslated data from the first round. We ran experiments sweeping BPE vocabulary sizes and backtranslated corpora; for German news we experimented with 300k and 600k subsets as well as the full 1.2M-line random subselection. In all experiments the 60k sentence-pair parallel HSB-DE corpus was replicated a number of times to approximately match the included backtranslated data in number of lines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Backtranslation",
"sec_num": "6.2"
},
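The replication step can be sketched as follows. The rounding choice is an assumption; the paper says only that the parallel corpus was replicated to "approximately match" the backtranslated data in number of lines.

```python
def replicate_parallel(parallel, backtranslated_size):
    """Repeat the small genuine parallel corpus so its line count
    approximately matches the backtranslated corpus."""
    reps = max(1, round(backtranslated_size / len(parallel)))
    return parallel * reps

# 60k genuine pairs vs. 1.2M backtranslated lines -> 20 replications.
pairs = [("src", "tgt")] * 60_000
print(len(replicate_parallel(pairs, 1_200_000)))  # 1200000
```

Balancing genuine against synthetic data this way keeps the (trusted) parallel pairs from being drowned out during training.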
{
"text": "The second round of backtranslation of the Upper Sorbian monolingual data improved the BLEU score by 0.7 BLEU points for the best configuration, with the vocabulary size of the best configuration increasing to 20k from 15k. However, the second round of backtranslation of the German monolingual data did not improve the subsequent HSB-DE systems, instead showing a drop of 0.1 BLEU points; our final system (Section 6.5) uses a mix of systems trained using BT1 and BT2. For full details of the systems used for backtranslation, see Appendix B. Generating multiple translations for backtranslation (i.e. multiple source sentences for each authentic target sentence) is known to improve translation quality; all of the systems we have described here used a single backtranslation per target sentence. After the submission of our final systems, we experimented with backtranslation using n-best translations of the monolingual text. In both directions, we found that building student systems using the 10-best backtranslation list generated with sampling from the softmax's top-10 vocabulary (rather than taking the max), but without BPE-dropout, produced improvements of around 0.2-0.8 BLEU. 5 The resulting systems had comparable BLEU scores to the systems trained with single-variant backtranslation and BPE-dropout; we leave as future work an examination of the result of combining multiple backtranslations with BPE-dropout.",
"cite_spans": [
{
"start": 1190,
"end": 1191,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Iterative Backtranslation",
"sec_num": "6.2"
},
{
"text": "Here we first discuss the impact of our non-pseudo-Sorbian approaches: BPE-dropout, backtranslation, and transfer learning, showing how each contributed to the final systems used for ensembling. Table 3 shows ablation experiments for DE-HSB (20k vocabulary) and HSB-DE (10k vocabulary). 6 In the first four lines, we consider training a system without transfer learning, starting from a baseline built using only the parallel Upper Sorbian-German data. Despite the small data size, and perhaps due to the close match between training and test data, this baseline has high BLEU scores on the devel test set: 44.2 (DE-HSB) and 44.1 (HSB-DE). Adding BPE-dropout to this setting (with 5 runs of the algorithm) results in a modest improvement (+0.2 BLEU for DE-HSB and +0.6 BLEU for HSB-DE). If we instead add backtranslated data (translated in our second iteration of backtranslation), we see a much larger jump of +10.7 and +10.6 BLEU respectively over the baselines; note that this also represents a huge increase in available data for training. Combining the two approaches adds an additional +1.2 and +0.3 BLEU, respectively.",
"cite_spans": [
{
"start": 287,
"end": 288,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Ablation",
"sec_num": "6.3"
},
{
"text": "In fact, these systems outperform both a parent-child baseline and a parent-child system with BPE-dropout, highlighting the importance of incorporating additional target-side monolingual data in the low-resource setting. Once we add backtranslation, we see a moderate improvement over the child systems with BPE-dropout (+2.6 and +2.4 BLEU, respectively). Again, combining BPE-dropout and backtranslation still produces more improvement, as does eventual ensembling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation",
"sec_num": "6.3"
},
{
"text": "Due to time constraints, we did not run a full ablation study of word, character, and combined pseudo-Sorbian. Our initial results (run with an earlier version of character pseudo-Sorbian, and a differently extracted BPE vocabulary) found for the HSB-DE direction that word pseudo-Sorbian slightly outperformed (on the order of 0.5 BLEU) character pseudo-Sorbian for a 10k vocabulary, but was comparable for 2k and 5k vocabulary sizes; these results are given in Appendix C. The combination of the two had slightly higher scores across those three vocabulary sizes (ranging from +0.1 to +0.6 BLEU) than either of the two individual approaches, so we used the combination for the remaining experiments. Our final German-Upper Sorbian system is an ensemble of four systems, with a vocabulary size of 20k merges. All child models ensembled were trained on second iteration backtranslated monolingual HSB data (all available, filtered) and 12 replications of the de-hsb parallel text, with backtranslation tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ablation",
"sec_num": "6.3"
},
{
"text": "1. Child without BPE-dropout, de-cs parent without BPE-dropout. 2. Child with source-side BPE-dropout, de-cs parent with source-side BPE-dropout. 3. Child without BPE-dropout, pseudo-hsb-de parent without BPE-dropout. 4. Child with source-side BPE-dropout, pseudo-hsb-de parent with source-side BPE-dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Final German-Upper Sorbian System",
"sec_num": "6.4"
},
{
"text": "The system scores on devel test are shown in Table 4. The best scoring individual systems were transfer learning systems with source-side BPE-dropout, with the one using pseudo-Sorbian falling slightly behind the non-pseudo-Sorbian one by 0.2 BLEU points. Without BPE-dropout, the best pseudo-Sorbian system shown here outperforms its corresponding non-pseudo-Sorbian system by approximately 0.1 BLEU. On the test set, this system had scores of (as computed by the Matrix submission) 57.3 BLEU-cased, TER (Snover et al., 2006) of 0.3, BEER 2.0 (Stanojevi\u0107 and Sima'an, 2014) of 0.754, and CharacTER (Wang et al., 2016) of 0.255. This was 3.4 BLEU-cased behind the best-scoring system (SJTU-NICT), but within 0.6 BLEU of the second- and third-highest scoring systems (University of Helsinki); it was also tied with the third-highest scoring system (University of Helsinki) in terms of CharacTER. The final Upper Sorbian-German system is an ensemble of systems with a BPE vocabulary of 10k merges.",
"cite_spans": [
{
"start": 502,
"end": 523,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF24"
},
{
"start": 541,
"end": 571,
"text": "(Stanojevi\u0107 and Sima'an, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 596,
"end": 615,
"text": "(Wang et al., 2016)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Final German-Upper Sorbian System",
"sec_num": "6.4"
},
{
"text": "1. Child with source-side BPE-dropout, 20 times hsb-de data, 1.2M lines of first iteration backtranslated news data; cs-de parent with source-side BPE-dropout. 2. Child without BPE-dropout, 25 times hsb-de data, news commentary (NC) and 1.2M lines of news second iteration backtranslated; 7 cs-de parent without BPE-dropout. 3. Child without BPE-dropout, 25 times hsb-de data, NC and 1.2M lines of news first iteration backtranslated; pseudo-hsb-de parent without BPE-dropout. 4. Child with source-side BPE-dropout, 25 times hsb-de data, NC and 1.2M lines of news first iteration backtranslated; pseudo-hsb-de parent with source-side BPE-dropout. 5. Child without BPE-dropout, 20 times hsb-de data and 1.2M lines of second iteration backtranslated news data; pseudo-hsb-de parent without BPE-dropout. Table 5 shows that the five systems combined were very comparable in BLEU scores (57.1 and 57.2), but their ensembled BLEU score showed an improvement of \u22651.7 BLEU over each individual score. The final ensemble had a BLEU-cased score of 58.9 on the test data (calculated by the Matrix submission systems), a TER of 0.29, a BEER 2.0 of 0.579, and a CharacTER score of 0.268. This represented a -0.7 BLEU-cased difference from the best system (University of Helsinki), but only a -0.001 CharacTER difference.",
"cite_spans": [],
"ref_spans": [
{
"start": 795,
"end": 802,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Final Upper Sorbian-German System",
"sec_num": "6.5"
},
{
"text": "We experimented with a variety of ensembles, and found that our strongest ensembles were those that included both the pseudo-Sorbian systems and those built without pseudo-Sorbian. In initial experiments with Upper Sorbian-German systems, with vocabulary size 5k, we found that adding pseudo-Sorbian systems to ensembles produced improvements even if the pseudo-Sorbian system did not have quite as high a BLEU score as the systems built without it. For example, combining the top three systems without pseudo-Sorbian (BLEU scores of 57.3, 57.2, and 57.0, respectively) or the top two of those systems resulted in ensemble BLEU scores of 57.9. Replacing the third-best system with a pseudo-Sorbian system with a BLEU score of 56.6 resulted in an improved ensemble BLEU score of 58.5. Diverse ensembles (e.g., different architectures or runs) are known to outperform less diverse ensembles (e.g., ensembles of checkpoints) for neural machine translation (Farajian et al., 2016; Denkowski and Neubig, 2017). While diversity of models for ensembling is usually discussed in terms of model architecture or seeding of multiple runs, we would argue that the use of lexically modified training data constitutes another form of model diversity, contributing to a stronger ensembled model.",
"cite_spans": [
{
"start": 962,
"end": 985,
"text": "(Farajian et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 986,
"end": 1013,
"text": "Denkowski and Neubig, 2017;",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.6"
},
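Ensembling here means combining the member models' per-step output distributions at decode time (the usual approach in NMT toolkits such as Sockeye, which we used). A toy sketch of one combination step, with invented probabilities and a log-linear average as one common choice (the exact interpolation in our systems is a toolkit detail):

```python
import math

def ensemble_step(model_logprobs):
    """Combine per-model log-probabilities over the vocabulary by
    averaging them (a log-linear ensemble), then renormalize."""
    vocab = model_logprobs[0].keys()
    avg = {w: sum(m[w] for m in model_logprobs) / len(model_logprobs)
           for w in vocab}
    z = math.log(sum(math.exp(s) for s in avg.values()))  # log partition
    return {w: s - z for w, s in avg.items()}

# Two toy models disagree on the next token; the ensemble smooths them out.
m1 = {"Haus": math.log(0.7), "Hund": math.log(0.2), "Hand": math.log(0.1)}
m2 = {"Haus": math.log(0.3), "Hund": math.log(0.6), "Hand": math.log(0.1)}
combined = ensemble_step([m1, m2])
best = max(combined, key=combined.get)  # -> "Haus"
```

Because the averaged distribution rewards tokens on which the members agree, members trained on differently modified data (e.g., pseudo-Sorbian) can correct each other's idiosyncratic errors.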
{
"text": "For baseline systems trained only on the parallel data, smaller vocabulary sizes performed best, as expected (given only 60,000 lines of text, large vocabulary sizes may contain many tokens that are only observed a small number of times). As we added transfer learning, backtranslation, and eventually ensembling, the best systems were those with slightly larger vocabulary sizes. In the Upper Sorbian-German translation direction, some of our best-performing systems that did not use pseudo-Sorbian were found with a 5k vocabulary size, while 10k was generally better for the pseudo-Sorbian systems. We tried ensembles with both 5k and 10k that included pseudo-Sorbian and non-pseudo-Sorbian systems, and found the best results with 10k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.6"
},
{
"text": "In this work, we demonstrated that transfer learning, BPE-dropout, and backtranslation all provide improvements for this low-resource setting. Our experiments on lexical modifications, building pseudo-Sorbian text for training parent models, performed approximately on par with standard transfer learning approaches, and could be trivially combined with BPE-dropout. While the lexical modification approach did not outperform the standard transfer learning setup, we found that it still improved ensembles, possibly due to the increase in system diversity. Table 6 shows the data sizes, including the size after filtering for the monolingual Upper Sorbian data, as well as how each dataset was used for BPE training and vocabulary extraction, parent training, and/or child training.",
"cite_spans": [],
"ref_spans": [
{
"start": 557,
"end": 564,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "The configurations used to backtranslate the first round were:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "\u2022 For monolingual Upper Sorbian, the HSB-DE child system with 5k vocabulary size and both source and target side BPE-dropout for both the HSB-DE system and its CS-DE parent (53.4 BLEU on devel test)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "\u2022 For monolingual German, the DE-HSB child with 10k vocabulary size and both source and target side BPE-dropout for both the DE-HSB system and its DE-CS parent (55.0 BLEU on devel test).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "The following configurations were used to backtranslate the second round:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "\u2022 For monolingual Upper Sorbian, the HSB-DE child system with 5k vocabulary size and source side BPE-dropout for both the HSB-DE system and its CS-DE parent; 25 times hsb-de data, DE news commentary and 1.2M lines of DE news backtranslated (57.25 BLEU on devel test)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "\u2022 For monolingual German, the DE-HSB system with 15k vocabulary size and source side BPE-dropout for both the DE-HSB system and its DE-CS parent; 12 times hsb-de data, HSB Sorbian Institute, Witaj, and Web data backtranslated (57.7 BLEU on devel test).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "After the second round of backtranslation, the top configurations were:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "\u2022 For HSB-DE, the 5k vocabulary size child with source side BPE-dropout for both the HSB-DE system and its CS-DE parent; 20 times hsb-de data, 1.2M lines of (second round) backtranslated DE news (57.15 BLEU on devel test)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "\u2022 For DE-HSB, the 20k vocabulary size child with source side BPE-dropout for both the DE-HSB system and its DE-CS parent; 12 times hsb-de data, backtranslated (second round) HSB Sorbian Institute, Witaj, and Web data (58.4 BLEU on devel test).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "We note that the second round of backtranslating the German monolingual news data into Upper Sorbian did not improve the BLEU score for the subsequent HSB-DE systems, with the best configuration dropping by 0.1 BLEU points. However, the second round of backtranslation of the Upper Sorbian monolingual data did improve the BLEU score by 0.7 BLEU points for the best configuration, with the vocabulary size of the best configuration increasing to 20k from 15k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
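The training mixes described above (e.g., "20 times hsb-de data, 1.2M lines of backtranslated news") amount to concatenating upsampled authentic bitext with synthetic pairs from backtranslation. A minimal sketch, with invented toy pairs and a hypothetical helper name:

```python
import random

def build_training_corpus(authentic, synthetic, upsample=20, rng=random):
    """Combine upsampled authentic bitext with backtranslated pairs,
    as in the '20 times hsb-de data + backtranslated news' setups above.

    authentic, synthetic: lists of (source, target) sentence pairs;
    the synthetic source side comes from a reverse-direction model.
    """
    corpus = authentic * upsample + synthetic
    rng.shuffle(corpus)  # shuffle so batches mix authentic and synthetic
    return corpus

# Toy data: one authentic pair, two synthetic pairs.
corpus = build_training_corpus(
    [("authentic-src", "authentic-tgt")],
    [("bt-src-1", "mono-tgt-1"), ("bt-src-2", "mono-tgt-2")],
    upsample=20,
    rng=random.Random(0),
)
print(len(corpus))  # 20 copies of the authentic pair + 2 synthetic = 22
```

Upsampling keeps the authentic-to-synthetic ratio fixed as the synthetic pool grows, matching footnote 5's note that the bitext was upsampled to preserve the ratio used in prior experiments.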
{
"text": "C Pseudo-Sorbian Comparisons and Analysis\nTable 7 presents the results of our pseudo-Sorbian comparison discussed in Sections 2.4 and 6.3; as mentioned, we find that word- and character-level modifications perform similarly at small vocabulary sizes, but that word-level modification outperforms character-level at a higher vocabulary size. However, at all vocabulary sizes a combination of the two improves over either approach on its own. It should be noted again that these preliminary results are not directly comparable to other results in this paper (having trained on a smaller corpus, lacking the JW300 documents) and are also not technically constrained (as the word list used to create the character-level replacement was from bilingual dictionaries, not the constrained corpora). In our submitted systems, we created a new character-level system using only the constrained corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
{
"text": "As pseudo-Sorbian lexical modification creates a new training corpus, this raises questions of how to appropriately create BPE vocabularies, in particular when the character-level version is used. In word-level pseudo-Sorbian, the resulting corpus still only consists of words found in the original Czech and Upper Sorbian corpora, although the resulting n-gram frequencies will differ somewhat because some Czech words are replaced by Upper Sorbian ones. Character-level pseudo-Sorbian, however, can create words and character-level n-grams that do not appear in the original corpus at all. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Backtranslation Details",
"sec_num": null
},
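The word-level modification just described can be sketched as a simple dictionary substitution over the Czech side of the parent corpus. The lexicon entries below are invented stand-ins for illustration, not the actual induced dictionary:

```python
# Hypothetical Czech -> Upper Sorbian entries; the real system derives
# these from bilingual dictionaries / word alignments.
cs_to_hsb = {"hlava": "h\u0142owa", "voda": "woda"}

def pseudo_sorbianize(sentence):
    """Replace Czech words that have a known Upper Sorbian counterpart,
    leaving unmatched Czech words untouched (so the output mixes both)."""
    return " ".join(cs_to_hsb.get(tok, tok) for tok in sentence.split())

print(pseudo_sorbianize("hlava a voda"))  # -> "hłowa a woda"
print(pseudo_sorbianize("pes"))           # no entry: left as-is
```

Because only dictionary hits are swapped, the word-level output stays within the vocabulary of the two original corpora, unlike the character-level variant discussed above.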
{
"text": "Table 6: Data and how it was used, whether for BPE training and vocabulary extraction (BPE/Voc.), parent model training, or child model training. Note that the monolingual German news.2019 data was subsampled, and the number of lines shown represents the full set from which the subsample was drawn.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "Pseudo-Sorbian BPE 2k BPE 5k BPE 10k\nWord-level 51.8 52.6 52.6\nCharacter-level 51.9 52.6 52.1\nBoth 52.4 52.7 52.8\nThe systems in Table 7 use system-specific BPE; that is, the BPE operations and vocabulary are constructed for each specific {pseudo-Sorbian, Upper Sorbian} training corpus. However, in the final submitted systems, we used a fixed vocabulary from the original {Czech, Upper Sorbian} corpus, which made it possible to ensemble pseudo-Sorbian systems with our other systems, giving us better results than either type of system alone. We do not know what effect (negative or positive) this may have on the quality of the pseudo-Sorbian-trained systems (since they would be using a BPE vocabulary for a different set of \"languages\", and thus may be over-segmented). 9 This raises a number of questions about appropriate choices of BPE models, which increases the complexity of ablation studies beyond what we are able to address in the scope of this paper.",
"cite_spans": [],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "9 Using our final BPE segmentation does result in a slightly higher number of segmentations per token than a BPE model trained directly on the pseudo-Sorbian (combined version) data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
{
"text": "Setting aside the complications of various BPE training schemes, we return to the BPE segmentations used in our final systems to analyze whether pseudo-Sorbian and BPE-dropout do indeed achieve their goals of producing more overlap between the pseudo-Sorbian or Czech training data and the Upper Sorbian data. We consider the devel test portion of the Upper Sorbian data. With a 10k BPE vocabulary, that test set contains 4540 unique subword types. 62.6% of those types (2840) are observed in the baseline Czech parent model training data, and 52.9% of the training tokens are in that set. After applying BPE-dropout to the Czech parent training data, the percentage of observed types increases slightly, to 63.4% (2878), with 58.9% of the training tokens in that set. With the pseudo-Sorbian combined system, however, we see a much bigger increase in type overlap: 89.0% of the Upper Sorbian devel test types (4041) were observed at least once in the pseudo-Sorbian parent data, making up 70.9% of the training tokens. Increased coverage of Upper Sorbian devel test subword tokens during parent training means that embeddings for those subword tokens will be updated during parent model training, hopefully in a way that improves their warm start in the Upper Sorbian student training. 10",
"cite_spans": [
{
"start": 1240,
"end": 1242,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": null
},
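The type and token coverage figures above are straightforward to compute; a sketch with toy token lists (the real computation runs over the BPE-segmented devel test set and parent training corpora):

```python
def coverage(test_tokens, train_tokens):
    """Return (fraction of test types seen in training, fraction of
    training tokens whose type occurs in the test set), mirroring the
    overlap analysis in the text."""
    test_types = set(test_tokens)
    train_types = set(train_tokens)
    type_cov = len(test_types & train_types) / len(test_types)
    tok_share = sum(t in test_types for t in train_tokens) / len(train_tokens)
    return type_cov, tok_share

# Toy BPE-segmented data ("@@" marks a continued subword, as in subword-nmt).
test = ["wo@@", "da", "a", "h\u0142@@", "owa"]
train = ["wo@@", "da", "je", "je", "je", "a"]
print(coverage(test, train))  # -> (0.6, 0.5)
```

Higher values of either quantity mean more of the child language's subword embeddings receive gradient updates during parent training, which is the mechanism the warm-start argument relies on.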
{
"text": "1 http://www.opensubtitles.com 2 http://www.statmt.org/wmt20/translation-task.html 3 http://www.statmt.org/wmt20/unsup_and_very_low_res/ 4 https://github.com/rsennrich/subword-nmt",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "5 Authentic bitext was upsampled to keep the ratio identical to our prior experiments. 6 Smaller vocabulary sizes perform better on the baseline experiments, but the trends remain the same, so we show results for our final vocabulary sizes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This version of the second iteration backtranslation differs slightly from that used in the remainder of the experiments, in that UNKs (tokens representing unknown words) were not filtered out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In future work, it would probably be beneficial to guide the output of the modification with a character-level language model trained on target-language data, to better avoid the generation of n-grams that are unlikely or unattested in the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "While we could imagine that in some situations, they might end up with inappropriate representations, we expect those to be improved when the tokens are observed in student model training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank the reviewers for their comments and suggestions. We thank Yunli Wang, Chi-kiu Lo, Sowmya Vajjala, and Huda Khayrallah for discussion and feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "JW300: A widecoverage parallel corpus for low-resource languages",
"authors": [
{
"first": "Zeljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3204--3210",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1310"
]
},
"num": null,
"urls": [],
"raw_text": "Zeljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tagged back-translation",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Caswell",
"suffix": ""
},
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "53--63",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5206"
]
},
"num": null,
"urls": [],
"raw_text": "Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Vol- ume 1: Research Papers), pages 53-63, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Stronger baselines for trustable results in neural machine translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "18--27",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3203"
]
},
"num": null,
"urls": [],
"raw_text": "Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural ma- chine translation. In Proceedings of the First Work- shop on Neural Machine Translation, pages 18-27, Vancouver. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A simple, fast, and effective reparameterization of IBM model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "644--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameter- ization of IBM model 2. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 644-648, At- lanta, Georgia. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Data augmentation for low-resource neural machine translation",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Fadaee",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "567--573",
"other_ids": {
"DOI": [
"10.18653/v1/P17-2090"
]
},
"num": null,
"urls": [],
"raw_text": "Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567- 573, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "FBK's neural machine translation systems for IWSLT 2016",
"authors": [
{
"first": "M",
"middle": [
"Amin"
],
"last": "Farajian",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Costanza",
"middle": [],
"last": "Conforti",
"suffix": ""
},
{
"first": "Shahab",
"middle": [],
"last": "Jalalvand",
"suffix": ""
},
{
"first": "Vevake",
"middle": [],
"last": "Balaraman",
"suffix": ""
},
{
"first": "Mattia",
"middle": [
"A"
],
"last": "Di Gangi",
"suffix": ""
},
{
"first": "Duygu",
"middle": [],
"last": "Ataman",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the ninth International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Amin Farajian, Rajen Chatterjee, Costanza Con- forti, Shahab Jalalvand, Vevake Balaraman, Mat- tia A Di Gangi, Duygu Ataman, Marco Turchi, Mat- teo Negri, and Marcello Federico. 2016. FBK's neu- ral machine translation systems for IWSLT 2016. In Proceedings of the ninth International Workshop on Spoken Language Translation, USA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The sockeye neural machine translation toolkit at AMTA 2018",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas",
"volume": "1",
"issue": "",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. The sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Transla- tion in the Americas (Volume 1: Research Papers), pages 200-207, Boston, MA. Association for Ma- chine Translation in the Americas.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Iterative backtranslation for neural machine translation",
"authors": [
{
"first": "Vu",
"middle": [
"Cong",
"Duy"
],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2703"
]
},
"num": null,
"urls": [],
"raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Enhancement of encoder and attention using target monolingual corpora in neural machine translation",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Imamura",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "55--63",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2707"
]
},
"num": null,
"urls": [],
"raw_text": "Kenji Imamura, Atsushi Fujita, and Eiichiro Sumita. 2018. Enhancement of encoder and attention using target monolingual corpora in neural machine trans- lation. In Proceedings of the 2nd Workshop on Neu- ral Machine Translation and Generation, pages 55- 63, Melbourne, Australia. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "NICT self-training approach to neural machine translation at NMT-2018",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Imamura",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "110--115",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2713"
]
},
"num": null,
"urls": [],
"raw_text": "Kenji Imamura and Eiichiro Sumita. 2018. NICT self-training approach to neural machine translation at NMT-2018. In Proceedings of the 2nd Work- shop on Neural Machine Translation and Genera- tion, pages 110-115, Melbourne, Australia. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Trivial transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "244--252",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6325"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244-252, Bel- gium, Brussels. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 10th Machine Translation Summit (MT Summit)",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. Proceedings of the 10th Machine Translation Summit (MT Summit), pages 79-86.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Transfer learning in multilingual neural machine translation with dynamic vocabulary",
"authors": [
{
"first": "Surafel",
"middle": [
"M"
],
"last": "Lakew",
"suffix": ""
},
{
"first": "Aliia",
"middle": [],
"last": "Erofeeva",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2018,
"venue": "International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surafel M Lakew, Aliia Erofeeva, Matteo Negri, Mar- cello Federico, and Marco Turchi. 2018. Transfer learning in multilingual neural machine translation with dynamic vocabulary. In International Work- shop on Spoken Language Translation.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Choosing transfer languages for cross-lingual learning",
"authors": [
{
"first": "Yu-Hsiang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Chian-Yu",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Zirui",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yuyan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mengzhou",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Shruti",
"middle": [],
"last": "Rijhwani",
"suffix": ""
},
{
"first": "Junxian",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Littell",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3125--3135",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1301"
]
},
"num": null,
"urls": [],
"raw_text": "Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3125-3135, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "OpenSub-titles2016: Extracting large parallel corpora from movie and TV subtitles",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Lison",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)",
"volume": "",
"issue": "",
"pages": "923--929",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Lison and J\u00f6rg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 923-929, Por- toro\u017e, Slovenia. European Language Resources As- sociation (ELRA).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A comparable study on model averaging, ensembling and reranking in NMT",
"authors": [
{
"first": "Yuchen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yining",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2018,
"venue": "Natural Language Processing and Chinese Computing",
"volume": "",
"issue": "",
"pages": "299--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuchen Liu, Long Zhou, Yining Wang, Yang Zhao, Ji- ajun Zhang, and Chengqing Zong. 2018. A com- parable study on model averaging, ensembling and reranking in nmt. In Natural Language Process- ing and Chinese Computing, pages 299-308, Cham. Springer International Publishing.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hierarchical transfer learning architecture for low-resource neural machine translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ainiwaer",
"suffix": ""
}
],
"year": 2019,
"venue": "IEEE Access",
"volume": "7",
"issue": "",
"pages": "154157--154166",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Luo, Y. Yang, Y. Yuan, Z. Chen, and A. Ainiwaer. 2019. Hierarchical transfer learning architecture for low-resource neural machine translation. IEEE Ac- cess, 7:154157-154166.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Rapid adaptation of neural machine translation to new languages",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "875--880",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1103"
]
},
"num": null,
"urls": [],
"raw_text": "Graham Neubig and Junjie Hu. 2018. Rapid adapta- tion of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 875-880, Brussels, Belgium. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Transfer learning across low-resource, related languages for neural machine translation",
"authors": [
{
"first": "Toan",
"middle": [
"Q"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "296--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toan Q. Nguyen and David Chiang. 2017. Transfer learning across low-resource, related languages for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 296-301, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bi-directional neural machine translation with synthetic parallel data",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "84--91",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2710"
]
},
"num": null,
"urls": [],
"raw_text": "Xing Niu, Michael Denkowski, and Marine Carpuat. 2018. Bi-directional neural machine translation with synthetic parallel data. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 84-91, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BPE-dropout: Simple and effective subword regularization",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Provilkov",
"suffix": ""
},
{
"first": "Dmitrii",
"middle": [],
"last": "Emelianenko",
"suffix": ""
},
{
"first": "Elena",
"middle": [],
"last": "Voita",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1882--1892",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882-1892, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Association for Machine Translation in the Americas",
"volume": "200",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, volume 200. Cambridge, MA.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Fitting sentence level translation evaluation with many dense features",
"authors": [
{
"first": "Milo\u0161",
"middle": [],
"last": "Stanojevi\u0107",
"suffix": ""
},
{
"first": "Khalil",
"middle": [],
"last": "Sima'an",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "202--206",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1025"
]
},
"num": null,
"urls": [],
"raw_text": "Milo\u0161 Stanojevi\u0107 and Khalil Sima'an. 2014. Fitting sentence level translation evaluation with many dense features. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 202-206, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "DGT-TM: A freely available translation memory in 22 languages",
"authors": [
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Eisele",
"suffix": ""
},
{
"first": "Szymon",
"middle": [],
"last": "Klocek",
"suffix": ""
},
{
"first": "Spyridon",
"middle": [],
"last": "Pilos",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Schl\u00fcter",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)",
"volume": "",
"issue": "",
"pages": "454--459",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralf Steinberger, Andreas Eisele, Szymon Klocek, Spyridon Pilos, and Patrick Schl\u00fcter. 2012. DGT-TM: A freely available translation memory in 22 languages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 454-459, Istanbul, Turkey. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Parallel data, tools and interfaces in OPUS",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)",
"volume": "",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "CharacTer: Translation edit rate on character level",
"authors": [
{
"first": "Weiyue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jan-Thorsten",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Hendrik",
"middle": [],
"last": "Rosendahl",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "505--510",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2342"
]
},
"num": null,
"urls": [],
"raw_text": "Weiyue Wang, Jan-Thorsten Peter, Hendrik Rosendahl, and Hermann Ney. 2016. CharacTer: Translation edit rate on character level. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 505-510, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "SwitchOut: an efficient data augmentation algorithm for neural machine translation",
"authors": [
{
"first": "Xinyi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "856--861",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1100"
]
},
"num": null,
"urls": [],
"raw_text": "Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. SwitchOut: an efficient data augmentation algorithm for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 856-861, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Joint training for neural machine translation models with monolingual data",
"authors": [
{
"first": "Zhirui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Enhong",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "555--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Joint training for neural machine translation models with monolingual data. In AAAI, pages 555-562.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null
},
"TABREF3": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Ablation experiments showing the performance of baseline systems, BPE-dropout, backtranslation, transfer learning, and their combination. None of the systems shown here use pseudo-Sorbian. DE-HSB systems here have a 20k vocabulary, while HSB-DE systems have a 10k vocabulary. BLEU scores are reported on the devel test set. The final line shows the submitted primary systems and their performance on devel test.</td></tr></table>",
"text": "",
"html": null
},
"TABREF5": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null
},
"TABREF7": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Primary Upper Sorbian-German ensemble submission BLEU score on devel test, along with the scores of its individual component systems. The numbers correspond to the list in Section 6.5.</td></tr></table>",
"text": "",
"html": null
},
"TABREF9": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Comparison of approaches to creating pseudo-Sorbian corpora for pre-training, by word-level or character-level replacement of Czech text, at different vocabulary sizes. All scores are BLEU scores on the devel test set, in the HSB-DE direction.",
"html": null
}
}
}
}