{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:43:39.749043Z"
},
"title": "The TALP-UPC System Description for WMT20 News Translation Task: Multilingual Adaptation for Low Resource MT",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Escolano",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": "carlos.escolano@upc.edu"
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": ""
},
{
"first": "Jos\u00e9",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "TALP Research Center Universitat Polit\u00e8cnica de Catalunya",
"location": {
"settlement": "Barcelona"
}
},
"email": "jose.fonollosa@upc.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this article, we describe the TALP-UPC participation in the WMT20 news translation shared task for Tamil-English. Given the low amount of parallel training data, we resort to adapting the task to a multilingual system to benefit from positive transfer from high resource languages. We use iterative back-translation to fine-tune the system and benefit from the available monolingual data. To measure the effectiveness of these methods, we compare our results to a bilingual baseline system.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this article, we describe the TALP-UPC participation in the WMT20 news translation shared task for Tamil-English. Given the low amount of parallel training data, we resort to adapting the task to a multilingual system to benefit from positive transfer from high resource languages. We use iterative back-translation to fine-tune the system and benefit from the available monolingual data. To measure the effectiveness of these methods, we compare our results to a bilingual baseline system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Modern NMT systems such as the Transformer require large amounts of training data to obtain good generation results. For this reason, low resource languages represent a good opportunity to explore new techniques that use data more efficiently and benefit from available sources of data such as monolingual corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Among the languages proposed for the WMT20 news task, we present our results on the English-Tamil language pair. Tamil is an official language of India, Sri Lanka, and Singapore, with approximately 75 million native speakers. It belongs to the Dravidian family, which originated in Asia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two principal reasons make Tamil a challenging language for machine translation: its script and agglutination. Tamil's script consists of 12 vowels and 18 consonants plus one special character, allowing combinations of up to 247 possible characters. This is an order of magnitude more possible characters than the Latin script employed by most Western languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Also, through agglutination, suffixes can be added to root words to form new ones. In the context of machine translation, a single such word can correspond to multiple words in the target language, which may affect attention and decoding in NMT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This work discusses the system proposed for the evaluation in which we combine the use of multilingual parallel data with monolingual data to boost the performance of our proposed NMT system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Modern NMT systems benefit from having hundreds of thousands or even millions of parallel sentences. When working with low resource language pairs, the two main approaches are the use of monolingual corpora and multilingual NMT. While parallel data may be difficult to obtain for low resource languages, monolingual data is usually more readily available, as it does not require any additional labeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low Resource NMT",
"sec_num": "2"
},
{
"text": "A common approach to benefit from monolingual data is back-translation (Sennrich et al., 2016a), which consists of translating a monolingual corpus to generate synthetic corpora that can later be employed to continue training. Similar techniques create a synthetic pseudo-parallel corpus through a pivot language (Casas et al., 2019), which can then be trained on similarly to back-translated data when data is available between the desired language pair and a high resource pivot language. More recently, iterative back-translation (Hoang et al., 2018) was proposed. This technique allows the system to generate synthetic data while updating its parameters, so the quality of the synthetic data improves as the system trains. On the other hand, several works on multilingual NMT have shown benefits for low resource language pairs by allowing positive transfer from high resource languages, boosting the performance of the low resource ones. Different architectures have been proposed that show this behavior, from universal models where all parameters are shared between all languages (Johnson et al., 2017), to architectures that share a common device that maps representations into a shared representation space (Firat et al., 2016; Zhu et al., 2020), to architectures that do not share parameters (Schwenk and Douze, 2017).",
"cite_spans": [
{
"start": 71,
"end": 95,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF12"
},
{
"start": 314,
"end": 333,
"text": "(Casas et al., 2019",
"ref_id": "BIBREF0"
},
{
"start": 1066,
"end": 1088,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 1197,
"end": 1217,
"text": "(Firat et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 1218,
"end": 1235,
"text": "Zhu et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 1284,
"end": 1308,
"text": "Schwenk and Douze, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Low Resource NMT",
"sec_num": "2"
},
{
"text": "In the context of the WMT20 Tamil-English news shared task, as the provided parallel data is limited, we resorted to a combination of both proposed methods: incrementally training the new language pair into a multilingual NMT system using the provided parallel data, and later fine-tuning the system using iterative back-translation with monolingual corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Low Resource NMT",
"sec_num": "2"
},
{
"text": "Previous works (Choudhary et al., 2018) have shown that Indian languages are usually a challenge for NMT systems due to their difference in terms of vocabulary and grammar compared to western languages such as English. Also, standard preprocessing methods do not always work well with them, so specific solutions are required to obtain good results.",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "(Choudhary et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "In the context of NMT, previous systems, such as MIDAS (Choudhary et al., 2018), proved that the use of subword units leads to significant improvements in translation quality when applied to Tamil by preventing out-of-vocabulary words at generation time.",
"cite_spans": [
{
"start": 55,
"end": 79,
"text": "(Choudhary et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3"
},
{
"text": "All systems proposed in this work are constrained, using exclusively data provided by the task organization. The initial multilingual system was trained using Europarl v8 for all translation directions between English, French, Spanish, and German. For English-Tamil, we used PMIndia, Tanzil v1, the UFAL EnTam corpus, the NLPC UOM En-Ta corpus, WikiMatrix, and WikiTitles. As monolingual Tamil data, we used News Crawl, while for English, we used News-commentary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora and Data Preparation",
"sec_num": "4"
},
{
"text": "We processed all non-Tamil data using the Moses (Koehn et al., 2007) scripts provided by the organization. For each language, we applied punctuation normalization, tokenization, and true-casing. Then each language is independently tokenized using BPE (Sennrich et al., 2016b) with 32 thousand operations. Table 1 shows the statistics for each language. Tamil data has been tokenized at word level using Indic-NLP (Kunchukuttan, 2020). As test set, we used 1275 lines extracted from the development set provided by the organization, keeping the remaining ones as validation set.",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF8"
},
{
"start": 246,
"end": 270,
"text": "(Sennrich et al., 2016b)",
"ref_id": "BIBREF13"
},
{
"start": 403,
"end": 423,
"text": "(Kunchukuttan, 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 300,
"end": 307,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpora and Data Preparation",
"sec_num": "4"
},
{
"text": "In this section, we discuss the details of the pipeline followed to create the translation systems for this submission, including the multilingual supervised pretraining and the unsupervised fine-tuning using monolingual corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System Description",
"sec_num": "5"
},
{
"text": "Methodology. Following the model proposed in (Escolano et al., 2020), new languages can be added to the system without retraining it, using only parallel data to one of the initial languages. In this work, we added Tamil using the provided parallel data to English. To train the new Tamil to English translation direction, a new Tamil encoder is added to the system with the previous English decoder frozen, to prevent the model from affecting the performance of the remaining pairs. Training with the frozen decoder induces the new encoder to learn a representation similar to the ones already in the multilingual model. In addition, as the English decoder has been trained with more data from all the language pairs in the multilingual NMT system, we have positive transfer from the frozen modules to the new ones, boosting the translation performance compared to the bilingual NMT baseline. Following the same principles, the English-Tamil translation direction is trained by freezing the English encoder and training the Tamil decoder to force the shared representation. In this case, we also observe positive transfer compared to the baseline trained with just parallel data. Figure 1 shows the schema of the supervised pretraining just described.",
"cite_spans": [
{
"start": 45,
"end": 68,
"text": "(Escolano et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1195,
"end": 1201,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multilingual Supervised Pretraining",
"sec_num": "5.1"
},
{
"text": "Implementation. For this work, all encoders and decoders were implemented using the Transformer (Vaswani et al., 2017) architecture, with 6 layers, 8 attention heads, an embedding size of 512, and a feed-forward size of 2048 for each of them, using Fairseq's (Ott et al., 2019) 0.6 release. The multilingual NMT model was trained on a single NVIDIA TITAN XP for 50 thousand updates using the Adam optimizer with a learning rate of 0.001, 4000 warmup updates, and gradient accumulation over 16 batches of 2000 tokens. Adding the Tamil-English and English-Tamil directions to the system took approximately 45 thousand updates using the same parameters and GPU configuration.",
"cite_spans": [
{
"start": 96,
"end": 118,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 269,
"end": 287,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual Supervised Pretraining",
"sec_num": "5.1"
},
{
"text": "Methodology. The previous process benefited from the additional corpora in the multilingual NMT system, but as stated before, monolingual data is another common source of improvement for NMT systems. In this section, we discuss how we added monolingual data to the previously described model. To employ the available monolingual data in our system, we define an autoencoder using the already trained encoder and decoder modules for the given language. These modules are not trained to regenerate the input directly; instead, we introduce an adaptor between both modules, responsible for processing the generated representation and outputting a new one that the decoder can understand. Taking advantage of the architecture, we can use one of the decoders to greedily decode the generated representation and encode it back with one of the encoders, to compute the reconstruction of the monolingual input. Figure 1 showcases in \"unsupervised fine-tuning\" how this process is applied in our work, using Tamil monolingual data with an English adaptor and English data with the Tamil adaptor.",
"cite_spans": [],
"ref_spans": [
{
"start": 886,
"end": 894,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Monolingual Unsupervised Fine-tuning",
"sec_num": "5.2"
},
{
"text": "In this work, both the encoder and the adaptor were frozen, and only the final decoder was updated. As future work, the encoder could also be trained, improving the representations generated at each training epoch.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Unsupervised Fine-tuning",
"sec_num": "5.2"
},
{
"text": "Implementation. As with the rest of the architecture, this process was implemented using the same GPU and parameter configuration, in this case for approximately 6 thousand updates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Unsupervised Fine-tuning",
"sec_num": "5.2"
},
{
"text": "Once our model is fully trained, we apply an additional step of checkpoint averaging, in which the n checkpoints containing the weights of the network are combined using the default script provided by Fairseq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing",
"sec_num": "5.3"
},
{
"text": "In this work, given that the corpus was small, we saved a checkpoint every epoch (approximately 400 updates) and averaged the last 4 checkpoints saved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing",
"sec_num": "5.3"
},
{
"text": "Finally, to generate the final submissions, detruecasing and detokenization were applied to the English outputs using the scripts provided by Moses, while Indic-NLP detokenization was applied to the Tamil ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Post-processing",
"sec_num": "5.3"
},
{
"text": "The motivation for this work was to explore the combination of both positive transfer and monolingual data in a low resource task such as English-Tamil Translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "To test our hypothesis, we trained a bilingual baseline with just the parallel data available for the task and compared its results to an incremental system using adaptation to a multilingual NMT system and monolingual fine-tuning, to measure the impact of each technique on the final performance. All configurations have the same architecture and number of parameters and have been tested on the same 1275 lines extracted from the newsdev2020 Tamil-English set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "To introduce some context about the multilingual system, we evaluated its performance using newstest13 as test set. Performance ranged from 20.31 BLEU points for the English-German translation direction to 29.74 for English-French. When English is the target language, results vary from 24.54 for German-English to 27.75 for Spanish-English. Regarding the impact of positive transfer from multilingual NMT, Tables 4 and 3 show that both directions benefit from adding Tamil into the MNMT system, with improvements of 1.58 and 4.09 BLEU points respectively, approximately 40% better than the bilingual baseline in both directions.",
"cite_spans": [],
"ref_spans": [
{
"start": 438,
"end": 452,
"text": "Tables 4 and 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "When looking at the monolingual fine-tuning results, we can observe that the English to Tamil translation direction benefits more (2.65 BLEU) from the technique than the Tamil to English direction (1.02 BLEU). This difference in performance may be explained by the difference in the training of the two decoders. While the Tamil decoder has been trained with just the parallel data for the task, the English decoder was trained within the multilingual NMT system with more data available, which may make it more robust for fine-tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "Finally, looking at the checkpoint averaging results, in both directions it leads to a small improvement, less than 0.2 BLEU, showing limited impact on the final results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "6"
},
{
"text": "In this paper, we described the TALP-UPC participation in the WMT20 news translation shared task for Tamil-English. The motivation of this work was to explore the combination of multilingual transfer from high resource languages and monolingual data applied to low resource NMT. Our experiments showcase the effectiveness of adapting low resource languages to pre-trained multilingual systems and the positive transfer this introduces compared to a bilingual baseline system. They also show that monolingual data can be successfully introduced into the system and that it can boost performance. As future work, we could explore fine-tuning both encoder and decoder during the monolingual unsupervised fine-tuning in order to help the system produce better synthetic data as training takes place.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This work is supported in part by the Google Faculty Research Award 2019, the Spanish Ministerio de Ciencia e Innovaci\u00f3n, through the postdoctoral senior grant Ram\u00f3n y Cajal, and by the Agencia Estatal de Investigaci\u00f3n through the projects EUR2019-103819, PCIN-2017-079 and PID2019-107579RB-I00 / AEI / 10.13039/501100011033.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The TALP-UPC machine translation systems for WMT19 news translation task: Pivoting techniques for low resource MT",
"authors": [
{
"first": "Noe",
"middle": [],
"last": "Casas",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Fonollosa",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Escolano",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Basta",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Costa-Juss\u00e0",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "155--162",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5311"
]
},
"num": null,
"urls": [],
"raw_text": "Noe Casas, Jos\u00e9 A. R. Fonollosa, Carlos Escolano, Christine Basta, and Marta R. Costa-juss\u00e0. 2019. The TALP-UPC machine translation systems for WMT19 news translation task: Pivoting techniques for low resource MT. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 155-162, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Neural machine translation for English-Tamil",
"authors": [
{
"first": "Himanshu",
"middle": [],
"last": "Choudhary",
"suffix": ""
},
{
"first": "Aditya",
"middle": [
"Kumar"
],
"last": "Pathak",
"suffix": ""
},
{
"first": "Rajiv Ratan",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Ponnurangam",
"middle": [],
"last": "Kumaraguru",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "770--775",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6459"
]
},
"num": null,
"urls": [],
"raw_text": "Himanshu Choudhary, Aditya Kumar Pathak, Ra- jiv Ratan Saha, and Ponnurangam Kumaraguru. 2018. Neural machine translation for English-Tamil. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 770-775, Belgium, Brussels. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "From bilingual to multilingual neural machine translation by incremental training",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Escolano",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop",
"volume": "",
"issue": "",
"pages": "236--242",
"other_ids": {
"DOI": [
"10.18653/v1/P19-2033"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Escolano, Marta R. Costa-juss\u00e0, and Jos\u00e9 A. R. Fonollosa. 2019. From bilingual to multilingual neu- ral machine translation by incremental training. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics: Student Re- search Workshop, pages 236-242, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multilingual machine translation: Closing the gap between shared and language-specific encoder-decoders",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Escolano",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Fonollosa",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Artetxe",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.06575"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Escolano, Marta R Costa-juss\u00e0, Jos\u00e9 AR Fonol- losa, and Mikel Artetxe. 2020. Multilingual ma- chine translation: Closing the gap between shared and language-specific encoder-decoders. arXiv preprint arXiv:2004.06575.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "From bilingual to multilingual neuralbased machine translation by incremental training",
"authors": [
{
"first": "Carlos",
"middle": [],
"last": "Escolano",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Jos\u00e9",
"middle": [
"A R"
],
"last": "Fonollosa",
"suffix": ""
}
],
"year": null,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1002/asi.24395"
]
},
"num": null,
"urls": [],
"raw_text": "Carlos Escolano, Marta R. Costa-Juss\u00e0, and Jos\u00e9 A. R. Fonollosa. From bilingual to multilingual neural- based machine translation by incremental training. Journal of the Association for Information Science and Technology, n/a(n/a).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Zero-resource translation with multi-lingual neural machine translation",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Fatos",
"middle": [
"T Yarman"
],
"last": "Vural",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "268--277",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1026"
]
},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, and Kyunghyun Cho. 2016. Zero-resource translation with multi-lingual neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 268-277, Austin, Texas.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Iterative backtranslation for neural machine translation",
"authors": [
{
"first": "Duy",
"middle": [],
"last": "Vu Cong",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {
"DOI": [
"10.18653/v1/W18-2703"
]
},
"num": null,
"urls": [],
"raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24, Mel- bourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL: Demo Papers",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the ACL: Demo Papers, pages 177-180.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The IndicNLP Library",
"authors": [
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anoop Kunchukuttan. 2020. The IndicNLP Library. https://github.com/anoopkunchukuttan/ indic_nlp_library/blob/master/docs/ indicnlp.pdf.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of NAACL-HLT 2019: Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning joint multilingual sentence representations with neural machine translation",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "157--167",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2619"
]
},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk and Matthijs Douze. 2017. Learn- ing joint multilingual sentence representations with neural machine translation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 157-167, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Language-aware interlingua for multilingual neural machine translation",
"authors": [
{
"first": "Changfeng",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Shanbo",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1650--1655",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.150"
]
},
"num": null,
"urls": [],
"raw_text": "Changfeng Zhu, Heng Yu, Shanbo Cheng, and Weihua Luo. 2020. Language-aware interlingua for multilingual neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1650-1655, Online. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Training pipeline. Step 1: Supervised pretraining, Step 2: Unsupervised fine-tuning.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"4\">corpus lang sentences words</td></tr><tr><td>DE-EN</td><td>DE EN</td><td>1758872 1758872</td><td>40265543 40265543</td></tr><tr><td>DE-ES</td><td>DE ES</td><td>1663458 1663458</td><td>37698204 40808518</td></tr><tr><td>DE-FR</td><td>DE FR</td><td>1681466 1681466</td><td>37410662 43056346</td></tr><tr><td>EN-ES</td><td>EN ES</td><td>1769606 1769606</td><td>41803882 43156309</td></tr><tr><td>EN-FR</td><td>EN FR</td><td>1770112 1770112</td><td>41211543 45196313</td></tr></table>",
"text": "and then tokenized with BPE with 16 thousand operations.",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">corpus lang set</td><td colspan=\"2\">sentences words</td></tr><tr><td>EN-TA</td><td>EN TA</td><td colspan=\"2\">train 494310 test 1275 train 494310 test 1275</td><td>7355160 29774 15163570 66564</td></tr><tr><td>EN</td><td>EN</td><td colspan=\"2\">train 608912</td><td>14995557</td></tr><tr><td>TA</td><td>TA</td><td colspan=\"2\">train 504320</td><td>6426186</td></tr></table>",
"text": "Corpus statistics in number of words and sentences for the language pairs of the Multilingual initial system.",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table/>",
"text": "",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table><tr><td>System</td><td colspan=\"2\">BLEU \u2206BLEU</td></tr><tr><td>Baseline</td><td>6.51</td><td>-</td></tr><tr><td>Multilingual</td><td>10.6</td><td>4.09</td></tr><tr><td>+ Mono</td><td>11.62</td><td>1.02</td></tr><tr><td colspan=\"2\">+ Checkpoint Avg 11.8</td><td>0.18</td></tr></table>",
"text": "Results measured in BLEU for the English-to-Tamil translation direction.",
"type_str": "table"
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Results measured in BLEU for the Tamil-to-English translation direction.",
"type_str": "table"
}
}
}
}