{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:33:52.252134Z"
},
"title": "TMU NMT System with Japanese BART for the Patent task of WAT 2021",
"authors": [
{
"first": "Hwichan",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {
"addrLine": "6-6 Asahigaoka",
"postCode": "191-0065",
"settlement": "Hino",
"region": "Tokyo",
"country": "Japan"
}
},
"email": "kim-hwichan@ed.tmu.ac.jp"
},
{
"first": "Mamoru",
"middle": [],
"last": "Komachi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tokyo Metropolitan University",
"location": {
"addrLine": "6-6 Asahigaoka",
"postCode": "191-0065",
"settlement": "Hino",
"region": "Tokyo",
"country": "Japan"
}
},
"email": "komachi@tmu.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we introduce our TMU Neural Machine Translation (NMT) system submitted for the Patent task (Korean Japanese and English Japanese) of 8th Workshop on Asian Translation (Nakazawa et al., 2021). Recently, several studies proposed pre-trained encoder-decoder models using monolingual data. One of the pre-trained models, BART (Lewis et al., 2020), was shown to improve translation accuracy via fine-tuning with bilingual data. However, they experimented only Romanian\u2192English translation using English BART. In this paper, we examine the effectiveness of Japanese BART using Japan Patent Office Corpus 2.0. Our experiments indicate that Japanese BART can also improve translation accuracy in both Korean Japanese and English Japanese translations.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we introduce our TMU Neural Machine Translation (NMT) system submitted for the Patent task (Korean Japanese and English Japanese) of 8th Workshop on Asian Translation (Nakazawa et al., 2021). Recently, several studies proposed pre-trained encoder-decoder models using monolingual data. One of the pre-trained models, BART (Lewis et al., 2020), was shown to improve translation accuracy via fine-tuning with bilingual data. However, they experimented only Romanian\u2192English translation using English BART. In this paper, we examine the effectiveness of Japanese BART using Japan Patent Office Corpus 2.0. Our experiments indicate that Japanese BART can also improve translation accuracy in both Korean Japanese and English Japanese translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural Machine Translation (NMT) has achieved high translation accuracy in large-scale data conditions. However, translation accuracy of NMT drops in the lack of bilingual data (Koehn and Knowles, 2017) . There are several approaches such as backtranslation (Sennrich et al., 2016) and transfer learning (Zoph et al., 2016) to address this problem. Furthermore, in addition to these methods, there are some approaches to use pre-trained models using only monolingual data.",
"cite_spans": [
{
"start": 177,
"end": 202,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF4"
},
{
"start": 258,
"end": 281,
"text": "(Sennrich et al., 2016)",
"ref_id": "BIBREF12"
},
{
"start": 304,
"end": 323,
"text": "(Zoph et al., 2016)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "BERT (Devlin et al., 2019) , which is the most typical pre-trained model, can boost the accuracy of many downstream tasks compared to models without BERT via fine-tuning with the task-specific training data. However, applying BERT to NMT in fine-tuning form like the other tasks requires two-stage optimization and does not provide significant improvement (Imamura and Sumita, 2019) . Recently, several studies proposed pre-trained encoder-decoder models using a monolingual data. proposed BART, which is one of the pre-trained encoder-decoder models. They demonstrated that BART works well for not only comprehension tasks such as GLEU (Wang et al., 2018) and SQuAD (Rajpurkar et al., 2016) but also text generation tasks such as text summarization and translation. However, they reported only the effect of English BART, so they did not investigate BART trained by monolingual data of another language. Furthermore, in the translation task, they experimented with only Romanian\u2192English translation, which have subword overlap. Therefore, the effect in translations between language pairs without subword overlapping is not clear. Furthermore, they did not experiment in translation direction where the source language matches the language of the pre-trained model.",
"cite_spans": [
{
"start": 5,
"end": 26,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 356,
"end": 382,
"text": "(Imamura and Sumita, 2019)",
"ref_id": "BIBREF2"
},
{
"start": 637,
"end": 656,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 667,
"end": 691,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Additionally, we consider that fine-tuning pretraining models such as BART in translation task is similar to transfer learning (Zoph et al., 2016) . Transfer learning in NMT is a method that trains the network of the parent language pair (the parent model) as the initial network and then fine-tunes it for the child language pair (the child model). In the terminology of transfer learning, the pretrained BART and fine-tuned model are the parent model and child model, respectively. Previous studies have shown that transfer learning works most efficiently when the source languages of the parent and child models are syntactically similar (Dabre et al., 2017; Nguyen and Chiang, 2017) . Therefore, we hypothesize that BART is more effective when the language pair for fine-tuning is syntactically similar to the pre-training language.",
"cite_spans": [
{
"start": 127,
"end": 146,
"text": "(Zoph et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 641,
"end": 661,
"text": "(Dabre et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 662,
"end": 686,
"text": "Nguyen and Chiang, 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we examine the effects of Japanese BART on the translation task. We use Korean/Japanese and English/Japanese bilingual data of Japan Patent Office Patent Corpus 2.0 (JPO corpus) for fine-tuning. We also experiment in both translation directions of Ko Ja and En Ja. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are some approaches pre-trained encoder models like BERT (Devlin et al., 2019) to the NMT task. Imamura and Sumita (2019) used BERT as an encoder and demonstrated the effectiveness of two-stage optimization, which first trains parameters without BERT encoder, and then fine-tunes all parameters. Zhu et al. (2020) used BERT representations as input embedding and showed more effectiveness than using BERT as the encoder. Recently, several studies proposed pre-trained encoder-decoder models such as MASS (Song et al., 2019) and BART , and these models can improve the translation accuracy via fine-tuning with bilingual data. MASS (Song et al., 2019) uses monolingual data from both the source and target languages for pre-training when applying to the NMT. On the contrary, BART uses only monolingual data of target language, unlike MASS. trained multilingual BART (mBART) using monolingual data of 25 languages. They indicated that mBART initialization leads significant gains in low resource settings. However, Wang and Htun (2020) showed that mBART cannot obtain improvements in the Patent task.",
"cite_spans": [
{
"start": 63,
"end": 84,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF1"
},
{
"start": 102,
"end": 127,
"text": "Imamura and Sumita (2019)",
"ref_id": "BIBREF2"
},
{
"start": 302,
"end": 319,
"text": "Zhu et al. (2020)",
"ref_id": "BIBREF16"
},
{
"start": 510,
"end": 529,
"text": "(Song et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 637,
"end": 656,
"text": "(Song et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 1020,
"end": 1040,
"text": "Wang and Htun (2020)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 Experimental Settings",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this study, we use Japanese BART 1 base v1.1 (JaBART) trained using Japanese Wikipedia sentences (18M sentences). For fine-tuning, we do not use an additional encoder like in Lewis et al. (2020)'s method. Instead, we add randomly initialized embeddings for each unknown subword in JaBART to both encoder and decoder. We share the embeddings of characters that match across languages, such as numbers and units. We also train baseline models consisting of the same architecture as that of JaBART. We use the same hyperparameters indicated in Table 2 for both finetuning JaBART and training the baseline model. We fine-tune and train the models using the fairseq implementation 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 544,
"end": 551,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "3.1"
},
{
"text": "To train and fin-tune the models, we use Ko-Ja and En-Ja datasets of JPO corpus. Korean and English have almost no subword overlaps with Japanese, because these languages use Hangul, Latin alphabets, and Hiragana/Katakana/Kanji characters, respectively. For Japanese pre-processing, we use JaBART tokenizer. For Korean and English, we tokenize sentences using MeCab-ko 3 and Moses scripts 4 , respectively. Then, we apply the Senten-cePiece (Kudo and Richardson, 2018) with a 32k vocabulary size. Table 3 : BLEU / RIBES scores of each single and ensemble of three models. The scores of single are the average of the three models. We indicate the best scores in bold. The scores of \u2206 indicate the gains of the fine-tuned JaBART's BLEU score over the baseline model. velopment, and test 5 data statics. Table 3 shows that the BLEU and RIBES scores of each single and ensemble model. In the single model, the fine-tuned JaBART achieves the highest scores for dev and test data in both language pairs and translation directions of Ko Ja and En Ja. Specifically, the BLEU scores of the dev and test data reveal improvements of 0.440-1.350 and 1.013-1.250 from the baseline models, respectively. The RIBES scores also reveal improvements of 0.001-0.007, but there is no significant difference between the fine-tuned BART and baseline models.",
"cite_spans": [
{
"start": 441,
"end": 468,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 3",
"ref_id": null
},
{
"start": 801,
"end": 808,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.2"
},
{
"text": "In the ensemble model 6 , the fine-tuned JaBART improves the BLEU and RIBES scores approximately 0.440-0.850 and 0.001-0.008, respectively, in the dev and test of Ko Ja and Ja\u2192En translations. However, in En\u2192Ja translation, the BLEU score of the fine-tuned JaBART decreases 0.09 in the dev and improves 0.240 in the test data. Thus, in the ensemble scenario, the fine-tuned JaBART model can improve translation accuracy except for En\u2192Ja translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3.3"
},
{
"text": "In this paper, we described our NMT system submitted to the Patent task (Ko Ja and En Ja) of the 8th Workshop on Asian Translation. We compared the baseline and fine-tuned JaBART models, and demonstrated that the fine-tuned JaBART achieves consistent improvements of BLEU scores in language pairs with no subword overlapping, and irrespective of translation directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Contrary to our hypothesis, our experiments indicated no significant difference in the translation accuracy depending on the syntactic similarity. However, we consider that there are some differences in another aspect such as training process per epoch and network representations. Therefore, we attempt to analyze BART fine-tuned using language pairs with varying syntactic proximities in detail in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://github.com/utanaka2000/fairseq 3 https://bitbucket.org/eunjeon/mecab-ko 4 https://github.com/moses-smt/mosesdecodertree/ RELEASE-4.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this study, we use test-n data, a union of test-n1, test-n2, and test-n3 data, for evaluation.6 We submitted the En Ja ensemble models as the target for human evaluation.4 DiscussionsWe hypothesize that JaBART is more effective when the language pair for fine-tuning is syntactically similar to the pre-training language, as in transfer learning. In our experimental settings, Korean and English are syntactically similar and different languages with Japanese, respectively 7 . Therefore, we expect that JaBART is more effective in the Ko Ja translations than in the En Ja translations. However,Table 3shows no significant differences in \u2206 scores between the Ko Ja and En Ja translations. These results indicate that syntactic similarity does not affect the enhancement in the final BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Japanese and Korean are SOV and agglutinative languages, whereas English is SVO and fusional language(Masayoshi, 1990;Jeong et al., 2007).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been partly supported by the programs of the Grant-in-Aid for Scientific Research from the Japan Society for the Promotion of Science (JSPS KAKENHI) Grant Number 19K12099.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An empirical study of language relatedness for transfer learning in neural machine translation",
"authors": [
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "282--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raj Dabre, Tetsuji Nakagawa, and Hideto Kazawa. 2017. An empirical study of language relatedness for transfer learning in neural machine translation. In Proceedings of the 31st Pacific Asia Conference on Language, Information and Computation, pages 282-286.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recycling a pre-trained BERT encoder for neural machine translation",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Imamura",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation",
"volume": "",
"issue": "",
"pages": "23--31",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5603"
]
},
"num": null,
"urls": [],
"raw_text": "Kenji Imamura and Eiichiro Sumita. 2019. Recycling a pre-trained BERT encoder for neural machine trans- lation. In Proceedings of the 3rd Workshop on Neu- ral Generation and Translation, pages 23-31.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Effect of syntactic similarity on cortical activation during second language processing: a comparison of English and Japanese among native Korean trilinguals",
"authors": [
{
"first": "Hyeonjeong",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Motoaki",
"middle": [],
"last": "Sugiura",
"suffix": ""
},
{
"first": "Yuko",
"middle": [],
"last": "Sassa",
"suffix": ""
},
{
"first": "Tomoki",
"middle": [],
"last": "Haji",
"suffix": ""
},
{
"first": "Nobuo",
"middle": [],
"last": "Usui",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Taira",
"suffix": ""
},
{
"first": "Kaoru",
"middle": [],
"last": "Horie",
"suffix": ""
},
{
"first": "Shigeru",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Ryuta",
"middle": [],
"last": "Kawashita",
"suffix": ""
}
],
"year": 2007,
"venue": "Human Brain Mapping",
"volume": "28",
"issue": "3",
"pages": "195--204",
"other_ids": {
"DOI": [
"10.1002/hbm.20269"
]
},
"num": null,
"urls": [],
"raw_text": "Hyeonjeong Jeong, Motoaki Sugiura, Yuko Sassa, Tomoki Haji, Nobuo Usui, Masato Taira, Kaoru Horie, Shigeru Sato, and Ryuta Kawashita. 2007. Effect of syntactic similarity on cortical activation during second language processing: a comparison of English and Japanese among native Korean trilin- guals. Human Brain Mapping, 28(3):195-204.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3204"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7871--7880",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.703"
]
},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00343"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transac- tions of the Association for Computational Linguis- tics, 8:726-742.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Languages of Japan",
"authors": [
{
"first": "Shibatani",
"middle": [],
"last": "Masayoshi",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shibatani Masayoshi. 1990. The Languages of Japan. Cambridge University Press.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the 8th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Higashiyama",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kaori",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 8th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukut- tan, Shantipriya Parida, Ond\u0159ej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, and Sadao Oda, Yusuke Kurohashi. 2021. Overview of the 8th work- shop on Asian translation. In Proceedings of the 8th Workshop on Asian Translation.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Transfer learning across low-resource, related languages for neural machine translation",
"authors": [
{
"first": "Toan",
"middle": [
"Q."
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "296--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toan Q. Nguyen and David Chiang. 2017. Trans- fer learning across low-resource, related languages for neural machine translation. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 296-301.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2383--2392",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1264"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing, pages 2383-2392. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1009"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "MASS: Masked sequence to sequence pre-training for language generation",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "5926--5936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. MASS: Masked sequence to se- quence pre-training for language generation. In Pro- ceedings of the 36th International Conference on Machine Learning, pages 5926-5936.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Bowman",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "353--355",
"other_ids": {
"DOI": [
"10.18653/v1/W18-5446"
]
},
"num": null,
"urls": [],
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Pro- ceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 353-355. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Goku's participation in WAT 2020",
"authors": [
{
"first": "Dongzhe",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ohnmar",
"middle": [],
"last": "Htun",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "135--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongzhe Wang and Ohnmar Htun. 2020. Goku's par- ticipation in WAT 2020. In Proceedings of the 7th Workshop on Asian Translation, pages 135-141.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Incorporating BERT into neural machine translation",
"authors": [
{
"first": "Jinhua",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Lijun",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wengang",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Houqiang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tieyan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2020. Incorporating BERT into neural machine translation.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "International Conference on Learning Representations",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "In International Conference on Learning Represen- tations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Data statistics.",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"text": "",
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"text": "400\u00b1.080 / -71.510\u00b1.166 / 0.947\u00b1.001 67.816\u00b1.028 / -71.103\u00b1.144 / 0.942\u00b1.001 JaBART 68.750\u00b1.104 / -72.760\u00b1.140 / 0.949\u00b1.000 68.563\u00b1.065 / -72.116\u00b1.060 / 0.946\u00b1.001",
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>presents the training, de-</td></tr></table>"
}
}
}
}