{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:29:52.278855Z"
},
"title": "AdapNMT : Neural Machine Translation with Technical Domain Adaptation for Indic Languages",
"authors": [
{
"first": "Hema",
"middle": [],
"last": "Ala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT-Hyderabad",
"location": {
"country": "India"
}
},
"email": "hema.ala@research.iiit.ac.in"
},
{
"first": "Dipti",
"middle": [],
"last": "Misra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LTRC",
"location": {
"region": "Hyderabad",
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Adapting to a new domain is a highly challenging task for Neural Machine Translation (NMT). In this paper we show the capability of general-domain machine translation when translating into Indic languages (English-Hindi and Hindi-Telugu), and low-resource domain adaptation of MT systems using existing general parallel data and small in-domain parallel data for the AI and Chemistry domains. We carried out our experiments using Byte Pair Encoding (BPE), as it addresses the rare-word problem. We observed that adding a small amount of in-domain data to the general data improves the BLEU score significantly.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Adapting to a new domain is a highly challenging task for Neural Machine Translation (NMT). In this paper we show the capability of general-domain machine translation when translating into Indic languages (English-Hindi and Hindi-Telugu), and low-resource domain adaptation of MT systems using existing general parallel data and small in-domain parallel data for the AI and Chemistry domains. We carried out our experiments using Byte Pair Encoding (BPE), as it addresses the rare-word problem. We observed that adding a small amount of in-domain data to the general data improves the BLEU score significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Because Neural Machine Translation (NMT) performs better than traditional statistical machine translation (SMT) models, it has become very popular in recent years. NMT systems require a large amount of training data and thus perform poorly relative to phrase-based machine translation (PBMT) systems in low-resource and domain adaptation scenarios (Koehn and Knowles, 2017) . One of the challenges in NMT is domain adaptation, which becomes even harder for low-resource Indic languages and technical domains such as Artificial Intelligence (AI) and Chemistry, as these domains may contain many technical terms, equations, etc. In a typical domain adaptation setup like ours, we have a large amount of out-of-domain bilingual training data on which we train an NMT model; we can treat this as a baseline model. Given only an additional small amount of in-domain data, the challenge is to improve translation performance on the new domain. Domain adaptation has become very popular recently, but very few works have addressed technical domains such as chemistry and computer science. We therefore adopted two new technical domains in our experiments, Artificial Intelligence and Chemistry, provided by the ICON AdapMT 2020 shared task for the English-Hindi and Hindi-Telugu language pairs. In our approach, we first train general (baseline) models on general data only and test domain data (AI, Chemistry) on these general models; we then try to improve performance on the new domain by training another model on the combined training data (general data + domain data). Inspired by (Sennrich et al., 2015) , we encode rare and unknown words as sequences of subword units using Byte Pair Encoding (BPE) in order to make our NMT model capable of open-vocabulary translation; this is discussed further in section 3.2.",
"cite_spans": [
{
"start": 381,
"end": 406,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF7"
},
{
"start": 1689,
"end": 1712,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain adaptation has become an active research topic in NMT. Freitag and Al-Onaizan (2016) proposed two approaches: continue training the baseline (general) model on only the in-domain data, and ensemble the continued model with the baseline model at decoding time. Zeng et al. (2019) proposed an iterative dual domain adaptation framework for NMT, which continuously exploits the mutual complementarity between in-domain and out-of-domain corpora for translation knowledge transfer. Apart from these domain adaptation techniques, there are approaches that exploit domain terminology in NMT. Hasler et al. (2018) proposed an approach to NMT decoding with terminology constraints using decoder attention, which enables reduced output duplication and better constraint placement compared to existing methods. Beyond traditional approaches, there is a stack-based lattice search algorithm; constraining its search space with lattices generated by phrase-based machine translation (PBMT) improves robustness (Khayrallah et al., 2017) . Wang et al. (2017) proposed two instance weighting methods with a dynamic weight learning strategy for NMT domain adaptation.",
"cite_spans": [
{
"start": 62,
"end": 91,
"text": "Freitag and Al-Onaizan (2016)",
"ref_id": "BIBREF2"
},
{
"start": 291,
"end": 309,
"text": "Zeng et al. (2019)",
"ref_id": "BIBREF15"
},
{
"start": 651,
"end": 671,
"text": "Hasler et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 1070,
"end": 1095,
"text": "(Khayrallah et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 1098,
"end": 1116,
"text": "Wang et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background & Motivation",
"sec_num": "2"
},
{
"text": "Although a huge amount of research exists in this area, very few works exist on Indian languages. To our knowledge, there is no prior work on technical domains like ours (Artificial Intelligence and Chemistry). Therefore, there is a need to handle these technical domains and to work on morphologically rich, resource-poor languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background & Motivation",
"sec_num": "2"
},
{
"text": "There are many approaches to domain adaptation, as discussed in section 2. The approach we adopted falls under combining the training data of the general domain with the specific technical domain data; this is discussed further in section 3.3. Our approach follows an attention-based NMT implementation similar to Luong et al. (2015) . Our model is very similar to the model described in Luong et al. (2015) and supports label smoothing, beam-search decoding, and random sampling. A brief explanation of NMT is given in section 3.1.",
"cite_spans": [
{
"start": 313,
"end": 332,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF8"
},
{
"start": 392,
"end": 411,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "An NMT system tries to find the conditional probability of a target sentence given the source sentence; in our case the targets are Indic languages. There are many ways to parameterize this conditional probability. Kalchbrenner and Blunsom (2013) used a combination of a convolutional neural network and a recurrent neural network, Sutskever et al. (2014) used a deep Long Short-Term Memory (LSTM) model, Cho et al. (2014) used an architecture similar to the LSTM, and Bahdanau et al. (2014) used a more elaborate neural network architecture with an attentional mechanism over the input sequence. In this work, following Luong et al. (2015) and Sutskever et al. (2014) , we used LSTM architectures for our NMT models: one LSTM encodes the input sequence and a separate LSTM outputs the translation. The encoder reads the source sentence one word at a time and produces a large vector that represents the entire source sentence. The decoder is initialized with this vector and generates a translation one word at a time, until it emits the end-of-sentence symbol. For better translations we use a bi-directional LSTM (Bahdanau et al., 2014) and the attention mechanism described in Luong et al. (2015) .",
"cite_spans": [
{
"start": 605,
"end": 624,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF8"
},
{
"start": 629,
"end": 652,
"text": "Sutskever et al. (2014)",
"ref_id": "BIBREF12"
},
{
"start": 1173,
"end": 1192,
"text": "Luong et al. (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3.1"
},
{
"text": "BPE (Gage, 1994) is a data compression technique that iteratively replaces the most frequent pair of bytes in a sequence. We use this algorithm for word segmentation: by merging frequent pairs of character sequences we can obtain a vocabulary of the desired size (Sennrich et al., 2015) . Telugu and Hindi are morphologically rich languages, and Telugu in particular is agglutinative, so postpositions, compound words, etc. need to be handled. BPE helps here by separating suffixes, prefixes, and compound words, and it handles new and complex Telugu and Hindi words by interpreting them as subword units. NMT with Byte Pair Encoding has yielded significant improvements in translation quality for low-resource, morphologically rich languages (Pinnis et al., 2017) . We adopted the same for our experiments on both language pairs, English-Hindi and Hindi-Telugu. In our approach we obtained the best results with a vocabulary size of 20000 and an embedding dimension of 300.",
"cite_spans": [
{
"start": 4,
"end": 16,
"text": "(Gage, 1994)",
"ref_id": "BIBREF3"
},
{
"start": 248,
"end": 271,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF11"
},
{
"start": 754,
"end": 775,
"text": "(Pinnis et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Byte Pair Encoding (BPE)",
"sec_num": "3.2"
},
{
"text": "Freitag and Al-Onaizan (2016) discussed two problems with combining general data and domain data for training. First, training a neural machine translation system on large data sets can take several weeks, and training a new model on the combined training data is time-consuming. Second, since the in-domain data is relatively small, the out-of-domain data will tend to dominate the training data, and hence the learned model will not perform as well on the in-domain test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Technical Domain Adaptation",
"sec_num": "3.3"
},
{
"text": "Nevertheless, we preferred that approach, as our target languages are morphologically rich and resource-poor. We addressed the two problems discussed in Freitag and Al-Onaizan (2016) as follows. First, since our main objective is to use the small amount of available technical domain data (AI and Chemistry) along with the general data to improve translation of the given domain test data, adding a very small amount of data does not make training much more time-consuming, because the general data itself is limited for these morphologically rich languages (Telugu and Hindi).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Technical Domain Adaptation",
"sec_num": "3.3"
},
{
"text": "To address the second problem, we use BPE. The technical domain data is very small compared to the general data, so if we take the top 50k words as our vocabulary, most of the words will come from the general data, which leads to poor translation of domain data. To overcome this we used BPE, as it operates on subword units, handles rare words, and can easily recognize inflected words, which are prevalent in morphologically rich languages. Because the technical domain data is very small, performing validation on the combined data (general validation data + domain validation data) would lead to low translation quality on domain test data. Therefore we used only domain data for validation and obtained a significant improvement in BLEU score on the domain test data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Technical Domain Adaptation",
"sec_num": "3.3"
},
{
"text": "We evaluate our approach on the test data sets provided by the ICON AdapMT 2020 shared task for all language pairs and all domains. Data statistics are shown in table 1. All the sentences presented in table 1 are taken from various sources provided by ICON AdapMT 2020, including OpenSubtitles, GlobalVoices, GNOME, etc. from the OPUS corpus (Tiedemann, 2012) . After collecting the data from the above-mentioned sources, the training/validation split was made based on corpus size, and empty lines were removed. To measure translation quality we used the automatic evaluation metric BLEU (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 336,
"end": 353,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF13"
},
{
"start": 595,
"end": 618,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and Results",
"sec_num": "4"
},
{
"text": "We have three models for each language pair: 1. a baseline model trained on general data, 2. a model trained on general + AI data, and 3. a model trained on general + Chemistry data. For statistics on training and validation sentences, refer to table 1. We followed (Luong et al., 2015) while training our NMT systems. Our parameters are uniformly initialized in [-0.1, 0.1]. We used a standard embedding dimension, i.e. 300. Since we have a comparatively small amount of data (including the general data), we preferred a small batch size of 10. We start with a learning rate of 0.001 and halve it every 5 epochs. Additionally, we use dropout with probability 0.3. To avoid overfitting, we used an early stopping criterion, which is a form of regularization.",
"cite_spans": [
{
"start": 236,
"end": 256,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Details",
"sec_num": "4.1"
},
{
"text": "AI-En-Hi 8.4, Chem-En-Hi 6, AI-Hi-Te 0.6, Chem-Hi-Te 0.03",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain BLEU (on val)",
"sec_num": null
},
{
"text": "We conducted an evaluation of random sentences from the test data for both domains and found that the translation of domain/technical terms and named entities improved after adding a small amount of technical domain data to the general data; some examples for English to Hindi, for the AI and Chemistry domains respectively, are shown in table 4. In the first example in table 4, taken from the AI domain, the domain term \"square function\" was translated properly into \"\u0938\u094d\u0915\u094d\u0935\u0947\u0930 \u092b\u0902\u0915\u094d\u0936\u0928\" (skver phankshan) when tested on our proposed model. The same happened for the Chemistry domain: our model translated the domain terms \"enzyme immunoassay\" and \"radioimmunoassay\" correctly, whereas the general model did not. To show the improvement in terms of BLEU score, we first tested our AI and Chemistry validation data on the general model trained only on general data, and then tested the same validation data on our proposed models trained on the combined data (general + domain). Once we obtained improvements on the validation data over the general model, we fixed the parameters of the model as mentioned in section 3.3 for testing. Table 2 shows the BLEU scores of the AI and Chemistry validation data on the English-Hindi and Hindi-Telugu general models. When we test that validation data on the proposed models (table 3) , the BLEU score of the Chemistry validation data improves from 6 to 19.6 for English to Hindi, an increase of more than three times. Similarly for AI, the BLEU score increases from 8.4 to 16 for English to Hindi. For Hindi to Telugu, the BLEU score increases from 0.6 to 8.2 for the AI domain and from 0.03 to 5.7 for the Chemistry domain. Finally, we evaluated the domain test data on the proposed models AI-En-Hi, Chem-En-Hi, AI-Hi-Te, and Chem-Hi-Te; refer to table 3 for BLEU scores on the test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 1184,
"end": 1191,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1359,
"end": 1368,
"text": "(table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "4.2"
},
{
"text": "We would like to extend this work to other technical domains and to more languages. We plan to explore further approaches, such as Transformer-based models, for technical domain adaptation, and to incorporate linguistic features into the NMT models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "5"
},
{
"text": "For morphologically rich and resource-poor languages like Telugu, it is very difficult to obtain a large parallel corpus for a technical domain. Therefore, there is a need to adapt our general models with the small amount of available domain data. In this paper we presented an approach that combines a small amount of technical domain data with the available general domain data and trains a model using BPE. For better translation quality on the technical domain we used only domain data for validation, and we observed that our approach gives promising results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine transla- tion. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Fast domain adaptation for neural machine translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1612.06897"
]
},
"num": null,
"urls": [],
"raw_text": "Markus Freitag and Yaser Al-Onaizan. 2016. Fast domain adaptation for neural machine transla- tion. arXiv preprint arXiv:1612.06897.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A new algorithm for data compression",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Gage",
"suffix": ""
}
],
"year": 1994,
"venue": "C Users Journal",
"volume": "12",
"issue": "2",
"pages": "23--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Gage. 1994. A new algorithm for data com- pression. C Users Journal, 12(2):23-38.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Neural machine translation decoding with terminology constraints",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Hasler",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "Gonzalo",
"middle": [],
"last": "Iglesias",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.03750"
]
},
"num": null,
"urls": [],
"raw_text": "Eva Hasler, Adri\u00e0 De Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural machine trans- lation decoding with terminology constraints. arXiv preprint arXiv:1805.03750.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1700--1709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Re- current continuous translation models. In Pro- ceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700-1709.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural lattice search for domain adaptation in machine translation",
"authors": [
{
"first": "Huda",
"middle": [],
"last": "Khayrallah",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "20--25",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huda Khayrallah, Gaurav Kumar, Kevin Duh, Matt Post, and Philipp Koehn. 2017. Neural lattice search for domain adaptation in machine translation. In Proceedings of the Eighth Inter- national Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 20- 25.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03872"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. arXiv preprint arXiv:1706.03872.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christo- pher D Manning. 2015. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for auto- matic evaluation of machine translation. In Pro- ceedings of the 40th annual meeting of the As- sociation for Computational Linguistics, pages 311-318.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Neural machine translation for morphologically rich languages with improved sub-word units and synthetic data",
"authors": [
{
"first": "M\u0101rcis",
"middle": [],
"last": "Pinnis",
"suffix": ""
},
{
"first": "Rihards",
"middle": [],
"last": "Kri\u0161lauks",
"suffix": ""
},
{
"first": "Daiga",
"middle": [],
"last": "Deksne",
"suffix": ""
},
{
"first": "Toms",
"middle": [],
"last": "Miks",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Text, Speech, and Dialogue",
"volume": "",
"issue": "",
"pages": "237--245",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M\u0101rcis Pinnis, Rihards Kri\u0161lauks, Daiga Deksne, and Toms Miks. 2017. Neural machine trans- lation for morphologically rich languages with improved sub-word units and synthetic data. In International Conference on Text, Speech, and Dialogue, pages 237-245. Springer.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information pro- cessing systems, pages 3104-3112.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Parallel data, tools and interfaces in opus",
"authors": [
{
"first": "Jorg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorg Tiedemann. 2012. Parallel data, tools and interfaces in opus. In Proceedings of the Eight International Conference on Language Re- sources and Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Associ- ation (ELRA).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Instance weighting for neural machine translation domain adaptation",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Lemao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kehai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1482--1488",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Processing, pages 1482-1488.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Iterative dual domain adaptation for neural machine translation",
"authors": [
{
"first": "Jiali",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Yubin",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Yaojie",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yongjing",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Jiebo",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.07239"
]
},
"num": null,
"urls": [],
"raw_text": "Jiali Zeng, Yang Liu, Jinsong Su, Yubin Ge, Yaojie Lu, Yongjing Yin, and Jiebo Luo. 2019. Itera- tive dual domain adaptation for neural machine translation. arXiv preprint arXiv:1912.07239.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Data statistics (no. of sentences). Val: validation data; Gen: general data for that language pair."
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">: BLEU scores of AI and Chemistry vali-</td></tr><tr><td colspan=\"3\">dation data on general models (trained on only</td></tr><tr><td colspan=\"2\">general data) for respective language pairs</td><td/></tr><tr><td>Model</td><td colspan=\"2\">BLEU(on val) BLEU(on test)</td></tr><tr><td>AI-En-Hi</td><td>16</td><td>15.37</td></tr><tr><td>Chem-En-Hi</td><td>19.6</td><td>12.35</td></tr><tr><td>AI-Hi-Te</td><td>8.2</td><td>10.35</td></tr><tr><td>Chem-Hi-Te</td><td>5.7</td><td>6.87</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table><tr><td>: AI-En-Hi:trained on ai+gen data for</td></tr><tr><td>English-Hindi AI-Hi-Te:trained on ai+gen data for</td></tr><tr><td>Hindi-Telugu Chem-En-Hi:trained on chem+gen</td></tr><tr><td>data for English-Hindi Chem-Hi-Te:trained on</td></tr><tr><td>chem+gen data for Hindi-Telugu</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF4": {
"num": null,
"html": null,
"content": "<table><tr><td>: Examples of improved sentences</td></tr><tr><td>MT1 : output of general model(trained on only general data)</td></tr><tr><td>MT2 : output of proposed model(trained on general+domain data)</td></tr></table>",
"type_str": "table",
"text": ""
}
}
}
}