{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:33:55.714944Z"
},
"title": "On the Exploration of English to Urdu Machine Translation",
"authors": [
{
"first": "Sadaf",
"middle": [
"Abdul"
],
"last": "Rauf",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
},
{
"first": "Syeda",
"middle": [],
"last": "Abida",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
},
{
"first": "",
"middle": [],
"last": "Noor-E-Hira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
},
{
"first": "Syeda",
"middle": [],
"last": "Zahra",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
},
{
"first": "Dania",
"middle": [],
"last": "Parvez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
},
{
"first": "Javeria",
"middle": [],
"last": "Bashir",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
},
{
"first": "Qurat-Ul-Ain",
"middle": [],
"last": "Majid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fatima Jinnah Women University",
"location": {
"country": "Pakistan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine Translation is the inevitable technology to reduce communication barriers in today's world. It has made substantial progress in recent years and is being widely used in commercial as well as non-profit sectors. Such is only the case for European and other high resource languages. For English-Urdu language pair, the technology is in its infancy stage due to scarcity of resources. Present research is an important milestone in English-Urdu machine translation, as we present results for four major domains including Biomedical, Religious, Technological and General using Statistical and Neural Machine Translation. We performed series of experiments in attempts to optimize the performance of each system and also to study the impact of data sources on the systems. Finally, we established a comparison of the data sources and the effect of language model size on statistical machine translation performance.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine Translation is the inevitable technology to reduce communication barriers in today's world. It has made substantial progress in recent years and is being widely used in commercial as well as non-profit sectors. Such is only the case for European and other high resource languages. For English-Urdu language pair, the technology is in its infancy stage due to scarcity of resources. Present research is an important milestone in English-Urdu machine translation, as we present results for four major domains including Biomedical, Religious, Technological and General using Statistical and Neural Machine Translation. We performed series of experiments in attempts to optimize the performance of each system and also to study the impact of data sources on the systems. Finally, we established a comparison of the data sources and the effect of language model size on statistical machine translation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine translation (MT) for low resource languages has been a challenging task (Irvine, 2013; Zoph et al., 2016) . The dimensionality of difficulty increases when it comes to translating between a morphologically rich and morphologically poor language (Habash and Sadat, 2006) . In this study, we will be presenting one such pair, English to Urdu translation, with English being a morphologically simple language while Urdu is a language with rich inflectional and derivational morphology. In case of Urdu-English translation topological distance between both languages is the biggest hurdle to get best results (Jawaid et al., 2016; . Findings of WMT 2011 evaluation (Callison-Burch et al., 2011) reported Urdu-English translation to be a relatively difficult problem. With some works on rule based systems (RBMT) (Tafseer and Alvi, 2002; Karamat, 2006; Naila Ata, 2007) and a small cascade of works on phrase based SMT systems (Jawaid and Zeman, 2011; Ali et al., 2013; Jawaid et al., 2014a) , hierarchical MT systems (Khan et al., 2013; Jawaid et al., 2014a) and NMT using transfer learning from a high resource language (Zoph et al., 2016) , it is still an arena requiring much work. Present study is a consolidated study in this regard. In this study we present results of some of the unexplored areas with reference to this language pair. Previous works have built general domain translation systems, we present a domain analysis on Technological, Religious and General domain translations (Section 5). This study is also an attempt to initiate the field of MT for Bio-medical domain despite zero resources available for the language pair. Effect of smaller and larger language models on translations are also explored. We have explored and used all the freely available English-Urdu corpora and also developed various small corpora by using human translations, synthetic corpora by machine translation and Hindi to Urdu transliteration. 
Starting with a brief review of previous works we describe the resources used in Section 3 followed by detailed results in Section 4 The paper concludes with a brief discussion on results.",
"cite_spans": [
{
"start": 80,
"end": 94,
"text": "(Irvine, 2013;",
"ref_id": "BIBREF10"
},
{
"start": 95,
"end": 113,
"text": "Zoph et al., 2016)",
"ref_id": "BIBREF34"
},
{
"start": 253,
"end": 277,
"text": "(Habash and Sadat, 2006)",
"ref_id": "BIBREF7"
},
{
"start": 613,
"end": 634,
"text": "(Jawaid et al., 2016;",
"ref_id": "BIBREF14"
},
{
"start": 669,
"end": 698,
"text": "(Callison-Burch et al., 2011)",
"ref_id": "BIBREF5"
},
{
"start": 816,
"end": 840,
"text": "(Tafseer and Alvi, 2002;",
"ref_id": "BIBREF32"
},
{
"start": 841,
"end": 855,
"text": "Karamat, 2006;",
"ref_id": "BIBREF15"
},
{
"start": 856,
"end": 872,
"text": "Naila Ata, 2007)",
"ref_id": "BIBREF25"
},
{
"start": 930,
"end": 954,
"text": "(Jawaid and Zeman, 2011;",
"ref_id": "BIBREF11"
},
{
"start": 955,
"end": 972,
"text": "Ali et al., 2013;",
"ref_id": "BIBREF2"
},
{
"start": 973,
"end": 994,
"text": "Jawaid et al., 2014a)",
"ref_id": "BIBREF12"
},
{
"start": 1021,
"end": 1040,
"text": "(Khan et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 1041,
"end": 1062,
"text": "Jawaid et al., 2014a)",
"ref_id": "BIBREF12"
},
{
"start": 1125,
"end": 1144,
"text": "(Zoph et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Perhaps, Tafseer and Alvi (2002) presents one of the earliest attempts on English to Urdu translation based on transforming the parse tree of the English sentence to Urdu using transformation rules. Issues relating to translation for verbs in context of English to Urdu RBMT using lexical functional grammar are discussed by (Karamat, 2006) . A minimal English to Urdu RBMT system is presented in (Naila Ata, 2007) (Jawaid and Zeman, 2011) used phrase based models to solve the long distance word reordering problem between the two languages. They used Emille (Baker et al., 2002) , Treebank (Marcus et al., 1993) , Quran and Bible corpora and report improvement in BLEU scores by the proposed reordering scheme. Our general domain systems are built using these above mentioned corpora. (Jawaid and Zeman, 2011) used phrase based models to solve the long distance word reordering problem between the two languages. They used Emille (Baker et al., 2002) , Treebank (Marcus et al., 1993) , Quran and bible corpora and report improvement in BLEU scores by the proposed reordering scheme. We also use these corpora in our general domain systems. Building up on previous work (Jawaid et al., 2014a ) present a comparison of phrase based versus hierarchical systems. They have added AFRL corpus (not free) to the earlier system and reported the hierarchical systems to outperform phrase based systems. (Ali et al., 2010; Ali et al., 2013) built SMT using parallel ahadith corpus from Sahih bukhari and Sahih Muslim. (Khan et al., 2013) also presented a hierarchical SMT system. Several other studies have also contributed, for instance (Shahnawaz and Mishra, 2013) and (Khan Jadoon et al., 2017) ",
"cite_spans": [
{
"start": 9,
"end": 32,
"text": "Tafseer and Alvi (2002)",
"ref_id": "BIBREF32"
},
{
"start": 325,
"end": 340,
"text": "(Karamat, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 404,
"end": 414,
"text": "Ata, 2007)",
"ref_id": "BIBREF25"
},
{
"start": 415,
"end": 439,
"text": "(Jawaid and Zeman, 2011)",
"ref_id": "BIBREF11"
},
{
"start": 560,
"end": 580,
"text": "(Baker et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 592,
"end": 613,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF24"
},
{
"start": 787,
"end": 811,
"text": "(Jawaid and Zeman, 2011)",
"ref_id": "BIBREF11"
},
{
"start": 932,
"end": 952,
"text": "(Baker et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 964,
"end": 985,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF24"
},
{
"start": 1171,
"end": 1192,
"text": "(Jawaid et al., 2014a",
"ref_id": "BIBREF12"
},
{
"start": 1396,
"end": 1414,
"text": "(Ali et al., 2010;",
"ref_id": "BIBREF1"
},
{
"start": 1415,
"end": 1432,
"text": "Ali et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 1510,
"end": 1529,
"text": "(Khan et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 1630,
"end": 1658,
"text": "(Shahnawaz and Mishra, 2013)",
"ref_id": "BIBREF31"
},
{
"start": 1663,
"end": 1689,
"text": "(Khan Jadoon et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Data collection and its cleaning is an important but a challenging part for NLP, including machine translation. Our Data collection scheme included 1) an extensive search of all the freely available parallel corpora. 2) Synthetic parallel corpus creation using a good translation system and 3) transliteration from a highly similar language, Hindi. We have categorised the corpora in four categories, General, Biomedical, Religious and Technology, each explained in subsections 3.1, 3.2, 3.3, and 3.4 respectively. Corpus details are summarized in table 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Collection",
"sec_num": "3."
},
{
"text": "This section lists the corpora and their details for general category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General",
"sec_num": "3.1."
},
{
"text": "1. The Emille 1 corpus (Baker et al., 2002) is a collection of annotated, parallel and monolingual data in written and spoken form. It consists of multi domain corpora (social, legal, educational, health, etc.) (Marcus et al., 1993) . The Urdu corpus was available online and we were able to get English sentences from LDC Treebank.",
"cite_spans": [
{
"start": 23,
"end": 43,
"text": "(Baker et al., 2002)",
"ref_id": "BIBREF3"
},
{
"start": 168,
"end": 210,
"text": "(social, legal, educational, health, etc.)",
"ref_id": null
},
{
"start": 211,
"end": 232,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General",
"sec_num": "3.1."
},
{
"text": "3. Indic 3 is a freely available multi-domain parallel corpus created by using crowd-sourcing (Post et al., 2012) .",
"cite_spans": [
{
"start": 94,
"end": 113,
"text": "(Post et al., 2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General",
"sec_num": "3.1."
},
{
"text": "Language Technology Proliferation and Deployment Center. We were able to get a sample of this corpus in domains of tourism, art, culture and architecture etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "TDIL 4 is an Indian",
"sec_num": "4."
},
{
"text": "5. Opus 5 project (Tiedemann, 2012) provides freely available annotated corpora to the research community. We used their English-Urdu corpus comprising of Tanzil, Tatoeba, OpenSubtitles {2016, 2018}, Ubuntu, GNOME and Global Voices. Tanzil was a religious corpus, whereas Ubuntu and Gnome were technology related corpora. We further sub categorized these according to the domains as shown in To overcome data scarceness we experimented with transliterations from Hindi to Urdu. A similar scheme has been used by (Durrani et al., 2014) but in the opposite direction, .i.e they transliterated from Urdu to Hindi.",
"cite_spans": [
{
"start": 18,
"end": 35,
"text": "(Tiedemann, 2012)",
"ref_id": "BIBREF33"
},
{
"start": 512,
"end": 534,
"text": "(Durrani et al., 2014)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TDIL 4 is an Indian",
"sec_num": "4."
},
{
"text": "Since no prior work exists in the Biomedical domain for English-Urdu, consequently there were no separate parallel corpora available. However, Emille corpus had a small part comprising of 0.055M English and 0.075 Urdu words respectively in health domain. We used these as Biomedical corpus. Furthermore, we developed Biomedical parallel corpora by using ideas from unsupervised learning techniques successfully used for other language pairs, where translations are used as additional bi-texts to cover up for data scarcity (Lambert et al., 2011) and domain adaptation (Abdul Rauf et al., 2016; Hira et al., 2019) . We collected Biomedical parallel corpora from various sources and translated them. We are working on using domain adapted translation and language models for the biomedical domain, however, the translations used in this work are done using google translate. We used the following corpora:",
"cite_spans": [
{
"start": 523,
"end": 545,
"text": "(Lambert et al., 2011)",
"ref_id": "BIBREF22"
},
{
"start": 575,
"end": 593,
"text": "Rauf et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 594,
"end": 612,
"text": "Hira et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical",
"sec_num": "3.2."
},
{
"text": "1. Scielo 7 corpus contains documents retrieved from the scielo database comprising of titles and abstracts of published articles in bio-medical domain. Our Scielo corpus comprises of 0.022M sentences. Overall it contains 0.60M English and 0.65M Urdu words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical",
"sec_num": "3.2."
},
{
"text": "2. Jang 8 group of news is a Pakistan based media corporation. Their newspapers are published in both Urdu and English independently,but they are not the translations of each other. We cleaned and extracted 6k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical",
"sec_num": "3.2."
},
{
"text": "English sentences from the health news section and translated to Urdu to be used as parallel corpus. We got a corpus of 0.11M words in English and 0.14M words in Urdu.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical",
"sec_num": "3.2."
},
{
"text": "3. EMEA 9 is a parallel corpus extracted out of documents published by European Medical Agency. The corpus is freely available in a number of language pairs but is not available in Urdu. We downloaded English part of corpus available in plain text and selected data related to medicines, disease, treatment and instructions. We automatically translated the extracted dataset and produced Urdu parallel translations. At the end of translation process we got a parallel dataset comprising of 1.03M words in Urdu and 0.82M words in English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical",
"sec_num": "3.2."
},
{
"text": "This section lists the corpora and their details for religious category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Religious",
"sec_num": "3.3."
},
{
"text": "1. UMC005 (Jawaid and Zeman, 2011) ",
"cite_spans": [
{
"start": 10,
"end": 34,
"text": "(Jawaid and Zeman, 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Religious",
"sec_num": "3.3."
},
{
"text": "This consists of English-Urdu Parallel corpus from localization files of Ubuntu and Gnome. Ubuntu contains 3.03k sentences and 0.1M, 0.2M English and Urdu tokens respectively, Gnome has 0.05M English and 0.06M Urdu tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Technology",
"sec_num": "3.4."
},
{
"text": "Monolingual corpus is an essential resource for building language models for SMT. We used the corpus developed by (Jawaid et al., 2014b) . This corpus consists of 95.4 million Urdu words, representing 5.4 million sentences of various domains including science, news, religion and education. We also collected Urdu monolingual documents from Jang (0.03M sentences) and other sources comprising of (0.06M sentences) as shown at the end of table 1. Urdu side of all parallel corpora was also used to build the large language model used in the indicated experiments in results.",
"cite_spans": [
{
"start": 114,
"end": 136,
"text": "(Jawaid et al., 2014b)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Monolingual Urdu Corpus",
"sec_num": "3.5."
},
{
"text": "Data cleaning and preprocessing is highly important for the performance of MT systems. The corpora provided by Emillie, NLT and Penn Tree-bank were partially parallel so we sentence aligned them using LF sentence aligner. 10 Due to the topological distance between the two languages we were not able to get fully aligned parallel corpus using LF aligner, thus manual alignment was done to ensure correctness.",
"cite_spans": [
{
"start": 222,
"end": 224,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preprocessing",
"sec_num": "3.6."
},
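The length-based heuristic behind such sentence aligners can be sketched in a few lines. This is a toy illustration only, not the LF aligner the authors used (which performs Gale-Church-style alignment); the example sentence pairs are invented.

```python
# Toy length-ratio filter for candidate 1-1 sentence pairs, illustrating the
# kind of heuristic sentence aligners rely on. NOT the LF aligner from the
# paper; example pairs below are invented for demonstration.

def plausible_pair(src, tgt, max_ratio=2.0):
    """Keep a candidate pair only if the character-length ratio is moderate."""
    a, b = len(src), len(tgt)
    if a == 0 or b == 0:
        return False
    return max(a, b) / min(a, b) <= max_ratio

pairs = [
    ("This is a book.", "yeh aik kitab hai."),  # similar lengths: plausible
    ("Hello.", "yeh aik bohat lambi misali jumla hai jo match nahin karta."),  # rejected
]
kept = [p for p in pairs if plausible_pair(*p)]
print(len(kept))  # prints 1
```

Real aligners combine such length statistics with dynamic programming over 1-1, 1-2 and 2-1 alignment moves; the filter above only captures the scoring intuition.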
{
"text": "To demonstrate the performance of MT systems on the corpora collected and generated in this work, we performed a number of experiments for SMT and a few experiments for NMT. This section provides the description of the experimental frameworks and settings used for building SMT and NMT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Framework",
"sec_num": "4."
},
{
"text": "The goal of SMT is to produce a target sentence e from a source sentence f . Among all possible target language sentences the one with the highest probability is chosen: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation:",
"sec_num": "4.1."
},
{
"text": "e * =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation:",
"sec_num": "4.1."
},
{
"text": "where Pr(f |e) is the translation model and Pr(e) is the target language model (LM). This approach is usually referred to as the noisy source-channel approach in SMT (Brown et al., 1993) . Bilingual corpora are needed to train the translation model and monolingual texts to train the target language model. Common practice is to use phrases as translation units (Koehn et al., 2003; Och and Ney, 2003a) instead of the original word-based approach. A phrase is defined as a group of source wordsf that should be translated together into a group of target words\u1ebd. The translation model in phrase-based systems includes the phrase translation probabilities in both directions, i.e. P (\u1ebd|f ) and P (f |\u1ebd). The use of a maximum entropy approach simplifies the introduction of several additional models explaining the translation process :",
"cite_spans": [
{
"start": 166,
"end": 186,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF4"
},
{
"start": 362,
"end": 382,
"text": "(Koehn et al., 2003;",
"ref_id": "BIBREF20"
},
{
"start": 383,
"end": 402,
"text": "Och and Ney, 2003a)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation:",
"sec_num": "4.1."
},
{
"text": "e * = arg max P r(e|f ) = arg max e {exp( i \u03bb i h i (e, f ))} (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation:",
"sec_num": "4.1."
},
{
"text": "The feature functions h i are the system models and the \u03bb i weights are typically optimized to maximize a scoring function on a development set. In our system fourteen features functions were used, namely phrase and lexical translation probabilities in both directions, seven features for the lexicalized distortion model, a word and a phrase penalty, and a target language model. To built standard phrase-based SMT systems we used Moses toolkit (Koehn et al., 2007) , with the default settings for all the parameters. A 5-gram KenLM (Heafield, 2011) language model was used. For individual systems the language models were trained on the target side of the corpus. For experiments on size of the language model, all the available monolingual and target side corpus was used (122.5M Urdu words).",
"cite_spans": [
{
"start": 446,
"end": 466,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF21"
},
{
"start": 534,
"end": 550,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation:",
"sec_num": "4.1."
},
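The log-linear decision rule of Eq. (3) can be sketched in a few lines of Python. This is a toy illustration, not the Moses decoder: the feature names, feature values and weights below are all invented for demonstration.

```python
# Toy sketch of the log-linear model of Eq. (3): pick the candidate e that
# maximizes sum_i lambda_i * h_i(e, f). Since exp() is monotonic, comparing
# the log-domain sums is enough. All feature values and weights are invented.

def loglinear_score(features, weights):
    """Log-domain score: sum of lambda_i * h_i over the feature names."""
    return sum(weights[name] * value for name, value in features.items())

def best_candidate(candidates, weights):
    """Return the candidate translation with the highest log-linear score."""
    return max(candidates, key=lambda c: loglinear_score(c["features"], weights))

# Hypothetical feature weights (the lambda_i), e.g. tuned by MERT on a dev set.
weights = {"tm": 1.0, "lm": 0.5, "word_penalty": -0.2}

# Two hypothetical Urdu candidates for one English source sentence.
candidates = [
    {"text": "yeh kitab hai", "features": {"tm": -1.2, "lm": -2.0, "word_penalty": 3}},
    {"text": "kitab yeh hai", "features": {"tm": -1.5, "lm": -3.1, "word_penalty": 3}},
]

print(best_candidate(candidates, weights)["text"])  # prints "yeh kitab hai"
```

A real system uses fourteen such feature functions, as listed above, but the argmax over weighted feature sums is exactly this shape.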
{
"text": "Word-alignment was done using Giza++ (Och and Ney, 2003b) with grow-diag-final-and symmetrization method. Maximum sentence length was chosen to be 100. A distortion limit of 6 with 100-best list was used. Msdbidirectional-fe feature was used for lexical reordering with the phrase limit of 5. Systems were tuned on the development data using the MERT (Och, 2003) . BLEU (Papineni et al., 2002) scores were computed on dev and test sets of the corpora, as well as on standard test sets. BLEU scores were calculated using multi-bleu.perl. Scoring is case sensitive and includes punctuation.",
"cite_spans": [
{
"start": 37,
"end": 57,
"text": "(Och and Ney, 2003b)",
"ref_id": "BIBREF27"
},
{
"start": 351,
"end": 362,
"text": "(Och, 2003)",
"ref_id": "BIBREF28"
},
{
"start": 370,
"end": 393,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Machine Translation:",
"sec_num": "4.1."
},
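The core of what multi-bleu.perl computes can be sketched as follows. This is a simplified single-reference re-implementation for illustration only, not the script the authors ran: it assumes whitespace tokenization and applies no smoothing.

```python
import math
from collections import Counter

# Minimal single-reference BLEU sketch (Papineni et al., 2002): geometric mean
# of clipped n-gram precisions (n = 1..4) times a brevity penalty.

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # any zero n-gram precision gives BLEU = 0 (no smoothing)
        log_precisions.append(math.log(overlap / total))
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

print(bleu("yeh kitab acchi hai", "yeh kitab acchi hai"))  # prints 1.0
```

multi-bleu.perl additionally supports multiple references and operates corpus-level rather than sentence-level, but the precision/brevity-penalty structure is the same.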
{
"text": "We used OpenNMT 11 (Klein et al., 2017) for building Neural MT systems. Two layered encoder-decoder architecture with global attention (Luong et al., 2015) was used. We used RNN size of 500 and LSTM for cell structure for both encoder and decoder, applying dropout of 0.3 for each input cell. Translations were evaluated on BLEU scores to enable comparison with the corresponding SMT systems.",
"cite_spans": [
{
"start": 19,
"end": 39,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 135,
"end": 155,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation:",
"sec_num": "4.2."
},
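The described NMT setup roughly corresponds to an OpenNMT-py invocation like the one below. This is a hedged sketch, not the authors' actual command: all paths are placeholders, and the flag names follow OpenNMT-py 1.x, which may differ from the exact OpenNMT version used in the paper.

```shell
# Approximate OpenNMT-py commands for the setup described above:
# 2 layers, RNN size 500, LSTM cells, global attention, dropout 0.3.
# Data paths are placeholders.
onmt_preprocess -train_src train.en -train_tgt train.ur \
                -valid_src dev.en -valid_tgt dev.ur \
                -save_data data/en-ur

onmt_train -data data/en-ur -save_model models/en-ur \
           -encoder_type rnn -decoder_type rnn -rnn_type LSTM \
           -layers 2 -rnn_size 500 \
           -global_attention general \
           -dropout 0.3
```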
{
"text": "Most of the corpora available online had their own development (dev) and test sets, so we evaluated the systems according to these dev and test sets. To be able to compare the systems in each domain, we created Standard test set (STS) for each domain comprising of 1k sentences. We randomly selected sentences from test sets of each data source of the particular domain. This was done on the basis of data set size and combined these specific sized chunks so that each data-set is represented on the basis of its size in the standard test set. We also used the test set of CLE 9 which was used to evaluate the general domain systems and the standard Scielo test set for Bio-Medical domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Development and Test sets",
"sec_num": "4.3."
},
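The size-proportional construction of the 1k-sentence STS can be sketched as follows. Corpus names and sizes here are invented for illustration and are not the paper's actual figures.

```python
import random

# Sketch: build a standard test set (STS) of ~1k sentences by sampling from
# each domain source in proportion to its size. Names/sizes are illustrative.

def proportional_sample(test_sets, target_size=1000, seed=0):
    rng = random.Random(seed)
    total = sum(len(sentences) for sentences in test_sets.values())
    sts = []
    for name, sentences in test_sets.items():
        share = round(target_size * len(sentences) / total)  # per-source quota
        sts.extend(rng.sample(sentences, min(share, len(sentences))))
    return sts

test_sets = {
    "tanzil": [f"tanzil_{i}" for i in range(8000)],  # hypothetical sizes
    "qbj": [f"qbj_{i}" for i in range(2000)],
}
sts = proportional_sample(test_sets)
print(len(sts))  # prints 1000 (800 from tanzil, 200 from qbj)
```

Rounding can make the total drift slightly from the target for other size mixes; the paper's exact balancing procedure is not specified beyond proportionality.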
{
"text": "One of the endeavours of our study is to present domain specific translation results. As is common in machine learning approaches, the domain of the system being built depends on the data used to train the system. MT performance quickly degrades when the testing domain is different from the training domain. The reason for this degradation is that the statistical models closely approximate the empirical distributions of the training data (Abdul Rauf et al., 2016 ",
"cite_spans": [
{
"start": 448,
"end": 465,
"text": "Rauf et al., 2016",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5."
},
{
"text": "To build the best domain specific SMT system, we first explored the performance of each corpora for standalone SMT systems. T anzil and Genome showed the best performance for Religious and technology domains respectively. While over-fitting is observed in these two domains. The performance of the systems, built for these two domains, have shown a uniform trend for both self and standard test sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standalone SMT Systems",
"sec_num": "5.1."
},
{
"text": "Along with, the exploration of best SMT system for each category we also investigated the effect of the size of language model on each standalone SMT system. To explore this dimension, a large language model was also build by concatenating the Urdu text of all the bi-texts and the monolingual corpus mentioned in section 3.5. The scores for large LM are shown in the third column in table 2. It is observed that the BLEU scores of all the standalone systems approximately doubled with large LM. Figure 1 shows these results graphically for each domain. These results highlight the effect of bigger language model on SMT quality, obviously a bigger language model helps improve translation quality by improving the grammar of the output sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 496,
"end": 504,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Effect of size of Language Model",
"sec_num": "5.1.1."
},
{
"text": "After building standalone systems for each corpus, we selected the corpora which resulted in best BLEU scores, for building systems by concatenating different combinations of corpora. We selected systems on the basis of best score among the standalone systems from each domain (baseline system) and concatenated them with system having second highest BLEU score. Table 3 reports these results. Table 3 : Results of SMT on baselines and addition of bitexts.",
"cite_spans": [],
"ref_spans": [
{
"start": 363,
"end": 370,
"text": "Table 3",
"ref_id": null
},
{
"start": 394,
"end": 401,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Concatenated SMT Systems",
"sec_num": "5.2."
},
{
"text": "Bio-medical domain is an interesting domain as the corpora are not of same type. Emille are the health domain sentences taken from the Emille corpus, Jang sentences are taken from a semi-parallel comparable corpus and then sentence aligned and human corrected. Whereas, EM EA and Scielo are synthetic forward translated corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical Domain",
"sec_num": "5.2.1."
},
{
"text": "EM EA was chosen as baseline, for bio-medical domain, having the highest score 44.45 amongst other three standalone systems. Then, we built a system on EM EA concatenated with the second best system Scielo, having score of 25.95 (table 2) . The BLEU score of the resultant system EM EA+Scielo is 50.34 (table 3) . We can see an improvement in the score after concatenation of these two data-sets. Note that this system is built with only forward translated synthetic corpus, and we get an appreciable BLEU score.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 238,
"text": "(table 2)",
"ref_id": "TABREF6"
},
{
"start": 302,
"end": 311,
"text": "(table 3)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bio-Medical Domain",
"sec_num": "5.2.1."
},
{
"text": "This system EM EA + Scielo, is further concatenated with jang corpus (standalone score 17.78) and the resultant score of the EM EA + scielo + jang system is 49.76, which is a bit lower than the previous system's score. Contrary to the standard test set scores, addition of bitexts did not improve scores for dev and test, rather resulted in a de- Table 4 : Bleu scores using human translations vs machine translations as training data cline of BLEU score. EM EA and Scielo are translated from standard biomedical corpora as described in data preprocessing section 3.6 The sentences of these corpora specially of EMEA consists of concise short sentences of similar nature (we found certain redundancies in these corpora).",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 354,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bio-Medical Domain",
"sec_num": "5.2.1."
},
{
"text": "That is the reason their concatenation gave a big increase as it mounted to adding more data. On the other hand, we created Jang corpus by automatic translation of news and tips in health section of a national English news paper. This could be a reason that when concatenated with EM EA + Scielo the combined score reduced to 49.76 from 50.34. Finally, concatenating with Emille, having BLEU score 12.90 for standalone model, the score for the resultant system is 50.71 which is highest among all other systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical Domain",
"sec_num": "5.2.1."
},
{
"text": "Emille is again a standard biomedical corpus comprising of health documents from the EMILLE corpus (section 3.6), and its concatenation improved the overall BLEU score. An increase of 6.26 points upon the addition of just 0.86M words of Scielo+Jang +Emille corpora to 1.03M words of EMEA (baseline), has been observed which is a significant gain. These are encouraging results for the development of standard corpora for the Bio-medical domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bio-Medical Domain",
"sec_num": "5.2.1."
},
{
"text": "For the religious domain we have two corpora namely T anzil and the other is concatenation of Quran, Bible and Joshua(QBJ). Firstly we built two standalone systems for both corpora as shown in Table: 2, Tanzil having BLEU score of 17.46 on test set and 19.93 on dev set. BLEU score of QBJ is 10.05 on test set and 10.37 on dev set. We did not create standard test set to evaluate these two corpora as there is a huge difference between the size of corpora, if we generate standard corpus out of these by evenly distributing them; the standard test set will mostly consist of bitexts from T anzil. In this case T anzil will perform well for that specific standard test set but QBJ would not be able to perform well. After standalone evaluation we concatenated both data sets to see the impact of corpora on each other. We got 18.36 BLEU score that is better then the standalone systems. Again the performance of system increases with the increase of the size of corpora. For the general domain, we considered Emille as a baseline on the basis of higher score 35.38 on dev set and 5.67 on test set so its average BLEU score is higher then the rest of standalone corpus ( (table 2) . Gnome corpus had a maximum sentence length of 40 to 50 whereas all other data sets had sentence size of 100 words. A combination of two yields a great improvement with respect to U buntu but a decrease for Gnome.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 199,
"text": "Table:",
"ref_id": null
},
{
"start": 1169,
"end": 1178,
"text": "(table 2)",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Religious, General and Technology Domain",
"sec_num": "5.2.2."
},
{
"text": "We performed a series of experiments using transliterations, human-translated and machine-translated data to compare the performance of such systems. These results are reported in Table 4. On the standard test set, transliterations and human translations were better than Google translations, with scores of 2.08, 2.04 and 0.56 respectively. When evaluated on the dev and test sets of the individual corpora, FlickrHumanTrans surprisingly performed worst of all, with minimum BLEU scores of 3.02 on dev and 2.39 on test. These are the captions from the Flickr 8k dataset, and the English side is often not grammatically correct. More interestingly, the same corpus, when translated using Google, gave 35.71 on dev and 27.58 on test. Transliteration of the Hindi UMC002 corpus to Urdu gave the best scores of 54.30 and 47.34 on dev and test respectively. FlickrHumanTrans was further combined with the Transliteration data set, which is machine-transliterated data, to build another system in order to observe the effect of machine-transliterated data on the human translations. Table 5: Results of NMT on the addition of bitexts for the Bio-medical domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 174,
"text": "Table 4",
"ref_id": null
},
{
"start": 1047,
"end": 1054,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of Various Corpora",
"sec_num": "5.3."
},
{
"text": "FlickrGoogleTrans was also added, and a good improvement in scores is observed, i.e. 2.66 BLEU on the standard test data, 48.61 on the dev data and 45.87 on the test data. We now address the question of the effect on performance of adding these corpora to the already available resources. Emille is the already available human-translated corpus; when combined with our transliterated data set, an improvement of almost 4.00 and 24.35 BLEU points on dev and test is observed, although a decline of 0.22 points is observed on the standard test set. A similar trend is observed when machine-translated data is added to Emille + Treebank, yielding improvements on all data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Impact of Various Corpora",
"sec_num": "5.3."
},
{
"text": "We present NMT system performance only for the Bio-Medical domain. Table 5 shows the results of our NMT experiments. We maintained the same baseline and corpus concatenation combinations as in the SMT experiments. The results of Bio-Medical NMT are lower than those of the corresponding SMT systems (Table 3). This is expected, as NMT systems do not perform well with small amounts of data. A consistent observation is that the addition of bitexts improves the systems across all dev and test sets; a slight deviation from this trend is observed when Emille is added to EMEA + Scielo + Jang (last row in Table 5).",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 5",
"ref_id": null
},
{
"start": 297,
"end": 306,
"text": "(Table 3)",
"ref_id": null
},
{
"start": 597,
"end": 604,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": "5.4."
},
{
"text": "We presented domain-based results for SMT and NMT systems translating from English to Urdu. This is the first work reporting results on several domains for the English-Urdu language pair. We collected corpora for four main domains, namely Bio-medical, Religious, Technology and General. We experimented with various methods to reduce data scarcity, including the use of automatic translations and transliterations. We also collected and compiled human translations from translation agencies, and produced human translations of the Flickr 8k dataset. We performed a series of experiments to optimize the performance of each system and to study the impact of data sources on the systems. Finally, we established a comparison of the data sources and the effect of language model size on statistical machine translation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "http://www.cle.org.pk/software/ling_resources 3 http://joshua-decoder.org/indian-parallel-corpora/ 4 http://tdil-dc.in/index.php?lang=en 5 http://opus.nlpl.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://forms.illinois.edu/sec/1713398 7 http://www.statmt.org/wmt16/biomedical-translationtask.html 8 https://jang.com.pk/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://opus.nlpl.eu/EMEA.php",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://sourceforge.net/projects/aligner/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://opennmt.net/OpenNMT/ 9 http://www.cle.org.pk/software/ling_resources/testingcorpusmt.htm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Empirical use of information retrieval to build synthetic data for smt domain adaptation",
"authors": [
{
"first": "Abdul",
"middle": [],
"last": "Rauf",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Nawaz",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Audio",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdul Rauf, S., Schwenk, H., Lambert, P., and Nawaz, M. (2016). Empirical use of information retrieval to build synthetic data for smt domain adaptation. IEEE Trans- actions on Audio, Speech and Language Processing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Development of Parallel Corpus and English to Urdu Statistical Machine Translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Siddiq",
"suffix": ""
},
{
"first": "M",
"middle": [
"K"
],
"last": "Malik",
"suffix": ""
}
],
"year": 2010,
"venue": "International Journal of Engineering",
"volume": "",
"issue": "05",
"pages": "3--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali, A., Siddiq, S., and Malik, M. K. (2010). Development of Parallel Corpus and English to Urdu Statistical Ma- chine Translation. International Journal of Engineering, (05):3-6.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Model for English-Urdu statistical machine translation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ali",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hussain",
"suffix": ""
},
{
"first": "Kamran",
"middle": [],
"last": "Malik",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2013,
"venue": "World Applied Sciences Journal",
"volume": "",
"issue": "10",
"pages": "1362--1367",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali, A., Hussain, A., and Kamran Malik, M. (2013). Model for English-Urdu statistical machine translation. World Applied Sciences Journal, 24(10):1362-1367.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Emille, a 67-million word corpus of indic languages: Data collection, mark-up and harmonisation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Baker",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hardie",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mcenery",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Cunningham",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Gaizauskas",
"suffix": ""
}
],
"year": 2002,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, P., Hardie, A., McEnery, T., Cunningham, H., and Gaizauskas, R. J. (2002). Emille, a 67-million word cor- pus of indic languages: Data collection, mark-up and harmonisation. In LREC.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The mathematics of statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brown, P., Della Pietra, S., Della Pietra, V. J., and Mercer, R. (1993). The mathematics of statistical machine trans- lation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Findings of the 2011 workshop on statistical machine translation",
"authors": [
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "O",
"middle": [
"F"
],
"last": "Zaidan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "22--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Callison-Burch, C., Koehn, P., Monz, C., and Zaidan, O. F. (2011). Findings of the 2011 workshop on statistical ma- chine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 22-64. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Edinburgh's phrase-based machine translation systems for wmt-14",
"authors": [
{
"first": "N",
"middle": [],
"last": "Durrani",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Durrani, N., Haddow, B., Koehn, P., and Heafield, K. (2014). Edinburgh's phrase-based machine translation systems for wmt-14. In Proceedings of the Ninth Work- shop on Statistical Machine Translation, pages 97-104.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Arabic preprocessing schemes for statistical machine translation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Sadat",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers",
"volume": "",
"issue": "",
"pages": "49--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Habash, N. and Sadat, F. (2006). Arabic preprocessing schemes for statistical machine translation. In Proceed- ings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 49-52. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Kenlm: Faster and smaller language model queries",
"authors": [
{
"first": "K",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heafield, K. (2011). Kenlm: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Exploring transfer learning and domain data selection for the biomedical translation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Hira",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Abdul Rauf",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Kiani",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Zafar",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Nawaz",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "158--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hira, N.-e., Abdul Rauf, S., Kiani, K., Zafar, A., and Nawaz, R. (2019). Exploring transfer learning and domain data selection for the biomedical translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 158-165, Florence, Italy, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Statistical Machine Translation in Low Resource Settings",
"authors": [
{
"first": "A",
"middle": [],
"last": "Irvine",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 NAACL HLT Student Research Workshop",
"volume": "",
"issue": "",
"pages": "54--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irvine, A. (2013). Statistical Machine Translation in Low Resource Settings. Proceedings of the 2013 NAACL HLT Student Research Workshop, (June):54-61.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Word-Order Issues in English-to-Urdu Statistical Machine Translation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Jawaid",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zeman",
"suffix": ""
}
],
"year": 2011,
"venue": "Prague Bulletin of Mathematical Linguistics",
"volume": "95",
"issue": "1",
"pages": "87--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jawaid, B. and Zeman, D. (2011). Word-Order Is- sues in English-to-Urdu Statistical Machine Translation. Prague Bulletin of Mathematical Linguistics, 95(-1):87- 106.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "English to Urdu Statistical Machine Translation: Establishing a Baseline",
"authors": [
{
"first": "B",
"middle": [],
"last": "Jawaid",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kamran",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Fifth Workshop on South and Southeast Asian Natural Language Processing",
"volume": "",
"issue": "",
"pages": "37--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jawaid, B., Kamran, A., and Bojar, O. (2014a). En- glish to Urdu Statistical Machine Translation: Establish- ing a Baseline. Proceedings of the Fifth Workshop on South and Southeast Asian Natural Language Process- ing, pages 37-42.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Urdu monolingual corpus. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics",
"authors": [
{
"first": "B",
"middle": [],
"last": "Jawaid",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kamran",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2014,
"venue": "Faculty of Mathematics and Physics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jawaid, B., Kamran, A., and Bojar, O. (2014b). Urdu monolingual corpus. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( 'UFAL), Faculty of Mathematics and Physics, Charles University.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Enriching Source for English-to-Urdu Machine Translation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Jawaid",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kamran",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)",
"volume": "",
"issue": "",
"pages": "54--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jawaid, B., Kamran, A., and Bojar, O. (2016). Enriching Source for English-to-Urdu Machine Translation. Pro- ceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016), pages 54-63.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Verb transfer for English to Urdu Machine Translation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Karamat",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karamat, N. (2006). Verb transfer for English to Urdu Ma- chine Translation. Ph.D. thesis, National University of Computer & Emerging Sciences.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "English to urdu hierarchical phrase-based statistical machine translation",
"authors": [
{
"first": "N",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "M",
"middle": [
"W"
],
"last": "Anwar",
"suffix": ""
},
{
"first": "U",
"middle": [
"I"
],
"last": "Bajwa",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Durrani",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 4th Workshop on South and Southeast Asian Natural Language Processing",
"volume": "",
"issue": "",
"pages": "72--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khan, N., Anwar, M. W., Bajwa, U. I., and Durrani, N. (2013). English to urdu hierarchical phrase-based sta- tistical machine translation. In Proceedings of the 4th Workshop on South and Southeast Asian Natural Lan- guage Processing, pages 72-76.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Machine Translation Approaches and Survey for Indian Languages",
"authors": [
{
"first": "N",
"middle": [
"J"
],
"last": "Khan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Anwar",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Durrani",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "18",
"issue": "",
"pages": "47--78",
"other_ids": {
"arXiv": [
"arXiv:1701.04290"
]
},
"num": null,
"urls": [],
"raw_text": "Khan, N. J., Anwar, W., and Durrani, N. (2017). Ma- chine Translation Approaches and Survey for Indian Languages. arXiv preprint arXiv:1701.04290, 18(1):47- 78.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Statistical machine translation of Indian languages: a survey",
"authors": [
{
"first": "N",
"middle": [],
"last": "Khan Jadoon",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Anwar",
"suffix": ""
},
{
"first": "U",
"middle": [
"I"
],
"last": "Bajwa",
"suffix": ""
},
{
"first": "Ahmad",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khan Jadoon, N., Anwar, W., Bajwa, U. I., and Ahmad, F. (2017). Statistical machine translation of Indian lan- guages: a survey. Neural Computing and Applications.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Klein, G., Kim, Y., Deng, Y., Senellart, J., and Rush, A. M. (2017). Opennmt: Open-source toolkit for neural ma- chine translation. CoRR, abs/1701.02810.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Statistical phrase-based machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "127--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, P., Och, F. J., and Marcu, D. (2003). Statistical phrase-based machine translation. pages 127-133.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Fed- erico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). Moses: Open source toolkit for statistical ma- chine translation. In Meeting of the Association for Com- putational Linguistics, pages 177-180.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Investigations on translation model adaptation using monolingual data",
"authors": [
{
"first": "P",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Abdul-Rauf",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "284--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lambert, P., Schwenk, H., Servan, C., and Abdul-Rauf, S. (2011). Investigations on translation model adapta- tion using monolingual data. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 284-293, Edinburgh, Scotland, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "M.-T",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.04025"
]
},
"num": null,
"urls": [],
"raw_text": "Luong, M.-T., Pham, H., and Manning, C. D. (2015). Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M. P., Santorini, B., and Marcinkiewicz, M. A. (1993). Building a large annotated corpus of En- glish: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Rule based english to urdu machine translation",
"authors": [
{
"first": "Naila",
"middle": [],
"last": "Ata",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Bushra Jawaid",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of Conference on Language and Technology (CLT'07)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naila Ata, Bushra Jawaid, A. K. (2007). Rule based english to urdu machine translation. In Proceedings of Conference on Language and Technology (CLT'07).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, F. J. and Ney, H. (2003a). A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, F. J. and Ney, H. (2003b). A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19-51.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, F. J. (2003). Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics- Volume 1, pages 160-167. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "K",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "W.-J",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311- 318. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Constructing parallel corpora for six indian languages via crowdsourcing",
"authors": [
{
"first": "M",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Osborne",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "401--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Post, M., Callison-Burch, C., and Osborne, M. (2012). Constructing parallel corpora for six indian languages via crowdsourcing. Wmt-2012, pages 401-409.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Statistical Machine Translation System for English to Urdu",
"authors": [
{
"first": "Mishra",
"middle": [],
"last": "Shahnawaz",
"suffix": ""
}
],
"year": 2013,
"venue": "Int. J. Adv. Intell. Paradigms",
"volume": "5",
"issue": "3",
"pages": "182--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shahnawaz and Mishra. (2013). Statistical Machine Trans- lation System for English to Urdu. Int. J. Adv. Intell. Paradigms, 5(3):182-203.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "English to urdu translation system",
"authors": [
{
"first": "A",
"middle": [],
"last": "Tafseer",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Alvi",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tafseer, A. and Alvi, S. (2002). English to urdu translation system. manuscript, University of Karachi.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Parallel Data, Tools and Interfaces in OPUS",
"authors": [
{
"first": "J",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12). European Language Resources Association (ELRA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tiedemann, J. (2012). Parallel Data, Tools and Interfaces in OPUS. In Nicoletta Calzolari (Conference Chair) and Khalid Choukri and Thierry Declerck and Mehmet Ugur Dogan and Bente Maegaard and Joseph Mariani and Jan Odijk and Stelios Piperidis, editor, Proceedings of the Eight International Conference on Language Re- sources and Evaluation (LREC'12). European Language Resources Association (ELRA), Istanbul, Turkey.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Transfer Learning for Low-Resource Neural Machine Translation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zoph, B., Yuret, D., May, J., and Knight, K. (2016). Trans- fer Learning for Low-Resource Neural Machine Transla- tion.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Comparison of systems built on small and large language model (x-axis represents words in millions)"
},
"TABREF0": {
"content": "<table><tr><td colspan=\"2\">Category Corpora</td><td>Size</td><td colspan=\"2\">Tokens</td><td colspan=\"2\">Sentences</td></tr><tr><td/><td/><td>(Mbs)</td><td colspan=\"2\">(Millions)</td><td/><td/></tr><tr><td/><td/><td/><td>UR</td><td>EN Train</td><td>Dev</td><td>Test</td><td>Total</td></tr><tr><td/><td>Emille</td><td>1.5</td><td>0.12</td><td>0.09 5583</td><td>176</td><td>118</td></tr><tr><td/><td>Treebank</td><td>2.3</td><td>0.18</td><td>0.13 5408</td><td>170</td><td>115</td></tr><tr><td/><td>Indic</td><td>8.8</td><td>0.63</td><td>0.49 33244</td><td>1000</td><td>1000</td></tr><tr><td/><td>NLT</td><td>3.1</td><td>0.22</td><td>0.19 10662</td><td>336</td><td>226</td></tr><tr><td>General</td><td>OPUS TDIL</td><td>4.7 0.42</td><td>0.38 0.03</td><td>0.33 46805 0.02 1141</td><td>1501 37</td><td>1002 25</td></tr><tr><td/><td>Flickr H</td><td>0.42</td><td>0.03</td><td>0.03 2578</td><td>82</td><td>55</td></tr><tr><td/><td>Flickr G</td><td>0.41</td><td>0.04</td><td>0.03 2578</td><td>82</td><td>55</td></tr><tr><td/><td>Transliterations</td><td>0.99</td><td>0.08</td><td>0.07 3441</td><td>516</td><td>172</td></tr><tr><td/><td>Total</td><td colspan=\"2\">22.64 1.71</td><td colspan=\"2\">1.38 111440 3990</td><td>2768</td></tr><tr><td/><td>Emille</td><td>0.92</td><td>0.07</td><td>0.05 2970</td><td>78</td><td>77</td></tr><tr><td>Bio-Medical</td><td>Scielo Jang Health News EMEA</td><td>9.1 1.9 14.3</td><td>0.65 0.14 1.03</td><td>0.60 21680 0.12 5450 0.82 51775</td><td>650 360 1363</td><td>492 264 1363</td></tr><tr><td/><td>Total</td><td colspan=\"2\">26.22 1.89</td><td>1.59 81875</td><td>2451</td><td>2196</td></tr><tr><td/><td>Quran</td><td>2.9</td><td>0.24</td><td>0.03 6000</td><td>214</td><td>200</td></tr><tr><td>Religious</td><td>Bible QBJ</td><td>2.5 55.5</td><td>0.20 1.13</td><td>0.21 7400 1.02 47198</td><td>300 1250</td><td>257 1062</td></tr><tr><td/><td>Tanzil</td><td>1000</td><td>23.1</td><td colspan=\"3\">19.0 710904 22449 
14967</td></tr><tr><td/><td>Total</td><td colspan=\"5\">1060.9 24.67 20.26 771502 24213 16486</td></tr><tr><td>Techno-logy</td><td>Gnome Ubuntu Total</td><td>0.85 0.16 1.01</td><td>0.06 0.02 0.08</td><td>0.05 13186 0.01 2873 0.06 16059</td><td>417 90 507</td><td>278 62 340</td></tr><tr><td/><td>Jawaid</td><td colspan=\"2\">717.4 95.4</td><td>--</td><td>-</td><td>-</td></tr><tr><td>Mono-lingual</td><td>NLT Jhang All Urdu corpus</td><td colspan=\"2\">5.4 3.3 199.4 26.2 0.63 0.39</td><td>------</td><td>---</td><td>---</td></tr><tr><td/><td>Total</td><td colspan=\"2\">925.5 122.7</td><td>--</td><td>-</td><td>-</td></tr></table>",
"text": "present neural systems trained on small corpora.",
"type_str": "table",
"num": null,
"html": null
},
"TABREF6": {
"content": "<table/>",
"text": "",
"type_str": "table",
"num": null,
"html": null
},
"TABREF7": {
"content": "<table/>",
"text": "lists the BLEU scores for each system. As already mentioned we are interested in the scores obtained on standard test set, it is observed that Indic showed the best performance among the systems built on general domain corpus. Whereas; T reebank, T ransliteration and F lickrGoogleT ranslate, despite outperforming on self test have shown a decline in performance for standard test set. The standard test set includes part of the test sentences from each corpora basis of data set size. Indic has the most tokens, resultantly the standard test set includes sentences from Indic the most. This explains the best performance on STS. For the systems built on Bio-medical corpora, EM EA showed the best performance on standard set. Interestingly, in this domain we see reasonable BLEU scores on all test sets, including STS. Similar phenomenon of better scores for EMEA on STS is observed, which corresponds to more sentences from EM EA test set in STS.SMT system trained on Jang shows an abrupt decline in the perfor-",
"type_str": "table",
"num": null,
"html": null
},
"TABREF10": {
"content": "<table/>",
"text": "). We concatenated it with the T reebank having score 18.14 on dev set and 20.90 on test set,and got 23.63 score on dev set and 13.32 on test set. We further concatenated this system with N LT whose standalone BLEU score are 15.09 on dev and 8.52 on test set, and got scores of 21.08 on dev and 11.93 on test set. Finally we concatenate our last data set Indic having score 11.67 on dev and 12.23 on test set. Following the same trend as seen in the biomedical domain, we see a steady improvement in the standard test scores by the addition of bitexts. Interestingly, technology domain gave the best results. Gnome being the baseline of the domain achieved 78.58 BLEU score on dev data and 79.42 on both test sets. Whereas, U buntu had a standalone BLEU score of 10.05 and 5.36 on dev and test of both test sets",
"type_str": "table",
"num": null,
"html": null
},
"TABREF11": {
"content": "<table><tr><td>Baseline</td><td>Tokens</td><td/><td colspan=\"2\">BLEU</td></tr><tr><td/><td/><td>Dev</td><td>Test</td><td>STS Scielo</td></tr><tr><td>EMEA</td><td>1.03</td><td colspan=\"3\">26.44 40.27 39.81</td><td>5.24</td></tr><tr><td>+Scielo</td><td>1.68</td><td colspan=\"3\">26.96 35.22 45.90 14.37</td></tr><tr><td>+Scielo+jang</td><td>1.82</td><td colspan=\"3\">27.22 35.01 47.46 16.09</td></tr><tr><td>+Scileo+jang+Emille</td><td>1.89</td><td colspan=\"3\">27.55 34.62 46.28 16.72</td></tr></table>",
"text": "The performance of the resultant system is far better than the baseline system F lickrHumanT rans yielding 2.24 on standard test set, 39.23 on dev and 40.91 on test. Further, we concatenated transliterations with the baseline NMT Results on baseline + additional bitexts for Bio-Medical",
"type_str": "table",
"num": null,
"html": null
}
}
}
}