{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:22:25.963707Z"
},
"title": "Unified NMT models for the Indian subcontinent, transcending script-barriers",
"authors": [
{
"first": "Gokul",
"middle": [],
"last": "Nc",
"suffix": "",
"affiliation": {},
"email": "gokulnc@devnagri.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Highly accurate machine translation systems are very important in societies and countries where multilinguality is very common, and where English often does not suffice. The Indian subcontinent (or South Asia) is such a region, with all the Indic languages currently being under-represented in the NLP ecosystem. It is essential to thoroughly explore various techniques to improve the performance of such lowresource languages at least using the data available in open-source, which itself is something not very explored in the Indic ecosystem. In our work, we perform a study with a focus on improving the performance of very-low-resource South Asian languages, especially of countries in addition to India. Specifically, we propose how unified models can be built that can exploit the data from comparatively resource-rich languages of the same region. We propose strategies to unify different types of unexplored scripts, especially Perso-Arabic scripts and Indic scripts to build multilingual models for all the South Asian languages despite the script barrier. We also study how augmentation techniques like back-translation can be made useof to build unified models just using openly available raw data, to understand what levels of improvements can be expected for these Indic languages.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Highly accurate machine translation systems are very important in societies and countries where multilinguality is very common, and where English often does not suffice. The Indian subcontinent (or South Asia) is such a region, with all the Indic languages currently being under-represented in the NLP ecosystem. It is essential to thoroughly explore various techniques to improve the performance of such lowresource languages at least using the data available in open-source, which itself is something not very explored in the Indic ecosystem. In our work, we perform a study with a focus on improving the performance of very-low-resource South Asian languages, especially of countries in addition to India. Specifically, we propose how unified models can be built that can exploit the data from comparatively resource-rich languages of the same region. We propose strategies to unify different types of unexplored scripts, especially Perso-Arabic scripts and Indic scripts to build multilingual models for all the South Asian languages despite the script barrier. We also study how augmentation techniques like back-translation can be made useof to build unified models just using openly available raw data, to understand what levels of improvements can be expected for these Indic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Indian subcontinent is a well-studied linguistic area (Emeneau, 1956) , known as South Asian sprachbund. The region is home to around a quarter of the world's population, with a total which is projected to reach more than 2 billion in a decade. Despite this, the progress in natural language processing is significantly lacking for South Asian languages (or Indic languages). Especially, machine translation is of core importance since South Asia is largely a multilingual society, with more than 25 languages recognized officially across the subcontinent and more than 100s attested and spoken. Although there are quite a few number of works which have released datasets for languages of India (Siripragada et al., 2020) and studied multilingual models for the same (Philip et al., 2019) , they are not exhaustively studied. In particular, the Indic languages of other South Asian countries like Pakistan, Nepal and Sri Lanka are almost never studied together with the languages of India and Bangladesh.",
"cite_spans": [
{
"start": 58,
"end": 73,
"text": "(Emeneau, 1956)",
"ref_id": "BIBREF2"
},
{
"start": 699,
"end": 725,
"text": "(Siripragada et al., 2020)",
"ref_id": "BIBREF20"
},
{
"start": 771,
"end": 792,
"text": "(Philip et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we aim to study all the available Indic languages (of Indo-Aryan and Dravidian families) of all the above countries together, precisely 15 South Asian languages (listed in appendix A). Especially, we propose a simple strategy to unify digraphic languages like Hindi-Urdu, Sindhi and Punjabi which are written in Indic scripts in India and Perso-Arabic scripts in Pakistan. We propose how one can build a script-agnostic encoder which can generalize well across different types of translation models, like code-mixed, roman (social media) and formal texts. We study for the first time in literature backtranslation-based NMT for all script-unified Indic languages together, which provides significantly better performance than models trained only on parallel data, by using only freely available monolingual data. We finally provide brief recommendations for researchers working in this Indic-NMT domain, and finally mention how this work can be extended and its future scope.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Training multilingual models for neural machine translation currently the go-to approach for significantly improving the performance of low-resource languages (Ngo et al., 2020) . Especially sharing of sub-word vocabulary among related languages (of the same or similar families) is of more importance to exploit the inter-relationships between the languages (Khemchandani et al., 2021) , so that resource sharing from high-resource languages to low-resource languages is achieved. Recent works (Ramesh et al., 2021) have explored strategies to train multilingual NMT for 11 languages of India, both with and without shared vocabulary across languages, demonstrating that vocabulary sharing by script unification is significantly beneficial. It is also common to convert all the text across all languages to IPA (International Phonetic Alphabet) or any common script, especially in speech-to-text (Javed et al., 2021 ) and text-to-speech (Zhang et al., 2021) to obtain a universal representation of text across any language/script. In the case of South Asian languages, it is more convenient to map all scripts to a common Indic script (like Devanagari) which is capable of representing all phonemes used in the Indic families (Khare et al., 2021) .",
"cite_spans": [
{
"start": 159,
"end": 177,
"text": "(Ngo et al., 2020)",
"ref_id": "BIBREF15"
},
{
"start": 359,
"end": 386,
"text": "(Khemchandani et al., 2021)",
"ref_id": "BIBREF10"
},
{
"start": 495,
"end": 516,
"text": "(Ramesh et al., 2021)",
"ref_id": null
},
{
"start": 897,
"end": 916,
"text": "(Javed et al., 2021",
"ref_id": null
},
{
"start": 938,
"end": 958,
"text": "(Zhang et al., 2021)",
"ref_id": "BIBREF23"
},
{
"start": 1227,
"end": 1247,
"text": "(Khare et al., 2021)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "This section sets provides the background required for the subsequent sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "As mentioned earlier, our work only focuses on open-source datasets inorder to explore how performance can be improved for low-resource languages just using openly available data. Overall, the datasets used in this work are mostly from the general domain, and hyperlinks are provided to access all the datasets. The next sub-section mentions the list of all aligned datasets used in this work and further, the we mention the list of all available monolingual data sources which we exploit in this work for improving performance. Table 1 shows the list of all parallel datasets used for training our models. It is to be noted that the Samanantar (Ramesh et al., 2021) is the major source of data, for languages of India. To explore more languages as well as to study how the above data is useful for other similar Indic languages, especially focusing on other related South Asian countries, we gather more data from different sources shown in the same table. Specifically, we aim at increasing the amount of data obtainable for Indo-Aryan languages not covered in Samanantar, viz. Nepali, Sinhala, Sindhi and Urdu which are predominantly spoken in Nepal, Sri Lanka and Pakistan respectively. In addition, we also manually add new sources of data (marked *) from Anuvaad corpus and PM India corpus which were not covered in the latest Samanantar v0.2 for Assamese and Odia, although relatively very small in size.",
"cite_spans": [
{
"start": 645,
"end": 666,
"text": "(Ramesh et al., 2021)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 529,
"end": 536,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
{
"text": "For test set, we use the FLoRes101 benchmark (Goyal et al., 2021) which has data for 14 Indic languages, manually translated from various domains of English Wikipedia. Since this new benchmark does not have data for Sinhala, we evaluate it on the initial FLoRes benchmark (Guzm\u00e1n et al., 2019) . Note that we do not use the WAT 2021 MultiIndicMT testset (Nakazawa et al., 2021) for benchmarking, since we find the data quite very close to the distribution of the corresponding training data, as also observed by IndicBART . All BLEU scores reported in this paper are computed using sacreBLEU (Post, 2018) after generating translations with a beam decoding size of 4. Note that we compare our scores only against IndicBART (and experiment only with same architecture), as they already demonstrate superior scores over fine-tuned models like mBART and the chosen model is lighter than pretrained models like mT5 or mBART50. Table 2 shows list of all monolingual corpora used in this work. It is to be noted again that the AI4Bharat IndicCorp is the major source of data (row 1), for languages of India. For Indo-Aryan languages of other South Asian countries, we consolidate most of the available open-source corpora from different sources as shown in other rows of the table. We also try to consolidate more data for verylow-resource languages of India like Assamese and Odia.",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "(Goyal et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 272,
"end": 293,
"text": "(Guzm\u00e1n et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 354,
"end": 377,
"text": "(Nakazawa et al., 2021)",
"ref_id": "BIBREF14"
},
{
"start": 592,
"end": 604,
"text": "(Post, 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 922,
"end": 929,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Benchmark dataset",
"sec_num": "3.1.2"
},
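{
"text": "A minimal sketch, assuming the standard sacrebleu Python package and placeholder hypothesis/reference lists, of how the reported corpus-level BLEU can be computed; decoding itself (beam size 4) happens inside the NMT toolkit and is not shown here.\n\nimport sacrebleu\n\n# Placeholder system outputs and references; in practice these come from the\n# model's beam-4 decodes and the FLoRes reference file for one language.\nhyps = ['this is a system translation', 'another system translation']\nrefs = [['this is a reference translation', 'another reference translation']]\n\nbleu = sacrebleu.corpus_bleu(hyps, refs)\nprint(round(bleu.score, 1))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Benchmark dataset",
"sec_num": "3.1.2"
},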
{
"text": "As explained earlier, script unification is essential for sub-word vocabulary sharing between related languages. It is essential for the unification to be lossless so that the resultant dataset quality is not affected. In literature, it is common to use Devanagari as the common script to unify all the Brahmic scripts of India, although any script (like IPA) can be used as the pivot. For example, for models trained only for Dravidian languages, we use the Malayalam script as the common representation for the 4 languages: Kannada, Malayalam, Tamil and Telugu. But Devanagari is predominantly chosen since it is used for many languages like Hindi, Nepali, Marathi, etc. as well as due to the fact that it is one of the few Indic scripts which supports almost all phonemes required for both the Indic language families, not just Indo-Aryan for which the script is predominantly used. One important aspect of Devanagari is a diacritic called nuqta, which is essentially a dot mark placed below the main consonants to represent non-native phonemes. Its primary use is to represent consonants of other languages, including from different families like Dravidian, Iranic (for Persian), Semitic (for Arabic). Hence, using Devanagari for all Indic languages as a common script is preferable, including languages like Urdu, Sindhi and Kashmiri which are written in Perso-Arabic scripts. In the subsequent section, we explain how the latter is achieved, which is an unexplored track in research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Script Unification",
"sec_num": "3.2"
},
{
"text": "The Perso-Arabic script is an abjad, meaning that it is based on a writing system which mostly has only consonants (in its purest form). In addition, in Perso-Arabic, two of the same consonants (w & y) are used to indicate few long-vowels (respectively: /u/, /o/ and /i/, /e/). So the reader of the script mentally fills-in / interprets most of the vowels as they read, based on their knowledge of the language and context. Devanagari is an abugida, meaning that it is an alphasyllabary system where the script is generally expected to be almost phonetic with all consonants and vowels represented. This makes a direct mapping of Perso-Arabic consonants to Devanagari slightly illegible for readers of usual Devanagari due to lack of any vowels. Figure 1 below shows an example of raw mapping for the Hindostani language. Despite this, we propose that NMT models are capable of learning both abjad and abugida forms, with a deeper understanding of the underlying language. That is, we directly use the raw mapping of Perso-Arabic consonants to Devanagari (without any phonetic transcription) to train an unified model. It is to be noted that there are some consonants in Perso-Arabic for which, although the phonemes are different, they represent the same phone. Those consonants usually are mapped to a single Devanagari phoneme. In our work, especially to generate Perso-Arabic texts, we require lossless mapping of each character from Perso-Arabic. Hence we propose to map them uniquely by creating new Devanagari consonants using nuqta.We also open-source our transliterator implementation as a python library 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 746,
"end": 754,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Mapping Devanagari and Perso-Arabic",
"sec_num": "3.2.1"
},
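{
"text": "A minimal sketch of the raw, vowel-free consonant mapping described above, using a small hand-picked subset of Perso-Arabic letters; the table, the function name and the nuqta convention shown for same-phone letters are illustrative and not the exact scheme of our released transliterator.\n\nNUQTA = chr(0x093C)  # combining dot placed below a Devanagari consonant\n\nPERSO_ARABIC_TO_DEVA = {\n    chr(0x0628): 'ब',          # be   -> ba\n    chr(0x067E): 'प',          # pe   -> pa\n    chr(0x062A): 'त',          # te   -> ta\n    chr(0x06A9): 'क',          # kaf  -> ka\n    chr(0x06AF): 'ग',          # gaf  -> ga\n    chr(0x0645): 'म',          # mim  -> ma\n    chr(0x0646): 'न',          # nun  -> na\n    chr(0x0631): 'र',          # re   -> ra\n    chr(0x0644): 'ल',          # lam  -> la\n    chr(0x0633): 'स',          # sin  -> sa\n    chr(0x0635): 'स' + NUQTA,  # swad -> sa + nuqta (same phone as sin, kept distinct for losslessness)\n}\n\ndef unify(text: str) -> str:\n    # Character-wise replacement; vowels are not reconstructed, so the output is an\n    # abjad-style Devanagari string that the NMT model learns to read.\n    return ''.join(PERSO_ARABIC_TO_DEVA.get(ch, ch) for ch in text)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Devanagari and Perso-Arabic",
"sec_num": "3.2.1"
},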
{
"text": "Upon training using the above unification, we see that our model is capable of understanding that the standard registers of Hindi & Urdu have the same underlying language, with only differences being in writing form and formal vocabulary. This was verified by swapping the scripts used for Hindi & Urdu to see if still produces legitimate outputs. As later described in Section 5.1, while training, we explicitly specify what is the expected output script-type and language that is to be produced by the model. Upon specifying Arabic as script for Hindi and Devanagari as script for Urdu to the trained model, we found that the model still produced Urdu and Hindi sentences respectively. Now we generate augmented data for Devanagari-Urdu and Arabic-Hindi by transliterating 1M Hindi parallel data to PersoArabic (later unified again to abjadi-Devanagari) and by transcribing 1M Urdu parallel data to Devanagari using Sangam transliterator (Lehal and Saini, 2012). We fine-tune the model for few epochs using this synthetic data. We observe that even using such small fraction of data, the model was able to easily generate translations for Urdu in proper Devanagari and for Hindi in proper-Arabic for unseen data, hence qualitatively proving the hypothesis that the script-unified model can also learn writing-system-agnostic features. Furthermore, we perform something similar for Sindhi language -Sindhi is majorly spoken in Pakistan by 30M people & written in Perso-Arabic script; in India, it is spoken by around 2M people & officially mandated to be written in Parivardhit-Devanagari, an extended version of Devanagari. Since all the Sindhi datasets available are in Perso-Arabic, we use the same Sangam transliteration API as mentioned above to generate Sindhi datasets in Devanagari. We use data this as well to train the models in Section 5, and find that the model now was also able to produce (almost) same Sindhi outputs for both the scripts. Note that we implement a similar but separate converter for Sindhi script-unification, as the Perso-Arabic script for Sindhi has significant difference from that of Urdu. Also, since the amount of Sindhi corpus is very low, we augment the dataset while training with the following synthetic data -since Gujarati is a closely-related language to Sindhi, we sample 2M random Gujarati translation-pairs and create Arabic-Gujarati dataset and train for this artifical language-script combination as well in the training described in Section 5.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Devanagari and Perso-Arabic",
"sec_num": "3.2.1"
},
{
"text": "We would also like to point out that we do not perform this for the Punjabi language, which is written in an Indic script called Gurmukhi in India, and using a Perso-Arabic alphabet called Shahmukhi in Pakistan. This is because all available Punjabi datasets are in Gurmukhi, an almost phonetic script (similar to Devanagari). Hence we directly use our transliterator to convert from Gurmukhi to Shahmukhi and return the translation if required. But it was observed that due to the formal nature of the Punjabi datasets, the generated translations were of Eastern-Punjabi literary standard, hence the outputs may not always be mutuallyintelligible to speakers who are used to Western-Punjabi literary standard. We do not find this issue significant in the case of Sindhi, as the formal Sindhi standards of both the countries do not differ much.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Devanagari and Perso-Arabic",
"sec_num": "3.2.1"
},
{
"text": "Sinhala alphabet (of Sri Lanka) is mostly similar in phonetics to most other alphabets of India, except a couple of minor differences. Sinhala has separate unicode points for representing 6 prenasal consonants, whereas in Devanagari, they are represented as ligature of a nasal consonant with another consonant, as shown in Figure 2 . In addition, Sinhala also has short and long forms of the vowel /ae/ which we also map to Devanagari uniquely, for both dependent & independent vowels. The publicly available transliterators (like the transliterate sub-package in Indic-NLP-Library) are lossy, and do not handle all these cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mapping Sinhala and Devanagari",
"sec_num": "3.2.2"
},
{
"text": "Sinhala \u0d9f \u0da6 \u0da5 \u0dac \u0db3 \u0db9 Devanagari Figure 2 : Example mapping of pre-nasal consonants between Sinhala and Devanagari",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Mapping Sinhala and Devanagari",
"sec_num": "3.2.2"
},
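{
"text": "A minimal sketch of the prenasal-consonant handling behind Figure 2, assuming nasal + virama + consonant clusters as the Devanagari targets; only three of the six mappings are illustrated and the names are illustrative.\n\nVIRAMA = chr(0x094D)  # Devanagari virama, suppresses the inherent vowel inside a cluster\n\nSINHALA_PRENASAL_TO_DEVA = {\n    chr(0x0D9F): 'ङ' + VIRAMA + 'ग',  # prenasalized ga\n    chr(0x0DB3): 'न' + VIRAMA + 'द',  # prenasalized da\n    chr(0x0DB9): 'म' + VIRAMA + 'ब',  # prenasalized ba\n}\n\n# Reverse map, so that the Sinhala side can be regenerated losslessly.\nDEVA_TO_SINHALA_PRENASAL = {v: k for k, v in SINHALA_PRENASAL_TO_DEVA.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping Sinhala and Devanagari",
"sec_num": "3.2.2"
},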
{
"text": "For all the remaining scripts in this work, the mapping is mostly straightforward due to the fact that they follow the ISCII encoding scheme in which equivalent phonemes are mapped at same offsets in the unicode blocks. We use the AksharaMukha 2 tool to perform lossless transliteration between these Indic scripts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping between Indic scripts",
"sec_num": "3.2.3"
},
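{
"text": "A minimal sketch of the offset-based conversion described above, assuming only the parallel (ISCII-derived) Unicode block layout; the actual conversion in this work is done with AksharaMukha, which also handles script-specific exceptions that a plain offset shift cannot.\n\nBLOCK_START = {  # start of each ISCII-aligned Unicode block\n    'devanagari': 0x0900, 'bengali': 0x0980, 'gurmukhi': 0x0A00,\n    'gujarati': 0x0A80, 'oriya': 0x0B00, 'tamil': 0x0B80,\n    'telugu': 0x0C00, 'kannada': 0x0C80, 'malayalam': 0x0D00,\n}\n\ndef to_devanagari(text: str, src: str) -> str:\n    # Shift every code point of the source block into the Devanagari block;\n    # characters outside the block (spaces, digits, punctuation) pass through.\n    shift = BLOCK_START['devanagari'] - BLOCK_START[src]\n    lo = BLOCK_START[src]\n    return ''.join(chr(ord(ch) + shift) if lo <= ord(ch) < lo + 0x80 else ch for ch in text)\n\n# e.g. to_devanagari('తెలుగు', 'telugu') returns the word rewritten with Devanagari code points",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mapping between Indic scripts",
"sec_num": "3.2.3"
},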
{
"text": "We also experiment with romanized models for all Indic languages in our work to translate to English. In this sub-section, we briefly explain the different ways using which we perform the romanization. Generally, there is no standard way to perform romanization for Indic languages, since the way one types it colloquially is quite personal in style. Hence we perform romanization using multiple ways. This includes machine learning-based romanization as well as rule-based romanization techniques which covers different possible ways of romanizing, which will be open-sourced 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Romanization of Indic languages",
"sec_num": "3.3"
},
{
"text": "In brief, for each language, we first generate 4 variants of romanization: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Romanization of Indic languages",
"sec_num": "3.3"
},
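{
"text": "A minimal sketch of one crude rule-based romanization scheme of the kind described above, with a hand-picked subset of Devanagari characters; the released Indic-Romanizer covers the full alphabets and several schemes, so the maps below are purely illustrative.\n\nVIRAMA = chr(0x094D)\nCONSONANTS = {'क': 'k', 'ग': 'g', 'त': 't', 'द': 'd', 'न': 'n', 'म': 'm', 'र': 'r', 'ल': 'l', 'स': 's'}\nMATRAS = {chr(0x093E): 'aa', chr(0x093F): 'i', chr(0x0940): 'ee', chr(0x0941): 'u',\n          chr(0x0942): 'oo', chr(0x0947): 'e', chr(0x094B): 'o'}\n\ndef romanize(word: str) -> str:\n    out = []\n    chars = list(word)\n    for i, ch in enumerate(chars):\n        if ch in CONSONANTS:\n            out.append(CONSONANTS[ch])\n            nxt = chars[i + 1] if i + 1 < len(chars) else ''\n            if nxt not in MATRAS and nxt != VIRAMA:\n                out.append('a')  # spell out the inherent vowel\n        elif ch in MATRAS:\n            out.append(MATRAS[ch])\n        elif ch == VIRAMA:\n            pass                 # inherent vowel suppressed by the virama\n        else:\n            out.append(ch)       # spaces, punctuation, unknown characters\n    return ''.join(out)\n\n# e.g. romanize('नमक') gives 'namaka' under this crude scheme",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Romanization of Indic languages",
"sec_num": "3.3"
},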
{
"text": "In this section, we explore different models for Indic to English translation using datasets mentioned in section 3.1.1. Note that before training, we perform text normalization of all datasets using the Indic-NLP-Library.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Indic to English MT",
"sec_num": "4"
},
{
"text": "The input sentence to the models is prepended with the language-tag token, \"__langcode__ \", inorder to explicitly provides cues to the model about what the source language is. All the models experimented above are transformer-based, with the same network and hyperparameter configurations as in transformer-big (Vaswani et al., 2017) , which has 6 encoder layers and 6 decoder layers inorder to be consistent with the scores comparison against the previous work . For all experiments, we use the sentence-piece tokenizer (Kudo and Richardson, 2018) to build our sub-word vocabulary, with vocabulary sizes for input and output sides respectively 32000 (Indic side) and 16000 (English side). We use Marian-NMT toolkit (Junczys-Dowmunt et al., 2018) to train all our models, with mean cross-entropy as the loss function. Note that all models are trained from scratch.",
"cite_spans": [
{
"start": 311,
"end": 333,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 521,
"end": 548,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF12"
},
{
"start": 716,
"end": 746,
"text": "(Junczys-Dowmunt et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "4.1"
},
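{
"text": "A minimal sketch of the preprocessing implied above, assuming the standard sentencepiece Python bindings; the corpus file names and the 'ta' language code are placeholders.\n\nimport sentencepiece as spm\n\ndef tag_source(sentence: str, lang_code: str) -> str:\n    # The '__langcode__ ' prefix tells the many-to-one model what the source language is.\n    return '__' + lang_code + '__ ' + sentence\n\n# e.g. tag_source('unified devanagari source sentence', 'ta') -> '__ta__ unified devanagari source sentence'\n\n# Separate sub-word vocabularies for the two sides, with the sizes stated above.\nspm.SentencePieceTrainer.train(input='train.indic.txt', model_prefix='spm_indic', vocab_size=32000)\nspm.SentencePieceTrainer.train(input='train.en.txt', model_prefix='spm_en', vocab_size=16000)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "4.1"
},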
{
"text": "First, we build models from English specific to Indo-Aryan (ia2en) and Dravidian (dr2en) languages to compare how these models perform with respect to a model which is trained for both the Indic language families (in2en). As explained in section 3.2, we use Malayalam as the common script for Dravidian model and Devanagari for Indo-Aryan and Indic models. Table 3 presents the performance across languages (ia2en and dr2en models are shown in same row for simplicity). We see that the Indic model trained on both the families outperform the scores (Goyal et al., 2021) along with the scores of the existing best open-source model trained on Samanantar, taken from IndicBART paper of family-specific models. This observation is consistent with the results for many other languages, where we see significant gains in accuracy with a shared encoder, in-cases like many-to-one NMT (Arivazhagan et al., 2019) .",
"cite_spans": [
{
"start": 549,
"end": 569,
"text": "(Goyal et al., 2021)",
"ref_id": "BIBREF3"
},
{
"start": 878,
"end": 904,
"text": "(Arivazhagan et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 357,
"end": 364,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Unified models",
"sec_num": "4.2"
},
{
"text": "We generate a romanized version of the parallel dataset available as explained in Section 3.3, which is typically 6x large in size due to different ways of romanization the same data, and train a Roman-Indic-to-English model (rom_in2en). Table 3 shows the performance of this romanized model. We see that the model is slightly better (on the romanized benchmark) than the in2en model. This can be attributed to the significant reduction in alphabet size of the model: Devanagari usually requires more than 80 characters (on average) to represent all Indic languages; whereas in the roman model, only 26 characters (though a bit lossy). Based on manual analysis, we infer that romanized models are slightly more robust to noise in inputs, owing to the varied nature of the romanized data. We also note that owing to increased amount of data in abjad form (due to romanization variant-3, shown in section 3.3), the performance of Sindhi and Urdu (which use Arabic scripts) have significantly improved.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Script-agnostic model",
"sec_num": "4.3"
},
{
"text": "In addition, to study how our model performs with real-world code-switched (roman) data, we attempt the Microsoft GLUECoS (Khanuja et al., 2020) Machine Translation task 5 . We fine-tune our In this section, we explore one-to-many NMT models for training English to Indic translator. We initially train models using the parallel data, then train few more models using synthetic data from monolingual corpora to understand the level of improvement achievable using raw data.",
"cite_spans": [
{
"start": 122,
"end": 144,
"text": "(Khanuja et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Script-agnostic model",
"sec_num": "4.3"
},
{
"text": "The input sentences to all the models is prepended with a novel type of language-tag token, \"__lang-code__ __script-type__ \", inorder to explicitly provide cues to the model about what script-type is to be produced (in-addition for the given language). The possible script types are: 1. 'a' to denote Perso-Arabic writing system; 2. 'i' to denote Indic writing system; 3. 't' to denote Tamil alphabet, which is a small subset of the Indic set. 6 All the trained models follow the same network configuration (transformer-big) as in the previous experiments; see section 4.1. The sub-word vocabulary sizes for input and output sides respectively 16000 (English side) and 32000 (Indic side).",
"cite_spans": [
{
"start": 444,
"end": 445,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "5.1"
},
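{
"text": "A minimal sketch of how the composite target-side tag described above can be constructed; the 'ur' language code in the example is a placeholder.\n\nSCRIPT_TYPES = {'a': 'Perso-Arabic', 'i': 'Indic', 't': 'Tamil subset'}\n\ndef tag_for_target(sentence: str, lang_code: str, script_type: str) -> str:\n    # The '__lang-code__ __script-type__ ' prefix tells the one-to-many model which\n    # language AND which writing system to generate.\n    assert script_type in SCRIPT_TYPES\n    return '__' + lang_code + '__ __' + script_type + '__ ' + sentence\n\n# e.g. tag_for_target('How are you?', 'ur', 'i') asks for Urdu written in the unified\n# Indic (Devanagari) representation instead of Perso-Arabic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental settings",
"sec_num": "5.1"
},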
{
"text": "We initially train 3 different models (from English) just using the parallel data: Dravidian (en2dr), Indo-Aryan (en2ia) and Indic (en2in). The results are shown in Table 3 . We see that the performance does not vary much between the family-specific models and the common model. This observation is consistent with the results for many other languages, where we see trivial to almost-no gains in accuracy with a shared decoder, in-cases like one-to-many NMT (Arivazhagan et al., 2019) .",
"cite_spans": [
{
"start": 458,
"end": 484,
"text": "(Arivazhagan et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 165,
"end": 172,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Models trained only on parallel data",
"sec_num": "5.2"
},
{
"text": "We experiment in the next subsection to understand if a common model could be more beneficial than family-specific models when a huge backtranslated data is augmented with the (upsampled) original data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models trained only on parallel data",
"sec_num": "5.2"
},
{
"text": "Using all the Indic monolingual data listed in Section 3.1.3, we generate English sentences using the rom_in2en model with a beam-search width of 6. We then train 4 models (from English) after upsampling the parallel data and concatenating with the backtranslated dataset: 1. bt_en2in: To all Indic languages after 5\u00d7 upsampling; 2. bt_en2ia: To Indo-Aryan languages after 6\u00d7 upsampling; 3. bt_en2dr: To Dravidian languages after 10\u00d7 upsampling; 4. t_bt_en2dr: To Dravidian languages after 7\u00d7 normal upsampling, and 3\u00d7 Tamilizedaugmented upsampling (by converting other Dravidian alphabets to Tamil subset and marking their script_type as 't' when prepending language token). The upsampling scale is decided such that the amount of original parallel data and backtranslated data are in ratio 1:2. Table 3 shows the performance of all the 4 models. We see that, family-specific models perform notably better than a common model (given a fixed model size). Moreover, for the t_bt_en2dr model, we observe a significant boost in accuracy for Tamil after the Tamilized-data is augmented, and a trivial improvement for Malayalam and Kannada compared to bt_en2dr.",
"cite_spans": [],
"ref_spans": [
{
"start": 797,
"end": 804,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Models trained on parallel and back-translated data",
"sec_num": "5.3"
},
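{
"text": "A minimal sketch of how the upsampling factors above follow from the 1:2 target ratio between original parallel data and back-translated data; the corpus sizes in the example are hypothetical.\n\ndef upsample_factor(parallel_size: int, backtranslated_size: int) -> int:\n    # Choose k so that k * parallel_size is roughly half of backtranslated_size (i.e. a 1:2 ratio).\n    return max(1, round(backtranslated_size / (2 * parallel_size)))\n\n# e.g. upsample_factor(10_000_000, 100_000_000) == 5   (hypothetical sizes)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models trained on parallel and back-translated data",
"sec_num": "5.3"
},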
{
"text": "Section 5.3, we further clarify on how treating Tamil as a special case could be helpful to improve its performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models trained on parallel and back-translated data",
"sec_num": "5.3"
},
{
"text": "It is also seen that, our model easily outperforms models which are fine-tuned from language models like IndicBART . This is because we use the same entire monolingual data (Kakwani et al., 2020) which was used to pretrain IndicBART, but along with supervised translation signals in the form of backtranslated data.",
"cite_spans": [
{
"start": 173,
"end": 195,
"text": "(Kakwani et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models trained on parallel and back-translated data",
"sec_num": "5.3"
},
{
"text": "For very-low resource languages (like Sindhi and Sinhala), we notice very significant improvements with back-translation, even with relatively lesser amount of monolingul data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models trained on parallel and back-translated data",
"sec_num": "5.3"
},
{
"text": "We demonstrate in this paper various methods to achieve improvement in performance, especially across South Asian languages which were not previously explored along with the languages of India. We believe our presented contributions are more of exploratory nature, and make fundamental proposals (like always building romanized models when the source side is Indic). Although the fact that a unified model results in better performance in low-resource scenarios has been discovered by many prior work and hence not surprising, our work merely focuses on quantitatively studying the improvement in the case of Indic languages. In this section, we provide general suggestions for research groups working on NMT for Indic languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusion",
"sec_num": "6"
},
{
"text": "In general, to train model for any low-resource Indic language to English, we recommend that data from all the languages is used to train a multilingual model. 7 Especially, training a romanized model would be more beneficial, since it would be a scriptagnostic model, and hence easily generalize for code-mixed and social media texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusion",
"sec_num": "6"
},
{
"text": "For training English to any low-resource Indic language, it maybe be preferable to train familyspecific models when working under resourcecontrained settings. Especially for languages of the countries Pakistan, Bangladesh, Nepal and Sri Lanka, we highly recommend and encourage them to exploit the datasets made available by researchers of India. If possible, it is highly recommended to exploit the abundant monolingual data and train models using backtranslated data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusion",
"sec_num": "6"
},
{
"text": "As generally known, bigger models could push the improvements even further than what we have seen in our results. In fact, the recent work by (Ramesh et al., 2022) show better results on the FLoRes101 benchmark by using a transformer-4x model even without using back-translated data. We only benchmark on transformer-2x in this work for consistent comparison, and to be more practical during training and inference (as well as due to our unaffordability of large infrastructure for such experimentations). Also, we only perform one round of back-translation to study English to Indic models in Section 5.3. We encourage researchers to study multiple rounds of back-translations (which is out of scope for this paper).",
"cite_spans": [
{
"start": 142,
"end": 163,
"text": "(Ramesh et al., 2022)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "6.1"
},
{
"text": "Thorough anlaysis of the performance on codemixed (not code-switched) data using benchmarks like PHINC (Srivastava and Singh, 2020 ) is required for the rom_in2en model in Section 4.3, which is one of the on-going works in our research.",
"cite_spans": [
{
"start": 103,
"end": 130,
"text": "(Srivastava and Singh, 2020",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "6.1"
},
{
"text": "Indic-PersoArabic Script Converter",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/virtualvinodh/ aksharamukha 3 https://github.com/GokulNC/ Indic-Romanizer the same for languages that use Perso-Arabic scripts). 4. ML-based romanization using the pythonlibrary: LibIndicTrans 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/libindic/ indic-trans",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/microsoft/ model on the training set of the above dataset, and measure a validation BLEU score of 27.36. Unfortuanately, the leaderboard of the task is not yet out. Upon manually checking the validation results, we see that our model has performed reasonably good despite the fact that the dataset is code-mixed and romanization styles were somewhat different. Although this is not a comparable result, we hope that this is helpful in advancing further Indic-NMT research on this benchmark.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "GLUECoS#code-mixed-machine-translation-task 6 Tamil script is a lossy Indic alphabet, which has same phonemes for unvoiced and voiced consonants (like 'k' and 'g'), in-addition to a few other features (like aspirated consonants) that are not explicitly supported in the script. In",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Works like have already shown why multilingual models are more preferable for Indic languages, so we do not redemonstrate it in our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Language (& family) ISO code Script(s) Countries ",
"cite_spans": [
{
"start": 9,
"end": 19,
"text": "(& family)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "APPENDIX A Indic languages",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Massively multilingual neural machine translation in the wild: Findings and challenges",
"authors": [
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Dmitry",
"middle": [],
"last": "Lepikhin",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"Xu"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and chal- lenges.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Indicbart: A pre-trained model for natural language generation of indic languages",
"authors": [
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Himani",
"middle": [],
"last": "Shrotriya",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Ratish",
"middle": [],
"last": "Puduppully",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mitesh",
"suffix": ""
},
{
"first": "Pratyush",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raj Dabre, Himani Shrotriya, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra, and Pratyush Kumar. 2021. Indicbart: A pre-trained model for natural language generation of indic languages.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "India as a lingustic area",
"authors": [
{
"first": "M",
"middle": [
"B"
],
"last": "Emeneau",
"suffix": ""
}
],
"year": 1956,
"venue": "Language",
"volume": "32",
"issue": "1",
"pages": "3--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. B. Emeneau. 1956. India as a lingustic area. Lan- guage, 32(1):3-16.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The flores-101 evaluation benchmark for low-resource and multilingual machine translation",
"authors": [
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Cynthia",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Da",
"middle": [],
"last": "Ju",
"suffix": ""
},
{
"first": "Sanjana",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzman",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.2106.03193"
]
},
"num": null,
"urls": [],
"raw_text": "Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng- Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Kr- ishnan, Marc'Aurelio Ranzato, Francisco Guzman, and Angela Fan. 2021. The flores-101 evaluation benchmark for low-resource and multilingual ma- chine translation.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Peng-Jen",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Guzm\u00e1n, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra. 2021. Towards building asr systems for the next billion users",
"authors": [
{
"first": "Tahir",
"middle": [],
"last": "Javed",
"suffix": ""
},
{
"first": "Sumanth",
"middle": [],
"last": "Doddapaneni",
"suffix": ""
},
{
"first": "Abhigyan",
"middle": [],
"last": "Raman",
"suffix": ""
},
{
"first": "Kaushal",
"middle": [],
"last": "Santosh Bhogale",
"suffix": ""
},
{
"first": "Gowtham",
"middle": [],
"last": "Ramesh",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tahir Javed, Sumanth Doddapaneni, Abhigyan Raman, Kaushal Santosh Bhogale, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh M. Khapra. 2021. Towards building asr systems for the next billion users.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Marian: Fast neural machine translation in c++",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Neckermann",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.48550/ARXIV.1804.00344"
]
},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in c++.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages",
"authors": [
{
"first": "Divyanshu",
"middle": [],
"last": "Kakwani",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Satish",
"middle": [],
"last": "Golla",
"suffix": ""
},
{
"first": "N",
"middle": [
"C"
],
"last": "Gokul",
"suffix": ""
},
{
"first": "Avik",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mitesh",
"suffix": ""
},
{
"first": "Pratyush",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, and Pratyush Kumar. 2020. IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages. In Findings of EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "GLUECoS: An evaluation benchmark for code-switched NLP",
"authors": [
{
"first": "Simran",
"middle": [],
"last": "Khanuja",
"suffix": ""
},
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": ""
},
{
"first": "Anirudh",
"middle": [],
"last": "Srinivasan",
"suffix": ""
},
{
"first": "Sunayana",
"middle": [],
"last": "Sitaram",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3575--3585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simran Khanuja, Sandipan Dandapat, Anirudh Srini- vasan, Sunayana Sitaram, and Monojit Choudhury. 2020. GLUECoS: An evaluation benchmark for code-switched NLP. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 3575-3585, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Low Resource ASR: The Surprising Effectiveness of High Resource Transliteration",
"authors": [
{
"first": "Shreya",
"middle": [],
"last": "Khare",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Anuj",
"middle": [],
"last": "Diwan",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "Preethi",
"middle": [],
"last": "Jyothi",
"suffix": ""
},
{
"first": "Samarth",
"middle": [],
"last": "Bharadwaj",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. Interspeech 2021",
"volume": "",
"issue": "",
"pages": "1529--1533",
"other_ids": {
"DOI": [
"10.21437/Interspeech.2021-2062"
]
},
"num": null,
"urls": [],
"raw_text": "Shreya Khare, Ashish Mittal, Anuj Diwan, Sunita Sarawagi, Preethi Jyothi, and Samarth Bharadwaj. 2021. Low Resource ASR: The Surprising Effec- tiveness of High Resource Transliteration. In Proc. Interspeech 2021, pages 1529-1533.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Exploiting language relatedness for low web-resource language model adaptation: An Indic languages study",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Khemchandani",
"suffix": ""
},
{
"first": "Sarvesh",
"middle": [],
"last": "Mehtani",
"suffix": ""
},
{
"first": "Vaidehi",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Abhijeet",
"middle": [],
"last": "Awasthi",
"suffix": ""
},
{
"first": "Partha",
"middle": [],
"last": "Talukdar",
"suffix": ""
},
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.105"
]
},
"num": null,
"urls": [],
"raw_text": "Yash Khemchandani, Sarvesh Mehtani, Vaidehi Patil, Abhijeet Awasthi, Partha Talukdar, and Sunita Sarawagi. 2021. Exploiting language relatedness for low web-resource language model adaptation: An Indic languages study. In Proceedings of the 59th",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1312--1323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 1312-1323, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Development of a complete Urdu-Hindi transliteration system",
"authors": [
{
"first": "Gurpreet",
"middle": [],
"last": "Singh Lehal",
"suffix": ""
},
{
"first": "Tejinder Singh",
"middle": [],
"last": "Saini",
"suffix": ""
}
],
"year": 2012,
"venue": "The COL-ING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "643--652",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gurpreet Singh Lehal and Tejinder Singh Saini. 2012. Development of a complete Urdu-Hindi transliter- ation system. In Proceedings of COLING 2012: Posters, pages 643-652, Mumbai, India. The COL- ING 2012 Organizing Committee.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Overview of the 8th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Shohei",
"middle": [],
"last": "Higashiyama",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Eriguchi",
"suffix": ""
},
{
"first": "Kaori",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 8th Workshop on Asian Translation (WAT2021)",
"volume": "",
"issue": "",
"pages": "1--45",
"other_ids": {
"DOI": [
"10.18653/v1/2021.wat-1.1"
]
},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ond\u0159ej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, Yusuke Oda, and Sadao Kurohashi. 2021. Overview of the 8th workshop on Asian translation. In Proceedings of the 8th Workshop on Asian Trans- lation (WAT2021), pages 1-45, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving multilingual neural machine translation for low-resource languages: French, English -Vietnamese",
"authors": [
{
"first": "Thi-Vinh",
"middle": [],
"last": "Ngo",
"suffix": ""
},
{
"first": "Phuong-Thai",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Khac-Quy",
"middle": [],
"last": "Dinh",
"suffix": ""
},
{
"first": "Le-Minh",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages",
"volume": "",
"issue": "",
"pages": "55--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thi-Vinh Ngo, Phuong-Thai Nguyen, Thanh-Le Ha, Khac-Quy Dinh, and Le-Minh Nguyen. 2020. Im- proving multilingual neural machine translation for low-resource languages: French, English -Viet- namese. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 55-61, Suzhou, China. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A baseline neural machine translation system for indian languages",
"authors": [
{
"first": "Jerin",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vinay",
"suffix": ""
},
{
"first": "C",
"middle": [
"V"
],
"last": "Namboodiri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerin Philip, Vinay P. Namboodiri, and C. V. Jawahar. 2019. A baseline neural machine translation system for indian languages.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2021. Samanantar: The largest publicly available parallel corpora collection for 11 indic languages",
"authors": [
{
"first": "Gowtham",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Sumanth",
"middle": [],
"last": "Doddapaneni",
"suffix": ""
},
{
"first": "Aravinth",
"middle": [],
"last": "Bheemaraj",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Jobanputra",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Ajitesh",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Sujit",
"middle": [],
"last": "Sahoo",
"suffix": ""
},
{
"first": "Harshita",
"middle": [],
"last": "Diddee",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mahalakshmi",
"suffix": ""
},
{
"first": "Divyanshu",
"middle": [],
"last": "Kakwani",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Ma- halakshmi J, Divyanshu Kakwani, Navneet Ku- mar, Aswin Pradeep, Kumar Deepak, Vivek Ragha- van, Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2021. Samanantar: The largest publicly available parallel corpora collection for 11 indic languages.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Anoop Kunchukuttan, Pratyush Kumar, and Mitesh Shantadevi Khapra. 2022. Samanantar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages. Transactions of the Association for Computational Linguistics",
"authors": [
{
"first": "Gowtham",
"middle": [],
"last": "Ramesh",
"suffix": ""
},
{
"first": "Sumanth",
"middle": [],
"last": "Doddapaneni",
"suffix": ""
},
{
"first": "Aravinth",
"middle": [],
"last": "Bheemaraj",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Jobanputra",
"suffix": ""
},
{
"first": "A",
"middle": [
"K"
],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Ajitesh",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Sujit",
"middle": [],
"last": "Sahoo",
"suffix": ""
},
{
"first": "Harshita",
"middle": [],
"last": "Diddee",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mahalakshmi",
"suffix": ""
},
{
"first": "Divyanshu",
"middle": [],
"last": "Kakwani",
"suffix": ""
},
{
"first": "Navneet",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Aswin",
"middle": [],
"last": "Pradeep",
"suffix": ""
},
{
"first": "Srihari",
"middle": [],
"last": "Nagaraj",
"suffix": ""
},
{
"first": "Kumar",
"middle": [],
"last": "Deepak",
"suffix": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Raghavan",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "10",
"issue": "",
"pages": "145--162",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00452"
]
},
"num": null,
"urls": [],
"raw_text": "Gowtham Ramesh, Sumanth Doddapaneni, Aravinth Bheemaraj, Mayank Jobanputra, Raghavan AK, Ajitesh Sharma, Sujit Sahoo, Harshita Diddee, Ma- halakshmi J, Divyanshu Kakwani, Navneet Kumar, Aswin Pradeep, Srihari Nagaraj, Kumar Deepak, Vivek Raghavan, Anoop Kunchukuttan, Pratyush Ku- mar, and Mitesh Shantadevi Khapra. 2022. Samanan- tar: The Largest Publicly Available Parallel Corpora Collection for 11 Indic Languages. Transactions of the Association for Computational Linguistics, 10:145-162.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A multilingual parallel corpora collection effort for Indian languages",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Siripragada",
"suffix": ""
},
{
"first": "Jerin",
"middle": [],
"last": "Philip",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Vinay",
"suffix": ""
},
{
"first": "C V",
"middle": [],
"last": "Namboodiri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jawahar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "3743--3751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashank Siripragada, Jerin Philip, Vinay P. Nambood- iri, and C V Jawahar. 2020. A multilingual parallel corpora collection effort for Indian languages. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 3743-3751, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Phinc: A parallel hinglish social media code-mixed corpus for machine translation",
"authors": [
{
"first": "Vivek",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Mayank Kumar",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "WNUT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vivek Srivastava and Mayank Kumar Singh. 2020. Phinc: A parallel hinglish social media code-mixed corpus for machine translation. In WNUT.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Revisiting ipa-based crosslingual text-to-speech",
"authors": [
{
"first": "Haitong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haoyue",
"middle": [],
"last": "Zhan",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xinyuan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haitong Zhang, Haoyue Zhan, Yang Zhang, Xinyuan Yu, and Yue Lin. 2021. Revisiting ipa-based cross- lingual text-to-speech.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Row-1: Perso-Arabic, Row-2: Devanagaritransliteration, Row-3: Actual Hindi spelling, Row-4: Translation",
"uris": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Open-source monolingual Indic corpora (in millions), totalling 650M sentences",
"html": null,
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Modelas bn gu hi kn ml mr ne or pa sd si ta te ur",
"html": null,
"content": "<table><tr><td>Indic-En -30.7 33.6 36.0 27.4 30.4 30.0 -28.6 34.2 -8.5 27.7 32.7 -21.4 30.2 32.8 36.1 25.3 27.7 28.9 35.1 28.4 34.2 24.1 12.8 22.5 29.6 24.9 23.9 31.8 33.9 36.8 28.1 30.7 30.7 36.2 31.3 35.3 24.1 15.1 27.7 33.0 25.1 24.1 31.9 34.0 37.3 28.4 30.9 30.7 36.3 31.5 35.3 24.7 15.3 28.3 33.0 25.8 En-Indic -17.3 22.6 31.3 16.7 14.2 14.7 -10.1 21.9 --14.9 20.4 -6.3 17.4 22.6 31.4 16.1 14.1 14.8 10.5 10.1 21.7 18.9 8.8 14.4 20.5 20.2 6.3 17.2 21.9 31.0 16.2 13.7 14.7 10.4 9.9 21.5 18.1 8.9 14.5 20.5 19.8 9.9 18.9 23.1 34.2 18.7 16.2 16.1 17.1 14.3 23.9 23.7 14.1 17.2 22.3 22.3 bt_en2ia, bt_en2dr 10.8 19.8 23.7 36.1 20.0 17.3 16.8 17.6 16.7 24.3 24.2 14.1 17.2 22.9 23.6 IndicBART ia2en, dr2en in2en rom_in2en IndicBART en2ia, en2dr en2in bt_en2in t_bt_en2dr ----20.1 17.5 ------18.1 22.8 -</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"text": "Comparison of BLEU scores of different trained models of same network architecture on FLoRes101 benchmark",
"html": null,
"content": "<table/>"
}
}
}
}