{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:30:54.310693Z"
},
"title": "One Model to Pronounce Them All: Multilingual Grapheme-to-Phoneme Conversion With a Transformer Ensemble",
"authors": [
{
"first": "Kaili",
"middle": [],
"last": "Vesik",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab",
"institution": "",
"location": {}
},
"email": "kaili.vesik@ubc.ca"
},
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": "",
"affiliation": {
"laboratory": "Natural Language Processing Lab",
"institution": "",
"location": {}
},
"email": "muhammad.mageed@ubc.ca"
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": "",
"affiliation": {},
"email": "miikka.silfverberg@ubc.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The task of grapheme-to-phoneme (G2P) conversion is important for both speech recognition and synthesis. Similar to other speech and language processing tasks, in a scenario where only small-sized training data are available, learning G2P models is challenging. We describe a simple approach of exploiting model ensembles, based on multilingual Transformers and self-training, to develop a highly effective G2P solution for 15 languages. Our models are developed as part of our participation in the SIGMORPHON 2020 Shared Task 1 focused at G2P. Our best models achieve 14.99 word error rate (WER) and 3.30 phoneme error rate (PER), a sizeable improvement over the shared task competitive baselines.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The task of grapheme-to-phoneme (G2P) conversion is important for both speech recognition and synthesis. Similar to other speech and language processing tasks, in a scenario where only small-sized training data are available, learning G2P models is challenging. We describe a simple approach of exploiting model ensembles, based on multilingual Transformers and self-training, to develop a highly effective G2P solution for 15 languages. Our models are developed as part of our participation in the SIGMORPHON 2020 Shared Task 1 focused at G2P. Our best models achieve 14.99 word error rate (WER) and 3.30 phoneme error rate (PER), a sizeable improvement over the shared task competitive baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Speech technologies are becoming increasingly pervasive in our lives. The task of graphemeto-phoneme (G2P) conversion is an important component of both speech recognition and synthesis. In G2P conversion, sequences of graphemes (the symbols used to write words) are mapped to corresponding phonemes (pronunciation symbols, e.g., symbols of the International Phonetic Alphabet). Members of the Special Interest Group on Computational Morphology and Phonology (SIGMORPHON) have proposed a G2P shared task (SIGMOR-PHON 2020 Shared Task 1) 1 involving multiple languages. In this paper, we describe our submissions to the shared task. Organizers provide an overview of the task and submitted systems in (this volume).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1 The shared task webpage is accessible at: https: //sigmorphon.github.io/sharedtasks/2020/task1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The task was introduced with data from 10 languages, with an additional 5 'surprise' languages released during the task timeline. Our goal was to develop an effective system based on modern deep learning methods as a solution. However, deep learning technologies work best with sufficiently large training data. Hence, a clear challenge we came across is the limited size of the shared task training data for each of the 15 individual languages. To ease this bottleneck, we decided to view the task through a multilingual machine translation lens where we build a single model mapping from input to output across all the languages simultaneously. In this, we hypothesized that a multilingual model would allow for shared representations across the various languages that may be more powerful than individual representations of monolingual models. Abundant evidence now exists for approaching machine translation tasks from a multilingual perspective (Johnson et al., 2017a; Dong et al., 2015; Firat et al., 2016) , which inspired our choice.",
"cite_spans": [
{
"start": 950,
"end": 973,
"text": "(Johnson et al., 2017a;",
"ref_id": "BIBREF5"
},
{
"start": 974,
"end": 992,
"text": "Dong et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 993,
"end": 1012,
"text": "Firat et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to make use of unlabeled data, we also explore a straightforward self-training approach. In particular, we employ our trained models to convert sequences of multilingual unlabeled graphemes, taken from Wikipedia data, into multilingual phonemes. We then select sequences of phonemes predicted with our models above a certain confidence threshold to augment the shared task training data, thus re-training our models with larger (gold and silver) training data from scratch. Our models are based on the Transformer architecture which exploits effective self-attention. We show that both our multilingual model and the self-trained variation outperform the results of the competitive baseline monolingual models provided by the task organizers. Ultimately, we demonstrate how our simple modeling choices enable us to provide an effective solution to the problem in spite of the lowresource challenge. Intrinsically, our approach also enjoys the simplicity of a single model rather than 15 different models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows: Section 2 is a description of the shared task data, evaluation metrics, and baselines. Section 3 introduces both our fully supervised, multilingual models (Section 3.1) and self-trained model (Section 3.2). We present our results in Section 4. We provide an analysis of results and report on an ablation study in Section 5. We overview related work in Section 6, and conclude in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The data provided by the organizers of the shared task are extracted from Wiktionary 2 using the WikiPron library (Lee et al., 2020) ",
"cite_spans": [
{
"start": 114,
"end": 132,
"text": "(Lee et al., 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Data, Evaluation, and Baselines",
"sec_num": "2"
},
{
"text": "arm \u0561\u0570\u0565\u0572 A h E K \u056c\u056b\u0561\u0580\u056a\u0565\u0584 l j A R Z E k h fre front f K O\u1e7d \u00eatu v e t y Alphasyllabary: hin \u0926\u0916\u093e\u0935\u093e d I k h A: V A: \u0939\u091f\u0928\u093e H \u0259 \u00fa n A: kor \uac1c\ubcbd k e\u031e b j \u028c\u0339 k\u031a \uc624\ube60 o\u031e p \u0348 a\u0320",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Data, Evaluation, and Baselines",
"sec_num": "2"
},
{
"text": "Syllabary: Baselines. Organizers provide a number of monolingual baselines. The first is a pair n-gram model encoded as a weighted finitestate transducer (FST), implemented using the OpenGRMtoolkit 4 . The second is a bi-LSTM encoder-decoder sequence model implemented using the Fairseq toolkit 5 . The third is a Transformer model also implemented using the Fairseq toolkit. Organizer-provided shared task baselines are shown in Table 2 as WER and PER averages over the 15 languages. We now introduce our models. ",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 437,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Task Data, Evaluation, and Baselines",
"sec_num": "2"
},
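{
"text": "The two evaluation metrics can be computed in a few lines. The following is an illustrative sketch, not the organizers' scorer: WER here is the percentage of words whose predicted phoneme sequence differs at all from the reference, and PER is total edit distance over total reference length; the exact normalization used by the official evaluation script is an assumption.

```python
def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance over sequences
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def wer(preds, refs):
    # word error rate: share of words with any mismatch, as a percentage
    return 100.0 * sum(p != r for p, r in zip(preds, refs)) / len(refs)

def per(preds, refs):
    # phoneme error rate: total edit distance over total reference length
    dist = sum(edit_distance(p, r) for p, r in zip(preds, refs))
    return 100.0 * dist / sum(len(r) for r in refs)
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Data, Evaluation, and Baselines",
"sec_num": "2"
},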
{
"text": "jpn \u3044\u306a\u308a i n a\u0331 R j i \u3084\u305b\u3093 j a\u0320 s \u1ebd\u031e \u0274",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Data, Evaluation, and Baselines",
"sec_num": "2"
},
{
"text": "As explained, our models are based on Transformers and we offer two primary types of models, depending on how we supervise each. We first introduce fully supervised multilin-gual models, then we introduce our semisupervised models (also multilingual). Our semi-supervised models follow a self-training set up. We now explain each of these models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3"
},
{
"text": "We use a multilingual approach where we train a single model on data from all 15 languages. For this purpose, we prepend a token comprising a language code (e.g. fre) to each grapheme sequence source. For our implementation, we use the PyTorch Transformer architecture in the OpenNMT Neural Machine Translation Toolkit (Klein et al., 2017) . We set the model hyper-parameters as shown in Table 3, which follows those adopted by Vaswani et al. (2017) . We train the model with 3 different random seeds, and at inference we employ an ensemble consisting of the models from 4 training checkpoints (at 50k, 100k, 150k, and 200k steps) for each of the 3 models generated by the random seeds. We note that OpenNMT averages individual models' prediction distributions, which is how we deploy our ensemble. We use beam search with the OpenNMT default beam width of 5. 6",
"cite_spans": [
{
"start": 319,
"end": 339,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 428,
"end": 449,
"text": "Vaswani et al. (2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised, Multilingual Models",
"sec_num": "3.1"
},
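{
"text": "The language-token scheme can be illustrated as follows; the function name and the space-separated formatting are assumptions for the sketch, not the authors' actual preprocessing code.

```python
def make_example(lang_code, word, phonemes):
    # prepend a language-code token (e.g., 'fre') to the grapheme
    # sequence; graphemes and phonemes are space-separated symbols
    src = ' '.join([lang_code] + list(word))
    tgt = ' '.join(phonemes)
    return src, tgt

# e.g., make_example('fre', 'front', ['f', 'r', 'o'])
# yields ('fre f r o n t', 'f r o')
```

A single model then trains on examples from all 15 languages at once, with the language token steering decoding toward the right orthography-to-phonology mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised, Multilingual Models",
"sec_num": "3.1"
},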
{
"text": "One of the models we submitted to the task employs a self-training approach, as a way to augment training data. The additional data is sourced from Wikipedia articles from 12 of the 15 languages (excluding Adyghe, Japanese, and Vietnamese) 7 . We download the Wikipedia dumps from the Wikimedia website 8 and use an off-the-shelf tool 9 for extracting text. Further pre-processing involved removing any remaining XML markup, discarding leading and trailing punctuation and numerals for each word, and ignoring any words with remaining word-internal punctuation or numerals. Due to time constraints, only one million words from each language were used, and from those only unique entries were submitted to the model for translation and subsequent evaluation as potential candidates for augmenting training data. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Data Augmentation",
"sec_num": "3.2.1"
},
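{
"text": "The word-filtering steps described above can be sketched as follows; the Unicode-category test and the function boundaries are assumptions for illustration, as the paper does not specify implementation details.

```python
import unicodedata

def is_punct_or_digit(ch):
    # Unicode general categories P* (punctuation) and N* (numerals)
    return unicodedata.category(ch)[0] in ('P', 'N')

def clean_word(word):
    # strip leading and trailing punctuation/numerals
    start, end = 0, len(word)
    while start < end and is_punct_or_digit(word[start]):
        start += 1
    while end > start and is_punct_or_digit(word[end - 1]):
        end -= 1
    core = word[start:end]
    # discard words with remaining word-internal punctuation/numerals
    if not core or any(is_punct_or_digit(ch) for ch in core):
        return None
    return core
```

Deduplication then reduces the surviving words to unique entries before they are submitted to the model for translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia Data Augmentation",
"sec_num": "3.2.1"
},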
{
"text": "As explained, self-training data is drawn from the translations of Wikipedia text in 12 languages as predicted by an ensemble model. In order to select pairs to augment the training set, we first calculate the mean per-class softmax value in the development set (which we find to be at 0.11). 10 Comparatively, the average per-class softmax value for the predicted Wikipedia targets for each language ranges from 0.12 to 0.30. Based on this analysis, we select only those Wikipedia pairs whose predicted targets have a probability greater than 0.2. 11 The selected data are combined with the original (i.e., from official task) training set and the models are re-trained using the same hyper-parameters as the fully-supervised setting. Both models demonstrate lower word error rates (WER) and phoneme error rates (PER), averaged across languages, than the baseline monolingual models provided by the task organizers (see Table 2 in Section 2). Error rates per language are shown in Table 5 for the development set and Table 6 for the blind test set (results published by organizers). Table 7 10 As is known, the softmax function produces a probability distribution over the classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 921,
"end": 928,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 982,
"end": 989,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 1018,
"end": 1025,
"text": "Table 6",
"ref_id": "TABREF11"
},
{
"start": 1084,
"end": 1091,
"text": "Table 7",
"ref_id": "TABREF12"
}
],
"eq_spans": [],
"section": "Procedure",
"sec_num": "3.2.2"
},
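{
"text": "The selection rule can be sketched as follows; the candidate layout (per-step predicted probabilities kept alongside each grapheme-phoneme pair) is an assumption for illustration.

```python
def mean_step_prob(step_probs):
    # average softmax probability assigned to the predicted phoneme
    # at each decoding step
    return sum(step_probs) / len(step_probs)

def select_confident(candidates, threshold=0.2):
    # keep (graphemes, phonemes) pairs whose mean per-step predicted
    # probability exceeds the threshold (0.2 in our setting)
    return [(g, p) for g, p, probs in candidates
            if mean_step_prob(probs) > threshold]
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "3.2.2"
},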
{
"text": "11 There could be different ways to select predicted data for augmentation. For example, one can arbitrarily choose the top n% most confidently predicted points (with n being a hyper-parameter).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual",
"sec_num": null
},
{
"text": "Self shows examples of prediction errors, which demonstrate some of the typical minor errors in phenomena such as voicing (e.g. k vs. \u0261), epenthesis and elision (e.g. p \u0281 u vs. p \u0281 u l), and coarticulation (e.g. b\u02b2 vs. b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual",
"sec_num": null
},
{
"text": "On average, the fully-supervised models performed slightly better than the self-trained model. We expected that the self-trained model would see (at least slightly) better performance than the fully supervised; however, due to time constraints, we were not able to augment the training data to such a degree that this hypothesized improvement would be tangible. We leave it as a question for the future whether, and if so to what extent, selftraining can improve our models. We now provide an analysis of our findings and report on an ablation study under a number of settings. appear to be a significant correlation between writing system and results on G2P conversion. For example, a total of 7 of the languages (i.e., dut, fre, hun, ice, lit, rum, vie) use the Roman alphabet, but the WERs for these languages cover a reasonably wide range (from first-to eleventh-best) of the results. It is worth noting, however, that the two languages that use the Cyrillic alphabet (ady, bul) were the two worst-performing languages of the set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual",
"sec_num": null
},
{
"text": "arm \u0566\u0578\u0582\u0563\u0561\u0580\u0561\u0576 z u k h A R A n z u g A R A n \u0561\u0576\u056d\u0576\u0561 A \u014b X \u0259 n A A \u014b X n A fre full f u l f y l proulx p K u p K u l hin \u0927 \u092f d H \u0259 n j \u0259 d H \u0259 n j \u092e\u0947 \u0939\u0930\u092c\u093e\u0928\u0940 m E:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lang Source Target Prediction",
"sec_num": null
},
{
"text": "Both prior and subsequent to the task deadline, we performed several ablations in order to assess the effectiveness of our approach. First, we compare results based on single models vs. those based on the ensemble. Table 8 shows the error rates of development set translation by the four training checkpoints used in the ensemble, in this case trained with the default (random) seed. Given that each of these results is poorer than our ensemble results for the multilingual model (WER 14.83 / PER 3.41), it is clear that the ensemble approach is superior. Clearly, the ensemble has the advantage of exploiting multiple predictions for each word. This does result in reduced error rates as compared to individual models.",
"cite_spans": [],
"ref_spans": [
{
"start": 215,
"end": 222,
"text": "Table 8",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Lang Source Target Prediction",
"sec_num": null
},
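{
"text": "The distribution-averaging step of the ensemble can be sketched as follows; representing each member model's per-step output as a symbol-to-probability mapping is an assumption for illustration (OpenNMT operates on tensors internally).

```python
def ensemble_step(dists):
    # average the probability each member model assigns to every
    # candidate phoneme at one decoding step, then take the argmax
    avg = {}
    for dist in dists:
        for sym, p in dist.items():
            avg[sym] = avg.get(sym, 0.0) + p / len(dists)
    best = max(avg, key=avg.get)
    return best, avg
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lang Source Target Prediction",
"sec_num": null
},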
{
"text": "We also compare our multilingual model's error rates on a given language to those acquired by the respective monolingual models. We note that each of the monolingual models is otherwise initialized with the same parameters as the multilingual model described in Section 3.1. Results for the 15 monolingual models are shown in Table 9 . The average WER across all languages is almost twice as big as that of our multilingual model (whether individual or ensemble), and the per- ",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 333,
"text": "Table 9",
"ref_id": "TABREF16"
}
],
"eq_spans": [],
"section": "Lang Source Target Prediction",
"sec_num": null
},
{
"text": "Various data-driven models have been successfully applied to G2P conversion. In terms of English conversion, Bisani and Ney (2008) use co-segmentation and joint sequence models for early data-driven G2P. Novak et al. (2016) employ a joint multigram approach to generate weighted finite-state transducers for G2P. Recently, neural sequence-to-sequence models based on CNN and RNN architectures have been proposed for the G2P task delivering superior results compared to earlier non-neural approaches (Chae et al., 2018; Yolchuyeva et al., 2019a) . Similar to our approach, Yolchuyeva et al. (2019b) use transformers (Vaswani et al., 2017) to perform English G2P conversion. Multilingual training is a crucial component in our system. Our approach is closely related to multilingual neural machine translation (Johnson et al., 2017b) , where a single model is trained to translate between multiple source and target languages. Others have also explored multilingual approaches to G2P. Deri and Knight (2016) use multilingual G2P conversion for the purpose of adapting models from high-resource languages to train weighted finite-state transducers for related low-resource languages. Ni et al. (2018) experiment with multilingual training for deep learning models. They use pretrained character embeddings with LSTM encoder-decoders in order to train multilingual G2P models for Chinese, Japanese, Korean and Thai. In contrast to Ni et al. (2018) , we inspect multilingual training in the context of transformer models. For our second model, whose training data is augmented from Wikipedia, we use a selftaining method. Sun et al. (2019) investigate self-training together with ensemble distillation for English G2P conversion, using transformer models. Their setting resembles ours: A teacher model is first trained using a gold standard labeled G2P training set. The teacher model is then used to label additional grapheme data, producing a silver standard training set. 
Subsequently, a model ensemble is trained on the combination of the gold and silver data. Sun et al. (2019) train on nearly 200k gold standard examples and 2M silver standard examples and report small improvements. In contrast, we do not observe improvements from self-training. This might be a consequence of the small size of the shared task datasets and our silver standard Wikipedia data.",
"cite_spans": [
{
"start": 109,
"end": 130,
"text": "Bisani and Ney (2008)",
"ref_id": "BIBREF0"
},
{
"start": 204,
"end": 223,
"text": "Novak et al. (2016)",
"ref_id": "BIBREF10"
},
{
"start": 499,
"end": 518,
"text": "(Chae et al., 2018;",
"ref_id": null
},
{
"start": 519,
"end": 544,
"text": "Yolchuyeva et al., 2019a)",
"ref_id": null
},
{
"start": 572,
"end": 597,
"text": "Yolchuyeva et al. (2019b)",
"ref_id": "BIBREF15"
},
{
"start": 615,
"end": 637,
"text": "(Vaswani et al., 2017)",
"ref_id": null
},
{
"start": 808,
"end": 831,
"text": "(Johnson et al., 2017b)",
"ref_id": "BIBREF6"
},
{
"start": 983,
"end": 1005,
"text": "Deri and Knight (2016)",
"ref_id": "BIBREF1"
},
{
"start": 1181,
"end": 1197,
"text": "Ni et al. (2018)",
"ref_id": "BIBREF9"
},
{
"start": 1427,
"end": 1443,
"text": "Ni et al. (2018)",
"ref_id": "BIBREF9"
},
{
"start": 1617,
"end": 1634,
"text": "Sun et al. (2019)",
"ref_id": "BIBREF11"
},
{
"start": 2060,
"end": 2077,
"text": "Sun et al. (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We introduced a multilingual approach to G2P conversion, exploiting Transformers in a fully supervised multilingual setting. Strikingly, our choice to model all languages in a shared, nultilingual space reduces error rates (in WER and PER) by almost one half. We also showed how an ensemble of individuallytrained multilingual Transformers, is an improvement over non-ensemble models. We also leveraged multilingual Wikipedia data via a self-training strategy, though due to time constraints we were not able to incorporate enough silver labeled data into training to see the results we had hoped for 12 . Nevertheless, the multilingual models successfully surpassed all organizer-provided baselines on the task and compared favorably to several other submitted models. Our future work includes scaling up our self-training with larger Wikipedia data and choosing fully-trained models (e.g., in our case ones at 200K steps) to include in the ensemble.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://www.wiktionary.org/.3 We use three-character ISO-639-2 abbreviations as not all of the task languages have ISO-639-1 codes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.opengrm.org/twiki/bin/view/GRM. 5 https://github.com/pytorch/fairseq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also experimented with beam size 10, but did not obtain improvements on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We note that there is no Adyghe Wikipedia. Also, the Japenese Wikipedia is not strictly in Hiragana and so we exclude it. By mistake, we did not include Vietnamese either. Clearly, we average results from the self-training models only on the languages for which we augment the data.8 https://dumps.wikimedia.org/. 9 https://github.com/attardi/wikiextractor",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Analysis & Ablation StudyWe suspected that languages with shared writing systems (in our multilingual models) would benefit from the shared representation and hence see better results, posing a challenges to those languages with unique orthography (i.e., orthography not shared by o=any of the other languages considered). However, our results do not support this hypothesis; there did not",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "12 Training on all available Wikipedia data is in progress at the time of this paper's submission Moon-jung Chae",
"authors": [
{
"first": "Maximilian",
"middle": [],
"last": "Bisani",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2008,
"venue": "2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "50",
"issue": "",
"pages": "2486--2490",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint- sequence models for grapheme-to-phoneme con- version. Speech communication, 50(5):434-451. 12 Training on all available Wikipedia data is in progress at the time of this paper's submission Moon-jung Chae, Kyubyong Park, Jinhyun Bang, Soobin Suh, Jonghyuk Park, Namju Kim, and Longhun Park. 2018. Convolutional sequence to sequence model with non-sequential greedy de- coding for grapheme to phoneme conversion. In 2018 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 2486-2490. IEEE.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Grapheme-tophoneme models for (almost) any language",
"authors": [
{
"first": "Aliya",
"middle": [],
"last": "Deri",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "399--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliya Deri and Kevin Knight. 2016. Grapheme-to- phoneme models for (almost) any language. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 399-408.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multi-task learning for multiple language translation",
"authors": [
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Dianhai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1723--1732",
"other_ids": {
"DOI": [
"10.3115/v1/P15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Daxiang Dong, Hua Wu, Wei He, Dianhai Yu, and Haifeng Wang. 2015. Multi-task learning for multiple language translation. In Proceed- ings of the 53rd Annual Meeting of the Asso- ciation for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1723-1732, Beijing, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "866--875",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1101"
]
},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Ben- gio. 2016. Multi-way, multilingual neural ma- chine translation with a shared attention mech- anism. In Proceedings of the 2016 Conference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Lan- guage Technologies, pages 866-875, San Diego, California. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "F",
"middle": [
"E"
],
"last": "Lucas",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Ashby",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "Goyzueta",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman, Lucas F.E. Ashby, Aaron Goyzueta, Arya D. McCarthy, Shijie Wu, and Daniel You. 2020. The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conver- sion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonet- ics, Phonology, and Morphology.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00065"
]
},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Watten- berg, Greg Corrado, Macduff Hughes, and Jef- frey Dean. 2017a. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wat- tenberg, Greg Corrado, et al. 2017b. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "OpenNMT: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural ma- chine translation. In Proc. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Massively multilingual pronunciation mining with WikiPron",
"authors": [
{
"first": "Jackson",
"middle": [
"L"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Lucas",
"middle": [
"F",
"E"
],
"last": "Ashby",
"suffix": ""
},
{
"first": "M",
"middle": [
"Elizabeth"
],
"last": "Garza",
"suffix": ""
},
{
"first": "Yeonju",
"middle": [],
"last": "Lee-Sikka",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Arya",
"middle": [
"D"
],
"last": "McCarthy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4216--4221",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jackson L. Lee, Lucas F.E. Ashby, M. Elizabeth Garza, Yeonju Lee-Sikka, Sean Miller, Alan Wong, Arya D. McCarthy, and Kyle Gorman. 2020. Massively multilingual pronunciation min- ing with WikiPron. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4216-4221, Marseille.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multilingual grapheme-to-phoneme conversion with global character vectors",
"authors": [
{
"first": "Jinfu",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Yoshinori",
"middle": [],
"last": "Shiga",
"suffix": ""
},
{
"first": "Hisashi",
"middle": [],
"last": "Kawai",
"suffix": ""
}
],
"year": 2018,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "2823--2827",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinfu Ni, Yoshinori Shiga, and Hisashi Kawai. 2018. Multilingual grapheme-to-phoneme con- version with global character vectors. In Inter- speech, pages 2823-2827.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework",
"authors": [
{
"first": "Josef",
"middle": [
"Robert"
],
"last": "Novak",
"suffix": ""
},
{
"first": "Nobuaki",
"middle": [],
"last": "Minematsu",
"suffix": ""
},
{
"first": "Keikichi",
"middle": [],
"last": "Hirose",
"suffix": ""
}
],
"year": 2016,
"venue": "Natural Language Engineering",
"volume": "22",
"issue": "6",
"pages": "907--938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Josef Robert Novak, Nobuaki Minematsu, and Kei- kichi Hirose. 2016. Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n- gram models in the wfst framework. Natural Language Engineering, 22(6):907-938.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Token-level ensemble distillation for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Jun-Wei",
"middle": [],
"last": "Gan",
"suffix": ""
},
{
"first": "Hongzhi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.03446"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, and Tie-Yan Liu. 2019. Token-level ensemble distillation for grapheme-to-phoneme conversion. arXiv preprint arXiv:1904.03446.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Grapheme-to-phoneme conversion with convolutional neural networks",
"authors": [
{
"first": "Sevinj",
"middle": [],
"last": "Yolchuyeva",
"suffix": ""
},
{
"first": "G\u00e9za",
"middle": [],
"last": "N\u00e9meth",
"suffix": ""
},
{
"first": "B\u00e1lint",
"middle": [],
"last": "Gyires-T\u00f3th",
"suffix": ""
}
],
"year": 2019,
"venue": "Applied Sciences",
"volume": "9",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grapheme-to-phoneme conversion with convolutional neural networks. Applied Sciences, 9(6):1143.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Transformer based grapheme-to-phoneme conversion",
"authors": [
{
"first": "Sevinj",
"middle": [],
"last": "Yolchuyeva",
"suffix": ""
},
{
"first": "G\u00e9za",
"middle": [],
"last": "N\u00e9meth",
"suffix": ""
},
{
"first": "B\u00e1lint",
"middle": [],
"last": "Gyires-T\u00f3th",
"suffix": ""
}
],
"year": 2019,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.21437/interspeech.2019-1954"
]
},
"num": null,
"urls": [],
"raw_text": "Sevinj Yolchuyeva, G\u00e9za N\u00e9meth, and B\u00e1lint Gyires-T\u00f3th. 2019b. Transformer based grapheme-to-phoneme conversion. Interspeech 2019.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "H R b A: n i: m e: H \u0259 R b A:",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"html": null,
"num": null,
"text": "Baseline performance as avg. WER and PER over the 15 languages as provided by task organizers. Baselines exploit monolingual models.",
"content": "<table/>"
},
"TABREF5": {
"type_str": "table",
"html": null,
"num": null,
"text": "Multilingual Transformer hyperparameters.",
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"num": null,
"text": "summarizes the size of the Wikipedia data used for each available language. Selection methods and thresholds are discussed in Section 3.2.2.",
"content": "<table><tr><td colspan=\"3\">Language Translated Selected</td></tr><tr><td>arm</td><td>9,947</td><td>4,723</td></tr><tr><td>bul</td><td>9,999</td><td>3,197</td></tr><tr><td>dut</td><td>2,275</td><td>860</td></tr><tr><td>fre</td><td>9,985</td><td>2,888</td></tr><tr><td>geo</td><td>5,038</td><td>3,043</td></tr><tr><td>gre</td><td>9,949</td><td>3,419</td></tr><tr><td>hin</td><td>1,450</td><td>727</td></tr><tr><td>hun</td><td>10,000</td><td>3,444</td></tr><tr><td>ice</td><td>9,839</td><td>3,719</td></tr><tr><td>kor</td><td>4,282</td><td>2,681</td></tr><tr><td>lit</td><td>7,033</td><td>3,615</td></tr><tr><td>rum</td><td>9,785</td><td>3,102</td></tr><tr><td>Total</td><td>89,582</td><td>35,418</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table/>"
},
"TABREF9": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table/>"
},
"TABREF11": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table/>"
},
"TABREF12": {
"type_str": "table",
"html": null,
"num": null,
"text": "Sample prediction errors from development data.",
"content": "<table/>"
},
"TABREF13": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>Checkpoint</td><td>WER</td><td>PER</td></tr><tr><td>50k of 200k steps</td><td>16.70</td><td>3.93</td></tr><tr><td>100k of 200k steps</td><td>16.04</td><td>3.69</td></tr><tr><td>150k of 200k steps</td><td>16.25</td><td>3.78</td></tr><tr><td>200k of 200k steps</td><td>15.73</td><td>3.65</td></tr><tr><td>Ensemble</td><td>14.83</td><td>3.41</td></tr></table>"
},
"TABREF14": {
"type_str": "table",
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>: Development set results for individual</td></tr><tr><td>models vs. our ensemble</td></tr><tr><td>language results are worse across the board</td></tr><tr><td>as well. The monolingual Georgian WER</td></tr><tr><td>(25.33) was the only result to approach its</td></tr><tr><td>multilingual counterpart (24.44). Our multi-</td></tr><tr><td>lingual approach is clearly a significant</td></tr><tr><td>improvement over otherwise equivalent</td></tr><tr><td>monolingually-trained models.</td></tr></table>"
},
"TABREF16": {
"type_str": "table",
"html": null,
"num": null,
"text": "Development set results for monolingual models.",
"content": "<table/>"
}
}
}
}