ACL-OCL / Base_JSON /prefixS /json /sigmorphon /2021.sigmorphon-1.24.json
{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:31:24.530055Z"
},
"title": "Improved pronunciation prediction accuracy using morphology",
"authors": [
{
"first": "Dravyansh",
"middle": [],
"last": "Sharma",
"suffix": "",
"affiliation": {
"laboratory": "Google LLC",
"institution": "",
"location": {}
},
"email": "dravyans@andrew.cmu.edu"
},
{
"first": "Yashmohini",
"middle": [],
"last": "Sahai",
"suffix": "",
"affiliation": {
"laboratory": "Google LLC",
"institution": "",
"location": {}
},
"email": "sahai.17@osu.edu"
},
{
"first": "Neha",
"middle": [],
"last": "Chaudhari",
"suffix": "",
"affiliation": {
"laboratory": "Google LLC",
"institution": "",
"location": {}
},
"email": "neha7.chaudhari@gmail.com"
},
{
"first": "Antoine",
"middle": [],
"last": "Bruguier",
"suffix": "",
"affiliation": {
"laboratory": "Google LLC",
"institution": "",
"location": {}
},
"email": "bruguier@alumni.caltech.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Pronunciation lexicons and prediction models are a key component in several speech synthesis and recognition systems. We know that morphologically related words typically follow a fixed pattern of pronunciation which can be described by language-specific paradigms. In this work we explore how deep recurrent neural networks can be used to automatically learn and exploit this pattern to improve the pronunciation prediction quality of words related by morphological inflection. We propose two novel approaches for supplying morphological information, using the word's morphological class and its lemma, which are typically annotated in standard lexicons. We report improvements across a number of European languages with varying degrees of phonological and morphological complexity, and two language families, with greater improvements for languages where the pronunciation prediction task is inherently more challenging. We also observe that combining bidirectional LSTM networks with attention mechanisms is an effective neural approach for the computational problem considered, across languages. Our approach seems particularly beneficial in the low resource setting, both by itself and in conjunction with transfer learning.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Pronunciation lexicons and prediction models are a key component in several speech synthesis and recognition systems. We know that morphologically related words typically follow a fixed pattern of pronunciation which can be described by language-specific paradigms. In this work we explore how deep recurrent neural networks can be used to automatically learn and exploit this pattern to improve the pronunciation prediction quality of words related by morphological inflection. We propose two novel approaches for supplying morphological information, using the word's morphological class and its lemma, which are typically annotated in standard lexicons. We report improvements across a number of European languages with varying degrees of phonological and morphological complexity, and two language families, with greater improvements for languages where the pronunciation prediction task is inherently more challenging. We also observe that combining bidirectional LSTM networks with attention mechanisms is an effective neural approach for the computational problem considered, across languages. Our approach seems particularly beneficial in the low resource setting, both by itself and in conjunction with transfer learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphophonology is the study of the interaction between morphological and phonological processes and mostly involves the description of sound changes that take place in morphemes (minimal meaningful units) when they combine to form words. For example, the plural morpheme in English appears as '-s' or '-es' in orthography and as [s], [z], and [Iz] in phonology, e.g. in cops, cogs and courses. The different forms can be thought of as derived from a common plural morphophoneme which undergoes context-dependent transformations to produce the correct phones. (Part of the work was done when D.S., N.C. and A.B. were at Google.)",
"cite_spans": [
{
"start": 322,
"end": 325,
"text": "[s]",
"ref_id": null
},
{
"start": 328,
"end": 331,
"text": "[z]",
"ref_id": null
},
{
"start": 338,
"end": 342,
"text": "[Iz]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A pronunciation model, also known as a grapheme to phoneme (G2P) converter, is a system that produces a phonemic representation of a word from its written form. The word is converted from the sequence of letters in the orthographic script to a sequence of phonemes (sound symbols) in a pre-determined transcription, such as IPA or X-SAMPA. It is expensive, and in morphologically rich languages with productive compounding possibly infeasible, to list the pronunciations for all the words, so rules or learned models are used for this task. Pronunciation models are important components of both speech recognition (ASR) and synthesis (text-to-speech, TTS) systems. Even though end-to-end models have been gathering recent attention (Graves and Jaitly, 2014; Sotelo et al., 2017), state-of-the-art models in industrial production systems often involve conversion to and from an intermediate phoneme layer.",
"cite_spans": [
{
"start": 736,
"end": 761,
"text": "(Graves and Jaitly, 2014;",
"ref_id": "BIBREF13"
},
{
"start": 762,
"end": 782,
"text": "Sotelo et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A single system of morphophonological rules connecting morphology with phonology is well-known (Chomsky and Halle, 1968). In fact, computational models for morphology such as the two-level morphology of Koskenniemi (1983); Kaplan and Kay (1994) have the bulk of their machinery designed to handle phonological rules. However, the approach involves encoding language-specific rules as a finite-state transducer, a tedious and expensive process requiring linguistic expertise. Linguistic rules are augmented computationally for small corpora in Ermolaeva (2018), although the scalability and applicability of the approach across languages is not tested.",
"cite_spans": [
{
"start": 99,
"end": 124,
"text": "(Chomsky and Halle, 1968)",
"ref_id": "BIBREF2"
},
{
"start": 207,
"end": 225,
"text": "Koskenniemi (1983)",
"ref_id": "BIBREF19"
},
{
"start": 228,
"end": 249,
"text": "Kaplan and Kay (1994)",
"ref_id": "BIBREF17"
},
{
"start": 546,
"end": 562,
"text": "Ermolaeva (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on using deep neural models to improve the quality of pronunciation prediction using morphology. G2P fits nicely into the well-studied sequence to sequence learning paradigm (Sutskever et al., 2014); here we use extensions that can handle supplementary inputs in order to inject the morphological information. Our techniques are similar to Sharma et al. (2019), although the goal there is to lemmatize or inflect more accurately using pronunciations. Taylor and Richmond (2020) consider improving neural G2P quality using morphology; our work differs in two respects. First, we use morphology class and lemma entries instead of morpheme boundaries, for which annotations may not be as readily available. Second, they consider BiLSTMs and Transformer models, but we additionally consider architectures which combine BiLSTMs with attention and outperform both. We also show significant gains from morphology injection in the context of transfer learning for low-resource languages where sufficient annotations are unavailable.",
"cite_spans": [
{
"start": 182,
"end": 206,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 349,
"end": 369,
"text": "Sharma et al. (2019)",
"ref_id": "BIBREF30"
},
{
"start": 461,
"end": 487,
"text": "Taylor and Richmond (2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Pronunciation prediction is often studied in the settings of speech recognition and synthesis. Some recent work explores new representations (Livescu et al., 2016; Sofroniev and \u00c7\u00f6ltekin, 2018; Jacobs and Mailhot, 2019), but in this work a pronunciation is a sequence of phonemes, syllable boundaries and stress symbols (van Esch et al., 2016). A lot of work has been devoted to the G2P problem (e.g. see Nicolai et al. (2020)), ranging from work focused on accuracy and model size to approaches for data-efficient scaling to low-resource languages or multilingual modeling (Rao et al., 2015; Sharma, 2018).",
"cite_spans": [
{
"start": 137,
"end": 159,
"text": "(Livescu et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 160,
"end": 189,
"text": "Sofroniev and \u00c7\u00f6ltekin, 2018;",
"ref_id": "BIBREF31"
},
{
"start": 190,
"end": 215,
"text": "Jacobs and Mailhot, 2019)",
"ref_id": "BIBREF15"
},
{
"start": 318,
"end": 341,
"text": "(van Esch et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 404,
"end": 425,
"text": "Nicolai et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 592,
"end": 610,
"text": "(Rao et al., 2015;",
"ref_id": "BIBREF26"
},
{
"start": 611,
"end": 624,
"text": "Sharma, 2018;",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and related work",
"sec_num": "2"
},
{
"text": "Morphology prediction is of independent interest and has applications in natural language generation as well as understanding. The problems of lemmatization and morphological inflection have been studied in both contextual (in a sentence, which involves morphosyntactics) and isolated settings (Cohen and Smith, 2007; Faruqui et al., 2015; Cotterell et al., 2016; Sharma et al., 2019).",
"cite_spans": [
{
"start": 294,
"end": 317,
"text": "(Cohen and Smith, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 318,
"end": 339,
"text": "Faruqui et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 340,
"end": 363,
"text": "Cotterell et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 364,
"end": 384,
"text": "Sharma et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and related work",
"sec_num": "2"
},
{
"text": "Morphophonological prediction, by which we mean viewing morphology and pronunciation prediction as a single task with several related inputs and outputs, has received relatively less attention as a language-independent computational task, even though its significance for G2P has been argued (Coker et al., 1991). Sharma et al. (2019) show improved morphology prediction using phonology, and Taylor and Richmond (2020) show the reverse. The present work aligns with the latter, but instead of requiring full morphological segmentation of words, we work with weaker and more easily annotated morphological information like word lemmas and morphological categories.",
"cite_spans": [
{
"start": 292,
"end": 312,
"text": "(Coker et al., 1991)",
"ref_id": "BIBREF4"
},
{
"start": 315,
"end": 335,
"text": "Sharma et al. (2019)",
"ref_id": "BIBREF30"
},
{
"start": 393,
"end": 419,
"text": "Taylor and Richmond (2020)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background and related work",
"sec_num": "2"
},
{
"text": "We consider the G2P problem, i.e. prediction of the sequence of phonemes (pronunciation) from the sequence of graphemes in a single word. The G2P problem forms a clean, simple application of seq2seq learning, which can also be used to create models that achieve state-of-the-art accuracies in pronunciation prediction. Morphology can aid this prediction in several ways. One, we could use the morphological category as a non-sequential side input. Two, we could use knowledge of the morphemes of the word and their pronunciations, which may be possible with lower amounts of annotation. For example, the lemma (and its pronunciation) may already be annotated for an out-of-vocabulary word. Standard lexicons often list the lemmata of derived/inflected words; lemmatizer models can be used as a fallback. Learning from the exact morphological segmentation (Taylor and Richmond, 2020) would need more precise models and annotation (Demberg et al., 2007).",
"cite_spans": [
{
"start": 929,
"end": 951,
"text": "(Demberg et al., 2007)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Improved pronunciation prediction",
"sec_num": "3"
},
{
"text": "Given the spelling, language-specific models can predict the pronunciation by using knowledge of typical grapheme to phoneme mappings in the language. Some errors of these models may be fixed with help from morphological information, as argued above. For instance, homograph pronunciations can be predicted using morphology but are impossible to deduce correctly using just orthography. 1 The pronunciation of 'read' (/\u00f4i:d/ for present tense and noun, /\u00f4Ed/ for past and participle) can be determined by the part of speech and tense; the stress shifts from the first to the second syllable between the noun and verb senses of 'project'.",
"cite_spans": [
{
"start": 388,
"end": 389,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Improved pronunciation prediction",
"sec_num": "3"
},
{
"text": "We train and evaluate our models for five languages to cover some morphophonological diversity: (American) English, French, Russian, Spanish and Hungarian. For training our models, we use pronunciation lexicons (word-pronunciation pairs) and morphological lexicons (containing the lexical form, i.e. lemma and morphology class) of only inflected words, of size on the order of 10^4 for each language (see Table 5 in Appendix A). For the languages discussed, these lexicons are obtained by scraping 2 Wiktionary data and filtering for words that have annotations (including pronunciations available in IPA format) for both the surface form and the lexical form. While this order of data is often available for high-resource languages, in Section 3.3 we discuss extending our work to low-resource settings, using Finnish and Portuguese for illustration, where the Wiktionary data is about an order of magnitude smaller.",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 408,
"text": "Table 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "Word (language) | Morph. Class | Pron. | LS | LP: masseuses (fr) | n-f-pl | /ma.s\u00f8z/ | masseur | /ma.soeK/; fagylaltozom (hu) | v-fp-s-in-pr-id | /\"f6\u00cdl6ltozom/ | fagylaltozik | /\"f6\u00cdl6ltozik/. We keep 20% of the pronunciation lexicons aside for evaluation using the word error rate (WER) metric. WER counts an output as correct only if the entire output pronunciation sequence matches the ground truth annotation for the test example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "3.1"
},
{
"text": "The morphological category of the word is appended as an ordinal encoding to the spelling, separated by a special character. That is, the categories of a given language are encoded as unique integers, as opposed to one-hot vectors, which may be too large for morphologically rich languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Morphological category",
"sec_num": "3.1.1"
},
{
"text": "Information about the lemma is given to the models by appending both the lemma pronunciation LP and the lemma spelling LS to the word spelling WS, all separated by special characters, as WS \u00a7 LP \u00b6 LS. The lemma spelling can potentially help in irregular cases: for example, 'be' has past forms 'was' and 'were', so the model can reject the lemma pronunciation in this case by noting that the lemma spellings are different (but potentially still use it for 'been').",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lemma spelling and pronunciation",
"sec_num": "3.1.2"
},
{
"text": "The models described below are implemented in OpenNMT (Klein et al., 2017). The LSTM (Hochreiter and Schmidhuber, 1997) allows learning of fixed length sequences, which is not a major problem for pronunciation prediction since grapheme and phoneme sequences (represented as one-hot vectors) are often of comparable length, and in fact state-of-the-art accuracies can be obtained using bidirectional LSTMs (Rao et al., 2015). We use a single-layer BiLSTM encoder-decoder with 256 units and 0.2 dropout to build a character-level RNN. Each character is represented by a trainable embedding of dimension 30.",
"cite_spans": [
{
"start": 54,
"end": 73,
"text": "(Klein et al., 2017",
"ref_id": "BIBREF18"
},
{
"start": 74,
"end": 108,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF14"
},
{
"start": 393,
"end": 411,
"text": "(Rao et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model details",
"sec_num": "3.2"
},
{
"text": "Attention-based models (Vaswani et al., 2017; Chan et al., 2016; Luong et al., 2015; Xu et al., 2015) are capable of taking a weighted sample of input, allowing the network to focus on different possibly distant relevant segments of the input effectively to predict the output. We use the model defined in Section 3.2.1 with Luong attention (Luong et al., 2015).",
"cite_spans": [
{
"start": 23,
"end": 45,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 46,
"end": 64,
"text": "Chan et al., 2016;",
"ref_id": "BIBREF1"
},
{
"start": 65,
"end": 84,
"text": "Luong et al., 2015;",
"ref_id": "BIBREF21"
},
{
"start": 85,
"end": 101,
"text": "Xu et al., 2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM based encoder-decoder networks with attention (BiLSTM+Attn)",
"sec_num": "3.2.2"
},
{
"text": "Transformer (Vaswani et al., 2017) uses self-attention in both the encoder and decoder to learn rich text representations. We use a similar architecture but with fewer parameters, using 3 layers, 256 hidden units, 4 attention heads and 1024-dimensional feed-forward layers with ReLU activation. Both the attention and feed-forward dropout is 0.1. The input character embedding dimension is 30.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Transformer networks",
"sec_num": "3.2.3"
},
{
"text": "Both non-neural and neural approaches have been studied for transfer learning (Weiss et al., 2016) from a high-resource language in the low-resource language G2P setting, using a variety of strategies including semi-automated bootstrapping, using acoustic data, designing representations suitable for neural learning, active learning, data augmentation and multilingual modeling (Maskey et al., 2004; Davel and Martirosian, 2009; Jyothi and Hasegawa-Johnson, 2017; Sharma, 2018; Ryan and Hulden, 2020; Peters et al., 2017). Recently, transformer-based architectures have also been used for this task (Engelhart et al., 2021). Here we apply a similar approach of using representations learned from the high-resource languages as an additional input for low-resource models, but for our BiLSTM+Attn architecture. Table 2: Models and their Word Error Rates (WERs); 'b' corresponds to baseline (vanilla G2P), '+c' refers to morphology class injection (Sec. 3.1.1) and '+l' to addition of lemma spelling and pronunciation (Sec. 3.1.2). Model (b/+c/+l) for en, fr, ru, es, hu: BiLSTM (39.7/39.4/37.1), (8.69/8.94/7.94), (5.26/4.87/5.60), (1.13/1.44/1.30), (6.96/5.85/7.21); BiLSTM+Attn (36.9/36.1/31.0), (4.45/4.20/4.12), (5.06/3.80/4.04), (0.32/0.32/0.29), (1.78/1.31/1.12); Transformer (40.2/39.3/37.7), (8.19/7.11/10.6), (6.57/6.38/5.36), (2.29/1.62/2.20), (8.20/4.93/8.11). We evaluate our model for two language pairs: hu (high) with fi (low), and es (high) with pt (low) (results in Table 3). We perform morphology injection using lemma spelling and pronunciation (Sec. 3.1.2) since it can be easier to annotate and is potentially more effective (per Table 2). fi and pt are not really low-resource, but have relatively fewer Wiktionary annotations for the lexical forms.",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "(Weiss et al., 2016)",
"ref_id": "BIBREF36"
},
{
"start": 375,
"end": 396,
"text": "(Maskey et al., 2004;",
"ref_id": "BIBREF22"
},
{
"start": 397,
"end": 425,
"text": "Davel and Martirosian, 2009;",
"ref_id": "BIBREF6"
},
{
"start": 426,
"end": 460,
"text": "Jyothi and Hasegawa-Johnson, 2017;",
"ref_id": "BIBREF16"
},
{
"start": 461,
"end": 474,
"text": "Sharma, 2018;",
"ref_id": "BIBREF29"
},
{
"start": 475,
"end": 497,
"text": "Ryan and Hulden, 2020;",
"ref_id": "BIBREF27"
},
{
"start": 498,
"end": 518,
"text": "Peters et al., 2017;",
"ref_id": "BIBREF25"
},
{
"start": 597,
"end": 620,
"text": "(Engelhart et al., 2021",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1155,
"end": 1162,
"text": "Table 2",
"ref_id": null
},
{
"start": 1478,
"end": 1485,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1644,
"end": 1651,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Transfer learning for low resource G2P",
"sec_num": "3.3"
},
{
"text": "We discuss our results under two themes: the efficacy of the different neural models we implemented, and the effect of the different ways of injecting morphology that we considered. We consider three neural models as described above. To compare the neural models, we first note the approximate number of parameters of each model that we trained:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "\u2022 BiLSTM: \u223c1.7M parameters,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "\u2022 BiLSTM+Attn: \u223c3.5M parameters,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "\u2022 Transformer: \u223c5.2M parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "For BiLSTM and BiLSTM+Attn, the parameter size is based on neural architecture search, i.e. we estimated sizes at which accuracies (nearly) peaked. For the Transformer, we believe even larger models could be more effective; the current size was chosen due to computational restrictions and for a \"fairer\" comparison of model effectiveness. Under this setting, BiLSTM+Attn models clearly outperform both of the other models, even without morphology injection (cf. , albeit in the multilingual modeling context). Transformer can beat BiLSTM in some cases even with the suboptimal model size restriction, but is consistently worse when the sequence lengths are larger, which is the case when we inject lemma spellings and pronunciations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "We also look at how adding lexical form information, i.e. morphological class and lemma, helps with pronunciation prediction. We notice that the improvements are particularly prominent when the G2P task itself is more complex, for example in English. In particular, ambiguous or exceptional grapheme-subsequence to phoneme-subsequence mappings (e.g. ough in English) may be resolved with help from lemma pronunciations. Morphological category also helps, for example, in Russian, where it can carry a lot of information due to the language's inherent morphological complexity (about 25% relative error reduction). See Appendix B for a more detailed comparison and error analysis for the models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "Our transfer learning experiments indicate that morphology injection gives even more gains in low resource setting. In fact for both the languages considered, adding morphology gives almost as much gain as adding a high resource language to the BiLSTM+Attn model. This could be useful for low resource languages like Georgian where a high resource language from the same language family is unavailable. Even with the high resource augmentation, using morphology can give a significant further boost to the prediction accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "4"
},
{
"text": "We note that combining BiLSTM with attention seems to be the most attractive alternative for improving pronunciation prediction by leveraging morphology, and hence corresponds to the most appropriate 'model bias' for the problem among the alternatives considered. We also note that all the neural network paradigms discussed are capable of improving G2P prediction quality when augmented with morphological information. Since our approach can potentially support partial/incomplete data (using appropriate MISSING or N/A tokens), one can use a single model which injects morphology class and/or lemma pronunciation as available. For languages where neither is available, our results suggest building word-lemma lists or utilizing effective lemmatizers (Faruqui et al., 2015; Cotterell et al., 2016).",
"cite_spans": [
{
"start": 774,
"end": 796,
"text": "(Faruqui et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 797,
"end": 820,
"text": "Cotterell et al., 2016)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our work only leverages the inflectional morphology paradigms for better pronunciation prediction. However, in addition to inflection, morphology also results in word formation via derivation and compounding. Unlike inflection, derivation and compounding could involve multiple root words, so an extension would need a generalization of the above approach along with appropriate data. An alternative would be to learn these in an unsupervised way using a dictionary-augmented neural network which can efficiently refer to pronunciations in a dictionary and use them to predict pronunciations of polymorphemic words using pronunciations of the base words (Bruguier et al., 2018). It would be interesting to see if using a combination of morphological side information and dictionary augmentation results in a further accuracy boost. Developing non-neural approaches for the morphology injection could be interesting, although as noted before, the neural approaches are the state-of-the-art (Rao et al., 2015).",
"cite_spans": [
{
"start": 653,
"end": 676,
"text": "(Bruguier et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 987,
"end": 1005,
"text": "(Rao et al., 2015;",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6"
},
{
"text": "One interesting application of the present work would be to use the more accurate pronunciation prediction for morphologically related forms for efficient pronunciation lexicon development (useful for low-resource languages where high-coverage lexicons currently don't exist): for example, annotating the lemma pronunciation should be enough, and the pronunciations of all the related forms can then be predicted with high accuracy. This is hugely beneficial for languages where there are hundreds or even thousands of surface forms associated with the same lemma. Another concern for reliably using the neural approaches is explainability (Molnar, 2019). Some recent research looks at explaining neural models with orthographic and phonological features (Sahai and Sharma, 2021); an extension to morphological features should be useful.",
"cite_spans": [
{
"start": 632,
"end": 646,
"text": "(Molnar, 2019)",
"ref_id": "BIBREF23"
},
{
"start": 748,
"end": 772,
"text": "(Sahai and Sharma, 2021)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Future work",
"sec_num": "6"
},
{
"text": "Model | Inputs (b/+c/+l) | en | de | es | ru | avg. rel. gain: BiLSTM (31.0/30.5/25.2), (17.7/15.5/12.3), (8.1/7.9/6.7), (18.4/15.6/15.9), (-/+7.9%/+20.0%); BiLSTM+Attn (29.0/27.1/21.3), (12.0/11.6/11.6), (4.9/2.6/2.4), (14.1/13.6/13.1), (-/+15.1%/+22.0%). Table 4: Number of total Wiktionary entries, and inflected entries with pronunciation and morphology annotations, for the languages considered.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We record the size of the data scraped from Wiktionary in Table 5. There is marked inconsistency across the languages considered in the number of annotated inflected words for which a pronunciation transcription is available, as a fraction of the total vocabulary.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 61,
"text": "Table 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Appendix A On size of data",
"sec_num": null
},
{
"text": "In the main paper, we have discussed results on the publicly available Wiktionary dataset. We perform more experiments on a larger dataset (10^5 to 10^6 examples of annotated inflections per language) using the same data format and methodology for (American) English, German, Spanish and Russian (Table 4). We make very similar observations in this regime in terms of relative gains in model performance using our techniques, and these results are likely more representative of word error rates for the languages as a whole.",
"cite_spans": [],
"ref_spans": [
{
"start": 294,
"end": 303,
"text": "(Table 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix A On size of data",
"sec_num": null
},
{
"text": "Neural sequence to sequence models, while highly accurate on average, make \"silly\" mistakes like omitting or inserting a phoneme, which are hard to explain. With that caveat in place, there are still reasonable patterns to be gleaned when comparing the outputs of the various neural models discussed here. The BiLSTM+Attn model seems not only to make fewer of these \"silly\" mistakes, but also appears to be better at learning the genuinely more challenging predictions. For example, the French word p\u00e9dagogiques ('pedagogical', plural) /pe.da.gO.Zik/ is pronounced correctly by BiLSTM+Attn, but as /pe.da.ZO.Zik/ by BiLSTM. Similarly, BiLSTM+Attn predicts /\"dZaemIN/, while the Transformer network says /\"dZamIN/ for jamming (en). We note that errors for Spanish often involve incorrect stress assignment, since the grapheme-to-phoneme mapping is highly consistent. Adding morphological class information seems to reduce the error in endings for morphologically rich languages, which can be an important source of error if there is relative scarcity of transcriptions available for the inflected words. For example, for our BiLSTM+Attn model, the pronunciation for \u0444\u0443\u0440\u0440\u0435\u043c (ru, 'furry' instrumental singular noun) is fixed from /\"fur j :em/ to /\"fur j :Im/, and koronav\u00edrusr\u00f3l (hu, 'coronavirus' delative singular) gets corrected from /\"koron6vi:ruSo:l/ to /\"koron6vi:ruSro:l/. On the other hand, adding the lemma pronunciation usually helps with pronouncing the root morpheme correctly. Without lemma injection, our BiLSTM+Attn model mispronounces debriefing (en) as /dI\"b\u00f4i:fIN/ and sentences (en) as /sEn\"tEnsIz/. Based on these observations, it would be interesting to try injecting both categorical and lemma information simultaneously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Error analysis",
"sec_num": null
},
{
"text": "Homographs are words which are spelt identically but have different meanings and pronunciations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Dictionary Augmented Sequenceto-Sequence Neural Network for Grapheme to Phoneme prediction",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bruguier",
"suffix": ""
},
{
"first": "Anton",
"middle": [],
"last": "Bakhtin",
"suffix": ""
},
{
"first": "Dravyansh",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc",
"volume": "",
"issue": "",
"pages": "3733--3737",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bruguier, Anton Bakhtin, and Dravyansh Sharma. 2018. Dictionary Augmented Sequence- to-Sequence Neural Network for Grapheme to Phoneme prediction. Proc. Interspeech 2018, pages 3733-3737.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
"authors": [
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
}
],
"year": 2016,
"venue": "Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "4960--4964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Acoustics, Speech and Signal Pro- cessing (ICASSP), 2016 IEEE International Confer- ence on, pages 4960-4964. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The sound pattern of English",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
},
{
"first": "Morris",
"middle": [],
"last": "Halle",
"suffix": ""
}
],
"year": 1968,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky and Morris Halle. 1968. The sound pattern of English.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Joint morphological and syntactic disambiguation",
"authors": [
{
"first": "B",
"middle": [],
"last": "Shay",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shay B Cohen and Noah A Smith. 2007. Joint mor- phological and syntactic disambiguation. In Pro- ceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Com- putational Natural Language Learning (EMNLP- CoNLL).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Morphology and rhyming: Two powerful alternatives to letter-to-sound rules for speech synthesis",
"authors": [
{
"first": "H",
"middle": [],
"last": "Cecil",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Coker",
"suffix": ""
},
{
"first": "Maik",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liberman",
"suffix": ""
}
],
"year": 1991,
"venue": "The ESCA Workshop on Speech Synthesis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cecil H Coker, Kenneth W Church, and Maik Y Liber- man. 1991. Morphology and rhyming: Two pow- erful alternatives to letter-to-sound rules for speech synthesis. In The ESCA Workshop on Speech Syn- thesis.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The SIGMORPHON 2016 shared task-morphological reinflection",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Christo",
"middle": [],
"last": "Kirov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Sylak-Glassman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "10--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell, Christo Kirov, John Sylak-Glassman, David Yarowsky, Jason Eisner, and Mans Hulden. 2016. The SIGMORPHON 2016 shared task-morphological reinflection. In Proceed- ings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 10-22.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pronunciation dictionary development in resource-scarce environments",
"authors": [
{
"first": "Marelie",
"middle": [],
"last": "Davel",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Martirosian",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marelie Davel and Olga Martirosian. 2009. Pronuncia- tion dictionary development in resource-scarce envi- ronments.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion",
"authors": [
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Helmut",
"middle": [],
"last": "Schmid",
"suffix": ""
},
{
"first": "Gregor",
"middle": [],
"last": "M\u00f6hler",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "96--103",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vera Demberg, Helmut Schmid, and Gregor M\u00f6hler. 2007. Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 96- 103.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Grapheme-to-Phoneme Transformer Model for Transfer Learning Dialects",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Engelhart",
"suffix": ""
},
{
"first": "Mahsa",
"middle": [],
"last": "Elyasi",
"suffix": ""
},
{
"first": "Gaurav",
"middle": [],
"last": "Bharaj",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2104.04091"
]
},
"num": null,
"urls": [],
"raw_text": "Eric Engelhart, Mahsa Elyasi, and Gaurav Bharaj. 2021. Grapheme-to-Phoneme Transformer Model for Transfer Learning Dialects. arXiv preprint arXiv:2104.04091.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Extracting morphophonology from small corpora",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Ermolaeva",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "167--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Ermolaeva. 2018. Extracting morphophonol- ogy from small corpora. In Proceedings of the Fif- teenth Workshop on Computational Research in Pho- netics, Phonology, and Morphology, pages 167-175.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Predicting Pronunciations with Syllabification and Stress with Recurrent Neural Networks",
"authors": [
{
"first": "Mason",
"middle": [],
"last": "Daan Van Esch",
"suffix": ""
},
{
"first": "Kanishka",
"middle": [],
"last": "Chua",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rao",
"suffix": ""
}
],
"year": 2016,
"venue": "INTER-SPEECH",
"volume": "",
"issue": "",
"pages": "2841--2845",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daan van Esch, Mason Chua, and Kanishka Rao. 2016. Predicting Pronunciations with Syllabification and Stress with Recurrent Neural Networks. In INTER- SPEECH, pages 2841-2845.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Morphological inflection generation using character sequence to sequence learning",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.06110"
]
},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Yulia Tsvetkov, Graham Neubig, and Chris Dyer. 2015. Morphological inflection genera- tion using character sequence to sequence learning. arXiv preprint arXiv:1512.06110.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion",
"authors": [
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "F",
"middle": [
"E"
],
"last": "Lucas",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Ashby",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goyzueta",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Arya",
"suffix": ""
},
{
"first": "Shijie",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "You",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "40--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyle Gorman, Lucas FE Ashby, Aaron Goyzueta, Arya D McCarthy, Shijie Wu, and Daniel You. 2020. The SIGMORPHON 2020 shared task on multilin- gual grapheme-to-phoneme conversion. In Proceed- ings of the 17th SIGMORPHON Workshop on Com- putational Research in Phonetics, Phonology, and Morphology, pages 40-50.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Towards endto-end speech recognition with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1764--1772",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves and Navdeep Jaitly. 2014. Towards end- to-end speech recognition with recurrent neural net- works. In International Conference on Machine Learning, pages 1764-1772.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Encoderdecoder models for latent phonological representations of words",
"authors": [
{
"first": "L",
"middle": [],
"last": "Cassandra",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Jacobs",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mailhot",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "206--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cassandra L Jacobs and Fred Mailhot. 2019. Encoder- decoder models for latent phonological representa- tions of words. In Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonol- ogy, and Morphology, pages 206-217.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Low-resource grapheme-to-phoneme conversion using recurrent neural networks",
"authors": [
{
"first": "Preethi",
"middle": [],
"last": "Jyothi",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Hasegawa-Johnson",
"suffix": ""
}
],
"year": 2017,
"venue": "2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5030--5034",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preethi Jyothi and Mark Hasegawa-Johnson. 2017. Low-resource grapheme-to-phoneme conversion us- ing recurrent neural networks. In 2017 IEEE Inter- national Conference on Acoustics, Speech and Sig- nal Processing (ICASSP), pages 5030-5034. IEEE.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Regular models of phonological rule systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational linguistics",
"volume": "20",
"issue": "3",
"pages": "331--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald M Kaplan and Martin Kay. 1994. Regular mod- els of phonological rule systems. Computational lin- guistics, 20(3):331-378.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Two-Level Model for Morphological Analysis",
"authors": [
{
"first": "Kimmo",
"middle": [],
"last": "Koskenniemi",
"suffix": ""
}
],
"year": 1983,
"venue": "IJCAI",
"volume": "83",
"issue": "",
"pages": "683--685",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kimmo Koskenniemi. 1983. Two-Level Model for Morphological Analysis. In IJCAI, volume 83, pages 683-685.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Articulatory feature-based pronunciation modeling",
"authors": [
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Preethi",
"middle": [],
"last": "Jyothi",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
}
],
"year": 2016,
"venue": "Computer Speech & Language",
"volume": "36",
"issue": "",
"pages": "212--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Livescu, Preethi Jyothi, and Eric Fosler-Lussier. 2016. Articulatory feature-based pronunciation modeling. Computer Speech & Language, 36:212- 232.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Effective Approaches to Attentionbased Neural Machine Translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective Approaches to Attention- based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Boostrapping phonetic lexicons for new languages",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Maskey",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Black",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Tomokiya",
"suffix": ""
}
],
"year": 2004,
"venue": "Eighth International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Maskey, Alan Black, and Laura Tomokiya. 2004. Boostrapping phonetic lexicons for new lan- guages. In Eighth International Conference on Spo- ken Language Processing.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Interpretable Machine Learning",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Molnar",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Molnar. 2019. Interpretable Machine Learning. https://christophm.github.io/ interpretable-ml-book/.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"authors": [
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Gorman",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garrett Nicolai, Kyle Gorman, and Ryan Cotterell. 2020. Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Massively Multilingual Neural Grapheme-to-Phoneme Conversion",
"authors": [
{
"first": "Ben",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Dehdari",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ben Peters, Jon Dehdari, and Josef van Genabith. 2017. Massively Multilingual Neural Grapheme-to- Phoneme Conversion. EMNLP 2017, page 19.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks",
"authors": [
{
"first": "Kanishka",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ha\u015fim",
"middle": [],
"last": "Sak",
"suffix": ""
},
{
"first": "Fran\u00e7oise",
"middle": [],
"last": "Beaufays",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "4225--4229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kanishka Rao, Fuchun Peng, Ha\u015fim Sak, and Fran\u00e7oise Beaufays. 2015. Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Con- ference on, pages 4225-4229. IEEE.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Data augmentation for transformer-based G2P",
"authors": [
{
"first": "Zach",
"middle": [],
"last": "Ryan",
"suffix": ""
},
{
"first": "Mans",
"middle": [],
"last": "Hulden",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "184--188",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zach Ryan and Mans Hulden. 2020. Data augmen- tation for transformer-based G2P. In Proceedings of the 17th SIGMORPHON Workshop on Computa- tional Research in Phonetics, Phonology, and Mor- phology, pages 184-188.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Predicting and explaining french grammatical gender",
"authors": [
{
"first": "Saumya",
"middle": [],
"last": "Sahai",
"suffix": ""
},
{
"first": "Dravyansh",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Third Workshop on Computational Typology and Multilingual NLP",
"volume": "",
"issue": "",
"pages": "90--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saumya Sahai and Dravyansh Sharma. 2021. Predict- ing and explaining french grammatical gender. In Proceedings of the Third Workshop on Computa- tional Typology and Multilingual NLP, pages 90-96.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "On Training and Evaluation of Grapheme-to-Phoneme Mappings with Limited Data",
"authors": [
{
"first": "Dravyansh",
"middle": [],
"last": "Sharma",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc",
"volume": "",
"issue": "",
"pages": "2858--2862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dravyansh Sharma. 2018. On Training and Evaluation of Grapheme-to-Phoneme Mappings with Limited Data. Proc. Interspeech 2018, pages 2858-2862.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Better Morphology Prediction for Better Speech Systems",
"authors": [
{
"first": "Dravyansh",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Melissa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bruguier",
"suffix": ""
}
],
"year": 2019,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "3535--3539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dravyansh Sharma, Melissa Wilson, and Antoine Bruguier. 2019. Better Morphology Prediction for Better Speech Systems. In INTERSPEECH, pages 3535-3539.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Phonetic vector representations for sound sequence alignment",
"authors": [
{
"first": "Pavel",
"middle": [],
"last": "Sofroniev",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Agr\u0131 \u00c7\u00f6ltekin",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology",
"volume": "",
"issue": "",
"pages": "111--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pavel Sofroniev and \u00c7 agr\u0131 \u00c7\u00f6ltekin. 2018. Phonetic vector representations for sound sequence alignment. In Proceedings of the Fifteenth Workshop on Com- putational Research in Phonetics, Phonology, and Morphology, pages 111-116.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.3215"
]
},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. arXiv preprint arXiv:1409.3215.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Enhancing Sequence-to-Sequence Text-to-Speech with Morphology",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Korin",
"middle": [],
"last": "Richmond",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Taylor and Korin Richmond. 2020. Enhancing Sequence-to-Sequence Text-to-Speech with Mor- phology. Submitted to IEEE ICASSP.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A survey of transfer learning",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Taghi",
"suffix": ""
},
{
"first": "Dingding",
"middle": [],
"last": "Khoshgoftaar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Big data",
"volume": "3",
"issue": "1",
"pages": "1--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Weiss, Taghi M Khoshgoftaar, and DingDing Wang. 2016. A survey of transfer learning. Journal of Big data, 3(1):1-40.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Show, attend and tell: Neural image caption generation with visual attention",
"authors": [
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhudinov",
"suffix": ""
},
{
"first": "Rich",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International conference on machine learning",
"volume": "",
"issue": "",
"pages": "2048--2057",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In International conference on machine learn- ing, pages 2048-2057.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Example annotated entries. (v-fp-s-in-pr-id: Verb, first-person singular indicative present indefinite)",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>Model</td><td>fi</td><td>fi+hu</td><td>pt</td><td>pt+es</td></tr><tr><td colspan=\"5\">BiLSTM+Attn (base) 18.53 9.81 62.65 58.87</td></tr><tr><td colspan=\"2\">BiLSTM+Attn (+lem) 9.27</td><td colspan=\"3\">8.45 59.63 55.48</td></tr></table>",
"type_str": "table",
"text": ").",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table><tr><td>: Transfer learning for vanilla G2P (base) and</td></tr><tr><td>morphology augmented G2P (+lem, Sec. 3.1.2).</td></tr></table>",
"type_str": "table",
"text": "",
"num": null
},
"TABREF5": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "Number of total Wiktionary entries, and inflected entries with pronunciation and morphology annotations, for the languages considered.",
"num": null
}
}
}
}