| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:31:46.861255Z" |
| }, |
| "title": "Linguistic Knowledge in Multilingual Grapheme-to-Phoneme Conversion", |
| "authors": [ |
| { |
| "first": "Yu-Hsiang", |
| "middle": [], |
| "last": "Lo", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of British Columbia", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Garrett", |
| "middle": [], |
| "last": "Nicolai", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of British Columbia", |
| "location": {} |
| }, |
| "email": "garrett.nicolai@ubc.ca" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper documents the UBC Linguistics team's approach to the SIGMORPHON 2021 Grapheme-to-Phoneme Shared Task, concentrating on the low-resource setting. Our systems expand the baseline model with simple modifications informed by syllable structure and error analysis. In-depth investigation of test-set predictions shows that our best model rectifies a significant number of mistakes compared to the baseline prediction, besting all other submissions. Our results validate the view that careful error analysis in conjunction with linguistic knowledge can lead to more effective computational modeling.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper documents the UBC Linguistics team's approach to the SIGMORPHON 2021 Grapheme-to-Phoneme Shared Task, concentrating on the low-resource setting. Our systems expand the baseline model with simple modifications informed by syllable structure and error analysis. In-depth investigation of test-set predictions shows that our best model rectifies a significant number of mistakes compared to the baseline prediction, besting all other submissions. Our results validate the view that careful error analysis in conjunction with linguistic knowledge can lead to more effective computational modeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "With speech technologies becoming ever more prevalent, grapheme-to-phoneme (G2P) conversion is an important part of the pipeline. G2P conversion refers to mapping a sequence of orthographic representations in some language to a sequence of phonetic symbols, often transcribed in the International Phonetic Alphabet (IPA). This is often an early step in tasks such as text-to-speech, where the pronunciation must be determined before any speech is produced. An example of such a G2P conversion, in Amharic, is illustrated below:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "አማርኛ \u2192 [amar\u0268\u0272\u02d0a] 'Amharic'",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "For the second year, one of the SIGMORPHON shared tasks concentrates on G2P. This year, the task is further broken into three subtasks of varying data levels: high-resource (approximately 33K training instances), medium-resource (8K training instances), and low-resource (800 training instances). Our focus is on the low-resource subtask. The language data and associated constraints in the low-resource setting are summarized in Section 3.1; the reader interested in the other two subtasks is referred to Ashby et al. (this volume) for an overview.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we describe our methodology and approaches to the low-resource setting, including insights that informed our methods. We conclude with an extensive error analysis of the effectiveness of our approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "This paper is structured as follows: Section 2 overviews previous work on G2P conversion. Section 3 gives a description of the data in the low-resource subtask, the evaluation metric, and the baseline results, along with the baseline model architecture. Section 4 introduces our approaches as well as the motivation behind them. We present our results in Section 5 and associated error analyses in Section 6. Finally, Section 7 concludes our paper.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The techniques for performing G2P conversion have long been coupled with contemporary machine learning advances. Early paradigms utilize joint sequence models that rely on the alignment between graphemes and phonemes, usually obtained with variants of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977) . The resulting sequences of graphones (i.e., joint grapheme-phoneme tokens) are then modeled with n-gram models or Hidden Markov Models (e.g., Jiampojamarn et al., 2007; Bisani and Ney, 2008; Jiampojamarn and Kondrak, 2010) . A variant of this paradigm includes weighted finite-state transducers trained on such graphone sequences (Novak et al., 2012, 2015) .",
| "cite_spans": [ |
| { |
| "start": 285, |
| "end": 308, |
| "text": "(Dempster et al., 1977)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 452, |
| "end": 478, |
| "text": "Jiampojamarn et al., 2007;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 479, |
| "end": 500, |
| "text": "Bisani and Ney, 2008;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 501, |
| "end": 532, |
| "text": "Jiampojamarn and Kondrak, 2010)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 640, |
| "end": 659, |
| "text": "(Novak et al., 2012", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 660, |
| "end": 681, |
| "text": "(Novak et al., , 2015", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work on G2P conversion", |
| "sec_num": "2" |
| }, |
| { |
"text": "With the rise of various neural network techniques, neural methods have dominated the scene ever since. For example, bidirectional long short-term memory (LSTM) networks using a connectionist temporal classification layer produce results comparable to earlier n-gram models (Rao et al., 2015) . By incorporating alignment information into the model, the ceiling set by n-gram models has since been broken (Yao and Zweig, 2015) . Attention further improved performance, as attentional encoder-decoders (Toshniwal and Livescu, 2016) learned to focus on specific parts of the input sequence. As attention became \"all that was needed\" (Vaswani et al., 2017) , transformer-based architectures have begun looming large (e.g., Yolchuyeva et al., 2019) .",
| "cite_spans": [ |
| { |
| "start": 286, |
| "end": 304, |
| "text": "(Rao et al., 2015)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 417, |
| "end": 438, |
| "text": "(Yao and Zweig, 2015)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 517, |
| "end": 546, |
| "text": "(Toshniwal and Livescu, 2016)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 635, |
| "end": 657, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 724, |
| "end": 748, |
| "text": "Yolchuyeva et al., 2019)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work on G2P conversion", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Recent years have also seen works that capitalize on multilingual data to train a single model with grapheme-phoneme pairs from multiple languages. For example, various systems from last year's shared task submissions learned from a multilingual signal (e.g., ElSaadany and Suter, 2020; Peters and Martins, 2020; Vesik et al., 2020) .", |
| "cite_spans": [ |
| { |
| "start": 260, |
| "end": 286, |
| "text": "ElSaadany and Suter, 2020;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 287, |
| "end": 312, |
| "text": "Peters and Martins, 2020;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 313, |
| "end": 332, |
| "text": "Vesik et al., 2020)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work on G2P conversion", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This section provides relevant information concerning the low-resource subtask.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Low-resource Subtask", |
| "sec_num": "3" |
| }, |
| { |
"text": "The provided data in the low-resource subtask come from ten languages 1 : Adyghe (ady; in the Cyrillic script), Modern Greek (gre; in the Greek alphabet), Icelandic (ice), Italian (ita), Khmer (khm; in the Khmer script, which is an alphasyllabary system), Latvian (lat), Maltese transliterated into the Latin script (mlt_latn), Romanian (rum), Slovene (slv), and the South Wales dialect of Welsh (wel_sw). The data are extracted from Wiktionary 2 using WikiPron (Lee et al., 2020) , and filtered and downsampled with proprietary techniques, resulting in each language having 1,000 labeled grapheme-phoneme pairs, split into a training set of 800 pairs, a development set of 100 pairs, and a blind test set of 100 pairs.",
| "cite_spans": [ |
| { |
| "start": 463, |
| "end": 481, |
| "text": "(Lee et al., 2020)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Task Data", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "This year, the evaluation metric is the word error rate (WER), which is simply the percentage of words for which the predicted transcription differs from the ground-truth transcription. Different systems are ranked based on the macro-average over all languages, with lower scores indicating better systems. We also adopted this metric when evaluating our models on the development sets.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluation Metric", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The official baselines for individual languages are based on an ensembled neural transducer trained with the imitation learning (IL) paradigm (Makarov and Clematide, 2018a) . The baseline WERs are tabulated in Table 3 . In what follows, we overview this baseline neural-transducer system, as our models are built on top of it. A detailed formal description of the baseline system can be found in Makarov and Clematide (2018a,b,c, 2020) .",
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 172, |
| "text": "(Makarov and Clematide, 2018a)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 407, |
| "end": 434, |
| "text": "Clematide (2018a,b,c, 2020)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 210, |
| "end": 217, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The neural transducer in question defines a conditional distribution over edit actions, such as copy, deletion, insertion, and substitution:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "p_\u03b8(y, a | x) = \u220f_{j=1}^{|a|} p_\u03b8(a_j | a_{<j}, x),",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "where x denotes an input sequence of graphemes, and a = a_1 ... a_{|a|} stands for a sequence of edit actions. Note that the output sequence y is missing from the conditional probability on the right-hand side, as it can be deterministically computed from x and a. The model is implemented with an LSTM decoder, coupled with a bidirectional LSTM encoder.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "The model is trained with IL and therefore demands an expert policy, which contains demonstrations of how the task can be optimally solved from any configuration. Cast as IL, the mapping from graphemes to phonemes can be understood as following an optimal path dictated by the expert policy that gradually turns input orthographic symbols into output IPA characters. To acquire the expert policy, a Stochastic Edit Distance (Ristad and Yianilos, 1998 ) model trained with the EM algorithm is employed to find an edit sequence consisting of four types of edits: copy, deletion, insertion, and substitution. During training, the expert policy is queried to identify the next optimal edit that minimizes the following objective, expressed in terms of Levenshtein distance and edit sequence cost:",
| "cite_spans": [ |
| { |
| "start": 423, |
| "end": 449, |
| "text": "(Ristad and Yianilos, 1998", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "\u03b2 ED(\u0177, y) + ED(x, \u0177), \u03b2 \u2265 1,",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "where the first term is the Levenshtein distance between the target sequence y and the predicted sequence \u0177, and the second term measures the cost of editing x into \u0177.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "The baseline is run with default hyperparameter values, which include ten different initial seeds and a beam of size 4 during inference. The predictions of these individual models are ensembled using majority voting. Early efforts to modify the ensemble to incorporate system confidence showed that a majority ensemble was sufficient.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "This model has proved to be competitive, judging from its performance on the previous year's G2P shared task. We therefore decided to use it as the foundation to construct our systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "This section lays out our attempted approaches. We investigate two alternatives, both linguistic in nature. The first is inspired by a universal linguistic structure-the syllable-and the other by the error patterns discerned from the baseline predictions on the development data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Our Approaches", |
| "sec_num": "4" |
| }, |
| { |
"text": "Our first approach originates from the observation that, in natural languages, a sequence of sounds does not just assume a flat structure. Neighboring sounds group to form units, such as the onset, nucleus, and coda. In turn, these units can further project to a syllable (see Figure 1 for an example of such projection). Syllables are useful structural units in describing various linguistic phenomena and indeed in predicting the pronunciation of a word in some languages (e.g., Treiman, 1994) . For instance, in Dutch, the vowel quality of the nucleus can be reliably inferred from the spelling after proper syllabification (e.g., .dag.). van Esch et al. (2016) find that training RNNs to jointly predict phoneme sequences, syllabification, and stress leads to further performance gains in some languages, compared to models trained without syllabification and stress information. To identify syllable boundaries in the input sequence, we adopted a simple heuristic, the specific steps of which are listed below: 3 1. Find vowels in the output: We first identify the vowels in the phoneme sequence by comparing each segment with the vowel symbols from the IPA chart. 2. Find vowels in the input: Next we align the grapheme sequence with the phoneme sequence using an unsupervised many-to-many aligner (Jiampojamarn et al., 2007; Jiampojamarn and Kondrak, 2010) . By identifying graphemes that are aligned to phonemic vowels, we can identify vowels in the input. Using the Icelandic example again, the aligner produces a one-to-one mapping: t \u2192 t\u02b0, r \u2192 r, a \u2192 \u00f8, u \u2192 y, s \u2192 s, and t \u2192 t. We therefore assume that the input characters a and u represent two vowels. Note that this step is often redundant for input sequences based on the Latin script but is useful in identifying vowel symbols in other scripts.",
| "cite_spans": [ |
| { |
| "start": 481, |
| "end": 495, |
| "text": "Treiman, 1994)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1277, |
| "end": 1304, |
| "text": "(Jiampojamarn et al., 2007;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1305, |
| "end": 1336, |
| "text": "Jiampojamarn and Kondrak, 2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 277, |
| "end": 285, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System 1: Augmenting Data with Unsupervised Syllable Boundaries", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "3. Find valid onsets and codas: A key step in syllabification is to identify which sequences of consonants can form an onset or a coda. Without resorting to linguistic knowledge, one way to identify valid onsets and codas is to look at the two ends of a word: consonant sequences appearing word-initially before the first vowel are valid onsets, and consonant sequences after the final vowel are valid codas. Looping through each input sequence in the training data gives us a list of valid onsets and codas. In the Icelandic example traust, the initial tr sequence must be a valid onset, and the final st sequence a valid coda.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System 1: Augmenting Data with Unsupervised Syllable Boundaries", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "4. Break word-medial consonant sequences into an onset and a coda: Unfortunately, identifying onsets and codas among word-medial consonant sequences is not as straightforward. For example, how do we know the sequence in the input VngstrV (V for a vowel character) should be parsed as Vng.strV, as Vn.gstrV, or even as V.ngstrV? To tackle this problem, we use the valid onset and coda lists gathered from the previous step: we split the consonant sequence into two parts, and we choose the split where the first part is a valid coda and the second part a valid onset. For instance, suppose we have an onset list {str, tr} and a coda list {ng, st}. This implies that we only have a single valid split, Vng.strV, so ng is treated as the coda for the previous syllable and str as the onset for the following syllable. In the case where more than one split is acceptable, we favor the split that produces a more complex onset, based on the linguistic heuristic that natural languages tend to tolerate more complex onsets than codas. For example, Vng.strV > Vngs.trV. In the situation where none of the splits produces a concatenation of a valid coda and onset, we adopt the following heuristic:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System 1: Augmenting Data with Unsupervised Syllable Boundaries", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "\u2022 If there is only one medial consonant (such as in the case where the consonant can only occur word-internally but not in the onset or coda position), this consonant is classified as the onset of the following syllable. \u2022 If there is more than one consonant, the first consonant is classified as the coda and attached to the previous syllable, while the rest are classified as the onset of the following syllable.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System 1: Augmenting Data with Unsupervised Syllable Boundaries", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Of course, this procedure is not free of errors (e.g., some languages have onsets that are only allowed word-medially, so word-initial onsets will naturally not include them), but overall it gives reasonable results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System 1: Augmenting Data with Unsupervised Syllable Boundaries", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The last step is to put together consonant and vowel characters to form syllables. The simplest approach is to allow each vowel character to be projected as a nucleus and distribute onsets and codas around these nuclei to build syllables. If there are four vowels in the input, there are likewise four syllables. There is one important caveat, however. When there are two or more consecutive vowel characters, some languages prefer to merge them into a single vowel/nucleus in their pronunciation (e.g., Greek \u03ba\u03b1\u03b9 \u2192 [ce])", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form syllables:", |
| "sec_num": "5." |
| }, |
| { |
| "text": "while other languages simply default to vowel hiatuses/two side-by-side nuclei (e.g., Italian badia \u2192 [badia])-indeed, both are common cross-linguistically. We again rely on the alignment results in the second step to select the vowel segmentation strategy for individual languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form syllables:", |
| "sec_num": "5." |
| }, |
| { |
"text": "After we have identified the syllables that compose each word, we augment the input sequences with syllable boundaries. We use four labels to distinguish different types of syllable boundaries: <cc>, <cv>, <vc>, and <vv>, depending on the classes of sound of the segments straddling the syllable boundary. For instance, the input sequence b \u00ed l a v e r k s t \u00e6 \u00f0 i in Icelandic will be augmented to b \u00ed <vc> l a <vc> v e r k <cc> s t \u00e6 <vc> \u00f0 i. We applied the same syllabification algorithm to all languages to generate new input sequences, with the exception of Khmer, as the Khmer script does not permit a straightforward linear mapping between input and output sequences, which is crucial for the vowel identification step. We then used these syllabified input sequences, along with their target transcriptions, as the training data for the baseline model. 4",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form syllables:", |
| "sec_num": "5." |
| }, |
| { |
"text": "Our second approach focuses on the training objective of the baseline model, and is driven by the errors we observed in the baseline predictions. Specifically, we noticed that the majority of errors for the languages with a high WER (Khmer, Latvian, and Slovene) concerned vowels, some examples of which are given in Table 1 . Note the nature of these mistakes: the mismatch can be in the vowel quality (e.g., [O] ) or in properties marked by diacritics, such as vowel length. To address this, we modified the training objective so that errors on vowels and diacritics incur additional penalties. Each incorrectly predicted vowel incurs this penalty. The penalty acts as a regularizer that forces the model to expend more effort on learning vowels. This modification is in the same spirit as the softmax-margin objective of Gimpel and Smith (2010) , which penalizes high-cost outputs more heavily, but our approach is even simpler: we merely supplement the loss with additional penalties for vowels and diacritics. We fine-tuned the vowel and diacritic penalties using a grid search on the development data, incrementing each by 0.1, from 0 to 0.5. In the case of ties, we skewed higher, as the penalties generally worked better at higher values. The final values used to generate predictions for the test data are listed in Table 2 . We also note that the vowel penalty had significantly more impact than the diacritic penalty.",
| "cite_spans": [ |
| { |
| "start": 408, |
| "end": 411, |
| "text": "[O]", |
| "ref_id": null |
| }, |
| { |
| "start": 661, |
| "end": 684, |
| "text": "Gimpel and Smith (2010)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 315, |
| "end": 322, |
| "text": "Table 1", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1160, |
| "end": 1167, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System 2: Penalizing Vowel and Diacritic Errors", |
| "sec_num": "4.2" |
| }, |
| { |
"text": "The performance of our systems, measured in WER, is juxtaposed with the official baseline results in Table 3 . We first note that the baseline was particularly strong: gains were difficult to achieve for most languages. Our first system (Syl), which is based on syllabic information, unfortunately does not outperform the baseline. It seems that extra syllable information does not help with predictions in this particular setting. It might be the case that additional syllable boundaries increase input variability without providing much useful information to the current neural-transducer architecture. Alternatively, information about syllable boundary locations might be redundant for this set of languages. Finally, it is possible that the unsupervised nature of our syllable annotation was too noisy to aid the model. We leave these speculations as research questions for future endeavors and restrict the subsequent error analyses and discussion to the results from our vowel-penalty system. Figure 2 : Distributions of error types in test-set predictions across languages. Error types are distinguished based on whether an error involves only consonants, only vowels, or both. For example, C-V means that the error is caused by a ground-truth consonant being replaced by a vowel in the prediction. C-\u01eb means that it is a deletion error where the ground-truth consonant is missing from the prediction, while \u01eb-C represents an insertion error where a consonant is wrongly added.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 103, |
| "end": 110, |
| "text": "Table 3", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 1000, |
| "end": 1008, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
| "text": "C-V, V-C C-C, C-\u03f5, \u03f5-C V-V, V-\u03f5, \u03f5-V", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5" |
| }, |
| { |
"text": "In this section, we provide detailed error analyses of the test-set predictions from our best system. The goals of these analyses are twofold: (i) to examine the aspects in which this model outperforms the baseline, and to what extent, and (ii) to get a better understanding of the nature of the errors made by the system; we believe that insights and improvements can be derived from a good grasp of error patterns. We analyzed the mismatches between predicted sequences and ground-truth sequences at the segmental level. For this purpose, we again utilized many-to-many alignment (Jiampojamarn et al., 2007; Jiampojamarn and Kondrak, 2010) , but this time between a predicted sequence and the corresponding ground-truth sequence. 6 For each error along the aligned sequences, we classified it into one of three kinds:",
| "cite_spans": [ |
| { |
| "start": 576, |
| "end": 603, |
| "text": "(Jiampojamarn et al., 2007;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 604, |
| "end": 635, |
| "text": "Jiampojamarn and Kondrak, 2010)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analyses", |
| "sec_num": "6" |
| }, |
| { |
| "text": "\u2022 Those involving erroneous vowel insertions (e.g., \u01eb \u2192 [@]), deletions (e.g., [@] \u2192 \u01eb), or substitutions (e.g., [@] \u2192 [a]).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Error Analyses", |
| "sec_num": "6" |
| }, |
| { |
"text": "\u2022 In the same vein, those involving erroneous consonant insertions (e.g., \u01eb \u2192 [P]), deletions (e.g., [P] \u2192 \u01eb), and substitutions. \u2022 Those where a ground-truth consonant is replaced by a vowel in the prediction, or vice versa. (Since adding syllable boundaries does not improve the results, it is unlikely that marking constituent boundaries, which adds more variability to the input, will result in better performance, though we did not test this hypothesis. 6 The aligner parameters used are: allowing deletion of input grapheme strings, a maximum aligned grapheme and phoneme substring length of one, and a training threshold of 0.001.) The frequency of each error type made by the baseline model and our systems for each individual language is plotted in Figure 2 . Some patterns are immediately clear. First, both systems have a similar pattern in terms of the distribution of error types across languages, albeit that ours makes fewer errors on average. Second, both systems err on different elements, depending on the language. For instance, while Adyghe (ady) and Khmer (khm) have a more balanced distribution between consonant and vowel errors, Slovene (slv) and Welsh (wel_sw) are dominated by vowel errors. Third, the improvements gained in our system seem to come mostly from a reduction in vowel errors, as is evident in the case of Khmer, Latvian (lav), and, to a lesser extent, Slovene.",
| "cite_spans": [ |
| { |
| "start": 304, |
| "end": 305, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 640, |
| "end": 648, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analyses", |
| "sec_num": "6" |
| }, |
| { |
"text": "The final observation is backed up if we zoom in on the errors in these three languages, which we visualize in Figure 3 . Many incorrect vowels generated by the baseline model are now correctly predicted. We note that there are also cases, though less common, where the baseline model gives the right prediction, but ours does not. It should be pointed out that, although our system shows improvement over the baseline, there is still plenty of room for improvement in many languages, and our system still produces incorrect vowels in many instances. Figure 3 : Here we only visualize the cases where either the baseline model gives the right vowel but our system does not, or vice versa. We do not include cases where both the baseline model and our system predict the correct vowel, or both predict an incorrect vowel, to avoid cluttering the view. Each baseline-ground-truth-ours line represents a set of aligned vowels in the same word; the horizontal line segment between a system and the ground-truth means that the prediction from the system agrees with the ground-truth. Color hues are used to distinguish cases where the prediction from the baseline is correct versus those where the prediction from our second system is correct. Shaded areas on the plots enclose vowels of similar vowel quality.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 111, |
| "end": 119, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analyses", |
| "sec_num": "6" |
| }, |
| { |
"text": "Finally, we look at several languages which still resulted in a high WER on the test set: ady, gre, ita, khm, lav, and slv. We conduct a confusion-matrix analysis to identify clusters of commonly-confused phonemes. This analysis again relies on the alignment between the ground-truth sequence and the corresponding predicted sequence to characterize error distributions. The results from this analysis are shown in Figure 4 , and some interesting patterns are discussed below. Figure 2 suggests that Khmer has an equal share of consonant and vowel errors, and the heat maps in Figure 4 reveal that these errors do not seem to follow a certain pattern. However, a different picture emerges with Latvian and Slovene. For both languages, Figure 2 indicates the dominance of errors tied to vowels; consonant errors account for a relatively small proportion of errors. This observation is borne out in Figure 4 , with the consonant heat maps for the two languages displaying a clear diagonal stripe, and the vowel heat maps showing much more off-diagonal signal. What is more interesting is that the vowel errors in fact form clusters, as highlighted by white squares on the heat maps. The general pattern is that confusion only arises within a cluster where vowels are of similar quality but differ in terms of length or pitch accent. For example, while [i:] might be incorrectly predicted as [i], our model does not confuse it with, say, [u] . The challenges these languages present to the models are therefore largely suprasegmental: vowel length and pitch accent, both of which are lexicalized and not explicitly marked in the orthography. For the other three languages, the errors also show distinct patterns: for Adyghe, consonants differing only in secondary features can get confused; in Greek, many errors can be attributed to the mixing of [r] and [R]; in Italian, front and back mid vowels can trick our model. We hope that our detailed error analyses not only show that these errors \"make linguistic sense\", and therefore attest to the power of the model, but also point out a pathway along which future modeling can be improved.",
| "cite_spans": [ |
| { |
| "start": 1445, |
| "end": 1448, |
| "text": "[u]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 424, |
| "end": 432, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 486, |
| "end": 494, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 586, |
| "end": 594, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 744, |
| "end": 752, |
| "text": "Figure 2", |
| "ref_id": null |
| }, |
| { |
| "start": 906, |
| "end": 914, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Error Analyses", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This paper presented the approaches adopted by the UBC Linguistics team to tackle the SIGMORPHON 2021 Grapheme-to-Phoneme Conversion challenge in the low-resource setting. Our submissions build upon the baseline model with modifications inspired by syllable structure and vowel error patterns. While the first modification does not result in more accurate predictions, the second modification does lead to sizable improvements over the baseline results. Subsequent error analyses reveal that the modified model indeed reduces erroneous vowel predictions for languages whose errors are dominated by vowel mismatches. Our approaches also demonstrate that patterns uncovered from careful error analyses can inform the directions for potential improvements. Figure 4: Confusion matrices of vowel and consonant predictions by our second system (VP) for languages with test WER > 20%. Each row represents a predicted segment, with colors across columns indicating the proportion of times the predicted segment matches individual ground-truth segments. A gray row means the segment in question is absent from all predicted phoneme sequences but present in at least one ground-truth sequence. Diagonal cells count the cases in which the predicted segment matches the target segment; off-diagonal cells are mispredictions. White squares highlight segment groups where mismatches are common.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 685, |
| "end": 693, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "All output is represented in IPA; unless specified otherwise, the input is written in the Latin alphabet. 2 https://www.wiktionary.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We are aware that different languages permit distinct syllable constituents (e.g., some languages allow syllabic consonants while others do not), but given the restriction that we are not allowed to use external resources in the low-resource subtask, we simply assume that all syllables must contain a vowel.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The hyperparameters used are the default values provided in the baseline model code: character and action embedding = 100, encoder LSTM state dimension = decoder LSTM state dimension = 200, encoder layer = decoder layer = 1, beam width = 4, roll-in hyperparameter = 1, epochs = 60, patience = 12, batch size = 5, EM iterations = 10, ensemble size = 10.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "One reviewer asked why only syllable boundaries, as opposed to smaller constituents such as onsets or codas, are marked. Our hunch is that many phonological alternations happen at syllable boundaries, and that vowel length in some languages depends on whether the nucleus vowel is in a closed or open syllable. Also, given that adding syllable", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Joint-sequence models for grapheme-to-phoneme conversion", |
| "authors": [ |
| { |
| "first": "Maximilian", |
| "middle": [], |
| "last": "Bisani", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Speech Communication", |
| "volume": "50", |
| "issue": "", |
| "pages": "434--451", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.specom.2008.01.002" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maximilian Bisani and Hermann Ney. 2008. Joint- sequence models for grapheme-to-phoneme conver- sion. Speech Communication, 50:434-451.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Maximum likelihood from incomplete data via the EM algorithm", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "P" |
| ], |
| "last": "Dempster", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "M" |
| ], |
| "last": "Laird", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "B" |
| ], |
| "last": "Rubin", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Journal of the Royal Statistical Society. Series B (Methodological)", |
| "volume": "39", |
| "issue": "1", |
| "pages": "1--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Soci- ety. Series B (Methodological), 39(1):1-38.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Grapheme-to-phoneme conversion with a multilingual transformer model", |
| "authors": [ |
| { |
| "first": "Omnia", |
| "middle": [], |
| "last": "Elsaadany", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Suter", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Seventeenth SIGMORPHON Workshop on Computational Research in Phonetics", |
| "volume": "", |
| "issue": "", |
| "pages": "85--89", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.sigmorphon-1.7" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omnia ElSaadany and Benjamin Suter. 2020. Grapheme-to-phoneme conversion with a mul- tilingual transformer model. In Proceedings of the Seventeenth SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 85-89.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Predicting pronunciations with syllabification and stress with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Daan", |
| "middle": [], |
| "last": "van Esch", |
| "suffix": "" |
| }, |
| { |
| "first": "Mason", |
| "middle": [], |
| "last": "Chua", |
| "suffix": "" |
| }, |
| { |
| "first": "Kanishka", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of Interspeech", |
| "volume": "", |
| "issue": "", |
| "pages": "2841--2845", |
| "other_ids": { |
| "DOI": [ |
| "10.21437/Interspeech.2016-1419" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daan van Esch, Mason Chua, and Kanishka Rao. 2016. Predicting pronunciations with syllabification and stress with recurrent neural networks. In Proceed- ings of Interspeech 2016, pages 2841-2845.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Softmaxmargin CRFs: Training log-linear models with cost functions", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "733--736", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Gimpel and Noah A. Smith. 2010. Softmax- margin CRFs: Training log-linear models with cost functions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, pages 733-736.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Letter-phoneme alignment: An exploration", |
| "authors": [ |
| { |
| "first": "Sittichai", |
| "middle": [], |
| "last": "Jiampojamarn", |
| "suffix": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Kondrak", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "780--788", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sittichai Jiampojamarn and Grzegorz Kondrak. 2010. Letter-phoneme alignment: An exploration. In Pro- ceedings of the 48th Annual Meeting of the Associa- tion for Computational Linguistics, pages 780-788.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Applying many-to-many alignments and Hidden Markov Models to letter-to-phoneme conversion", |
| "authors": [ |
| { |
| "first": "Sittichai", |
| "middle": [], |
| "last": "Jiampojamarn", |
| "suffix": "" |
| }, |
| { |
| "first": "Grzegorz", |
| "middle": [], |
| "last": "Kondrak", |
| "suffix": "" |
| }, |
| { |
| "first": "Tarek", |
| "middle": [], |
| "last": "Sherif", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of NAACL HLT 2007", |
| "volume": "", |
| "issue": "", |
| "pages": "372--379", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and Hidden Markov Models to letter-to-phoneme conversion. In Proceedings of NAACL HLT 2007, pages 372-379.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Massively multilingual pronunciation mining with WikiPron", |
| "authors": [ |
| { |
| "first": "Jackson", |
| "middle": [ |
| "L" |
| ], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucas", |
| "middle": [ |
| "F", |
| "E" |
| ], |
| "last": "Ashby", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "Elizabeth" |
| ], |
| "last": "Garza", |
| "suffix": "" |
| }, |
| { |
| "first": "Yeonju", |
| "middle": [], |
| "last": "Lee-Sikka", |
| "suffix": "" |
| }, |
| { |
| "first": "Sean", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "Arya", |
| "middle": [ |
| "D" |
| ], |
| "last": "McCarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Gorman", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)", |
| "volume": "", |
| "issue": "", |
| "pages": "4223--4228", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jackson L. Lee, Lucas F. E. Ashby, M. Elizabeth Garza, Yeonju Lee-Sikka, Sean Miller, Alan Wong, Arya D. McCarthy, and Kyle Gorman. 2020. Massively mul- tilingual pronunciation mining with WikiPron. In Proceedings of the 12th Conference on Language Re- sources and Evaluation (LREC 2020), pages 4223- 4228.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Imitation learning for neural morphological string transduction", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Makarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Clematide", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2877--2882", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D18-1314" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Makarov and Simon Clematide. 2018a. Imita- tion learning for neural morphological string trans- duction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2877-2882.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Neural transition-based string transduction for limitedresource setting in morphology", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Makarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Clematide", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "83--93", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Makarov and Simon Clematide. 2018b. Neu- ral transition-based string transduction for limited- resource setting in morphology. In Proceedings of the 27th International Conference on Computational Linguistics, pages 83-93.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "UZH at CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Makarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Clematide", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection", |
| "volume": "", |
| "issue": "", |
| "pages": "69--75", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K18-3008" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Makarov and Simon Clematide. 2018c. UZH at CoNLL-SIGMORPHON 2018 shared task on uni- versal morphological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Uni- versal Morphological Reinflection, pages 69-75.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "CLUZH at SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Makarov", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Clematide", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Seventeenth SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "171--176", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.sigmorphon-1.19" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Makarov and Simon Clematide. 2020. CLUZH at SIGMORPHON 2020 shared task on multilin- gual grapheme-to-phoneme conversion. In Proceed- ings of the Seventeenth SIGMORPHON Workshop on Computational Research in Phonetics, Phonol- ogy, and Morphology, pages 171-176.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "WFST-based grapheme-to-phoneme conversion: Open source tools for alignment, model-building and decoding", |
| "authors": [ |
| { |
| "first": "Josef", |
| "middle": [ |
| "R" |
| ], |
| "last": "Novak", |
| "suffix": "" |
| }, |
| { |
| "first": "Nobuaki", |
| "middle": [], |
| "last": "Minematsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Keikichi", |
| "middle": [], |
| "last": "Hirose", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "45--49", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Josef R. Novak, Nobuaki Minematsu, and Keikichi Hirose. 2012. WFST-based grapheme-to-phoneme conversion: Open source tools for alignment, model- building and decoding. In Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing, pages 45-49.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework", |
| "authors": [ |
| { |
| "first": "Josef", |
| "middle": [ |
| "Robert" |
| ], |
| "last": "Novak", |
| "suffix": "" |
| }, |
| { |
| "first": "Nobuaki", |
| "middle": [], |
| "last": "Minematsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Keikichi", |
| "middle": [], |
| "last": "Hirose", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Natural Language Engineering", |
| "volume": "22", |
| "issue": "6", |
| "pages": "907--938", |
| "other_ids": { |
| "DOI": [ |
| "10.1017/S1351324915000315" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Josef Robert Novak, Nobuaki Minematsu, and Keikichi Hirose. 2015. Phonetisaurus: Exploring grapheme- to-phoneme conversion with joint n-gram models in the WFST framework. Natural Language Engineer- ing, 22(6):907-938.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "DeepSPIN at SIGMORPHON 2020: One-size-fits-all multilingual models", |
| "authors": [ |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Peters", |
| "suffix": "" |
| }, |
| { |
| "first": "Andr\u00e9", |
| "middle": [ |
| "F", |
| "T" |
| ], |
| "last": "Martins", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Seventeenth SIG-MORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "63--69", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.sigmorphon-1.4" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ben Peters and Andr\u00e9 F. T. Martins. 2020. DeepSPIN at SIGMORPHON 2020: One-size-fits-all multilin- gual models. In Proceedings of the Seventeenth SIG- MORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 63-69.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Kanishka", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "" |
| }, |
| { |
| "first": "Fuchun", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Ha\u015fim", |
| "middle": [], |
| "last": "Sak", |
| "suffix": "" |
| }, |
| { |
| "first": "Fran\u00e7oise", |
| "middle": [], |
| "last": "Beaufays", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", |
| "volume": "", |
| "issue": "", |
| "pages": "4225--4229", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/ICASSP.2015.7178767" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kanishka Rao, Fuchun Peng, Ha\u015fim Sak, and Fran\u00e7oise Beaufays. 2015. Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks. In IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 4225-4229.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning string-edit distance", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [ |
| "Sven" |
| ], |
| "last": "Ristad", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "N" |
| ], |
| "last": "Yianilos", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", |
| "volume": "20", |
| "issue": "5", |
| "pages": "522--532", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/34.682181" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Sven Ristad and Peter N. Yianilos. 1998. Learning string-edit distance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(5):522-532.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Jointly learning to align and convert graphemes to phonemes with neural attention models", |
| "authors": [ |
| { |
| "first": "Shubham", |
| "middle": [], |
| "last": "Toshniwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Livescu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "2016 IEEE Spoken Language Technology Workshop (SLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "76--82", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/SLT.2016.7846248" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shubham Toshniwal and Karen Livescu. 2016. Jointly learning to align and convert graphemes to phonemes with neural attention models. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 76-82.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "To what extent do orthographic units in print mirror phonological units in speech?", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Treiman", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Journal of Psycholinguistic Research", |
| "volume": "23", |
| "issue": "1", |
| "pages": "91--110", |
| "other_ids": { |
| "DOI": [ |
| "10.1007/BF02143178" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Treiman. 1994. To what extent do ortho- graphic units in print mirror phonological units in speech? Journal of Psycholinguistic Research, 23(1):91-110.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 31st Conference on Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1--11", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), pages 1-11.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "One model to pronounce them all: Multilingual grapheme-to-phoneme conversion with a Transformer ensemble", |
| "authors": [ |
| { |
| "first": "Kaili", |
| "middle": [], |
| "last": "Vesik", |
| "suffix": "" |
| }, |
| { |
| "first": "Muhammad", |
| "middle": [], |
| "last": "Abdul-Mageed", |
| "suffix": "" |
| }, |
| { |
| "first": "Miikka", |
| "middle": [], |
| "last": "Silfverberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Seventeenth SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology", |
| "volume": "", |
| "issue": "", |
| "pages": "146--152", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.sigmorphon-1.16" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaili Vesik, Muhammad Abdul-Mageed, and Miikka Silfverberg. 2020. One model to pronounce them all: Multilingual grapheme-to-phoneme conversion with a Transformer ensemble. In Proceedings of the Seventeenth SIGMORPHON Workshop on Computa- tional Research in Phonetics, Phonology, and Mor- phology, pages 146-152.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Sequence-to-sequence neural net models for grapheme-to-phoneme conversion", |
| "authors": [ |
| { |
| "first": "Kaisheng", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Zweig", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of Interspeech 2015", |
| "volume": "", |
| "issue": "", |
| "pages": "3330--3334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaisheng Yao and Geoffrey Zweig. 2015. Sequence- to-sequence neural net models for grapheme-to- phoneme conversion. In Proceedings of Interspeech 2015, pages 3330-3334.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Transformer based grapheme-to-phoneme conversion", |
| "authors": [ |
| { |
| "first": "Sevinj", |
| "middle": [], |
| "last": "Yolchuyeva", |
| "suffix": "" |
| }, |
| { |
| "first": "G\u00e9za", |
| "middle": [], |
| "last": "N\u00e9meth", |
| "suffix": "" |
| }, |
| { |
| "first": "B\u00e1lint", |
| "middle": [], |
| "last": "Gyires-T\u00f3th", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of Interspeech 2019", |
| "volume": "", |
| "issue": "", |
| "pages": "2095--2099", |
| "other_ids": { |
| "DOI": [ |
| "10.21437/Interspeech.2019-1954" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sevinj Yolchuyeva, G\u00e9za N\u00e9meth, and B\u00e1lint Gyires- T\u00f3th. 2019. Transformer based grapheme-to- phoneme conversion. In Proceedings of Interspeech 2019, pages 2095-2099.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "The syllable structure of twelfth [tw\u025b\u026bf\u03b8]", |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "[d] \u2192 [t]).\u2022 Those involving exchanges of a vowel and a consonant (e.g., [w] \u2192 [u]) or vice versa.", |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Comparison of vowels predicted by the baseline model and our best system (VP) with the ground-truth vowels.", |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "IPA segment labels extracted from the confusion-matrix axes (figure caption not recovered)", |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "For instance, the symbols [\u00f8] and [y] in [t\u02b0r\u00f8yst] for Icelandic traust are vowels because they match the vowel symbols [\u00f8] and [y] on the IPA chart.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF3": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>: Typical errors in the development set that in-</td></tr><tr><td>volve vowels from Khmer (khm), Latvian (lat), and</td></tr><tr><td>Slovene (slv)</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF7": { |
| "text": "Comparison of test-set results based on the word error rates (WERs)", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF8": { |
| "text": "Grouped bar chart of error counts by error type for each system (base, Syl, VP) across ady, gre, ice, ita, khm, lav, mlt_latn, rum, slv, and wel_sw", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| } |
| } |
| } |
| } |