| { |
| "paper_id": "P07-1013", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:49:59.357665Z" |
| }, |
| "title": "Phonological Constraints and Morphological Preprocessing for Grapheme-to-Phoneme Conversion", |
| "authors": [ |
| { |
| "first": "Vera", |
| "middle": [], |
| "last": "Demberg", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Edinburgh Edinburgh", |
| "location": { |
| "postCode": "EH8 9LW", |
| "region": "GB" |
| } |
| }, |
| "email": "v.demberg@sms.ed.ac.uk" |
| }, |
| { |
| "first": "Helmut", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IMS University of Stuttgart", |
| "location": { |
| "postCode": "D-70174", |
| "settlement": "Stuttgart" |
| } |
| }, |
| "email": "schmid@ims.uni-stuttgart.de" |
| }, |
| { |
| "first": "Gregor", |
| "middle": [], |
| "last": "M\u00f6hler", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Speech Technologies", |
| "location": { |
| "addrLine": "IBM Deutschland Entwicklung", |
| "postCode": "D-71072", |
| "settlement": "B\u00f6blingen" |
| } |
| }, |
| "email": "moehler@de.ibm.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Grapheme-to-phoneme conversion (g2p) is a core component of any text-to-speech system. We show that adding simple syllabification and stress assignment constraints, namely 'one nucleus per syllable' and 'one main stress per word', to a joint n-gram model for g2p conversion leads to a dramatic improvement in conversion accuracy. Secondly, we assessed morphological preprocessing for g2p conversion. While morphological information has been incorporated in some past systems, its contribution has never been quantitatively assessed for German. We compare the relevance of morphological preprocessing with respect to the morphological segmentation method, training set size, the g2p conversion algorithm, and two languages, English and German.", |
| "pdf_parse": { |
| "paper_id": "P07-1013", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Grapheme-to-phoneme conversion (g2p) is a core component of any text-to-speech system. We show that adding simple syllabification and stress assignment constraints, namely 'one nucleus per syllable' and 'one main stress per word', to a joint n-gram model for g2p conversion leads to a dramatic improvement in conversion accuracy. Secondly, we assessed morphological preprocessing for g2p conversion. While morphological information has been incorporated in some past systems, its contribution has never been quantitatively assessed for German. We compare the relevance of morphological preprocessing with respect to the morphological segmentation method, training set size, the g2p conversion algorithm, and two languages, English and German.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Grapheme-to-Phoneme conversion (g2p) is the task of converting a word from its spelling (e.g. \"Sternanis\u00f6l\", Engl: star-anise oil) to its pronunciation (/\"StERnPani:sP\u00f8:l/). Speech synthesis modules with a g2p component are used in text-to-speech (TTS) systems and can be be applied in spoken dialogue systems or speech-to-speech translation systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In order to correctly synthesize a word, it is not only necessary to convert the letters into phonemes, but also to syllabify the word and to assign word stress.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syllabification and Stress in g2p conversion", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "The problems of word phonemization, syllabification and word stress assignment are inter-dependent. Information about the position of a syllable boundary helps grapheme-to-phoneme conversion. (Marchand and Damper, 2005 ) report a word error rate (WER) reduction of approx. 5 percentage points for English when the letter string is augmented with syllabification information. The same holds vice-versa: we found that WER was reduced by 50% when running our syllabifier on phonemes instead of letters (see Table 4 ). Finally, word stress is usually defined on syllables; in languages where word stress is assumed 1 to partly depend on syllable weight (such as German or Dutch), it is important to know where exactly the syllable boundaries are in order to correctly calculate syllable weight. For German, (M\u00fcller, 2001) show that information about stress assignment and the position of a syllable within a word improve g2p conversion.", |
| "cite_spans": [ |
| { |
| "start": 192, |
| "end": 218, |
| "text": "(Marchand and Damper, 2005", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 803, |
| "end": 817, |
| "text": "(M\u00fcller, 2001)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 504, |
| "end": 511, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Syllabification and Stress in g2p conversion", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "It has been argued that using morphological information is important for languages where morphology has an important influence on pronunciation, syllabification and word stress such as German, Dutch, Swedish or, to a smaller extent, also English (Sproat, 1996; M\u00f6bius, 2001; Pounder and Kommenda, 1986; Black et al., 1998; Taylor, 2005) . Unfortunately, these papers do not quantify the contribution of morphological preprocessing in the task.", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 260, |
| "text": "(Sproat, 1996;", |
| "ref_id": null |
| }, |
| { |
| "start": 261, |
| "end": 274, |
| "text": "M\u00f6bius, 2001;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 275, |
| "end": 302, |
| "text": "Pounder and Kommenda, 1986;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 303, |
| "end": 322, |
| "text": "Black et al., 1998;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 323, |
| "end": 336, |
| "text": "Taylor, 2005)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Preprocessing", |
| "sec_num": "1.2" |
| }, |
| { |
| "text": "Important questions when considering the integration of a morphological component into a speech synthesis system are 1) How large are the improvements to be gained from morphological preprocessing? 2) Must the morphological system be perfect or can performance improvements also be reached with relatively simple morphological components? and 3) How much does the benefit to be expected from explicit morphological information depend on the g2p algorithm? To determine these factors, we compared morphological segmentations based on manual morphological annotation from CELEX to two rule-based systems and several unsupervised data-based approaches. We also analysed the role of explicit morphological preprocessing on data sets of different sizes and compared its relevance with respect to a decision tree and a joint n-gram model for g2p conversion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Preprocessing", |
| "sec_num": "1.2" |
| }, |
| { |
| "text": "The paper is structured as follows: We introduce the g2p conversion model we used in section 2 and explain how we implemented the phonological constraints in section 3. Section 4 is concerned with the relation between morphology, word pronunciation, syllabification and word stress in German, and presents different sources for morphological segmentation. In section 5, we evaluate the contribution of each of the components and compare our methods to state-of-the-art systems. Section 6 summarizes our results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Preprocessing", |
| "sec_num": "1.2" |
| }, |
| { |
| "text": "We used a joint n-gram model for the graphemeto-phoneme conversion task. Models of this type have previously been shown to yield very good g2p conversion results (Bisani and Ney, 2002; Galescu and Allen, 2001; Chen, 2003) . Models that do not use joint letter-phoneme states, and therefore are not conditional on the preceding letters, but only on the actual letter and the preceding phonemes, achieved inferior results. Examples of such approaches using Hidden Markov Models are (Rentzepopoulos and Kokkinakis, 1991 ) (who applied the HMM to the related task of phoneme-to-grapheme conversion), (Taylor, 2005) and (Minker, 1996) .", |
| "cite_spans": [ |
| { |
| "start": 162, |
| "end": 184, |
| "text": "(Bisani and Ney, 2002;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 185, |
| "end": 209, |
| "text": "Galescu and Allen, 2001;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 210, |
| "end": 221, |
| "text": "Chen, 2003)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 480, |
| "end": 516, |
| "text": "(Rentzepopoulos and Kokkinakis, 1991", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 596, |
| "end": 610, |
| "text": "(Taylor, 2005)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 615, |
| "end": 629, |
| "text": "(Minker, 1996)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The g2p task is formulated as searching for the most probable sequence of phonemes given the orthographic form of a word. One can think of it as a tagging problem where each letter is tagged with a (possibly empty) phoneme-sequence p. In our par-ticular implementation, the model is defined as a higher-order Hidden Markov Model, where the hidden states are a letter-phoneme-sequence pair l; p , and the observed symbols are the letters l. The output probability of a hidden state is then equal to one, since all hidden states that do not contain the observed letter are pruned directly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The model for grapheme-to-phoneme conversion uses the Viterbi algorithm to efficiently compute the most probable sequencep n 1 of phoneme\u015d p 1 ,p 2 , ...,p n for a given letter sequence l n 1 . The probability of a letter-phon-seq pair depends on the k preceding letter-phon-seq pairs. Dummy states '#' are appended at both ends of each word to indicate the word boundary and to ensure that all conditional probabilities are well-defined.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "p n 1 = arg max p n 1 n+1 i=1 P ( l; p i | l; p i\u22121 i\u2212k )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In an integrated model where g2p conversion, syllabification and word stress assignment are all performed at the same time, a state additionally contains a syllable boundary flag b and a stress flag a, yielding l; p; b; a i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As an alternative architecture, we also designed a modular system that comprises one component for syllabification and one for word stress assignment. The model for syllabification computes the most probable sequenceb n 1 of syllable boundary-tagsb 1 , b 2 , ...,b n for a given letter sequence l n 1 . b n 1 = arg max", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "b n 1 n+1 i=1 P ( l; b i | l; b i\u22121 i\u2212k )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The stress assignment model works on syllables. It computes the most probable sequence\u00e2 n 1 of word accent-tags\u00e2 1 ,\u00e2 2 , ...,\u00e2 n for a given syllable sequence syl n 1 . a n 1 = arg max", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "a n 1 n+1 i=1 P ( syl; a i | syl; a i\u22121 i\u2212k ) 2.1 Smoothing", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Because of major data sparseness problems, smoothing is an important issue, in particular for the stress model which is based on syllable-stress-tag pairs. Performance varied by up to 20% in function of the smoothing algorithm chosen. Best results were obtained when using a variant of Modified Kneser-Ney Smoothing 2 (Chen and Goodman, 1996) .", |
| "cite_spans": [ |
| { |
| "start": 318, |
| "end": 342, |
| "text": "(Chen and Goodman, 1996)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the g2p-model, each letter can on average map onto one of 12 alternative phoneme-sequences. When working with 5-grams 3 , there are about 12 5 = 250,000 state sequences. To improve time and space efficiency, we implemented a simple pruning strategy that only considers the t best states at any moment in time. With a threshold of t = 15, about 120 words are processed per minute on a 1.5GHz machine. Conversion quality is only marginally worse than when the whole search space is calculated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pruning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Running time for English is faster, because the average number of candidate phonemes for each letter is lower. We measured running time (including training and the actual g2p conversion in 10-fold cross validation) for a Perl implementation of our algorithm on the English NetTalk corpus (20,008 words) on an Intel Pentium 4, 3.0 GHz machine. Running time was less than 1h for each of the following three test conditions: c1) g2p conversion only, c2) syllabification first, then g2p conversion, c3) simultaneous g2p conversion and syllabification, given perfect syllable boundary input, c4) simultaneous g2p conversion and syllabification when correct syllabification is not available beforehand. This is much faster than the times for Pronunciation by Analogy (PbA) (Marchand and Damper, 2005) on the same corpus. Marchand and Damper reported a processing time of several hours for c4), two days for c2) and several days for c3).", |
| "cite_spans": [ |
| { |
| "start": 767, |
| "end": 794, |
| "text": "(Marchand and Damper, 2005)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pruning", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Our current implementation of the joint n-gram model is not integrated with an automatic alignment procedure. We therefore first aligned letters and phonemes in a separate, semi-automatic step. Each letter was aligned with zero to two phonemes and, in the integrated model, zero or one syllable boundaries and stress markers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Alignment", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "When analysing the results from the model that does g2p conversion, syllabification and stress assign-ment in a single step, we found that a large proportion of the errors was due to the violation of basic phonological constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration of Phonological Constraints", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Some syllables had no syllable nucleus, while others contained several vowels. The reason for the errors is that German syllables can be very long and therefore sparse, often causing the model to backoff to smaller contexts. If the context is too small to cover the syllable, the model cannot decide whether the current syllable contains a nucleus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration of Phonological Constraints", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In stress assignment, this problem is even worse: the context window rarely covers the whole word. The algorithm does not know whether it already assigned a word stress outside the context window. This leads to a high error rate with 15-20% of incorrectly stressed words. Thereof, 37% have more than one main stress, about 27% are not assigned any stress and 36% are stressed in the wrong position. This means that we can hope to reduce the errors by almost 2/3 by using phonological constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration of Phonological Constraints", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Word stress assignment is a difficult problem in German because the underlying processes involve some deeper morphological knowledge which is not available to the simple model. In complex words, stress mainly depends on morphological structure (i.e. on the compositionality of compounds and on the stressing status of affixes). Word stress in simplex words is assumed to depend on the syllable position within the word stem and on syllable weight. The current language-independent approach does not model these processes, but only captures some of its statistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration of Phonological Constraints", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Simple constraints can help to overcome the problem of lacking context by explicitly requiring that every syllable must have exactly one syllable nucleus and that every word must have exactly one syllable receiving primary stress.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integration of Phonological Constraints", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our goal is to find the most probable syllabified and stressed phonemization of a word that does not violate the constraints. We tried two different approaches to enforce the constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In the first variant (v1), we modified the probability model to enforce the constraints. Each state now corresponds to a sequence of 4-tuples consisting of a letter l, a phoneme sequence p, a syllable boundary tag b, an accent tag a (as before) plus two new flags A and N which indicate whether an accent/nucleus precedes or not. The A and N flags of the new state are a function of its accent and syllable boundary tag and the A and N flag of the preceding state. They split each state into four new states. The new transition probabilities are defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "P ( l; p; b; a i | l; p; b; a i\u22121 i\u2212k , A, N )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The probability is 0 if the transition violates a constraint, e.g., when the A flag is set and a i indicates another accent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A positive side effect of the syllable flag is that it stores separate phonemization probabilities for consonants in the syllable onset vs. consonants in the coda. The flag in the onset is 0 since the nucleus has not yet been encountered, whereas it is set to 1 in the coda. In German, this can e.g. help in for syllablefinal devoicing of voiced stops and fricatives.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The increase in the number of states aggravates sparse-data problems. Therefore, we implemented another variant (v2) which uses the same set of states (with A and N flags), but with the transition probabilities of the original model, which did not enforce the constraints. Instead, we modified the Viterbi algorithm to eliminate the invalid transitions: For example, a transition from a state with the A flag set to a state where a i introduces a second stress, is always ignored. On small data sets, better results were achieved with v2 (see Table 5 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 543, |
| "end": 550, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Implementation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In German, information about morphological boundaries is needed to correctly insert glottal stops [P] in complex words, to determine irregular pronunciation of affixes (v is pronounced [v] in vertikal but [f] in ver+ticker+n, and the suffix syllable heit is not stressed although superheavy and word final) and to disambiguate letters (e.g. e is always pronounced /@/ when occurring in inflectional suffixes). Vowel length and quality has been argued to also depend on morphological structure (Pounder and Kommenda, 1986) . Furthermore, morphological boundaries overrun default syllabification rules, such as the maximum onset principle.", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 188, |
| "text": "[v]", |
| "ref_id": null |
| }, |
| { |
| "start": 493, |
| "end": 521, |
| "text": "(Pounder and Kommenda, 1986)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Preprocessing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Applying default syllabification to the word \"Sternanis\u00f6l\" would result in a syllabification into Ster-na-ni-s\u00f6l (and subsequent phonemization to something like /StE\u00f6\"na:niz\u00f8:l/) instead of Stern-a-nis-\u00f6l (/\"StE\u00f6nPani:sP\u00f8:l/). Syllabification in turn affects phonemization since voiced fricatives and stops are devoiced in syllable-final position. Morphological information also helps for graphemic parsing of words such as \"R\u00f6schen\" (Engl: little rose) where the morphological boundary between R\u00f6s and chen causes the string sch to be transcribed to /s\u00e7/ instead of /S/. Similar ambiguities can arise for all other sounds that are represented by several letters in orthography (e.g. doubled consonants, diphtongs, ie, ph, th), and is also valid for English. Finally, morphological information is also crucial to determine word stress in morphologically complex words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphological Preprocessing", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Good segmentation performance on arbitrary words is hard to achieve. We compared several approaches with different amounts of built-in knowledge. The morphological information is encoded in the letter string, where different digits represent different kinds of morphological boundaries (prefixes, stems, derivational and inflectional suffixes).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methods for Morphological Segmentation", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To determine the upper bound of what can be achieved when exploiting perfect morphological information, we extracted morphological boundaries and boundary types from the CELEX database.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Manual Annotation from CELEX", |
| "sec_num": null |
| }, |
| { |
| "text": "The manual annotation is not perfect as it contains some errors and many cases where words are not decomposed entirely. The words tagged [F] for \"lexicalized inflection\", e.g. gedr\u00e4ngt (past participle of dr\u00e4ngen, Engl: push) were decomposed semiautomatically for the purpose of this evaluation. As expected, annotating words with CELEX morphological segmentation yielded the best g2p conversion results. Manual annotation is only available for a small number of words. Therefore, only automatically annotated morphological information can scale up to real applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Manual Annotation from CELEX", |
| "sec_num": null |
| }, |
| { |
| "text": "The traditional approach is to use large morpheme lexica and a set of rules that segment words into affixes and stems. Drawbacks of using such a system are the high development costs, limited coverage and problems with ambiguity resolution between alternative analyses of a word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule-based Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "The two rule-based systems we evaluated, the ETI 4 morphological system and SMOR 5 (Schmid et al., 2004) , are both high-quality systems with large lexica that have been developed over several years. Their performance results can help to estimate what can realistically be expected from an automatic segmentation system. Both of the rule-based systems achieved an F-score of approx. 80% morphological boundaries correct with respect to CELEX manual annotation.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 104, |
| "text": "(Schmid et al., 2004)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule-based Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "Most attractive among automatic systems are methods that use unsupervised learning, because these require neither an expert linguist to build large rule-sets and lexica nor large manually annotated word lists, but only large amounts of tokenized text, which can be acquired e.g. from the internet. Unsupervised methods are in principle 6 languageindependent, and can therefore easily be applied to other languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Morphological Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "We compared four different state-of-the-art unsupervised systems for morphological decomposition (cf. (Demberg, 2006; Demberg, 2007) ). The algorithms were trained on a German newspaper corpus (taz), containing about 240 million words. The same algorithms have previously been shown to help a speech recognition task (Kurimo et al., 2006) .", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 117, |
| "text": "(Demberg, 2006;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 118, |
| "end": 132, |
| "text": "Demberg, 2007)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 317, |
| "end": 338, |
| "text": "(Kurimo et al., 2006)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Unsupervised Morphological Systems", |
| "sec_num": null |
| }, |
| { |
| "text": "The German corpus used in these experiments is CELEX (German Linguistic User Guide, 1995). CELEX contains a phonemic representation of each 4 Eloquent Technology, Inc. (ETI) TTS system. http://www.mindspring.com/\u02dcssshp/ssshp_cd/ ss_eloq.htm 5 The lexicon used by SMOR, IMSLEX, contains morphologically complex entries, which leads to high precision and low recall. The results reported here refer to a version of SMOR, where the lexicon entries were decomposed using a rather na\u00efve high-recall segmentation method. SMOR itself does not disambiguate morphological analyses of a word. Our version used transition weights learnt from CELEX morphological annotation. For more details refer to (Demberg, 2006) .", |
| "cite_spans": [ |
| { |
| "start": 689, |
| "end": 704, |
| "text": "(Demberg, 2006)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Set and Test Set Design", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "6 Most systems make some assumptions about the underlying morphological system, for instance that morphology is a concatenative process, that stems have a certain minimal length or that prefixing and suffixing are the most relevant phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Set and Test Set Design", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "word, syllable boundaries and word stress information. Furthermore, it contains manually verified morphological boundaries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Set and Test Set Design", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Our training set contains approx. 240,000 words and the test set consists of 12,326 words. The test set is designed such that word stems in training and test sets are disjoint, i.e. the inflections of a certain stem are either all in the training set or all in the test set. Stem overlap between training and test set only occurs in compounds and derivations. If a simple random splitting (90% for training set, 10% for test set) is used on inflected corpora, results are much better: Word error rates (WER) are about 60% lower when the set of stems in training and test set are not disjoint. The same effect can also be observed for the syllabification task (see Table 4 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 664, |
| "end": 671, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training Set and Test Set Design", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The joint n-gram model is language-independent. An aligned corpus with words and their pronunciations is needed, but no further adaptation is required. Table 1 shows the performance of our model in comparison to alternative approaches on the German and English versions of the CELEX corpus, the English NetTalk corpus, the English Teacher's Word Book (TWB) corpus, the English beep corpus and the French Brulex corpus. The joint n-gram model performs significantly better than the decision tree (essentially based on (Lucassen and Mercer, 1984) ), and achieves scores comparable to the Pronunciation by Analogy (PbA) algorithm (Marchand and Damper, 2005) . For the Nettalk data, we also compared the influence of syllable boundary annotation from a) automatically learnt and b) manually annotated syllabification information on phoneme accuracy. Automatic syllabification for our model integrated phonological constraints (as described in section 3.1), and therefore led to an improvement in phoneme accuracy, while the word error rate increased for the PbA approach, which does not incorporate such constraints. (Chen, 2003) also used a joint n-gram model. The two approaches differ in that Chen ", |
| "cite_spans": [ |
| { |
| "start": 517, |
| "end": 544, |
| "text": "(Lucassen and Mercer, 1984)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 627, |
| "end": 654, |
| "text": "(Marchand and Damper, 2005)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1113, |
| "end": 1125, |
| "text": "(Chen, 2003)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1192, |
| "end": 1196, |
| "text": "Chen", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 152, |
| "end": 159, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results for the Joint n-gram Model", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The accuracy improvements achieved by integrating the constraints (see Table 2 ) are highly statistically significant. The numbers for conditions \"Gsyllab.+stress+g2p\" and \"E-syllab.+g2p\" in Table 2 differ from the numbers for \"G-CELEX\" and \"E-Nettalk\" in Table 1 because phoneme conversion errors, syllabification errors and stress assignment errors are all counted towards word error rates reported in Table 2 . Word error rate in the combined g2p-syllablestress model was reduced from 21.5% to 13.7%. For the separate tasks, we observed similar effects: The word error rate for inserting syllable boundaries was reduced from 3.48% to 3.1% on letters and from 1.84% to 1.53% on phonemes. Most significantly, word error rate was decreased from 30.9% to 9.9% for word stress assignment on graphemes.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 71, |
| "end": 78, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 191, |
| "end": 198, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 256, |
| "end": 263, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 404, |
| "end": 411, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Benefit of Integrating Constraints", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We also found similarly important improvements when applying the syllabification constraint to English grapheme-to-phoneme conversion and syllabification. This suggests that our findings are not specific to German and that such general constraints can be beneficial for a range of languages. Table 2: Improving performance on g2p conversion, syllabification and stress assignment through the introduction of constraints; word error rates for German CELEX (G) and English NetTalk (E), without vs. with constraints: G - syllab.+stress+g2p 21.5% vs. 13.7%; G - syllab. on letters 3.5% vs. 3.1%; G - syllab. on phonemes 1.84% vs. 1.53%; G - stress assignm. on letters 30.9% vs. 9.9%; E - syllab.+g2p 40.5% vs. 37.5%; E - syllab. on phonemes 12.7% vs. 8.8%.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 481, |
| "end": 488, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Benefit of Integrating Constraints", |
| "sec_num": "5.3" |
| }, |
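The two phonological constraints can be made concrete with a small sketch. In the paper's implementation v2 the filtering happens during decoding, by eliminating invalid transitions; here, for illustration only, we validate complete candidate transcriptions after the fact. The symbols are our own conventions: "." marks a syllable boundary, "'" marks main stress, and vowel letters stand in for syllable nuclei.

```python
NUCLEI = set("aeiouy")  # illustrative nucleus inventory

def valid(candidate):
    # 'one nucleus per syllable': every syllable has exactly one nucleus
    syllables = candidate.replace("'", "").split(".")
    one_nucleus = all(sum(ch in NUCLEI for ch in syl) == 1
                      for syl in syllables)
    # 'one main stress per word': exactly one stress mark in the word
    one_stress = candidate.count("'") == 1
    return one_nucleus and one_stress

def best_valid(candidates_ranked):
    # candidates_ranked: model outputs, best-scoring first; return the
    # first candidate that satisfies both phonological constraints.
    return next((c for c in candidates_ranked if valid(c)), None)

ranked = ["'le.bn", "'le.ben", "le.'be.en'"]
print(best_valid(ranked))  # "'le.bn" has a nucleus-less syllable -> skipped
```

Rescoring after the fact like this is weaker than the in-decoder version: pruning invalid transitions during search lets the model redistribute probability mass to valid analyses instead of merely discarding its top hypotheses.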
| { |
| "text": "Modularity is an advantage if the individual components are more specialized to their task (e.g. by operating on a particular level of description of the problem, or by incorporating some additional source of knowledge). In a modular system, one component can easily be substituted by another, for example if a better way of doing stress assignment in German were found. On the other hand, keeping strongly inter-dependent tasks (such as determining word stress and phonemization) in one module allows us to optimize for the best combination of phonemes and stress simultaneously. Best results were obtained with the joint n-gram model that performs syllabification, stress assignment and g2p conversion in a single step and integrates the phonological constraints for syllabification and word stress (WER = 14.4% using method v1, WER = 13.7% using method v2). If the modular architecture is chosen, best results are obtained when g2p conversion is done before syllabification and stress assignment (15.2% WER), whereas doing syllabification and stress assignment first and g2p conversion afterwards leads to a WER of 16.6%. We conclude from this finding that an integrated approach is superior to a pipeline architecture for strongly interdependent tasks such as these.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modularity", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "A statistically significant (according to a two-tailed t-test) improvement in g2p conversion accuracy (from 13.7% WER to 13.2% WER) was obtained with the manually annotated morphological boundaries from CELEX. The segmentations from both rule-based systems (ETI and SMOR) also increased accuracy with respect to the baseline (13.6% WER), which uses no morphological boundary annotation. Among the unsupervised systems, the best results 7 on the g2p task with morphological annotation were obtained with the RePortS system (Keshava and Pitler, 2006). But none of the unsupervised segmentations led to an error reduction compared to a baseline that used no morphological information (see Table 3: systems evaluation on German CELEX manual annotation and on the g2p task using a joint n-gram model; WERs refer to implementation v2). Word error rate even increased when the quality of the morphological segmentation was too low (the unsupervised algorithms achieved 52%-62% F-measure with respect to the CELEX manual annotation). Table 4 shows that high-quality morphological information can also significantly improve performance on a syllabification task for German. We used the syllabifier described in (Schmid et al., 2005), which works similarly to the joint n-gram model used for g2p conversion. Just as for g2p conversion, we found a significant accuracy improvement with the manually annotated data, a smaller improvement with data from the rule-based morphological system, and no improvement with segmentations from an unsupervised algorithm. Syllabification works best when performed on phonemes, because syllables are phonological units and can therefore be determined most easily in terms of phonological entities such as phonemes.", |
| "cite_spans": [ |
| { |
| "start": 543, |
| "end": 569, |
| "text": "(Keshava and Pitler, 2006)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1223, |
| "end": 1244, |
| "text": "(Schmid et al., 2005)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 700, |
| "end": 707, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 766, |
| "end": 773, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1047, |
| "end": 1054, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Contribution of Morphological Preprocessing", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "Whether morphological segmentation is worth the effort depends on many factors such as training set size, the g2p algorithm and the language considered.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Contribution of Morphological Preprocessing", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "Rows of Table 4 (syllabification WER; columns: disjoint stems / random split): RePortS (unsupervised morph.) 4.95%; no morphology 3.10% / 0.72%; ETI (rule-based morph.) 2.63%; CELEX (manual annot.) (values continued below).", |
| "cite_spans": [ |
| { |
| "start": 19, |
| "end": 48, |
| "text": "RePortS (unsupervised morph.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Contribution of Morphological Preprocessing", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "CELEX (manual annot.) 1.91% / 0.53%; on phonemes 1.53% / 0.18%. Table 4: Word error rates (WER) for syllabification with a joint n-gram model for two different training and test set designs (see Section 5.1).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 36, |
| "end": 43, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Contribution of Morphological Preprocessing", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "Probably the most important aspect of morphological segmentation information is that it can help to resolve data sparseness issues. Because of the additional knowledge given to the system through the morphological information, similarly-behaving letter sequences can be grouped more effectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphology for Data Sparseness Reduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Therefore, we hypothesized that morphological information is most beneficial in situations where the training corpus is rather small. Our findings confirm this expectation: the relative error reduction through morphological annotation is 6.67% for a training corpus of 9,600 words, but only 3.65% for a 240,000-word training corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphology for Data Sparseness Reduction", |
| "sec_num": null |
| }, |
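The relative error reductions quoted above follow the standard definition: the fraction of baseline errors removed by the improvement. A minimal sketch, using the v2 WERs from Table 5 (the function name is our own):

```python
def relative_error_reduction(baseline_wer, improved_wer):
    # fraction of the baseline's errors eliminated by the improvement
    return (baseline_wer - improved_wer) / baseline_wer

# v2, 9.6k-word corpus: 25.5% -> 23.8% WER with CELEX morphology
print(round(relative_error_reduction(25.5, 23.8), 4))  # -> 0.0667
# v2, 240k-word corpus: 13.7% -> 13.2% WER
print(round(relative_error_reduction(13.7, 13.2), 4))  # -> 0.0365
```

These reproduce the 6.67% vs. 3.65% figures in the text, making explicit that the small-corpus condition benefits roughly twice as much from morphological annotation.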
| { |
| "text": "In our implementation, the stress and syllable flags used to enforce the phonological constraints increase data sparseness. We found v2 (the implementation that uses states without stress and syllable flags and enforces the constraints by eliminating invalid transitions, cf. section 3.1) to outperform the integrated version v1, and by a larger margin under more severe data sparseness. The only condition in which v1 performed better than v2 was with a large data set and additional data sparseness reduction through morphological annotation, as in section 4 (see Table 5).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 594, |
| "end": 601, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Morphology for Data Sparseness Reduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Table 5 layout: WER for designs v1 and v2, each trained on 240k and on 9.6k words; rows: no morph. and CELEX.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphology for Data Sparseness Reduction", |
| "sec_num": null |
| }, |
| { |
| "text": "no morph.: v1 14.4% (240k) / 32.3% (9.6k), v2 13.7% / 25.5%; CELEX: v1 12.5% / 29%, v2 13.2% / 23.8%. Table 5: The interaction of the constraints in training with different levels of data sparseness.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 52, |
| "end": 59, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Morphology for Data Sparseness Reduction", |
| "sec_num": null |
| }, |
| { |
| "text": "The benefit of morphological preprocessing is also affected by the algorithm used for g2p conversion. We therefore also evaluated the relative improvement from morphological annotation when using a decision tree for g2p conversion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "g2p Conversion Algorithms", |
| "sec_num": null |
| }, |
| { |
| "text": "Decision trees were one of the first data-driven approaches to g2p and are still widely used (Kienappel and Kneser, 2001; Black et al., 1998). The tree's efficiency and ability to generalize largely depend on pruning and on the choice of possible questions. In our implementation, the decision tree can ask about letters within a context window of five positions back and five ahead, about the five preceding phonemes, and about groups of letters (e.g. consonants vs. vowels).", |
| "cite_spans": [ |
| { |
| "start": 92, |
| "end": 120, |
| "text": "(Kienappel and Kneser, 2001;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 121, |
| "end": 140, |
| "text": "Black et al., 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "g2p Conversion Algorithms", |
| "sec_num": null |
| }, |
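The question inventory just described amounts to a feature vector per letter position. A hedged sketch of such feature extraction (names, padding symbol and the vowel set are our own, not the paper's):

```python
# Features for one letter position in a decision-tree g2p converter:
# letters five back / five ahead, the five previously predicted phonemes,
# and a coarse letter class (consonant vs. vowel).
VOWELS = set("aeiouy")

def features(word, pos, prev_phonemes):
    pad = "#"  # out-of-word / unavailable-history marker
    feats = {}
    # letter context: positions -5 .. +5 relative to the current letter
    for offset in range(-5, 6):
        i = pos + offset
        feats[f"l{offset:+d}"] = word[i] if 0 <= i < len(word) else pad
    # the five most recently predicted phonemes (left-to-right decoding)
    for offset in range(1, 6):
        feats[f"p-{offset}"] = (prev_phonemes[-offset]
                                if offset <= len(prev_phonemes) else pad)
    # letter-class question, e.g. consonants vs. vowels
    feats["class"] = "V" if word[pos] in VOWELS else "C"
    return feats

f = features("phoneme", 3, ["f", "o"])
print(f["l+0"], f["l-3"], f["p-1"], f["class"])  # -> n p o C
```

At training time the tree induction picks, at each node, whichever of these questions best splits the data, which is why the choice of question inventory matters so much for generalization.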
| { |
| "text": "Both the decision tree and the joint n-gram model convert graphemes to phonemes, insert syllable boundaries and assign word stress in a single step (marked as \"WER-ss\" in Table 6). The implementation of the joint n-gram model additionally incorporates the phonological constraints described in section 3 (\"WER-ss+\"). Our main finding is that the joint n-gram model profits less from morphological annotation than the decision tree. Without the constraints, the performance difference is smaller: the joint n-gram model then achieves a word error rate of 21.5% in the no-morphology condition.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 171, |
| "end": 178, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "g2p Conversion Algorithms", |
| "sec_num": null |
| }, |
| { |
| "text": "In very recent work, (Demberg, 2007) developed an unsupervised algorithm (an extension of RePortS; f-measure: 68%) whose segmentations improve g2p conversion when using the decision tree (PER: 3.45%). Table 6: The effect of morphological preprocessing on phoneme error rates (PER) and word error rates (WER) in grapheme-to-phoneme conversion.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 36, |
| "text": "(Demberg, 2007)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 189, |
| "end": 196, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "g2p Conversion Algorithms", |
| "sec_num": null |
| }, |
| { |
| "text": "We also investigated the effect of morphological information on g2p conversion and syllabification in English, using manually annotated morphological boundaries from CELEX and the automatic unsupervised RePortS system, which achieves an F-score of about 77% for English. Compared to German, there are relatively few cases in English where morphological information affects word pronunciation; the overall effect is therefore rather weak, and we found no improvement even with perfect boundaries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Morphology for other Languages", |
| "sec_num": null |
| }, |
| { |
| "text": "Our results confirm that integrating the phonological constraints 'one nucleus per syllable' and 'one main stress per word' can significantly boost accuracy for g2p conversion in German and English. We implemented the constraints in a joint n-gram model for g2p conversion, which is language-independent and well-suited to the g2p task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We systematically evaluated the benefit to be gained from morphological preprocessing for g2p conversion and syllabification. We found that morphological segmentations from rule-based systems led to some improvement, but the magnitude of the accuracy gain strongly depends on the g2p algorithm and on the training set size. State-of-the-art unsupervised morphological systems do not yet yield segmentations good enough to help the task when a good conversion algorithm is used: low-quality segmentation even led to higher error rates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This issue is controversial among linguists; for an overview see (Jessen, 1998).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For a formal definition, see (Demberg, 2006).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "There is a trade-off between long context windows, which capture the context accurately, and data sparseness issues. The optimal value k for the context window size depends on the source language (existence of multi-letter graphemes, complexity of syllables, etc.).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For all results refer to (Demberg, 2006).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank Hinrich Sch\u00fctze, Frank Keller and the ACL reviewers for valuable comments and discussion. The first author was supported by Evangelisches Studienwerk e.V. Villigst.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Investigations on joint multigram models for grapheme-to-phoneme conversion", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bisani", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ICSLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Bisani and H. Ney. 2002. Investigations on joint multigram models for grapheme-to-phoneme conversion. In ICSLP.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Issues in building general letter to sound rules", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Lenzo", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Pagel", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "ESCA on Speech Synthesis", |
| "volume": "3", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Black, K. Lenzo, and V. Pagel. 1998. Issues in building gen- eral letter to sound rules. In 3. ESCA on Speech Synthesis.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "An empirical study of smoothing techniques for language modeling", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sf Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "SF Chen and J Goodman. 1996. An empirical study of smooth- ing techniques for language modeling. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Conditional and joint models for graphemeto-phoneme conversion", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "F" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. F. Chen. 2003. Conditional and joint models for grapheme- to-phoneme conversion. In Eurospeech.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Letter-to-phoneme conversion for a German TTS-System. Master's thesis", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Demberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Demberg. 2006. Letter-to-phoneme conversion for a Ger- man TTS-System. Master's thesis. IMS, Univ. of Stuttgart.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A language-independent unsupervised model for morphological segmentation", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Demberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of ACL-07", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Demberg. 2007. A language-independent unsupervised model for morphological segmentation. In Proc. of ACL-07.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Bi-directional conversion between graphemes and phonemes using a joint n-gram model", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Galescu", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Allen", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of the 4th ISCA Workshop on Speech Synthesis", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Galescu and J. Allen. 2001. Bi-directional conversion be- tween graphemes and phonemes using a joint n-gram model. In Proc. of the 4th ISCA Workshop on Speech Synthesis.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Center for Lexical Information. Max-Planck-Institut for Psycholinguistics", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Celex German Linguistic User Guide", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "CELEX German Linguistic User Guide, 1995. Center for Lex- ical Information. Max-Planck-Institut for Psycholinguistics, Nijmegen.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Word Prosodic Systems in the Languages of Europe", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Jessen", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Jessen, 1998. Word Prosodic Systems in the Languages of Europe. Mouton de Gruyter: Berlin.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A simpler, intuitive approach to morpheme induction", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Keshava", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Pitler", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of 2nd Pascal Challenges Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "31--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Keshava and E. Pitler. 2006. A simpler, intuitive approach to morpheme induction. In Proceedings of 2nd Pascal Chal- lenges Workshop, pages 31-35, Venice, Italy.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Designing very compact decision trees for grapheme-to-phoneme transcription", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kienappel", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kneser", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. K. Kienappel and R. Kneser. 2001. Designing very com- pact decision trees for grapheme-to-phoneme transcription. In Eurospeech, Scandinavia.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Unsupervised segmentation of words into morphemes -Challenge 2005: An introduction and evaluation report", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kurimo", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Creutz", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Varjokallio", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Arisoy", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saraclar", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of 2nd Pascal Challenges Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Kurimo, M. Creutz, M. Varjokallio, E. Arisoy, and M. Sar- aclar. 2006. Unsupervised segmentation of words into mor- phemes -Challenge 2005: An introduction and evaluation report. In Proc. of 2nd Pascal Challenges Workshop, Italy.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "An information theoretic approach to the automatic determination of phonemic baseforms", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lucassen", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "ICASSP 9", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Lucassen and R. Mercer. 1984. An information theoretic approach to the automatic determination of phonemic base- forms. In ICASSP 9.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Can syllabification improve pronunciation by analogy of English? Natural Language Engineering", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Marchand", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "I" |
| ], |
| "last": "Damper", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Marchand and R. I. Damper. 2005. Can syllabification im- prove pronunciation by analogy of English? Natural Lan- guage Engineering.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Grapheme-to-phoneme conversion -an approach based on hidden markov models", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Minker", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. Minker. 1996. Grapheme-to-phoneme conversion -an ap- proach based on hidden markov models.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "German and Multilingual Speech Synthesis. phonetic AIMS", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "M\u00f6bius", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Arbeitspapiere des Instituts f\u00fcr Maschinelle Spachverarbeitung", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. M\u00f6bius. 2001. German and Multilingual Speech Synthesis. phonetic AIMS, Arbeitspapiere des Instituts f\u00fcr Maschinelle Spachverarbeitung.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatic detection of syllable boundaries combining the advantages of treebank and bracketed corpora training", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "402--409", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. M\u00fcller. 2001. Automatic detection of syllable boundaries combining the advantages of treebank and bracketed corpora training. In Proceedings of ACL, pages 402-409.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Morphological analysis for a German text-to-speech system", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Pounder", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Kommenda", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Pounder and M. Kommenda. 1986. Morphological analysis for a German text-to-speech system. In COLING 1986.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Phoneme to grapheme conversion using HMM", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "A" |
| ], |
| "last": "Rentzepopoulos", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kokkinakis", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P.A. Rentzepopoulos and G.K. Kokkinakis. 1991. Phoneme to grapheme conversion using HMM. In Eurospeech.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "SMOR: A German computational morphology covering derivation, composition and inflection", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Fitschen", |
| "suffix": "" |
| }, |
| { |
| "first": "U", |
| "middle": [], |
| "last": "Heid", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Schmid, A. Fitschen, and U. Heid. 2004. SMOR: A German computational morphology covering derivation, composition and inflection. In Proc. of LREC.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Tagging syllable boundaries with hidden Markov models", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "M\u00f6bius", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Weidenkaff", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. ICSLP '96", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Schmid, B. M\u00f6bius, and J. Weidenkaff. 2005. Tagging syl- lable boundaries with hidden Markov models. IMS, unpub. R. Sproat. 1996. Multilingual text analysis for text-to-speech synthesis. In Proc. ICSLP '96, Philadelphia, PA.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Hidden Markov models for grapheme to phoneme conversion", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "INTERSPEECH", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Taylor. 2005. Hidden Markov models for grapheme to phoneme conversion. In INTERSPEECH.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>corpus</td><td>size jnt n-gr PbA</td><td>Chen dec.tree</td></tr><tr><td colspan=\"2\">G -CELEX 230k 7.5%</td><td>15.0%</td></tr><tr><td>E -Nettalk</td><td colspan=\"2\">20k 35.4% 34.65% 34.6%</td></tr><tr><td>a) auto.syll</td><td>35.3% 35.2%</td><td/></tr><tr><td>b) man.syll</td><td>29.4% 28.3%</td><td/></tr><tr><td>E -TWB</td><td>18k 28.5% 28.2%</td><td/></tr><tr><td>E -beep</td><td>200k 14.3% 13.3%</td><td/></tr><tr><td colspan=\"2\">E -CELEX 100k 23.7%</td><td>31.7%</td></tr><tr><td>F -Brulex</td><td>27k 10.9%</td><td/></tr></table>", |
| "num": null, |
| "text": "uses small chunks ((l:|0..1|):(p:|0..1|) pairs only) and iteratively optimizes the letter-phoneme alignment during training. Chen smooths higher-order Markov models with Gaussian priors and implements additional language modelling such as consonant doubling." |
| }, |
| "TABREF1": { |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "num": null, |
| "text": "Word error rates for different g2p conversion algorithms. Constraints were only used in the E-Nettalk auto. syll condition." |
| } |
| } |
| } |
| } |