{
"paper_id": "O06-2002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:07:39.367865Z"
},
"title": "Modeling Cantonese Pronunciation Variations for Large-Vocabulary Continuous Speech Recognition",
"authors": [
{
"first": "Tan",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": {
"settlement": "Shatin, Hong Kong",
"region": "N.T."
}
},
"email": "tanlee@ee.cuhk.edu.hk"
},
{
"first": "Patgi",
"middle": [],
"last": "Kam",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": {
"settlement": "Shatin, Hong Kong",
"region": "N.T."
}
},
"email": ""
},
{
"first": "Frank",
"middle": [
"K"
],
"last": "Soong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong Kong",
"location": {
"settlement": "Shatin, Hong Kong",
"region": "N.T."
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents different methods of handling pronunciation variations in Cantonese large-vocabulary continuous speech recognition. In an LVCSR system, three knowledge sources are involved: a pronunciation lexicon, acoustic models and language models. In addition, a decoding algorithm is used to search for the most likely word sequence. Pronunciation variation can be handled by explicitly modifying the knowledge sources or improving the decoding method. Two types of pronunciation variations are defined, namely, phone changes and sound changes. Phone change means that one phoneme is realized as another phoneme. A sound change happens when the acoustic realization is ambiguous between two phonemes. Phone changes are handled by constructing a pronunciation variation dictionary to include alternative pronunciations at the lexical level or dynamically expanding the search space to include those pronunciation variants. Sound changes are handled by adjusting the acoustic models through sharing or adaptation of the Gaussian mixture components. Experimental results show that the use of a pronunciation variation dictionary and the method of dynamic search space expansion can improve speech recognition performance substantially. The methods of acoustic model refinement were found to be relatively less effective in our experiments.",
"pdf_parse": {
"paper_id": "O06-2002",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents different methods of handling pronunciation variations in Cantonese large-vocabulary continuous speech recognition. In an LVCSR system, three knowledge sources are involved: a pronunciation lexicon, acoustic models and language models. In addition, a decoding algorithm is used to search for the most likely word sequence. Pronunciation variation can be handled by explicitly modifying the knowledge sources or improving the decoding method. Two types of pronunciation variations are defined, namely, phone changes and sound changes. Phone change means that one phoneme is realized as another phoneme. A sound change happens when the acoustic realization is ambiguous between two phonemes. Phone changes are handled by constructing a pronunciation variation dictionary to include alternative pronunciations at the lexical level or dynamically expanding the search space to include those pronunciation variants. Sound changes are handled by adjusting the acoustic models through sharing or adaptation of the Gaussian mixture components. Experimental results show that the use of a pronunciation variation dictionary and the method of dynamic search space expansion can improve speech recognition performance substantially. The methods of acoustic model refinement were found to be relatively less effective in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Given a speech input, automatic speech recognition (ASR) is a process of generating possible hypotheses for the underlying word sequence. This can be done by establishing a mapping between the acoustic features and the yet to be determined linguistic representations. Given the high variability of human speech, such mapping is in general not one-to-one. Different linguistic symbols can give rise to similar speech sounds, while the same linguistic symbol may also be realized in different pronunciations. The variability is due to co-articulation, regional accents, speaking rate, speaking style, etc. Pronunciation modeling is aimed at providing an effective mechanism by which ASR systems can be adapted to pronunciation variability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Pronunciation variations can be divided into two types: phone change and sound change [Kam 2003 ] [ Liu and Fung 2003] . In [Sara\u00e7lar and Khudanpur 2000] [Liu 2002] , they are also referred to as complete change and partial change, respectively. A phone change happens when a baseform (canonical) phoneme is realized as another phoneme, which is referred to as its surface-form. The baseform pronunciation is considered to be the \"standard\" pronunciation that the speaker is supposed to use. Surface-form pronunciations are the actual pronunciations that different speakers may use. A sound change can be described as variation in phonetic properties, such as nasalization, centralization, voicing, etc. Acoustically, the variant sound is considered to be neither the baseform nor any surface-form phoneme. In other words, we cannot find an appropriate unit in the language's phoneme inventory to represent the sound. In terms of the scope of such variations, pronunciation variations can be divided into word-internal and cross-word variations [Strik and Cucchiarini 1999] .",
"cite_spans": [
{
"start": 86,
"end": 95,
"text": "[Kam 2003",
"ref_id": "BIBREF11"
},
{
"start": 100,
"end": 118,
"text": "Liu and Fung 2003]",
"ref_id": "BIBREF20"
},
{
"start": 124,
"end": 153,
"text": "[Sara\u00e7lar and Khudanpur 2000]",
"ref_id": "BIBREF26"
},
{
"start": 154,
"end": 164,
"text": "[Liu 2002]",
"ref_id": "BIBREF19"
},
{
"start": 1045,
"end": 1073,
"text": "[Strik and Cucchiarini 1999]",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "There have been many studies on modeling pronunciation variations for improving ASR performance. They are focused mainly on two problems: 1) prediction of the pronunciation variants, and 2) effective use of pronunciation variation information in the recognition process [Strik and Cucchiarini 1999] . Knowledge-based approaches use findings from linguistic studies, existing pronunciation dictionaries, and phonological rules to predict the pronunciation variations that could be encountered in ASR [Aubert and Dugast 1995] [ Kessens et al. 1999] . Data-driven approaches attempt to discover the pronunciation variants and the underlying rules from acoustic signals. This is done by performing automatic phone recognition and aligning the recognized phone sequences with reference transcriptions to find out the surface forms [Sara\u00e7lar et al. 2000] [Wester 2003 ]. Some studies used hand-labelled corpora [Riley et al. 1999] .",
"cite_spans": [
{
"start": 270,
"end": 298,
"text": "[Strik and Cucchiarini 1999]",
"ref_id": "BIBREF27"
},
{
"start": 499,
"end": 523,
"text": "[Aubert and Dugast 1995]",
"ref_id": null
},
{
"start": 526,
"end": 546,
"text": "Kessens et al. 1999]",
"ref_id": "BIBREF14"
},
{
"start": 826,
"end": 848,
"text": "[Sara\u00e7lar et al. 2000]",
"ref_id": "BIBREF26"
},
{
"start": 849,
"end": 861,
"text": "[Wester 2003",
"ref_id": "BIBREF29"
},
{
"start": 905,
"end": 924,
"text": "[Riley et al. 1999]",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The key components of a large-vocabulary continuous speech recognition system are the acoustic models, the pronunciation lexicon and the language models [Huang et al. 2001] . The acoustic models are a set of hidden Markov models (HMM) that characterize the statistical variations of input speech. Each HMM represents a specific sub-word unit, e.g. a phoneme. The pronunciation lexicon and the language models are used to define and constrain the ways sub-word units can be concatenated to form words and sentences. They are used to define a search space from which the most likely word string(s) can be determined with a computationally efficient decoding algorithm. Within such a framework, pronunciation",
"cite_spans": [
{
"start": 153,
"end": 172,
"text": "[Huang et al. 2001]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "variations can be handled by modifying one or more of the knowledge sources or improving the decoding algorithm. Phone changes can be handled by replacing the baseform transcription with surface-form transcriptions, i.e. the actual pronunciations observed. In an LVCSR system, this can be done by either augmenting the baseform lexicon with the additional pronunciation variants [Kessens et al. 1999] [Liu et al. 2000] [Byrne et al. 2001] , or expanding the search space during the decoding process to include those variants [Kam and Lee 2002] . In order to deal with sound changes, pronunciation modeling must be applied at a lower level, for example, on the individual states of a hidden Markov model (HMM) [Sara\u00e7lar et al. 2000] . In general, acoustic models are trained solely with baseform transcriptions. It is assumed that all training utterances follow exactly the canonical pronunciations. This convenient, but apparently unrealistic, assumption renders the acoustic models inadequate in representing the variations of speech sounds. To alleviate this problem, various methods of acoustic model refinement were proposed [Sara\u00e7lar et al. 2000] [Venkataramani and Byrne 2001] [Liu 2002] .",
"cite_spans": [
{
"start": 379,
"end": 400,
"text": "[Kessens et al. 1999]",
"ref_id": "BIBREF14"
},
{
"start": 401,
"end": 418,
"text": "[Liu et al. 2000]",
"ref_id": null
},
{
"start": 419,
"end": 438,
"text": "[Byrne et al. 2001]",
"ref_id": "BIBREF28"
},
{
"start": 525,
"end": 543,
"text": "[Kam and Lee 2002]",
"ref_id": "BIBREF12"
},
{
"start": 709,
"end": 731,
"text": "[Sara\u00e7lar et al. 2000]",
"ref_id": "BIBREF26"
},
{
"start": 1129,
"end": 1151,
"text": "[Sara\u00e7lar et al. 2000]",
"ref_id": "BIBREF26"
},
{
"start": 1152,
"end": 1182,
"text": "[Venkataramani and Byrne 2001]",
"ref_id": "BIBREF28"
},
{
"start": 1183,
"end": 1193,
"text": "[Liu 2002]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Large-Vocabulary Continuous Speech Recognition",
"sec_num": null
},
{
"text": "In this paper, the pronunciation variations in continuous Cantonese speech are studied. The linguistic and acoustic properties of spoken Cantonese are considered in the analysis of pronunciation variations and, subsequently, the design of pronunciation modeling techniques for LVCSR. As in most conventional approaches, phone changes are anticipated by using an augmented pronunciation lexicon. The lexicon includes the most frequently occurring alternative pronunciations that are derived from training data. We also describe a novel method of dynamically expanding the search space during decoding to include pronunciation variants that are predicted with context-dependent pronunciation models. For sound changes, we propose to measure the similarities between confused baseform and surface-form models at the Gaussian mixture component level and, accordingly, refine the models through sharing and adaptation of the relevant mixture components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Large-Vocabulary Continuous Speech Recognition",
"sec_num": null
},
{
"text": "In the next section, the properties of spoken Cantonese are described and the fundamentals of Cantonese LVCSR are explained. In Section 3, different methods of modeling pronunciation variations at the lexical level are presented in detail and experimental results are given. The techniques for handling sound changes through acoustic model refinement are described in Section 4. Conclusions are given in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Large-Vocabulary Continuous Speech Recognition",
"sec_num": null
},
{
"text": "Cantonese is one of the major Chinese dialects. It is the mother tongue of over 60 million people in Southern China and Hong Kong [Grimes et al. 2000] . The basic unit of written Cantonese is a Chinese character [Chao 1965] . Chinese characters are ideographic, meaning that they contain no information about pronunciation. There are more than ten thousand distinctive characters. In Cantonese, each of them is pronounced as a single syllable that carries a specific tone. A sentence is spoken as a string of monosyllabic sounds. A character may have multiple pronunciations, and a syllable typically corresponds to a number of different characters.",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "[Grimes et al. 2000]",
"ref_id": null
},
{
"start": 212,
"end": 223,
"text": "[Chao 1965]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "About Cantonese",
"sec_num": "2.1"
},
{
"text": "A Cantonese syllable is formed by concatenating two types of phonological units: the Initial and the Final, as shown in Figure 1 [Hashimoto 1972 ]. There are 20 Initials (including the null Initial) and 53 Finals in Cantonese, in contrast to 23 Initials and 37 Finals in Mandarin. ",
"cite_spans": [
{
"start": 129,
"end": 144,
"text": "[Hashimoto 1972",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 120,
"end": 128,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "About Cantonese",
"sec_num": "2.1"
},
{
"text": "(Figure 1 residue: NUCLEUS; syllabic nasals [m] and [ng])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "About Cantonese",
"sec_num": "2.1"
},
{
"text": "From phonological points of view, Cantonese has nine tones that are featured by differently stylized pitch patterns. They are divided into two categories: entering tones and non-entering tones. The entering tones occur exclusively with syllables ending in a stop coda (-p, -t, or -k). They are contrastively shorter in duration than the non-entering tones. In terms of pitch level, each entering tone coincides roughly with a non-entering counterpart. In many transcription schemes, only six distinctive tone categories are defined. They are labeled as Tone 1 to Tone 6 in the Jyu Ping system. If tonal difference is considered, the total number of distinctive tonal syllables is about 1,800. Table 3 gives an example of a Chinese word and its spoken form in Cantonese. The word \u6211\u5011 (meaning \"we\") is pronounced as two syllables. The first syllable is formed from the ",
"cite_spans": [],
"ref_spans": [
{
"start": 693,
"end": 700,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "About Cantonese",
"sec_num": "2.1"
},
{
"text": "Over the past twenty years, there have been sociolinguistic studies on how phonetic variations in Cantonese are related with social characteristics of speakers such as sex, age, and educational background. They have revealed some systematic patterns underlying the phonetic variations [Bauer and Benedict 1997] [Bourgerie 1990 ] [Ho 1994] . Table 4 gives a summary of the major observations in these studies. [Bourgerie 1990 ]. Older people make these substitutions much less frequently than younger generations. Female speakers tend to substitute [n] with [l], and delete [ng] more frequently than males. A correlation with the formality of the speech situation was also observed [Bourgerie 1990 ]. In casual speech, [l], null Initial, and [g] occur more frequently. According to [Bauer and Benedict 1997] , the variations are also related to the development of neighboring dialects in the Pearl River Delta.",
"cite_spans": [
{
"start": 285,
"end": 310,
"text": "[Bauer and Benedict 1997]",
"ref_id": "BIBREF1"
},
{
"start": 311,
"end": 326,
"text": "[Bourgerie 1990",
"ref_id": "BIBREF2"
},
{
"start": 329,
"end": 338,
"text": "[Ho 1994]",
"ref_id": "BIBREF9"
},
{
"start": 409,
"end": 424,
"text": "[Bourgerie 1990",
"ref_id": "BIBREF2"
},
{
"start": 681,
"end": 696,
"text": "[Bourgerie 1990",
"ref_id": "BIBREF2"
},
{
"start": 781,
"end": 806,
"text": "[Bauer and Benedict 1997]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 341,
"end": 348,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Linguistic Studies on Pronunciation Variations in Cantonese",
"sec_num": "2.2"
},
{
"text": "When the preceding syllable ends with a nasal coda, there is a tendency to substitute the Initial [l] of the succeeding syllable with [n] [Ho 1994 ]. Labial dissimilation is probably the cause of the change [gw]\u2192[g], when the right context is -o, for example \"gwok\" \u570b (country), pronounced as \"gok\" \u89d2 (corner). The sequence of the two lip-rounded segments -w- and -o- becomes redundant or unnecessary, with the second one driving out the first. The change [ng]\u2192[m] is due to the fact that when [ng] occurs in the presence of a bilabial coda, its place of articulation changes to bilabial. For example, \"sap ng\" \u5341\u4e94 (fifteen) becomes \"sap m\" through the perseveration of the bilabial closure of the coda -p into the articulation of the following syllabic nasal. This is referred to as perseveratory assimilation [Bauer and Benedict 1997] .",
"cite_spans": [
{
"start": 138,
"end": 146,
"text": "[Ho 1994",
"ref_id": "BIBREF9"
},
{
"start": 805,
"end": 830,
"text": "[Bauer and Benedict 1997]",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linguistic Studies on Pronunciation Variations in Cantonese",
"sec_num": "2.2"
},
{
"text": "Other pronunciation variations are due to the dialectal accents of non-native speakers, who may have difficulties mastering some of the Cantonese pronunciations. They sometimes use the pronunciation of their mother tongue to pronounce a Cantonese word, for example, \"ngo\" \u6211 (me) is pronounced as \"wo\" by a Mandarin speaker. Figure 2 gives the functional block diagram of a typical LVCSR system. At the front-end processing module, the input speech is analyzed and converted into a sequence of acoustic feature vectors, denoted by O . The goal of speech recognition is to determine the most probable word sequence W , given the observation O . With the Bayes' formula, the decision ",
"cite_spans": [],
"ref_spans": [
{
"start": 324,
"end": 332,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Linguistic Studies on Pronunciation Variations in Cantonese",
"sec_num": "2.2"
},
{
"text": "Usually the acoustic models are built at the sub-word level. Let B be the sub-word sequence that represents W . Eq. (1) can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cantonese LVCSR: the Baseline System",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W^{*} = \\arg\\max_{W} P(O|B) P(B|W) P(W),",
"eq_num": "(2)"
}
],
"section": "Cantonese LVCSR: the Baseline System",
"sec_num": "2.3"
},
{
"text": "where P(O|B) and P(W) are referred to as the (sub-word level) acoustic models and the language models, respectively. P(B|W) is given by a pronunciation lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cantonese LVCSR: the Baseline System",
"sec_num": "2.3"
},
{
"text": "In the case of Chinese speech recognition, the sub-word units can be either syllables, Initials and Finals, or phone-like units. The recognition output is typically represented as a sequence of Chinese characters. The details of our baseline system for Cantonese LVCSR are given below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cantonese LVCSR: the Baseline System",
"sec_num": "2.3"
},
{
"text": "Acoustic feature vectors are computed every 10 msec. Each feature vector is composed of 39 elements, which include 12 Mel-frequency cepstral coefficients, log energy, and their first-order and second-order derivatives. The analysis window size is 25 msec.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Front-end processing",
"sec_num": null
},
{
"text": "The acoustic models are right-context-dependent cross-word Initials and Finals models [Wong 2000 ]. The numbers of HMM states for Initial and Final units are 3 and 5, respectively. Each state is represented by a mixture of 16 Gaussian components. The decision tree based state clustering approach is used to allow the sharing of parameters among models.",
"cite_spans": [
{
"start": 86,
"end": 96,
"text": "[Wong 2000",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Acoustic models",
"sec_num": null
},
{
"text": "The lexicon contains about 6,500 entries, among which 60% are multi-character words and the others are single-character words [Wong 2000 ]. These words were selected from a newspaper text corpus of 98 million Chinese characters. The out-of-vocabulary percentage is about 1% [Wong 2000 ]. For each word entry, the canonical pronunciation(s) is specified in the form of Initials and Finals [CUPDICT 2003 ]. The language models are word bi-grams that were trained with the same text corpus described above.",
"cite_spans": [
{
"start": 126,
"end": 136,
"text": "[Wong 2000",
"ref_id": "BIBREF30"
},
{
"start": 274,
"end": 284,
"text": "[Wong 2000",
"ref_id": "BIBREF30"
},
{
"start": 388,
"end": 401,
"text": "[CUPDICT 2003",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronunciation lexicons and language models",
"sec_num": null
},
{
"text": "The search space is formed from lexical trees that are derived from the pronunciation lexicon. One-pass Viterbi search is used to determine the most probable word sequence [Choi 2001 ]. The acoustic models were trained using CUSENT, which is a read speech corpus of continuous Cantonese sentences collected at the Chinese University of Hong Kong [Lee et al. 2002] . There are over 20,000 gender-balanced training utterances. The test data in CUSENT consists of 1,200 utterances from 6 male and 6 female speakers. The performance of the LVCSR system is measured in terms of word error rate (WER) for the 1,200 test utterances. The baseline WER is 25.34%.",
"cite_spans": [
{
"start": 172,
"end": 182,
"text": "[Choi 2001",
"ref_id": "BIBREF5"
},
{
"start": 346,
"end": 363,
"text": "[Lee et al. 2002]",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoder",
"sec_num": null
},
{
"text": "The pronunciation lexicon used in the baseline system provides only the baseform pronunciation for each of the word entries. In real speech, the baseform pronunciations are realized differently, depending on the speakers, speaking styles, etc. Phone change means that the pronunciation variation can be considered as one or more Initial or Final (IF) unit in the baseform pronunciation being substituted by another IF unit. Note that the substituting surface-form unit is also one of the legitimate IF units, as listed in Tables 1 and 2. A pronunciation model (PM) is a descriptive and predictive model by which the surface-form pronunciation(s) can be derived from the baseform one. There have been three different types of models proposed by previous studies. They are: 1) phonological rules for generating pronunciation variations [Wester 2003 ] [Kessens et al. 2003 ], 2) a pronunciation variation dictionary (PVD) that explicitly lists alternative pronunciations [Aubert and Dugast 1995] [Kessens et al. 1999] [Liu et al. 2000] , and 3) statistical decision trees that predict pronunciation variations according to phonetic context [Riley et al. 1999 [Sara\u00e7lar et al. 2000] . In this study, two different approaches to handling phone changes in Cantonese ASR are formulated and evaluated. The first approach uses a probabilistic PVD to augment the baseform lexicon. This is a straightforward and commonly used method that has been proven effective for various tasks and languages [Strik and Cucchiarini 1999] . In the second approach, pronunciation variation information is introduced during the decoding process. Decision tree based PMs are used to dynamically expand the search space. In [Sara\u00e7lar et al. 2000 ], a similar idea was presented. Decision tree based PMs were applied to a word lattice to construct a recognition network that includes surface-form realizations.",
"cite_spans": [
{
"start": 834,
"end": 846,
"text": "[Wester 2003",
"ref_id": "BIBREF29"
},
{
"start": 849,
"end": 869,
"text": "[Kessens et al. 2003",
"ref_id": "BIBREF15"
},
{
"start": 968,
"end": 992,
"text": "[Aubert and Dugast 1995]",
"ref_id": null
},
{
"start": 993,
"end": 1014,
"text": "[Kessens et al. 1999]",
"ref_id": "BIBREF14"
},
{
"start": 1015,
"end": 1032,
"text": "[Liu et al. 2000]",
"ref_id": null
},
{
"start": 1137,
"end": 1155,
"text": "[Riley et al. 1999",
"ref_id": "BIBREF23"
},
{
"start": 1156,
"end": 1178,
"text": "[Sara\u00e7lar et al. 2000]",
"ref_id": "BIBREF26"
},
{
"start": 1484,
"end": 1512,
"text": "[Strik and Cucchiarini 1999]",
"ref_id": "BIBREF27"
},
{
"start": 1694,
"end": 1715,
"text": "[Sara\u00e7lar et al. 2000",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 522,
"end": 537,
"text": "Tables 1 and 2.",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Handling Phone Change with Pronunciation Models",
"sec_num": "3."
},
{
"text": "In this study, the information about Cantonese pronunciation variations is obtained through the data-driven approach. This is done by aligning the baseform transcriptions with the recognized surface-form IF sequences for all training utterances. For each training utterance, the surface-form IF sequence is obtained through phoneme recognition with the acoustic models as described in Section 2.3. To reflect the syllable structure of Cantonese, the recognition output is constrained to be a sequence of Initial-Final pairs. With this approach, only substitutions at the IF level are considered pronunciation variations. Partial change of an IF unit and the deletion of an entire Initial or Final are not reflected in the surface-form IF sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Use of a Pronunciation Variation Dictionary (PVD)",
"sec_num": "3.1"
},
{
"text": "The surface-form phoneme sequence is then aligned with the baseform transcription. This gives a phoneme accuracy of 90.33%. The recognition errors are due, at least partially, to phoneme-level pronunciation variation. For a particular baseform phoneme b and a surface-form phoneme s, the probability of b being pronounced as s is computed based on the number of times that b is recognized as s. This probability is referred to as the variation probability (VP). As a result, each pair of IF units is described with a probability of being confused. This is also referred to as a confusion matrix [Liu et al. 2000] . It is assumed that systematic phone change can be detected by a relatively high VP, while a low VP is more likely due to recognition errors. A VP threshold is used to prune those less frequent surface-form pronunciations. As a result, for each baseform IF unit, we can find a certain number of surface-form units, each with a pre-computed VP.",
"cite_spans": [
{
"start": 595,
"end": 612,
"text": "[Liu et al. 2000]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Use of a Pronunciation Variation Dictionary (PVD)",
"sec_num": "3.1"
},
{
"text": "A straightforward way of handling pronunciation variation is to augment the basic pronunciation lexicon with alternative pronunciations [Strik and Cucchiarini 1999] . Such an augmented lexicon is named a pronunciation variation dictionary (PVD). In the PVD, each word can have multiple pronunciations, each being assigned a word-level variation probability (VP). The PVD can be obtained from the IF confusion matrix. The word-level VP is given by multiplying the phone-level VPs of all the individual phonemes in the surface-form pronunciation. With the use of the PVD, the goal of speech recognition is essentially to search for the most probable word sequence by considering all possible surface-form realizations. This can be conceptually illustrated by modifying Eq. 2 ",
"cite_spans": [
{
"start": 136,
"end": 164,
"text": "[Strik and Cucchiarini 1999]",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Use of a Pronunciation Variation Dictionary (PVD)",
"sec_num": "3.1"
},
{
"text": "The PVD includes both context-independent and context-dependent phone changes. Since each word is treated individually, the phonetic context being considered is limited to within the word. To deal with cross-word context-dependent phone changes, we propose applying pronunciation models at the decoding level. Our baseline system uses a one-pass search algorithm [Choi 2001] . The search space is structured as lexical trees. Each node on a tree corresponds to a baseform IF unit. The search is token based. Each token represents a path that reaches a particular lexical node. The propagation of tokens follows the lexical trees, which cover only the legitimate phoneme sequences as specified by the pronunciation lexicon. The search algorithm can be modified in a way that the number of live tokens is increased to account for pronunciation variations. When a path extends from a particular IF node, its destination node can be either the legitimate node (baseform pronunciation) or any of the predicted surface-form nodes. In other words, the search space is dynamically expanded during the search process.",
"cite_spans": [
{
"start": 363,
"end": 374,
"text": "[Choi 2001]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction of Pronunciation Variation during Decoding",
"sec_num": "3.2"
},
{
"text": "In this approach, a context-dependent pronunciation model is needed to predict the surface-form phoneme given the baseform phoneme and its context. It is implemented using the decision tree clustering technique, following the approaches described in [Riley et al. 1999 ] [Fosler-Lussier 1999 . Each baseform phoneme is described using a decision tree. Given a baseform phoneme, as well as its left context (the right context is not available in a forward Viterbi search), the respective decision-tree pronunciation model (DTPM) gives all possible surface-form realizations and their corresponding VPs [Kam and Lee 2002] .",
"cite_spans": [
{
"start": 250,
"end": 268,
"text": "[Riley et al. 1999",
"ref_id": "BIBREF23"
},
{
"start": 269,
"end": 291,
"text": "] [Fosler-Lussier 1999",
"ref_id": null
},
{
"start": 601,
"end": 619,
"text": "[Kam and Lee 2002]",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction of Pronunciation Variation during Decoding",
"sec_num": "3.2"
},
{
"text": "Like the confusion matrix, the DTPM is trained with the phoneme recognition outputs for the CUSENT training utterances. The training involves an optimization process by which the surface-form phonemes are clustered based on phonetic context. At a particular node of the tree, a set of \"yes/no\" questions about the phonetic context are evaluated. Each question leads to a different partition of the training data. The question that minimizes the overall conditional entropy of the surface-form realizations is selected for that node. The node-splitting process stops when there are too few training data [Kam 2003 ]. Table 5 gives the recognition results with the use of PVDs that are constructed with different values of the VP threshold. The baseline system uses the basic pronunciation lexicon that contains 6,451 words. The size of the PVD increases as the VP threshold decreases. It is obvious that the introduction of pronunciation variants improves recognition performance. The best performance is attained with a VP threshold of 0.05. In this case, the PVD contains 8,568 pronunciations for the 6,451 words, i.e. 1.33 pronunciation variants per word. With a very small value for the VP threshold, e.g. 0.02, the recognition performance is not good because there are too many pronunciation variants being included and some of them do not really represent pronunciation variation. Table 6 shows the recognition results attained by using the DTPM for dynamic search space expansion. It appears that this approach is as effective as the PVD. Unlike the results for the PVD, the performance with a VP threshold of 0.2 is better than that with a threshold of 0.05. This means that the predictions made by the DTPM should be pruned more stringently than the IF confusion matrix. Because of its context-dependent nature, the DTPM has relatively less training data, and the variation probabilities cannot be reliably estimated. It is preferable not to include those unreliably predicted pronunciation variants. By analyzing the recognition results in detail, it is observed that many errors are corrected by allowing the following pronunciation variations:",
"cite_spans": [
{
"start": 603,
"end": 612,
"text": "[Kam 2003",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 616,
"end": 623,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1386,
"end": 1393,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Prediction of Pronunciation Variation during Decoding",
"sec_num": "3.2"
},
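{
"text": "To make the PVD construction concrete, the following is a minimal sketch (our illustration, not the authors' code) of how an IF confusion matrix can be normalized into variation probabilities and used to expand a lexicon under a VP threshold. All counts, words and phoneme symbols below are toy examples.

```python
from collections import defaultdict

# Toy confusion counts: confusion[baseform][surface] = how often the
# baseform phoneme was recognized as the surface-form phoneme.
confusion = {
    'n':  {'n': 700, 'l': 300},    # the [n] -> [l] change
    'gw': {'gw': 850, 'g': 150},   # the [gw] -> [g] change
}

def variation_probs(confusion):
    # Normalize counts into variation probabilities VP(surface, base).
    vp = defaultdict(dict)
    for base, row in confusion.items():
        total = sum(row.values())
        for surface, count in row.items():
            vp[base][surface] = count / total
    return vp

def expand_lexicon(lexicon, vp, threshold):
    # Add single-substitution variants whose VP exceeds the threshold.
    pvd = defaultdict(set)
    for word, phones in lexicon.items():
        pvd[word].add(tuple(phones))
        for i, base in enumerate(phones):
            for surface, p in vp.get(base, {}).items():
                if surface != base and p >= threshold:
                    pvd[word].add(tuple(phones[:i] + [surface] + phones[i + 1:]))
    return pvd

vp = variation_probs(confusion)
pvd = expand_lexicon({'nei5': ['n', 'ei']}, vp, threshold=0.05)
print(sorted(pvd['nei5']))   # baseform plus the [n] -> [l] variant
```

Lowering the threshold admits more variants, which matches the growth of the PVD size reported in Table 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Prediction of Pronunciation Variation during Decoding",
"sec_num": "3.2"
},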
{
"text": "Initials: [gw]\u2192[g], [n]\u2192[l], [ng]\u2192null. Finals: [ang]\u2192[an], [ng]\u2192[m] (syllabic nasal).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "3.3"
},
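{
"text": "The variant-generating substitutions listed above can be applied mechanically when generating surface-form pronunciations. The sketch below is illustrative only (the empty string encodes the null initial); it enumerates the syllable variants produced by these rules.

```python
# Substitution rules observed to correct many recognition errors.
INITIAL_RULES = {'gw': ['g'], 'n': ['l'], 'ng': ['']}   # '' = null initial
FINAL_RULES = {'ang': ['an'], 'ng': ['m']}              # syllabic nasal

def syllable_variants(initial, final):
    # All surface-form (initial, final) pairs, baseform included.
    initials = [initial] + INITIAL_RULES.get(initial, [])
    finals = [final] + FINAL_RULES.get(final, [])
    return [(i, f) for i in initials for f in finals]

print(syllable_variants('gw', 'ang'))
# the baseform plus [gw]->[g], [ang]->[an] and their combination
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "3.3"
},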
{
"text": "These observations match well with the findings in sociolinguistic studies on Cantonese phonology (Section 2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "3.3"
},
{
"text": "Unlike phone changes, a sound change cannot be described as a simple substitution of one phoneme for another. It is regarded as a partial change from the baseform phoneme to a surface-form phoneme [Liu and Fung 2003]. The approaches presented below attempt to refine the acoustic models to handle the acoustic variation caused by sound changes. The acoustic models are continuous-density HMMs. The output probability density function (pdf) at each HMM state is a mixture of Gaussian distributions. The use of multiple mixture components is intended to describe complex acoustic variability. The acoustic models trained using only the baseform pronunciations are referred to as baseform models. Each baseform phoneme may have different surface-form realizations. The acoustic models representing these surface-form phonemes are referred to as surface-form models. A baseform model does not reflect the acoustic properties of the relevant surface-form phonemes. One way of dealing with this deficiency is through the sharing of Gaussian mixture components among the baseform and surface-form models. In [Sara\u00e7lar et al. 2000], a state-level pronunciation model (SLPM) was proposed. It allows the HMM states of a baseform model to share the output densities of its surface-form phonemes. A state-to-state alignment was obtained from decision-tree PMs, and the most frequently confused state pairs were involved in parameter sharing. In [Liu and Fung 2004], the method of phonetic mixture tying was applied to deal with sound changes. A set of so-called extended phone units were derived from acoustic training data to describe the most prominent phonetic confusion. These units were then modeled by mixture tying with the baseform models. In this study, we investigate both the sharing and adaptation of the acoustic model parameters at the mixture level [Kam et al. 2003].",
"cite_spans": [
{
"start": 197,
"end": 215,
"text": "[Liu and Fung 2003",
"ref_id": "BIBREF20"
},
{
"start": 1110,
"end": 1132,
"text": "[Sara\u00e7lar et al. 2000]",
"ref_id": "BIBREF26"
},
{
"start": 1443,
"end": 1462,
"text": "[Liu and Fung 2004]",
"ref_id": "BIBREF21"
},
{
"start": 1864,
"end": 1880,
"text": "[Kam et al. 2003",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Handling Sound Change by Acoustic Model Refinement",
"sec_num": "4."
},
{
"text": "First of all, the states of the baseform and surface-form models are aligned. It is assumed that both models have the same number of states, so that state j of the baseform model is aligned with state j of the surface-form model. Consider a baseform phoneme B. The output pdf at state j is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},
{
"text": "where M is the number of Gaussian mixture components and w_jm is the weight of the m-th mixture component. The baseform output pdf can be modified to include the contributions from the surface-form states",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d(f,g)=\\frac{1}{2}\\,\\mathrm{trace}\\big\\{(\\Sigma_f^{-1}+\\Sigma_g^{-1})(\\mu_f-\\mu_g)(\\mu_f-\\mu_g)^{T}+\\Sigma_f^{-1}\\Sigma_g+\\Sigma_g^{-1}\\Sigma_f-2\\mathrm{I}\\big\\},",
"eq_num": "(7)"
}
],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},
{
"text": "where \u00b5 and \u03a3 denote the mean vectors and the covariance matrices of the two distributions, respectively, and I is the identity matrix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},
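{
"text": "For the diagonal covariances commonly used in HMM output densities, the closed-form symmetric KLD reduces to a per-dimension sum. The following sketch (our illustration, not the authors' code) evaluates it for two diagonal Gaussians.

```python
import numpy as np

def symmetric_kld(mu_f, var_f, mu_g, var_g):
    # 0.5 * trace{(S_f^-1 + S_g^-1)(mu_f - mu_g)(mu_f - mu_g)^T
    #             + S_f^-1 S_g + S_g^-1 S_f - 2I}, diagonal case.
    mu_f, var_f = np.asarray(mu_f, float), np.asarray(var_f, float)
    mu_g, var_g = np.asarray(mu_g, float), np.asarray(var_g, float)
    diff2 = (mu_f - mu_g) ** 2
    return 0.5 * np.sum((1 / var_f + 1 / var_g) * diff2
                        + var_g / var_f + var_f / var_g - 2.0)

# Identical distributions give zero divergence.
print(symmetric_kld([0., 0.], [1., 1.], [0., 0.], [1., 1.]))  # 0.0
```

Note that the measure is symmetric in f and g, which is why it is suitable for comparing baseform and surface-form states without preferring a direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},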
{
"text": "As a result, for this pair of baseform and surface-form states, each Gaussian component of the baseform state is associated with k surface-form components, as illustrated in Figure 3 . The centroid of these k components is computed. If the baseform B has n surface forms, there will be n such centroids. These surface-form centroids and the corresponding baseform component are weighted by the VP and together produce a new centroid, which is taken as the adapted baseform component. In this way, the adapted model is expected to shift towards the surface-form phonemes. The extent of such a shift depends on the VP. The mean and covariance of the centroid of k weighted Gaussian components can be found by minimizing the following weighted divergence",
"cite_spans": [],
"ref_spans": [
{
"start": 162,
"end": 170,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},
{
"text": "where f_n denotes the n-th component and a_n is the corresponding weighting coefficient. Assuming diagonal covariances, the weighted centroid is given as [Myrvoll and Soong 2003]",
"cite_spans": [
{
"start": 150,
"end": 174,
"text": "[Myrvoll and Soong 2003]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\mu'_c(i)=\\Big[\\sum_{n=1}^{k}a_n\\big(\\Sigma_n^{-1}(i)+\\Sigma_c^{-1}(i)\\big)\\Big]^{-1}\\sum_{n=1}^{k}a_n\\big(\\Sigma_n^{-1}(i)+\\Sigma_c^{-1}(i)\\big)\\mu_n(i),\\qquad \\Sigma'_c(i)=\\Big[\\Big(\\sum_{n=1}^{k}a_n\\Sigma_n^{-1}(i)\\Big)^{-1}\\sum_{n=1}^{k}a_n\\big(\\Sigma_n(i)+(\\mu_n(i)-\\mu'_c(i))^2\\big)\\Big]^{1/2}.",
"eq_num": "(9)"
}
],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},
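{
"text": "Because the centroid mean and variance depend on each other, they can be computed per dimension with a simple fixed-point iteration. The sketch below is an illustrative implementation for diagonal covariances under the weighted symmetric-KLD criterion, not the authors' code; the initialization and iteration count are our assumptions.

```python
import numpy as np

def weighted_centroid(weights, means, variances, iters=50):
    # Centroid of k diagonal Gaussians minimizing the weighted symmetric KLD.
    a = np.asarray(weights, float)[:, None]        # (k, 1)
    mu = np.asarray(means, float)                  # (k, d)
    var = np.asarray(variances, float)             # (k, d)
    mu_c = (a * mu).sum(0) / a.sum()               # start at the weighted mean
    var_c = (a * var).sum(0) / a.sum()
    for _ in range(iters):
        w = a * (1 / var + 1 / var_c)              # precision-based weights
        mu_c = (w * mu).sum(0) / w.sum(0)          # mean update
        var_c = np.sqrt((a * (var + (mu - mu_c) ** 2)).sum(0)
                        / (a / var).sum(0))        # variance update
    return mu_c, var_c

mu_c, var_c = weighted_centroid([0.5, 0.5], [[0.], [2.]], [[1.], [1.]])
print(mu_c, var_c)   # mean lands at 1.0 by symmetry
```

For two equally weighted unit-variance components, the centroid mean lies midway between them, while the centroid variance grows to absorb the spread of the component means.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": "4.1"
},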
{
"text": "Table 7 gives the recognition results attained with the two methods of acoustic model refinement. The VP threshold for surface-form prediction is set at 0.05. Both approaches improve recognition performance. The sharing of mixture components appears to be more effective than adaptation. However, this comes at the cost of a substantial increase in model complexity. The baseline acoustic models have a total of 32,144 Gaussian components. The adaptation approach retains the same number of Gaussian components, whereas the models obtained with the sharing approach have 37,505 components, 17% more than the baseline. If an equal number of components is used in the baseline acoustic models, the baseline word error rate is reduced to 24.34%, and the benefit of sharing mixture components is only marginal. With the adaptation approach, the baseform pdf is shifted towards the corresponding surface forms. If a surface-form pdf is far away from the baseform one, the extent of the modification will be substantial, and consequently the modified pdf may fail to model the original baseform. On the other hand, the sharing approach has the problem of undesirably including redundant components in the baseform models. We therefore combine the two approaches: adaptation is performed using the surface-form components that are close to the baseform, while the relatively distant components are used for sharing.",
"cite_spans": [],
"ref_spans": [
{
"start": 47,
"end": 54,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Figure 3. Mapping between baseform and surfaceform mixture components",
"sec_num": null
},
{
"text": "The values of the KLD between the baseform pdf and the nearest surface-form pdf have been analyzed. As illustrative examples, the histograms of the KLD at different states between [aak] (baseform) and [aa] (surface form), and between [aak] and [aat], are shown in Figure 4 . There are two main types of KLD distributions: 1) concentration around small values (e.g., states 1 and 2 of the pair \"[aak]\u2192[aa]\"), and 2) a wide range of values (e.g., states 3 to 5 of the pair \"[aak]\u2192[aa]\"). A small KLD means that the mixture components of the baseform and surface forms are similar. In this case, the baseform components are adapted towards the surface form. In the case of a widely distributed KLD, the surface-form components should not be used to adapt the baseform components but rather should be kept alongside the modified baseform model in order to explicitly characterize irregular pronunciations. In this way, a combined approach to baseform model refinement is formulated.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 275,
"text": "Figure 4",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "4.3"
},
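{
"text": "The combined refinement rule described above can be sketched as a simple partition of the surface-form components by their KLD to the baseform: close components (small KLD) are used for adaptation, distant ones are kept for sharing. The code below is a toy one-dimensional illustration; the threshold value and the helper names are our assumptions, not the paper's.

```python
def split_by_kld(baseform, surface_forms, kld, threshold=2.0):
    # Partition surface-form components into adapt/share sets by KLD.
    adapt, share = [], []
    for s in surface_forms:
        (adapt if kld(baseform, s) < threshold else share).append(s)
    return adapt, share

def kld_1d(f, g):
    # Symmetric KLD between 1-D Gaussians given as (mean, variance).
    (mf, vf), (mg, vg) = f, g
    return 0.5 * ((1 / vf + 1 / vg) * (mf - mg) ** 2 + vg / vf + vf / vg - 2)

base = (0.0, 1.0)
adapt, share = split_by_kld(base, [(0.2, 1.0), (3.0, 1.0)], kld_1d)
print(adapt, share)   # near component adapted, far component shared
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "4.3"
},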
{
"text": "Despite these intentions, the combined use of sharing and adaptation does not lead to favorable experimental results. With a total of 34,042 mixture components in the refined acoustic models, the word error rate is 24.57%. The baseline performance is 24.93% with the same model complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Discussion",
"sec_num": "4.3"
},
{
"text": "In this study, we have classified pronunciation variations into phone changes and sound changes. However, these classifications are not well defined, especially for sound changes. There is no clear boundary separating a phoneme substitution (phone change) from a phoneme modification (sound change). This may partially explain why the proposed techniques for handling sound changes are not as effective as the methods for handling phone changes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "The use of a PVD is intuitive and straightforward to implement. It can reduce the word error rate noticeably. When constructing a PVD, the value of the VP threshold needs to be determined carefully. While an overly tight threshold admits too few variants to show any effect, lax control of the PVD size leads not only to long recognition times but also to performance degradation. The method of dynamic search space expansion during decoding can bring about the same degree of performance improvement as the PVD. However, the training of context-dependent pronunciation prediction models requires a large amount of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "The methods of acoustic model refinement do not improve recognition performance as much as we expected. A similar effect can be achieved by simply using more mixture components. Indeed, more mixture components can describe more complex acoustic variations, including the variations caused by alternative pronunciations. The sharing of mixture components is equivalent to having more mixture components right from the beginning of acoustic model training. Adaptation of mixture components is not as effective as increasing the number of mixture components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
},
{
"text": "For any of the above methods to be effective, the accurate and efficient acquisition of pronunciation variation information is most critical. Manual labeling is impractical. Automatic detection of pronunciation variations is still an open problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "This research is partially supported by a Research Grant from the Hong Kong Research Grants Council (Ref: CUHK4206/01E).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": "where S_n denotes the n-th surface form of B, N is the total number of surface forms, and VP(S_n, B) is the variation probability of S_n with respect to the baseform B. More surface-form pronunciations bring more mixture components into the modified baseform state. As the number of mixture components changes, re-estimation of the mixture weights is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Large-Vocabulary Continuous Speech Recognition",
"sec_num": null
},
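{
"text": "A minimal sketch of the mixture-sharing step described above is given below: the modified baseform state pools its own components with those of its surface forms, scaling the component weights by the variation probabilities and re-normalizing. The self-weight of 1 minus the summed VPs and the data layout are our illustrative assumptions, not the paper's exact re-estimation procedure.

```python
def share_mixtures(base_components, surface_states, vps):
    # base_components: list of (weight, gaussian); surface_states: dict
    # name -> [(weight, gaussian)]; vps: dict name -> VP(surface, base).
    self_vp = 1.0 - sum(vps.values())
    pooled = [(self_vp * w, g) for w, g in base_components]
    for name, comps in surface_states.items():
        pooled += [(vps[name] * w, g) for w, g in comps]
    total = sum(w for w, _ in pooled)
    return [(w / total, g) for w, g in pooled]   # re-normalized mixture

mix = share_mixtures([(1.0, 'gB')],
                     {'S1': [(0.6, 'g1'), (0.4, 'g2')]},
                     {'S1': 0.2})
print(mix)   # three components, weights summing to 1
```
",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sharing of Mixture Components",
"sec_num": null
},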
{
"text": "Although sharing mixture components yields an acoustically richer model, it also greatly increases the model size, requiring more memory and higher computational complexity. Moreover, if the baseform and surface-form mixture components are very similar, including them all in the modified baseform is superfluous. We propose to refine the baseform acoustic models through parameter adaptation, so that the total number of model parameters remains unchanged. As in the mixture-sharing approach, the states of the baseform and surface-form models are aligned, and the surface forms are generated from the IF confusion matrix. Consider the aligned states of a baseform phoneme and one of its surface forms. The \"distance\" between two Gaussian distributions is calculated using the Kullback-Leibler divergence (KLD) [Myrvoll and Soong 2003]. Given two multivariate Gaussian distributions f and g, the symmetric KLD has the following closed form",
"cite_spans": [
{
"start": 809,
"end": 833,
"text": "[Myrvoll and Soong 2003]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation of Mixture Components",
"sec_num": "4.2"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improved acoustic-phonetic modeling in Philips' dictation system by handling liaisons and multiple pronunciations",
"authors": [
{
"first": "X",
"middle": [],
"last": "Aubert",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dugast",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of 1995 European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "767--770",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aubert, X., and C. Dugast, \"Improved acoustic-phonetic modeling in Philips' dictation system by handling liaisons and multiple pronunciations,\" In Proceedings of 1995 European Conference on Speech Communication and Technology, pp.767 -770.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modern Cantonese Phonology",
"authors": [
{
"first": "R",
"middle": [
"S"
],
"last": "Bauer",
"suffix": ""
},
{
"first": "P",
"middle": [
"K"
],
"last": "Benedict",
"suffix": ""
}
],
"year": 1997,
"venue": "Trends in Linguistics",
"volume": "102",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bauer, R.S., and P.K. Benedict, Trends in Linguistics, Studies and Monographs 102, Modern Cantonese Phonology, Mouton de Gruyter, Berlin, New York, 1997.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Quantitative Study of Sociolinguistic Variation in Cantonese",
"authors": [
{
"first": "D",
"middle": [
"S"
],
"last": "Bourgerie",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bourgerie, D.S., A Quantitative Study of Sociolinguistic Variation in Cantonese, PhD Thesis, The Ohio State University, 1990.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatic generation of pronunciation lexicons for Mandarin spontaneous speech",
"authors": [
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Venkataramani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Kamm",
"suffix": ""
},
{
"first": "T",
"middle": [
"F"
],
"last": "Zheng",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "U",
"middle": [],
"last": "Ruhi",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2001 International Conference on Acoustics, Speech and Signal Processing",
"volume": "",
"issue": "",
"pages": "569--572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byrne, W., V. Venkataramani, T. Kamm, T.F. Zheng, Z. Song, P. Fung, Y. Liu and U. Ruhi, \"Automatic generation of pronunciation lexicons for Mandarin spontaneous speech,\" In Proceedings of the 2001 International Conference on Acoustics, Speech and Signal Processing, l.1, pp.569 -572.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Grammar of Spoken Chinese",
"authors": [
{
"first": "Y",
"middle": [
"R"
],
"last": "Chao",
"suffix": ""
}
],
"year": 1965,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao, Y.R., A Grammar of Spoken Chinese, University of California Press, 1965.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An Efficient Decoding Method for Continuous Speech Recognition Based on a Tree-Structured Lexicon",
"authors": [
{
"first": "W",
"middle": [
"N"
],
"last": "Choi",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choi, W.N., An Efficient Decoding Method for Continuous Speech Recognition Based on a Tree-Structured Lexicon, MPhil Thesis, The Chinese University of Hong Kong, 2001. CUPDICT: Cantonese Pronunciation Dictionary (Electronic Version), Department of Electronic Engineering, The Chinese University of Hong Kong, http://dsp.ee.cuhk.edu.hk/speech/, 2003.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multi-level decision trees for static and dynamic pronunciation models",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of 1999 European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "463--466",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fosler-Lussier, E., \"Multi-level decision trees for static and dynamic pronunciation models,\" In Proceedings of 1999 European Conference on Speech Communication and Technology, pp.463 -466.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Studies in Yue Dialects 1: Phonology of Cantonese",
"authors": [
{
"first": "O.-K",
"middle": [
"Y"
],
"last": "Hashimoto",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hashimoto, O.-K. Y., Studies in Yue Dialects 1: Phonology of Cantonese, Cambridge University Press, 1972.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "(n-) and (l-) in Hong Kong Cantonese: A Sociolinguistic Case Study",
"authors": [
{
"first": "M",
"middle": [
"T"
],
"last": "Ho",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ho, M.T., (n-) and (l-) in Hong Kong Cantonese: A Sociolinguistic Case Study, MA Thesis, University of Essex, 1994.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Spoken Language Processing: A Guide to Theory, Algorithm and System Development",
"authors": [
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
},
{
"first": "H",
"middle": [
"W"
],
"last": "Hon",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huang, X., A. Acero, and H.W. Hon, Spoken Language Processing: A Guide to Theory, Algorithm and System Development, Prentice Hall PTR., 2001.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pronunciation Modeling for Cantonese Speech Recognition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kam",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kam, P., Pronunciation Modeling for Cantonese Speech Recognition, MPhil Thesis, The Chinese University of Hong Kong, 2003.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Modeling pronunciation variation for Cantonese speech recognition",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kam",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ISCA ITR-Workshop on Pronunciation Modeling and Lexicon Adaptation",
"volume": "",
"issue": "",
"pages": "12--17",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kam, P., and T. Lee, \"Modeling pronunciation variation for Cantonese speech recognition,\" In Proceedings of ISCA ITR-Workshop on Pronunciation Modeling and Lexicon Adaptation 2002, pp.12-17.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Modeling Cantonese pronunciation variation by acoustic model refinement",
"authors": [
{
"first": "P",
"middle": [],
"last": "Kam",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Soong",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of 2003 European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "1477--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kam, P., T. Lee and F. Soong, \"Modeling Cantonese pronunciation variation by acoustic model refinement,\" In Proceedings of 2003 European Conference on Speech Communication and Technology, pp.1477 -1480.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving the performance of a Dutch CSR by modeling within-word and cross-word pronunciation variation",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Kessens",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Wester",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Strik",
"suffix": ""
}
],
"year": 1999,
"venue": "Speech Communication",
"volume": "29",
"issue": "",
"pages": "193--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kessens, J.M., M. Wester and H. Strik, \"Improving the performance of a Dutch CSR by modeling within-word and cross-word pronunciation variation,\" Speech Communication, 29, pp.193 -207, 1999.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A data driven method for modeling pronunciation variation",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Kessens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cucchiarini",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Strik",
"suffix": ""
}
],
"year": 2003,
"venue": "Speech Communication",
"volume": "40",
"issue": "",
"pages": "517--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kessens, J.M., C. Cucchiarini and H. Strik, \"A data driven method for modeling pronunciation variation,\" Speech Communication, 40, pp.517 -534, 2003.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Spoken language resources for Cantonese speech processing",
"authors": [
{
"first": "T",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "W",
"middle": [
"K"
],
"last": "Lo",
"suffix": ""
},
{
"first": "P",
"middle": [
"C"
],
"last": "Ching",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2002,
"venue": "Speech Communication",
"volume": "36",
"issue": "3-4",
"pages": "327--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, T., W.K. Lo, P.C. Ching and H. Meng, \"Spoken language resources for Cantonese speech processing,\" Speech Communication, 36, No.3-4, pp.327-342, 2002",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Linguistic Society of Hong Kong (LSHK), Hong Kong Jyut Ping Characters Table",
"authors": [],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linguistic Society of Hong Kong (LSHK), Hong Kong Jyut Ping Characters Table (\u7cb5\u8a9e\u62fc\u97f3 \u5b57\u8868). Linguistic Society of Hong Kong Press (\u9999\u6e2f\u8a9e\u8a00\u5b78\u6703\u51fa\u7248), 1997.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Mandarin accent adaptation based on context-independent/context-dependent pronunciation modeling",
"authors": [
{
"first": "M",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2000 International Conference on Acoustics, Speech and Signal Processing",
"volume": "2",
"issue": "",
"pages": "1025--1028",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, M., B. Xu, T. Huang, Y. Deng and C. Li, \"Mandarin accent adaptation based on context-independent/context-dependent pronunciation modeling,\" In Proceedings of the 2000 International Conference on Acoustics, Speech and Signal Processing, 2, pp.1025-1028.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Pronunciation Modeling for Spontaneous Mandarin Speech Recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Y., Pronunciation Modeling for Spontaneous Mandarin Speech Recognition, PhD Thesis, The Hong Kong University of Science and Technology, 2002.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Modeling partial pronunciation variations for spontaneous Mandarin speech recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2003,
"venue": "Computer Speech and Language",
"volume": "17",
"issue": "",
"pages": "357--379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Y. and P. Fung, \"Modeling partial pronunciation variations for spontaneous Mandarin speech recognition,\" Computer Speech and Language, 17, 2003, pp.357 -379.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "State-dependent phonetic tied mixtures with pronunciation modeling for spontaneous speech recognition",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Fung",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE Trans. Speech and Audio Processing",
"volume": "12",
"issue": "4",
"pages": "351--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Y. and P. Fung, \"State-dependent phonetic tied mixtures with pronunciation modeling for spontaneous speech recognition,\" IEEE Trans. Speech and Audio Processing, 12(4), 2004, pp.351 -364.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Optimal clustering of multivariate normal distributions using divergence and its application to HMM adaptation",
"authors": [
{
"first": "T",
"middle": [
"A"
],
"last": "Myrvoll",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Soong",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2003 International Conference on Acoustics, Speech and Signal Processing",
"volume": "1",
"issue": "",
"pages": "552--555",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myrvoll, T.A. and F. Soong, \"Optimal clustering of multivariate normal distributions using divergence and its application to HMM adaptation\", In Proceedings of the 2003 International Conference on Acoustics, Speech and Signal Processing, 1, pp.552 -555.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Stochastic pronunciation modeling from hand-labelled phonetic corpora",
"authors": [
{
"first": "M",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Finke",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ljolje",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mcdonough",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Nock",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sara\u00e7lar",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Wooters",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zavaliagkos",
"suffix": ""
}
],
"year": 1999,
"venue": "Speech Communication",
"volume": "29",
"issue": "",
"pages": "209--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riley, M., W. Byrne, M. Finke, S. Khudanpur, A. Ljolje, J. McDonough, H. Nock, M. Sara\u00e7lar, C. Wooters and G. Zavaliagkos, \"Stochastic pronunciation modeling from hand-labelled phonetic corpora,\" Speech Communication, 29, 1999, pp.209 -224.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Pronunciation ambiguity vs. pronunciation variability in speech recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sara\u00e7lar",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 2000 International Conference on Acoustics, Speech and Signal Processing",
"volume": "3",
"issue": "",
"pages": "1679--1682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara\u00e7lar, M. and S. Khudanpur, \"Pronunciation ambiguity vs. pronunciation variability in speech recognition,\" In Proceedings of the 2000 International Conference on Acoustics, Speech and Signal Processing, 3, pp.1679-1682.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Pronunciation modeling by sharing Gaussian densities across phonetic models",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sara\u00e7lar",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Nock",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2000,
"venue": "Computer Speech and Language",
"volume": "14",
"issue": "",
"pages": "137--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara\u00e7lar, M., H. Nock and S. Khudanpur, \"Pronunciation modeling by sharing Gaussian densities across phonetic models,\" Computer Speech and Language, 14, 2000, pp.137 -160.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Modeling pronunciation variation for ASR: a survey of the literature",
"authors": [
{
"first": "H",
"middle": [],
"last": "Strik",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cucchiarini",
"suffix": ""
}
],
"year": 1999,
"venue": "Speech Communication",
"volume": "29",
"issue": "",
"pages": "225--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strik, H. and C. Cucchiarini, \"Modeling pronunciation variation for ASR: a survey of the literature,\" Speech Communication, 29, 1999, pp.225 -246.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "MLLR adaptation techniques for pronunciation modeling",
"authors": [
{
"first": "V",
"middle": [],
"last": "Venkataramani",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Venkataramani, V. and W. Byrne, \"MLLR adaptation techniques for pronunciation modeling,\" In Proceedings of IEEE Workshop on Automatic Speech Recognition and Understanding 2001, CD-ROM.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Pronunciation modeling for ASR -knowledge-based and data-derived methods",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wester",
"suffix": ""
}
],
"year": 2003,
"venue": "Computer Speech and Language",
"volume": "17",
"issue": "",
"pages": "69--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wester, M., \"Pronunciation modeling for ASR - knowledge-based and data-derived methods,\" Computer Speech and Language, 17, 2003, pp.69-85.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Large Vocabulary Continuous Speech Recognition for Cantonese",
"authors": [
{
"first": "Y",
"middle": [
"W"
],
"last": "Wong",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wong, Y.W., Large Vocabulary Continuous Speech Recognition for Cantonese, MPhil Thesis, The Chinese University of Hong Kong, 2000.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The composition of a Cantonese syllable. [] means optional."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The first syllable is formed from the Initial [ng] and the Final [o], with Tone 5. The second syllable is formed from the Initial [m] and the Final [un], with Tone 4."
},
"FIGREF6": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "KLD distributions for variation pairs [aak]\u2192[aa] and [aak]\u2192[aat]"
},
"TABREF0": {
"html": null,
"type_str": "table",
"content": "<table/>",
"num": null,
"text": "Table 1 and Table 2 list the Initials and Finals of Cantonese. They are labeled using Jyut Ping symbols."
},
"TABREF1": {
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Jyut Ping symbols Manner of Articulation</td><td>Place of Articulation</td></tr><tr><td>[b]</td><td>Plosive, unaspirated</td><td>Labial</td></tr><tr><td>[d]</td><td>Plosive, unaspirated</td><td>Alveolar</td></tr><tr><td>[g]</td><td>Plosive, unaspirated</td><td>Velar</td></tr><tr><td>[p]</td><td>Plosive, aspirated</td><td>Labial</td></tr><tr><td>[t]</td><td>Plosive, aspirated</td><td>Alveolar</td></tr><tr><td>[k]</td><td>Plosive, aspirated</td><td>Velar</td></tr><tr><td>[gw]</td><td>Plosive, unaspirated, lip-rounded</td><td>Velar, labial</td></tr><tr><td>[kw]</td><td>Plosive, aspirated, lip-rounded</td><td>Velar, labial</td></tr><tr><td>[z]</td><td>Affricate, unaspirated</td><td>Alveolar</td></tr><tr><td>[c]</td><td>Affricate, aspirated</td><td>Alveolar</td></tr><tr><td>[s]</td><td>Fricative</td><td>Alveolar</td></tr><tr><td>[f]</td><td>Fricative</td><td>Dental-labial</td></tr><tr><td>[h]</td><td>Fricative</td><td>Vocal</td></tr><tr><td>[j]</td><td>Glide</td><td>Alveolar</td></tr><tr><td>[w]</td><td>Glide</td><td>Labial</td></tr><tr><td>[l]</td><td>Liquid</td><td>Lateral</td></tr><tr><td>[m]</td><td>Nasal</td><td>Labial</td></tr><tr><td>[n]</td><td>Nasal</td><td>Alveolar</td></tr><tr><td>[ng]</td><td>Nasal</td><td>Velar</td></tr></table>",
"num": null,
"text": ""
},
"TABREF2": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>Word</td><td>Chinese characters</td><td>Base syllables</td><td>Initial &amp; Final</td><td>Tone</td></tr><tr><td>\u6211\u5011</td><td>\u6211 \u5011</td><td>ngo mun</td><td>[ng] [o] [m] [un]</td><td>5 4</td></tr></table>",
"num": null,
"text": ""
},
"TABREF3": {
"html": null,
"type_str": "table",
"content": "<table><tr><td rowspan=\"3\">Initial consonants</td><td colspan=\"2\">[n] ~ [l]</td><td>Inter-change between nasal and lateral Initials</td></tr><tr><td colspan=\"2\">[ng] ~ null</td><td>Inter-change between velar nasal and null Initial</td></tr><tr><td colspan=\"2\">[gw] \u2192 [g]</td><td>Change from labialized velar to delabialized velar before back-round vowel [o]</td></tr><tr><td>Syllabic nasal</td><td colspan=\"2\">[ng] \u2192 [m]</td><td>Change from velar nasal to bilabial nasal</td></tr><tr><td rowspan=\"3\">Final consonants</td><td colspan=\"2\">-ng \u2192 -n</td><td>Change from velar nasal coda to dental nasal coda</td></tr><tr><td colspan=\"2\">-k ~ -t</td><td rowspan=\"2\">Inter-change between velar stop coda and dental or glottal stop coda</td></tr><tr><td colspan=\"2\">-k ~ -p</td></tr><tr><td colspan=\"4\">It was found that [n]\u2192[l], [ng]\u2192null, and [gw]\u2192[g] correlate with the sex and age of a speaker</td></tr></table>",
"num": null,
"text": ""
},
"TABREF4": {
"html": null,
"type_str": "table",
"content": "<table><tr><td>can be made as W* = argmax_W P(W|O) = argmax_W P(O|W) P(W).</td></tr><tr><td>Figure 2. A typical LVCSR system: input speech \u2192 front-end processing \u2192 acoustic feature vectors \u2192 decoder \u2192 recognized word sequence, with the decoder drawing on three knowledge sources: the language model P(W), the pronunciation lexicon P(B|W), and the acoustic model P(O|B)</td></tr></table>",
"num": null,
"text": "Large-Vocabulary Continuous Speech Recognition"
},
"TABREF7": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Baseline</td><td>0.02</td><td>0.05</td><td>VP threshold 0.10</td><td>0.15</td><td>0.20</td></tr><tr><td>Word error rate (%)</td><td>25.34</td><td>23.91</td><td>23.49</td><td>23.70</td><td>23.64</td><td>23.58</td></tr><tr><td>No. of word entries in the PVD</td><td>6,451</td><td>20,840</td><td>8,568</td><td>7,356</td><td>7,210</td><td>7,171</td></tr></table>",
"num": null,
"text": ""
},
"TABREF8": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Baseline</td><td colspan=\"2\">VP threshold 0.05 0.2</td></tr><tr><td>Word error rate (%)</td><td>25.34</td><td>23.53</td><td>23.27</td></tr></table>",
"num": null,
"text": ""
},
"TABREF9": {
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>Baseline</td><td>Sharing</td><td>Adaptation</td></tr><tr><td>Word error rate (%)</td><td>25.34</td><td>23.96</td><td>24.70</td></tr></table>",
"num": null,
"text": ""
}
}
}
}