| { |
| "paper_id": "O08-5004", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:02:11.984578Z" |
| }, |
| "title": "An HNM Based Scheme for Synthesizing Mandarin Syllable Signal", |
| "authors": [ |
| { |
| "first": "Hung-Yan", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Taiwan University of Science and Technology", |
| "location": { |
| "addrLine": "43 Keelung Rd., Sec. 4", |
| "settlement": "Taipei", |
| "country": "Taiwan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Yan-Zuo", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National Taiwan University of Science and Technology", |
| "location": { |
| "addrLine": "43 Keelung Rd., Sec. 4", |
| "settlement": "Taipei", |
| "country": "Taiwan" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In this paper, an HNM based scheme is developed to synthesize Mandarin syllable signals. With this scheme, a Mandarin syllable can be recorded just once, and diverse prosodic characteristics can be synthesized for it without suffering significant signal-quality degradation. In our scheme, a synthetic syllable's duration is subdivided to its comprising phonemes and a piece-wise linear mapping function is constructed. With this mapping function, a control point on a synthetic syllable can be mapped to locate its corresponding analysis frames. Then, the analysis frames' HNM parameters are interpolated to obtain the HNM parameters for the control point. Furthermore, for pitch-height adjusting, another timbre-preserving interpolation is performed on the HNM parameters of a control point. Thereafter, signal samples are synthesized according to the HNM synthesis equations rewritten here. This HNM based scheme has been programmed to synthesize Mandarin speech. According to the perception tests, our HNM based scheme is found to be apparently better than a PSOLA based scheme in signal clarity, i.e. much clearer and no reverberation.", |
| "pdf_parse": { |
| "paper_id": "O08-5004", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In this paper, an HNM based scheme is developed to synthesize Mandarin syllable signals. With this scheme, a Mandarin syllable can be recorded just once, and diverse prosodic characteristics can be synthesized for it without suffering significant signal-quality degradation. In our scheme, a synthetic syllable's duration is subdivided to its comprising phonemes and a piece-wise linear mapping function is constructed. With this mapping function, a control point on a synthetic syllable can be mapped to locate its corresponding analysis frames. Then, the analysis frames' HNM parameters are interpolated to obtain the HNM parameters for the control point. Furthermore, for pitch-height adjusting, another timbre-preserving interpolation is performed on the HNM parameters of a control point. Thereafter, signal samples are synthesized according to the HNM synthesis equations rewritten here. This HNM based scheme has been programmed to synthesize Mandarin speech. According to the perception tests, our HNM based scheme is found to be apparently better than a PSOLA based scheme in signal clarity, i.e. much clearer and no reverberation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Since the introduction of PSOLA (pitch synchronous overlap and add) [Moulines et al. 1900] , it has been widely used to synthesize speech signal. However, the signal quality of the synthetic speech by PSOLA is not stable. The quality will be degraded a lot if the pitch-contours or durations of the recorded syllables are considerably changed [Dutoit 1997 ].", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 90, |
| "text": "[Moulines et al. 1900]", |
| "ref_id": null |
| }, |
| { |
| "start": 343, |
| "end": 355, |
| "text": "[Dutoit 1997", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Here, signal quality actually means signal clarity, i.e. a signal that is less reverberant and less noisy is better in quality. It may be argued that the prosodic characteristics of a syllable need only be slightly changed in a corpus-based approach [Chou 1999; Chang 2005] . This argument will hold only if a sufficiently large quantity of speech data is recorded and used. Otherwise, pitch contours between some adjacent syllables may not be smoothly connected and the speaking rate may not be kept constant within a synthetic sentence. Then, pitch-contours and durations will still need to be changed considerably. In addition, the potential for economically transferring a speech synthesis scheme from Mandarin to another language (e.g., Min-nan or Hakka) is an important consideration factor for us. Therefore, we tend not to adopt an expensive approach, such as corpus-based re-sequencing.", |
| "cite_spans": [ |
| { |
| "start": 250, |
| "end": 261, |
| "text": "[Chou 1999;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 262, |
| "end": 273, |
| "text": "Chang 2005]", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Mandarin is a tonal language, and the distinction of the five tones of Mandarin mainly relies on the height and shape of a syllable's pitch-contour. When a signal-model based approach is adopted, the pitch-contour and duration of a syllable inevitably needs considerable change. Thus, the synthesis method, PSOLA, will not be adequate for use, and another suitable technique should be found or developed. Recently, we have found that HNM (harmonic-plus-noise model) is a good base because it can be improved to synthesize Mandarin syllable signals with much higher signal quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "HNM was proposed by Y. Stylianou to model speech signals to retain high signal quality after such processing as coding and synthesis [Stylianou 1996; Stylianou 2005] . It may be viewed as improving the sinusoidal model [Quatieri 2002 ] to better model the noise signal components in the higher frequency band of speech signal. In HNM, an MVF (maximum voiced frequency) detection method is provided to divide a speech frame's spectrum into lower and higher frequency parts. The lower-frequency part is modeled as a sum of harmonic partials as in sinusoidal model. In contrast, the higher-frequency part is modeled with a smoothed spectrum envelope that is represented with some cepstrum coefficients.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 149, |
| "text": "[Stylianou 1996;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 150, |
| "end": 165, |
| "text": "Stylianou 2005]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 219, |
| "end": 233, |
| "text": "[Quatieri 2002", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "When applying HNM to synthesize Mandarin syllables, we find some issues that are not clearly explained or solved in the literature on HNM. The first issue (not clearly explained) is how to keep the timbre of synthetic syllables consistent, i.e. the timbre consistent issue. Note that we intend to record each of the 408 different Mandarin syllables just once then modify the height and shape of a recorded syllable's pitch-contour to that of a different tone's. When the pitch-contour of a syllable to be synthesized is given, the parameter values of the harmonic partials should be adjusted in a way that the timbre can be kept consistent. The second issue is how to determine the HNM parameter values for a control point [Dodge 1997; Moore 1990] placed at the synthetic time axis (of a synthetic syllable), i.e. the parameter determination issue. In speech synthesis, one must adjust a recorded syllable's duration to meet the duration requirement given by the prosodic parameter generation unit. When a control point at the synthetic time axis is mapped to a time point between two analysis frames of a recorded syllable, some method of interpolation is needed to determine the HNM parameter values for the control point. In addition, the third issue is how to warp the time axis of a synthetic syllable in order that more fluent syllables and sentences can be synthesized, i.e. the time warping issue. This issue is more relevant to speech synthesis than HNM. When a syllable's duration needs to be lengthened or shortened, a simple time warping method, i.e. linear warping, will usually result in lower perceived fluency.", |
| "cite_spans": [ |
| { |
| "start": 723, |
| "end": 735, |
| "text": "[Dodge 1997;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 736, |
| "end": 747, |
| "text": "Moore 1990]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this paper, the three issues mentioned above are investigated, and equations for signal synthesis with HNM are rewritten in a clearer notation. In addition, a system based on the extensions and rewritten equations for HNM signal synthesis is developed to synthesize Mandarin syllable signal. The main processing flow of the system is drawn in Figure 1 . When a syllable's signal is to be synthesized, its prosodic parameters' values are readily determined by the prosody unit. Hence, in the first block of Figure 1 , a synthetic syllable's time length can be planned and subdivided to its comprising phonemes. For example, the syllable /man/ has three phonemes, /m/, /a/, and /n/. Then, a piece-wise linear time mapping function is constructed to map the synthetic phonemes to their corresponding phonemes in the recorded syllables. In the second block of Figure 1 , control points are uniformly placed on the synthetic time axis. Then, HNM parameters' values for each control point are determined. In the following blocks, three types of signals are classified and synthesized separately. Here, the signal of a short unvoiced syllable-initial is directly copied from the recorded to the synthesized. The signal of a long unvoiced syllable-initial is synthesized as noise signal components in HNM while the signals of voiced initial and syllable-final are synthesized as the sum of both the harmonic and noise signal components. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 346, |
| "end": 354, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 509, |
| "end": 517, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 859, |
| "end": 867, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The issues of duration planning and time-axis mapping are not mentioned in the literature on HNM [Stylianou 1996; Stylianou 2005] . Mandarin syllables have the structure, C x VC n . The component, C x , may be null, a voiced consonant, or an unvoiced consonant while the component, C n , may be null or a nasal /n/ or /ng/. Also, the component, V, may be a vowel, diphthong, or triphthong. When C x is an unvoiced consonant, we classify it as a short-unvoiced (e.g. /b/, not aspirated) or long-unvoiced (e.g. /p/, aspirated). For a short-unvoiced, its signal will be directly copied from the initial part of the recorded syllable to the initial part of the synthetic syllable. This processing is indicated in the block at the right side of Figure 1 . However, for a long-unvoiced, its signal will be synthesized as the sum of noise signal components with HNM. This processing is indicated in the block at the left side of Figure 1 . In addition, C x is a voiced consonant or null, and it will be synthesized together with the syllable final, VC n , as the sum of both the harmonic and noise signal components.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 113, |
| "text": "[Stylianou 1996;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 114, |
| "end": 129, |
| "text": "Stylianou 2005]", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 740, |
| "end": 748, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 922, |
| "end": 930, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Phoneme Duration Planning and Time Axis Mapping", |
| "sec_num": "2." |
| }, |
| { |
| "text": "When a syllable is started with a short-unvoiced consonant, e.g. /bau/, the time length of the consonant is planned as the corresponding consonant's length in the recorded syllable. In contrast, when started with a long-unvoiced consonant, the length of the consonant is planned by multiplying its original length with a factor, Fu. The value of Fu is first computed as the synthetic syllable's length divided by its corresponding recorded syllable's length. However, the value, Fu, is restricted to the range from 0.6 to 1.4, i.e. set to 1.4 when larger than 1.4 and set to 0.6 when smaller than 0.6. After the length of the unvoiced part, Du, is determined, the length of the voiced part, Dv, is apparently the synthetic syllable's length minus Du.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme Duration Planning and Time Axis Mapping", |
| "sec_num": "2." |
| }, |
| { |
| "text": "To plan the lengths of the phonemes within the voiced part, consider the example syllable, /man/. Suppose that in the recorded signal of /man/, the three phonemes, /m/, /a/, and /n/, occupy Rm, Ra, and Rn seconds, respectively, and Rv = Rm + Ra + Rn. Also, suppose that Dm, Da, and Dn represent the time lengths of the three phonemes within the synthetic syllable, and Dv = Dm + Da + Dn. Note that Dm (or Rm) is used here to denote the time length of the initial voiced consonant of a syllable, Da (or Ra) denotes the time length of the vowel nucleus, and Dn (or Rn) denotes the time length of the final nasal consonant. In this study, the values of Dm, Da, and Dn are planned according to an observation. That is, the consonant-to-vowel duration ratio, (Rm + Rn) / Rv, will become smaller when the syllable is uttered within a sentence instead of uttered in isolation. The planning procedure is as below. In this procedure, the value of Dm is planned by multiplying a duration reduction rate, r, with the time ratio (Rm / Rv) of its counterpart, Rm, in the recorded syllable. In the same way, the value of Dn is planned. By trying to decrease the value of r iteratively, the values of Dm and Dn are decreased gradually, and the value of Da finally becomes sufficiently large. As to the initial value of r, i.e. 0.85, and the vowel duration threshold, i.e. 0.5, they are set according to analyzing some real spoken sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phoneme Duration Planning and Time Axis Mapping", |
| "sec_num": "2." |
| }, |
| { |
| "text": "If the structure of a syllable is same as /san/ or /an/, i.e. without voiced initial consonant, then the values of Rm and Dm can be set to zero directly. Similarly, if the structure of a syllable is the same /ma/, i.e. without an ending nasal, then the values of Rn and Dn can be set to zero directly. After the values of Dm, Da, and Dn are determined, a mapping function from the phonemes in the synthetic syllable to their corresponding phonemes in the recorded syllable can be established and used in the second block of Figure 1 . The mapping function adopted here is as depicted in Figure 2 . That is, it is a piece-wise linear function. Although a simple mapping function is adopted here, we think the fluency level of the synthetic speech can still be improved a lot. In the future, we will study the mapping problem between the source and synthetic syllables with a more systematic method. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 524, |
| "end": 532, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 587, |
| "end": 595, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Phoneme Duration Planning and Time Axis Mapping", |
| "sec_num": "2." |
| }, |
| { |
| "text": "In this paper, the source syllables are recorded at a sampling rate of 22,050Hz. In analyzing HNM parameters, frame size is set to 512 sample points (23.2ms), and frame shift is set to 256 sample points. However, in signal synthesis processing, the concept of a \"control point\" is adopted, which is commonly used in computer music synthesis [Dodge 1997; Moore 1990 ]. The term \"control point\" is used instead of \"frame\" because, in our scheme, the HNM parameters for a control point located at voiced part are obtained by interpolating the parameters from two corresponding analysis frames, i.e. not directly copying parameters from a frame into a control point (note that original HNM uses only direct copying). However, in synthesizing a long-unvoiced part, the HNM parameters of an analysis frame located at the unvoiced part are directly copied and used for a control point corresponding to it. These different manners of HNM parameter determination for voiced and long-unvoiced parts are illustrated in Figure 3 . From Figure 3 , it can be seen that the number of control points in the synthetic unvoiced part is same as the number of analysis frames in the recorded unvoiced part. Hence, the time axis is simply linearly shortened or lengthened. However, in the synthetic voiced part, adjacent control points are always placed 100 sample points (4.5ms) apart. Thus, the number of control points depends on the time length planned. Here, a fixed pace, 100 sample points, is adopted because an accurate control of spectrum progressing within the synthetic voiced part is intended. ", |
| "cite_spans": [ |
| { |
| "start": 341, |
| "end": 353, |
| "text": "[Dodge 1997;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 354, |
| "end": 364, |
| "text": "Moore 1990", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1008, |
| "end": 1016, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 1024, |
| "end": 1032, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Control Point Placement", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To determine the HNM parameter values for a control point within the synthetic voiced part, the first step is to do time-position mapping according to the constructed mapping function as shown in Figure 2 . Suppose the control point's time position, t s , on the synthetic time axis is mapped to t r on the recorded-syllable time axis. Then, we use the HNM parameters analyzed from the two frames numbered \u23a9 t r \u23ad and \u23a9 t r \u23ad+1 to interpolate out HNM parameters for the control point. Currently, we do the interpolation in a linear way. That is:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 196, |
| "end": 204, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pitch-original HNM Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "1 (1 ) , , n n i i i r r A w A w A n t w t n + = \u2212 \u22c5 + \u22c5 = = \u2212 \u23a2 \u23a5 \u23a3 \u23a6 (1) 1 (1 ) n n i i i F w F w F + = \u2212 \u22c5 + \u22c5 (2) 1 ( ) n n n i i i i w \u03b8 \u03b8 \u03b8 \u03b8 + = \u22c5 \u2212 +", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Pitch-original HNM Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where n i A , n i F , and n i \u03b8 denote the amplitude, frequency, and phase of the i-th harmonic partial in the n-th analysis frame, and i A , i F , and i \u03b8 denote the amplitude, frequency, and phase of the i-th harmonic partial for the control point. Note that in Equation 3,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pitch-original HNM Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "1n i \u03b8 + represents the unwrapped phase of 1 n i \u03b8 + versus n i \u03b8 , i.e. 1 1 ( , ) n n n i i i puw \u03b8 \u03b8 \u03b8 + + = .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pitch-original HNM Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The phase 1 n i \u03b8 + is unwrapped in order that the phase difference is within the range from -\u03c0 to \u03c0. Here, our modified phase unwrapping is done as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pitch-original HNM Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "1 1 1 ( , ) 2 n n n n i i i i puw M \u03b8 \u03b8 \u03b8 \u03b8 \u03c0 + + + = = \u2212 \u22c5 (4) ( ) 1 1 1 , if , 2 , otherwise n n n n i i i i c \u03b8 \u03b8 \u03b8 \u03b8 \u03c0 \u03c0 + + \u23a7 \u2265 \u23aa \u23a2 \u23a5 = \u2212 + = \u23a8 \u23a2 \u23a5 \u23a3 \u23a6 \u2212 \u23aa \u23a9", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pitch-original HNM Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In original HNM, the noise signal components are represented with 10 cepstrum coefficients. Therefore, for each control point, 10 cepstrum coefficients should be derived. Here, the cepstrum coefficients from the two mapped analysis frames are linearly interpolated to derive the cepstrum coefficients for the control point.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pitch-original HNM Parameters", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "On a control point, after the parameters, i A , i F , and i \u03b8 , for pitch-original harmonic partials are computed, the parameters, k A , k F , and k \u03b8 , for pitch-tuned harmonic partials should be computed next. Note that the pitch-height defined by i F is the original pitch predetermined in recording time. Thus, the pitch-height of a control point must be tuned in order to follow the pitch contour given by the prosody unit. For example, let the pitch defined by the harmonic frequencies, i F , be 100Hz, and a pitch-height of 150Hz is needed according to the assigned pitch-contour. Apparently, a simple tuning method is to set the values of Figure 4 . From this figure, it can be seen that the pitch can indeed be tuned from 100Hz to 150Hz. However, the formant frequencies are also scaled up. For example, the first formant is shifted from 240Hz to 360Hz in Figure 4 . The shifting of formant frequencies will cause the timbre be distinctly changed. As a result, the timbre of a synthetic syllable will not be consistent and will vary with the scaling factors (e.g. 150/100) set for different control points. To preserve the timbre while tuning the pitch-height of a control point, one principle is to keep the spectral envelope unchanged [Dodge 1997 ]. This implies that the amplitude k A of the pitch-tuned harmonic partial located at frequency k F must be computed according to an estimated spectral envelope. Here, considering both factors of efficient processing and sufficient accuracy, we estimate the spectral envelope by Lagrange interpolating the sequence of pairs, ( i F , i A ). In details, for the k-th harmonic frequency k F , we first find a pitch-original harmonic frequency j F , from 1 F , 2 F , 3 F , \u2026, that is nearest to and less than k F . Then, the four pitch-original partials of the frequencies, ", |
| "cite_spans": [ |
| { |
| "start": 1246, |
| "end": 1257, |
| "text": "[Dodge 1997", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 647, |
| "end": 655, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 865, |
| "end": 873, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pitch-tuned HNM Parameters", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "2 2 1 1 j j k h k m m j h j m h h m F F A A F F + + = \u2212 = \u2212 \u2260 \u2212 = \u22c5 \u2212 \u2211 \u220f", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Pitch-tuned HNM Parameters", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "A figure that illustrates this method of pitch tuning without changing spectral envelope is shown in Figure 5 . In this figure, the pitch is scaled up by a factor of 1.25 but the timbre is preserved. Similarly, the phase k \u03b8 of the pitch-tuned harmonic partial located at frequency k F can also be interpolated with the four pitch-original partials of frequencies, ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 101, |
| "end": 109, |
| "text": "Figure 5", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pitch-tuned HNM Parameters", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For the synthetic voiced part of Figure 3 , the synthetic signal, S(t), consists of harmonic and noise components. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 33, |
| "end": 41, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Signal Waveform Synthesis", |
| "sec_num": "4." |
| }, |
| { |
| "text": "For the harmonic signal, H(t), between the n-th and (n+1)-th control points, its sample values are computed with these equations (rewritten by us):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "( ) 0 ( ) ( ) cos ( ) , 0,1, ,99, L n n k k k H t a t t t \u03c6 = = = \u2211 \u2026 (6) ( ) 1 ( ) , 100 n n n n k k k k t a t A A A + = + \u2212 (7) ( ) ( 1) 2 ( ) / 22,050 , (0) , n n n n n k k k k k t t f t \u03c6 \u03c6 \u03c0 \u03c6 \u03b8 = \u2212 + = (8) 1 ( ) ( ), 100 n n n n k k k k t f t F F F + = + \u2212 (9)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where L is number of harmonic partials, 100 is the number of samples between adjacent control points, 22,050 is the sampling rate, , is generally not continued at the boundary time points, i.e. t=0 or t=100. These kinds of discontinuities, i.e. (100)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "n k \u03c6 \u2260 1 (0) n k \u03c6 +", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": ", will induce amplitude discontinuities to signal waveform, and cause clicks to be heard. To prevent these kinds of discontinuities, the amount of mismatched phase, n k \u03be , at the boundary point, t=100, must be computed beforehand. Then, this amount can be divided and shared among the 100 sample points between two adjacent control points. Accordingly, the phases of the signal samples (especially those around the boundary point) will advance smoothly. Here, we compute the amount of mismatched phase as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "( ) 1 1 (100),", |
| "eq_num": "(0)" |
| } |
| ], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(0) n n n n k k k k puw \u03be \u03c6 \u03c6 \u03c6 + + = \u2212", |
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where the phase unwrapping function, puw(x, y), is as defined in Equation 4, and according to our derivation ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The formula in Equation 11is obtained by recursively evaluating Equations (8) and (9). Then, by dividing and sharing n k \u03be to the samples between two control points, Equation (6) is modified to:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "0 ( ) ( )cos ( ) , 0,1, ,99, 100 L n n n k k k k t H t a t t t \u03c6 \u03be = \u239b \u239e \u2032 = \u2212 \u22c5 = \u239c \u239f \u239d \u23a0 \u2211 \u2026", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Let L n be the number of harmonic partials on the n-th control point. The value of L n is computed as dividing the MVF by the pitch frequency, i.e. L n = MVF(n) / 1 n F . In general, L n may not be equal to L n+1 . Hence, we set the value of L , i.e. the number of partials, in Equations (6) and (12) to the greater of L n and L n+1 . Suppose here that L n is less than L n+1 . Then, the parameter values for the extended partials on the n-th control point must be defined. Here, from the consideration of signal-waveform continuity, we simply let n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "k A =0, n k F = 1 n k F + , n k \u03b8 = 1 n k \u03b8 + , for k = 1+L n , 2+L n , \u2026, L n+1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Harmonic Signal Synthesis", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For the noise signal, N(t), we decide to synthesize it as a summation of sinusoidal signal components [Stylianou 1996] . Let G k be the frequency of the k-th sinusoid. As G k does not change with time, we need not to distinguish G k for different control points. Here, we let G k =100\u22c5k (Hz). However, for the n-th control point, the index k of G k is not started from 1 and its starting value, n s K , is determined by the MVF of this control point, i.e. n s K = \u23a7MVF(n) / 100\u23ab. In contrast, the end value of the index k is always a fixed value, e K =\u23a911,025 / 100\u23ad, because G k cannot be greater than half of the sampling frequency.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 118, |
| "text": "[Stylianou 1996]", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noise Signal Synthesis", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "On the other hand, let n k B be the amplitude of the k-th sinusoid on the n-th control point.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Noise Signal Synthesis", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For the synthesis of the long unvoiced part in Figures 1 and 3 , the Equations (13), 14and (15) can still be used to generate signal samples. However, the lower bound of the summation index, k, in Equation (13) will now be fixed to 1. This is equivalent to setting all the MVF values to the constant, 0Hz, for all the control points within the unvoiced part.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 47, |
| "end": 62, |
| "text": "Figures 1 and 3", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Noise Signal Synthesis", |
| "sec_num": "4.2" |
| }, |
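A minimal sketch of this noise-part synthesis, under our own simplifying assumptions (a single control point, random initial phases, and hypothetical amplitude values; the paper's Equations (13)-(15) additionally interpolate between control points):

```python
import math
import random

def synth_noise_frame(B, k_start, n_samples, fs=22050, seed=0):
    # Sum of 100 Hz-spaced sinusoids G_k = 100*k, for k = k_start ..
    # k_start + len(B) - 1, each with amplitude B[i] and a random initial
    # phase. For a long unvoiced part, k_start is fixed to 1, which is
    # equivalent to treating the MVF as 0 Hz.
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in B]
    out = []
    for m in range(n_samples):
        t = m / fs
        out.append(sum(b * math.cos(2.0 * math.pi * 100.0 * (k_start + i) * t + p)
                       for i, (b, p) in enumerate(zip(B, phases))))
    return out

# unvoiced part: the summation index starts from k = 1
samples = synth_noise_frame([0.1, 0.2, 0.05], k_start=1, n_samples=220)
```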
| { |
| "text": "Several years ago, we proposed a synthesis method called TIPW (time-proportioned interpolation of pitch waveform) [Gu et al. 1998 ] that is an improved variant of PSOLA. Therefore, we intend to compare the three synthesis methods, based on PSOLA, TIPW, and HNM respectively, in signal clarity. A synthetic speech signal is considered to have better clarity if it is less noisy and less reverberant. Since signal clarity is the primary concern here, we use the same text analysis unit and prosody parameter generation unit for the three methods [Gu et al. 2000; Gu et al. 2007]. When run on a personal computer with an Intel Pentium 2.6 GHz CPU, the three methods can all be executed in real time. However, their execution speeds are very different. In detail, the CPU time consumed by the HNM based method is 19.4% of the time length of the synthetic speech file, i.e. its speed is about 5 times real-time. In contrast, the CPU times consumed by the TIPW and PSOLA based methods are as little as 3.5% and 4.2% of the time length of the synthetic speech file, i.e. their speeds are about 28 and 24 times real-time.",
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 129, |
| "text": "[Gu et al. 1998", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 544, |
| "end": 560, |
| "text": "[Gu et al. 2000;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 561, |
| "end": 576, |
| "text": "Gu et al. 2007]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Signal Synthesis Experiment and Perception Test", |
| "sec_num": "5." |
| }, |
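The reported CPU-time percentages convert to real-time factors as follows (a trivial consistency check, not the paper's code):

```python
def realtime_factor(cpu_fraction):
    # If synthesis consumes a fraction p of the audio's duration in CPU
    # time, it runs 1/p times faster than real time.
    return 1.0 / cpu_fraction

# HNM: 19.4% -> ~5.2x; TIPW: 3.5% -> ~28.6x; PSOLA: 4.2% -> ~23.8x
```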
| { |
| "text": "For comparison of signal clarity, the Mandarin short sentence, /syuen-2 zhuan-3 li-4/ (rotating power), is taken as an example and used to synthesize speech signals with the three methods. The spectrogram in Figure 6 is obtained by analyzing the signal synthesized by the HNM based method, while the spectrograms in Figures 7 and 8 are obtained by analyzing the signals synthesized by the TIPW and PSOLA based methods, respectively. By comparing Figure 6 with Figures 7 and 8, we find that more fragments exist in Figures 7 and 8 than in Figure 6, and that the traces of the harmonic partials in the lower frequency band in Figure 6 are more continuous and steady (less wavering) than those in Figures 7 and 8. Therefore, the signal synthesized by the HNM based method should be clearer than the signals synthesized by the TIPW and PSOLA based methods.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 211, |
| "end": 219, |
| "text": "Figure 6", |
| "ref_id": "FIGREF11" |
| }, |
| { |
| "start": 318, |
| "end": 333, |
| "text": "Figures 7 and 8", |
| "ref_id": "FIGREF12" |
| }, |
| { |
| "start": 447, |
| "end": 455, |
| "text": "Figure 6", |
| "ref_id": "FIGREF11" |
| }, |
| { |
| "start": 461, |
| "end": 476, |
| "text": "Figures 7 and 8", |
| "ref_id": "FIGREF12" |
| }, |
| { |
| "start": 516, |
| "end": 531, |
| "text": "Figures 7 and 8", |
| "ref_id": "FIGREF12" |
| }, |
| { |
| "start": 540, |
| "end": 548, |
| "text": "Figure 6", |
| "ref_id": "FIGREF11" |
| }, |
| { |
| "start": 622, |
| "end": 631, |
| "text": "Figure 6", |
| "ref_id": "FIGREF11" |
| }, |
| { |
| "start": 696, |
| "end": 711, |
| "text": "Figures 7 and 8", |
| "ref_id": "FIGREF12" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Signal Synthesis Experiment and Perception Test", |
| "sec_num": "5." |
| }, |
| { |
| "text": "In addition, we have used the three methods to synthesize an article, obtaining three speech signal files. The article selected is a simple composition of 132 syllables written by an elementary school student. The three synthetic speech files were then played to 15 participants for perception tests. A score of 0 is given if the clarity of the two compared synthetic speech files cannot be distinguished. A score of 1 (or -1) is given if the latter (or former) file played is slightly better, and a score of 2 (or -2) is given if the latter (or former) file played is clearly better. Each participant is requested to make two comparisons and give two scores. One comparison is between the signal clarity of the two files synthesized by the PSOLA and HNM based methods; the other is between the two files synthesized by the PSOLA and TIPW based methods. According to the scores given by the participants, the average scores are 1.2 for the first comparison and 0.33 for the second. That is, the HNM based method is significantly better than the PSOLA based method in signal clarity, whereas the PSOLA and TIPW methods are difficult to distinguish. For demonstration, we have set up a web page, http://guhy.csie.ntust.edu.tw/hmtts/hnm-demo.html, which can be browsed to listen to the Mandarin speech synthesized by the three methods.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Signal Synthesis Experiment and Perception Test", |
| "sec_num": "5." |
| }, |
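The scoring scheme described above can be summarized in a small helper; the example scores below are hypothetical, not the participants' actual responses:

```python
def mean_score(scores):
    # Each paired-comparison score is in {-2, -1, 0, 1, 2}; a positive
    # average means the latter-played file was judged clearer overall.
    for s in scores:
        assert s in (-2, -1, 0, 1, 2), "invalid comparison score"
    return sum(scores) / len(scores)

# e.g. five hypothetical listeners comparing PSOLA (played first) vs. HNM
avg = mean_score([1, 2, 1, 0, 1])
```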
| { |
| "text": "In this study, we used HNM to develop a scheme for synthesizing a Mandarin syllable's signal. Each Mandarin syllable needs to be recorded only once. With this scheme, diverse prosodic characteristics can still be synthesized for a syllable without suffering significant signal-quality degradation. Three relevant issues are investigated: (a) the determination of the HNM parameters for the control points that are placed at a fixed pace on the time axis of a synthetic syllable (note that the pace widths are varied in the original HNM); (b) keeping the timbre consistent when the HNM parameters of a control point are adjusted to a different pitch height (the implementation method is not clearly explained in the original HNM); (c) the construction of a time-warping function to map between the two time axes of a synthetic syllable and its corresponding source syllable in order to synthesize a more fluent syllable signal (this issue is not mentioned in the original HNM). For these three issues, we have proposed feasible solutions, considering both signal quality and implementation practice. With these solutions, our scheme is therefore called an HNM based and extended syllable signal synthesis scheme (HNMES).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6." |
| }, |
| { |
| "text": "To test the signal clarity of the synthetic speech, the HNMES scheme has been programmed and integrated with the other units of text analysis and prosodic parameter generation that were developed earlier. Since signal clarity is the primary concern here, the same units of text analysis and prosodic parameter generation are also used for the PSOLA and TIPW based methods. According to spectrogram inspection and perception test results, we conclude that the HNMES scheme significantly outperforms the PSOLA and TIPW based schemes in signal clarity (much clearer and no reverberation). Therefore, the HNMES scheme is recommended for synthesizing speech signals not only for Mandarin but also for other syllable-prominent languages.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6." |
| }, |
| { |
| "text": "Note that in this study, signal clarity is the primary concern and the prosodic parameters are generated with only simple rules. Therefore, the synthetic speech is not natural and sounds machine-like when one listens to the example synthetic speech. In the future, we will study how to construct a more powerful prosodic parameter generation unit and combine it with the syllable signal synthesis scheme, HNMES.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Concluding Remarks", |
| "sec_num": "6." |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This study is partially supported by National Science Council under the contract number, NSC 96-2221-E-011-163.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "To determine the values of B_k^n, the 10 cepstrum coefficients of the n-th control point, which represent the noise spectral envelope, are first appended with zero values and inversely transformed (inverse discrete Fourier transform) to the spectral domain. Then, exponentiation is taken to obtain the corresponding spectral magnitude coefficients, X_j, j = 0, 1, \u2026, 2047. According to X_j, the value of B_k^n can be obtained by linearly interpolating the two adjacent X_i whose frequencies, indicated by the index i, surround the frequency of G_k. When the values of B_k^n are used in Equation (14), the time-varying amplitude, b_k^n(t), is only linearly interpolated.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| } |
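The annex's envelope recovery can be sketched as follows. The naive DFT and the helper names are ours, and mirroring the zero-padded cepstrum symmetrically is an assumption that makes the log spectrum come out real:

```python
import cmath
import math

def envelope_from_cepstrum(cep, nfft=2048):
    # Zero-pad the low-order cepstrum to nfft points (mirroring the
    # coefficients so the spectrum is real), transform to the
    # log-spectral domain with a naive inverse DFT, then exponentiate
    # to obtain the magnitude coefficients X_j.
    c = [0.0] * nfft
    n = len(cep)
    c[:n] = cep
    if n > 1:
        c[-(n - 1):] = cep[1:][::-1]
    X = []
    for j in range(nfft):
        log_mag = sum(c[m] * cmath.exp(-2j * math.pi * j * m / nfft)
                      for m in range(nfft)).real
        X.append(math.exp(log_mag))
    return X

def noise_amp(X, freq_hz, fs=22050):
    # B_k: linearly interpolate the two envelope samples X_i and X_{i+1}
    # whose bin frequencies surround freq_hz (e.g. a noise sinusoid G_k).
    pos = freq_hz * len(X) / fs
    i = int(pos)
    frac = pos - i
    return (1.0 - frac) * X[i] + frac * X[i + 1]
```

An all-zero cepstrum yields a flat unit-magnitude envelope, which is a quick sanity check for the transform chain.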
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Mandarin Text-to-speech System Using a Large Number of Words as Synthesis Units", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chang, T. Y., A Mandarin Text-to-speech System Using a Large Number of Words as Synthesis Units, Master thesis, National Chung Hsing University, Taichung, Taiwan, 2005. (in Chinese)", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Corpus-based Technologies for Chinese Text-to-Speech Synthesis", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "C" |
| ], |
| "last": "Chou", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chou, F. C., Corpus-based Technologies for Chinese Text-to-Speech Synthesis. PhD thesis, National Taiwan University, Taipei, Taiwan, 1999.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Computer Music: Synthesis, Composition, and Performance, 2 nd edition", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dodge", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "A" |
| ], |
| "last": "Jerse", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dodge, C., and T. A. Jerse, Computer Music: Synthesis, Composition, and Performance, 2 nd edition, Schirmer Books, New York, 1997.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "An Introduction to Text-to-Speech Synthesis", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Dutoit", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dutoit, T., An Introduction to Text-to-Speech Synthesis, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1997.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A Mandarin-Syllable Signal Synthesis Method with Increased Flexibility in Duration, Tone and Timbre Control", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "L" |
| ], |
| "last": "Shiu", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the National Science Council ROC(A)", |
| "volume": "22", |
| "issue": "3", |
| "pages": "385--395", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gu, H. Y., and W. L. Shiu, \"A Mandarin-Syllable Signal Synthesis Method with Increased Flexibility in Duration, Tone and Timbre Control,\" Proceedings of the National Science Council ROC(A), 22(3), 1998, pp. 385-395.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A Sentence-Pitch-Contour Generation Method Using VQ/HMM for Mandarin Text-to-speech", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "C" |
| ], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "International Symposium on Chinese Spoken Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "125--128", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gu, H. Y., and C. C. Yang, \"A Sentence-Pitch-Contour Generation Method Using VQ/HMM for Mandarin Text-to-speech,\" International Symposium on Chinese Spoken Language Processing, 2000, Beijing, China, pp. 125-128.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A System Framework for Integrated Synthesis of Mandarin, Min-nan, and Hakka Speech", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [ |
| "Z" |
| ], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "L" |
| ], |
| "last": "Liau", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "International Journal of Computational Linguistics and Chinese Language Processing", |
| "volume": "12", |
| "issue": "4", |
| "pages": "371--390", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gu, H. Y., Y. Z. Zhou, and H. L. Liau, \"A System Framework for Integrated Synthesis of Mandarin, Min-nan, and Hakka Speech,\" International Journal of Computational Linguistics and Chinese Language Processing, 12(4), 2007, pp. 371-390.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Elements of Computer Music", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "R" |
| ], |
| "last": "Moore", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moore, F. R., Elements of Computer Music, Prentice-Hall, New Jersey, 1990.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Pitch-synchronous Waveform Processing Techniques for Text-to-speech Synthesis Using Diphones", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Moulines", |
| "suffix": "" |
| }, |
| { |
| "first": "F",
| "middle": [], |
| "last": "Charpentier", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Speech Communication", |
| "volume": "9", |
| "issue": "5", |
| "pages": "453--467", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moulines, E., and F. Charpentier, \"Pitch-synchronous Waveform Processing Techniques for Text-to-speech Synthesis Using Diphones,\" Speech Communication, 9(5), 1990, pp. 453-467.",
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Discrete-Time Speech Signal Processing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [ |
| "F" |
| ], |
| "last": "Quatieri", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quatieri, T. F., Discrete-Time Speech Signal Processing, Prentice-Hall, New Jersey, 2002.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Harmonic plus Noise Models for Speech, Combined with Statistical Methods, for Speech and Speaker Modification", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Stylianou", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stylianou, Y., Harmonic plus Noise Models for Speech, Combined with Statistical Methods, for Speech and Speaker Modification, PhD thesis, Ecole Nationale Sup\u00e9rieure des T\u00e9l\u00e9communications, Paris, France, 1996.",
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Modeling Speech Based on Harmonic Plus Noise Models", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Stylianou", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Nonlinear Speech Modeling and Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stylianou, Y., \"Modeling Speech Based on Harmonic Plus Noise Models,\" Nonlinear Speech Modeling and Applications, Springer-Verlag, Germany, 2005.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Main processing flow of the HNM based syllable-signal synthesis scheme. (Flowchart steps: determine phonemes' lengths and construct the time-warping function; determine HNM parameters for each control point; if the initial is short unvoiced, directly copy the signal samples of the unvoiced initial; if the initial is long unvoiced, synthesize it as HNM noise signal; for the voiced part, synthesize the HNM harmonic signal and the HNM noise signal; stop.)",
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "if (Dm > 0 && Dm/Db < 0.35) { Dm = 0.35*Db; Dn = Db - Dm; } if (Dn > 0 && Dn/Db < 0.35) { Dn = 0.35*Db; Dm = Db - Dn; }",
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "A piece-wise linear mapping function.", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Analysis frame to control point mapping.", |
| "num": null |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "is illustrated in", |
| "num": null |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Pitch tuning with spectral envelope scaled simultaneously.", |
| "num": null |
| }, |
| "FIGREF6": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "instead in the interpolation processing.", |
| "num": null |
| }, |
| "FIGREF7": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Pitch tuning without changing spectral envelope.", |
| "num": null |
| }, |
| "FIGREF8": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "That is, S(t) = H(t) + N(t), where H(t) represents the summation of the harmonic partials and N(t) represents the summation of the noise signal components. The synthesis methods for H(t) and N(t) are described in detail in the following subsections.",
| "num": null |
| }, |
| "FIGREF9": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "the time-varying amplitude of the k-th partial at time t from the start of the n-th control point, the time-varying frequency of the k-th partial, and the unwrapped phase of \u03b8_k^n versus \u03b8_k^{n-1}. In Equations (7) and (9), linear interpolation is used, which seems sufficient according to perception tests. Note that, when using Equation (6) to synthesize signal samples, the cumulated phase,",
| "num": null |
| }, |
| "FIGREF11": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Spectrogram of the signal synthesized by the HNM based method.", |
| "num": null |
| }, |
| "FIGREF12": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Spectrogram of the signal synthesized by the TIPW based method.", |
| "num": null |
| }, |
| "FIGREF13": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Spectrogram of the signal synthesized by the PSOLA based method.", |
| "num": null |
| } |
| } |
| } |
| } |