{
"paper_id": "O11-3001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:05:41.958754Z"
},
"title": "Performance Evaluation of Speaker-Identification Systems for Singing Voice Data",
"authors": [
{
"first": "Wei-Ho",
"middle": [],
"last": "Tsai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taipei University of Technology",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "whtsai@ntut.edu.tw"
},
{
"first": "Hsin-Chieh",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taipei University of Technology",
"location": {
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic speaker-identification (SID) has long been an important research topic. It is aimed at identifying who among a set of enrolled persons spoke a given utterance. This study extends the conventional SID problem to examining if an SID system trained using speech data can identify the singing voices of the enrolled persons. Our experiment found that a standard SID system fails to identify most singing data, due to the significant differences between singing and speaking for a majority of people. In order for an SID system to handle both speech and singing data, we examine the feasibility of using model-adaptation strategy to enhance the generalization of a standard SID. Our experiments show that a majority of the singing clips can be correctly identified after adapting speech-derived voice models with some singing data.",
"pdf_parse": {
"paper_id": "O11-3001",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic speaker-identification (SID) has long been an important research topic. It is aimed at identifying who among a set of enrolled persons spoke a given utterance. This study extends the conventional SID problem to examining if an SID system trained using speech data can identify the singing voices of the enrolled persons. Our experiment found that a standard SID system fails to identify most singing data, due to the significant differences between singing and speaking for a majority of people. In order for an SID system to handle both speech and singing data, we examine the feasibility of using model-adaptation strategy to enhance the generalization of a standard SID. Our experiments show that a majority of the singing clips can be correctly identified after adapting speech-derived voice models with some singing data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As an independent capability in biometric applications or as part of speech-recognition systems, automatic speaker-identification (SID) (Rosenberg, 1976; Reynolds & Rose, 1995; Reynolds, 1995; Campbell, 1997; Reynolds et al., 2000; Bimbot et al., 2004; Nakagawa et al., 2004 Nakagawa et al., , 2006 Murty & Yegnanarayana, 2006; Matusi & Tanabe, 2006; Beigi, 2011) has been an attractive research topic for more than three decades. It is aimed at identifying who among a set of enrolled persons spoke a given utterance. Currently, existing SID systems operate in two phases, training and testing, where the former models each person's voice characteristics using his/her spoken data and the latter determines unknown speech utterances based on some comparisons between models and utterances. As the purpose of SID is distinguishing one 2 Wei-Ho Tsai and Hsin-Chieh Lee person's voice from another's, it is worth investigating if an SID system can not only identify speech voices but also singing voices.",
"cite_spans": [
{
"start": 136,
"end": 153,
"text": "(Rosenberg, 1976;",
"ref_id": "BIBREF17"
},
{
"start": 154,
"end": 176,
"text": "Reynolds & Rose, 1995;",
"ref_id": "BIBREF14"
},
{
"start": 177,
"end": 192,
"text": "Reynolds, 1995;",
"ref_id": "BIBREF13"
},
{
"start": 193,
"end": 208,
"text": "Campbell, 1997;",
"ref_id": "BIBREF3"
},
{
"start": 209,
"end": 231,
"text": "Reynolds et al., 2000;",
"ref_id": "BIBREF15"
},
{
"start": 232,
"end": 252,
"text": "Bimbot et al., 2004;",
"ref_id": "BIBREF1"
},
{
"start": 253,
"end": 274,
"text": "Nakagawa et al., 2004",
"ref_id": "BIBREF11"
},
{
"start": 275,
"end": 298,
"text": "Nakagawa et al., , 2006",
"ref_id": "BIBREF12"
},
{
"start": 299,
"end": 327,
"text": "Murty & Yegnanarayana, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 328,
"end": 350,
"text": "Matusi & Tanabe, 2006;",
"ref_id": "BIBREF9"
},
{
"start": 351,
"end": 363,
"text": "Beigi, 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "There are a number of real applications where an SID system may need to deal with singing voices. For example, if we record the sounds from TV, it is very likely that the recording contains performers speaking then singing or singing then speaking. In such a case, an SID system capable of handling both speech and singing voices would be very useful to index the recording. Another example is when people gather to sing at a Karaoke. It would be helpful to record everyone's performance onto CDs or DVDs to capture memories of the pleasant time. For the audio in CDs or DVDs to be searchable, audio data would preferably be written in separate tracks, each labeled with the respective person. In this case, an SID system capable of identifying both speech and singing voices will be helpful to automate the labeling process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To the best of our knowledge, there is no prior literature devoted to the problem of using an SID system to identify singing voices. Most related work (Rosenau, 1999; Gerhard, 2004 Gerhard, , 2003 has investigated the differences between singing and speech. Some studies have developed methods for singing voice synthesis (Bonada & Serra, 2007; Kenmochi & Ohshita, 2007; Saino et al., 2006; Saitou et al., 2005) , and some have discussed how to convert speech into singing (Saitou et al., 2007) according to the specified melody. In this paper, we begin our investigation by evaluating the performance of an SID system trained using speech voices when the testing samples are changed from speech to singing voices. Then, a well-studied model-adaptation strategy is applied to improve the system's capability in handling singing voices. Our final experiments show that a majority of the singing clips can be correctly identified after adapting speech-derived voice models with some singing data.",
"cite_spans": [
{
"start": 151,
"end": 166,
"text": "(Rosenau, 1999;",
"ref_id": "BIBREF16"
},
{
"start": 167,
"end": 180,
"text": "Gerhard, 2004",
"ref_id": null
},
{
"start": 181,
"end": 196,
"text": "Gerhard, , 2003",
"ref_id": "BIBREF7"
},
{
"start": 322,
"end": 344,
"text": "(Bonada & Serra, 2007;",
"ref_id": "BIBREF2"
},
{
"start": 345,
"end": 370,
"text": "Kenmochi & Ohshita, 2007;",
"ref_id": "BIBREF8"
},
{
"start": 371,
"end": 390,
"text": "Saino et al., 2006;",
"ref_id": "BIBREF20"
},
{
"start": 391,
"end": 411,
"text": "Saitou et al., 2005)",
"ref_id": "BIBREF19"
},
{
"start": 473,
"end": 494,
"text": "(Saitou et al., 2007)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of this paper is organized as follows. Section 2 reviews a prevalent SID system. Section 3 describes an improved SID system using some singing data to adapt speech-derived voice models. Then, Section 4 discusses our experiment results. In Section 5, we present our concluding remarks. Figure 1 shows the most prevalent SID system currently, stemming from (Reynolds & Rose, 1995) . The system operates in two phases: training and testing. During training, a group of N persons is represented by N Gaussian mixture models (GMMs), \u03bb 1 , \u03bb 2 , \u2026, \u03bb N . It is found that GMMs provide good approximations of arbitrarily shaped densities of a spectrum over a long span of time (Murty & Yegnanarayana, 2006) ; hence, they can reflect the vocal tract configurations of individual persons. The parameters of GMM \u03bb i , composed of means, covariances, and mixture weights, are estimated using the speech utterances of the i-th person. The estimation consists of k-means initialization and Expectation-Maximization (EM) Performance Evaluation of 3 Speaker-Identification Systems for Singing Voice Data (Dempster et al., 1977) .",
"cite_spans": [
{
"start": 364,
"end": 387,
"text": "(Reynolds & Rose, 1995)",
"ref_id": "BIBREF14"
},
{
"start": 679,
"end": 708,
"text": "(Murty & Yegnanarayana, 2006)",
"ref_id": "BIBREF10"
},
{
"start": 1098,
"end": 1121,
"text": "(Dempster et al., 1977)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 294,
"end": 302,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
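The paper gives no code for the per-speaker GMM training described above (k-means initialization followed by EM). A minimal sketch, not the authors' implementation, assuming scikit-learn's `GaussianMixture` (which performs exactly this k-means-plus-EM fit) and MFCC frames as input:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_gmm(mfcc_frames, n_components=32, seed=0):
    """Fit one speaker's GMM lambda_i on that speaker's MFCC frames.

    k-means initialization followed by EM, as in the paper; diagonal
    covariances are a common choice for MFCC features (an assumption here).
    """
    gmm = GaussianMixture(
        n_components=n_components,
        covariance_type="diag",
        init_params="kmeans",   # k-means initialization
        max_iter=100,           # EM iterations
        random_state=seed,
    )
    gmm.fit(mfcc_frames)
    return gmm
```

In practice, one such GMM is trained per enrolled person, and `n_components` is tuned to the amount of available training data, as done in Section 4.2.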
{
"text": "Prior to Gaussian mixture modeling, audio waveforms are converted, frame-by-frame, into Mel-scale frequency cepstral coefficients (MFCCs) (Davis & Mermelstein, 1980) . The merit of MFCCs lies in the auditory modeling, which has been shown to be superior to other speech-production-based features in numerous studies. Given a test voice sample, the system computes its MFCCs Y = {y 1 , y 2 ,..., y T } and the likelihood probability Pr(Y|\u03bb i ) for each model",
"cite_spans": [
{
"start": 138,
"end": 165,
"text": "(Davis & Mermelstein, 1980)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Popular Speaker-Identification (SID) System",
"sec_num": "2."
},
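The frame-by-frame MFCC conversion can be sketched in plain NumPy (pre-emphasis, Hamming windowing, power spectrum, triangular mel filterbank, log compression, DCT-II). This is an illustrative reimplementation of the standard pipeline, not the authors' code; the frame, hop, filter, and coefficient counts are assumptions:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=22050, frame_len=512, hop=256, n_filters=26, n_ceps=13):
    """Frame-by-frame MFCC extraction, returning (n_frames, n_ceps)."""
    # pre-emphasis to flatten the spectral tilt
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # framing with 50% overlap, Hamming window
    n_frames = 1 + (len(sig) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, frame_len)) ** 2 / frame_len
    # triangular filters spaced uniformly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((frame_len + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, frame_len // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    feats = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the log filterbank energies; keep n_ceps coefficients
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return feats @ dct.T
```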
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb i : ( ) ( ) ( ) 1 1 Pr( | ) ( ; , ) T K k k k i t i i i k t w = = \u03bb = \u22c5 \u2211 \u220f Y y\u03bc C N ,",
"eq_num": "(1)"
}
],
"section": "A Popular Speaker-Identification (SID) System",
"sec_num": "2."
},
{
"text": "( ) ( ) ( ) ( ) ( ) ( ) ( ) 1 ( ) 1 ( ; , ) exp | | k k k k k t t t i i i i i k N i \u03c0 \u2212 \u2032 \u23a7 \u23ab = \u2212 \u2212 \u2212 \u23a8 \u23ac \u23a9 \u23ad y \u03bc C y \u03bc C y \u03bc C N (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Popular Speaker-Identification (SID) System",
"sec_num": "2."
},
{
"text": "where K is the number of mixture Gaussian components; ( ) ( ) ( ) , , and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Popular Speaker-Identification (SID) System",
"sec_num": "2."
},
{
"text": "k k k i i i w \u03bc C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Popular Speaker-Identification (SID) System",
"sec_num": "2."
},
{
"text": "are the k-th mixture weight, mean, and covariance of model \u03bb i , respectively; and prime (\u2032) denotes the vector transpose. According to the maximum likelihood (ML) decision rule, the system decides in favor of person I * when the condition in Eq. 3is satisfied: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Popular Speaker-Identification (SID) System",
"sec_num": "2."
},
{
"text": "* 1 arg max Pr( | ) i i N I \u2264 \u2264 = \u03bb Y . (3) i \u03bb 2 \u03bb N \u03bb 1 \u03bb ) \u03bb | Pr( argmax 1 i N i Y \u2264 \u2264",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Popular Speaker-Identification (SID) System",
"sec_num": "2."
},
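The ML decision rule of Eq. (3) amounts to summing per-frame log-likelihoods under each enrolled speaker's GMM and taking the argmax. A sketch, again assuming scikit-learn GMMs rather than the paper's own implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def identify_speaker(test_frames, speaker_gmms):
    """Return I* = argmax_i Pr(Y | lambda_i) over the enrolled speakers' GMMs.

    Summing log-likelihoods over frames is equivalent to the product over
    frames in Eq. (1), but numerically stable.
    """
    scores = [gmm.score_samples(test_frames).sum() for gmm in speaker_gmms]
    return int(np.argmax(scores))
```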
{
"text": "Our experiments, discussed in detail in Section 4, find that the above-described SID system performs rather poorly in identifying singing voices of enrolled persons, since a person's singing voice can be significantly different from his/her speech voice. To see if the system can be improved, we apply a well-studied model-adaptation strategy to adapt each person's GMM using some of his/her singing voice data. The adaptation is based on the Maximum A Posterior (MAP) estimation of GMM parameters (Reynolds et al., 2000) . We assume that the amount of available singing data for adaptation is very limited; hence, only the mean vectors of GMMs are adapted. For the i-th person's GMM, the mean vector of the k-th mixture is updated using",
"cite_spans": [
{
"start": 498,
"end": 521,
"text": "(Reynolds et al., 2000)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) ( ) ( ) ( ) ( ) k k k k i i i i k k i i \u03c4 \u03b3 \u03c4 \u03b3 \u03c4 \u03b3 = + + + \u03bc \u03bc \u03bc ,",
"eq_num": "(4)"
}
],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) 1 Pr( | , \u03bb ) L k i i k \u03c4 = = \u2211 x ,",
"eq_num": "(5)"
}
],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) 1 1 Pr( | , \u03bb ) L k i i k i k \u03c4 = = \u2211 \u03bc x x ,",
"eq_num": "(6)"
}
],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "( ) ( ) ( ) ( ) ( ) ( ) 1 ( ; , ) Pr( | , \u03bb ) ( ; , ) k k k i i i i K n n n i i i n w k w = = \u2211 x \u03bc C x x \u03bc C N N ,",
"eq_num": "(7)"
}
],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
{
"text": "where x , 1 \u2264 \u2264 L, are the MFCCs of the available adaptation (singing) data,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
{
"text": "( ) k i \u03bc",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
{
"text": "is the resulting mean vector after the adaptation, N (\u22c5) is a multivariate Gaussian density function, and \u03b3 is a weighting factor of the a priori knowledge to the adaptation data. The block diagram of the system based on MAP adaptation is shown in Figure 2 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 248,
"end": 256,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "An SID System Based on Model Adaptation for Singing Voices",
"sec_num": "3."
},
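Equations (4)-(7), which adapt only the mean vectors using posterior-weighted singing frames, can be sketched in NumPy. This is an illustrative reconstruction under the assumption of diagonal covariances, not the authors' code:

```python
import numpy as np

def _log_gauss_diag(X, mu, var):
    # log N(x; mu, diag(var)) evaluated for every row of X
    d = X.shape[1]
    return -0.5 * (d * np.log(2.0 * np.pi) + np.sum(np.log(var))
                   + np.sum((X - mu) ** 2 / var, axis=1))

def map_adapt_means(X, weights, means, covars, gamma=16.0):
    """MAP-adapt only the GMM mean vectors, following Eqs. (4)-(7).

    X:       (L, D) MFCC frames of the adaptation (singing) data
    weights: (K,)   prior mixture weights w_i^(k)
    means:   (K, D) speech-derived prior means mu_i^(k)
    covars:  (K, D) diagonal covariances C_i^(k)
    gamma:   weighting factor of the a priori knowledge
    """
    K = len(weights)
    # Eq. (7): posterior responsibility Pr(k | x_l, lambda_i) per frame
    log_post = np.stack(
        [np.log(weights[k]) + _log_gauss_diag(X, means[k], covars[k])
         for k in range(K)], axis=1)                    # (L, K)
    log_post -= log_post.max(axis=1, keepdims=True)     # numerical stability
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    tau = post.sum(axis=0)                              # Eq. (5)
    mu_bar = (post.T @ X) / np.maximum(tau, 1e-10)[:, None]   # Eq. (6)
    # Eq. (4): interpolate between adaptation-data mean and prior mean
    alpha = (tau / (tau + gamma))[:, None]
    return alpha * mu_bar + (1.0 - alpha) * means
```

With a large γ the adapted means stay close to the speech-derived priors; with abundant singing data (large τ) they move toward the singing statistics, matching the MAP behavior described in (Reynolds et al., 2000).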
{
"text": "We created a database of test recordings ourselves, since no public corpus of voice data currently meets the specific criteria we set up for this study. The database contains vocal recordings by twenty male participants between the ages of 20 and 39. We asked each person to perform 30 passages of Mandarin pop songs using a karaoke machine in a quiet room. All of the passages were recorded at 22.05 kHz, 16 bits, in mono PCM wave. The karaoke accompaniments were output to a headset and were not captured in the recordings. The duration of each passage ranges from 17 to 26 seconds. We denoted the resulting 600 recordings by DB-Singing. Next, we asked each person to read the lyrics of the 30 song passages at a normal speed. All of the read utterances were recorded using the same conditions as those in DB-Singing. The resulting 600 utterances were denoted as DB-Speech.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Data",
"sec_num": "4.1"
},
{
"text": "For ease of discussion in the following sections, we use a term \"parallel\" to represent the association between a speech utterance and singing recording that are based on the same texts. For example, when the texts are in turn spoken and sung by a person, the speech utterance is referred to as the \"parallel\" speech utterance of the resulting singing recording, and vice-versa. In addition, for use in different purposes, we divided DB-Singing into two subsets, DB-Singing-1 and DB-Singing-2, where the former contains the first 15 recordings per person and the latter contains the last 15 recordings per person. Similarly, DB-Speech was divided into subsets DB-Speech-1 and DB-Speech-2, where the former contains the first 15 speech utterances per person and the latter contains the last 15 speech utterances per person.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice Data",
"sec_num": "4.1"
},
{
"text": "We used the 15 speech utterances per person in DB-Speech-1 to train each person-specific GMM, and tested the singing recordings in DB-Singing-2. To obtain a statistically-significant experimental result, we repeated the experiment using the 15 speech utterances in DB-Speech-2 to train each person-specific GMM and tested the singing recordings in DB-Singing-1. The number of Gaussian components used in each GMM was tuned to optimum according to the amount of training data. The SID performance was assessed with the accuracy: #correctly-identified recordings SID Accuracy (in %)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.2"
},
{
"text": "100% . # testing recordings = \u00d7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.2"
},
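The accuracy measure above is straightforward to implement; as a trivial helper (hypothetical names, directly matching the formula):

```python
def sid_accuracy(hypotheses, ground_truth):
    """SID accuracy (%) = correctly identified recordings / total test recordings x 100."""
    correct = sum(h == g for h, g in zip(hypotheses, ground_truth))
    return 100.0 * correct / len(ground_truth)
```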
{
"text": "In addition, to make sure if the system could work well for the conventional SID task, we also evaluated the SID performance using DB-Speech-1 to train each person-specific GMM and tested the speech utterances in DB-Speech 2. Also, in order for the result to be statistically significant, the experiments were repeated using DB-Speech-2 to train each person-specific GMM before testing the speech utterances in DB-Speech-1. Table 1 shows the SID results. We can see from Table 1 (a) and (b) that the system trained using a set of speech data can perfectly identify the speakers of another set of speech data. Nevertheless, the system fails to identify most persons' voices in DB-Singing-1 and DB-Singing-2. Such poor results indicate the significant differences between most people's speaking and singing voices. Table 2 shows the confusion matrix of the SID results in Table 1 . The columns of the matrix correspond to the ground-truth of the singing recording, while the rows indicate the hypotheses. It can be seen from Table 2 that there are a large number of persons whose voice recordings were completely mis-identified. There were only a few people, e.g., #4 and #9, whose singing recordings mostly could be identified well. Further analysis found that persons #4 and #9 are not good at singing, and often cannot follow the tune. They cannot modify their voices properly to make the singing melodious either. Perhaps due to a lack of singing practice, persons #4 and #9 do not change their normal speech voices too much during singing; hence, the system trained using their speaking voices can identify their singing voices well.",
"cite_spans": [],
"ref_spans": [
{
"start": 424,
"end": 431,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 813,
"end": 820,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 870,
"end": 877,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1023,
"end": 1030,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.2"
},
{
"text": "To gain insight into the SID errors with respect to different persons, we analyzed the spectrograms of the singing recordings and their parallel speech utterances produced by persons #9 and #10. The waveforms were divided into segments of 512 samples with 50% overlap for the computation of short-term Fourier transform. We can see from Figure 3 (a) and (b) that the formant structure of #9's singing recording is relatively similar to that of his speech utterance, compared with the case of #10, shown in Figure 3 (c) and (d). There is almost no vibrato in #9's singing voice. This is consistent with the observation that #9's voice does not differ too much from speech to singing; thus, it can be handled with speech-derived GMM. ",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 349,
"text": "Figure 3 (a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.2"
},
{
"text": "Hypothesized Person Index Accuracy (%) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 1 3 1 0 1 2 1 1 0 2 2 3 4 1 1 2 1 3 0 2 0 10.0 2 2 1 1 0 1 2 2 1 1 3 2 1 0 0 0 0 5 3 4 1 3.3 3 2 0 2 1 1 3 4 2 0 1 4 1 1 3 1 0 0 4 0 0 6.7 4 0 0 0 25 0 1 0 1 2 0 0 0 0 1 0 0 0 0 0 0 83.3 5 0 0 3 0 3 1 2 1 5 0 6 2 0 0 3 0 0 3 1 0 10.0 6 5 1 0 0 2 3 1 4 0 0 3 1 1 0 3 4 1 1 0 0 10.0 7 1 0 0 0 2 3 2 0 0 0 3 6 1 0 0 0 8 0 0 4 6.7 8 2 1 0 0 7 0 3 4 2 0 5 0 0 4 3 4 1 1 1 0 13.3 9 0 0 0 0 1 0 0 0 28 1 0 0 0 0 0 0 0 0 0 0 93.3 10 3 2 0 5 0 0 1 0 2 2 1 1 1 4 0 0 5 0 0 3 6.7 11 0 0 4 0 2 0 3 0 1 1 3 0 6 0 1 1 2 0 6 0 10.0 12 2 1 1 1 0 3 0 0 5 0 0 2 1 1 0 4 6 2 0 1 6.7 13 0 1 1 0 1 3 4 1 1 1 1 2 3 2 6 0 0 1 0 2 10.0 14 1 1 1 2 4 0 8 0 0 5 1 0 0 5 0 0 1 1 0 0 16.7 15 0 2 6 1 1 0 1 1 0 0 0 3 5 0 2 3 3 0 0 2 6.7 16 0 0 0 4 0 7 0 1 1 1 3 3 0 0 0 3 2 4 1 0 10.0 17 4 0 8 0 0 1 1 1 0 1 3 0 0 4 0 4 2 1 0 0 6.7 18 2 1 1 0 1 1 3 4 0 7 0 0 5 1 1 1 0 2 0 0 6.7 19 0 0 0 0 5 1 1 0 1 5 4 0 0 0 2 2 2 3 4 0 13.3 20 1 0 8 0 0 2 3 2 1 1 1 1 0 0 0 0 4 0 1 5 16.7 Figure 3. (a) spectrogram of a speech utterance produced by person #9, (b) spectrogram of a singing recording produced by person #9, (c) spectrogram of a speech utterance produced by person #10, and (d) spectrogram of a singing recording produced by person #10, where all the singing recordings and speech utterances are based on the same lyrics: \"/ni/ /man/ /iau/ /kuai/ /le/ /iau/ /tian/ /chang/ /di/ /jiou/\".",
"cite_spans": [],
"ref_spans": [
{
"start": 1035,
"end": 1299,
"text": "Figure 3. (a) spectrogram of a speech utterance produced by person #9, (b) spectrogram of a singing recording produced by person #9, (c) spectrogram of a speech utterance produced by person #10, and (d) spectrogram of a singing recording produced by person #10,",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Actual Person Index",
"sec_num": null
},
{
"text": "Next, the SID performance of the \"MAP-adaptation-based system\" described in Sec. 3 was evaluated. We used the 15 speech utterances per person in DB-Speech-1 to train the person-specific GMMs. Each GMM then was adapted using J randomly-selected singing recordings per person in DB-Singing-1, where J = 5, 10, and 15. Based on the adapted GMMs, the system identified the persons of the singing recordings in DB-Singing-2. In addition, to obtain statistically-significant experiment results, we repeated the experiment by using DB-Speech-2 as the training data, DB-Singing-2 as the adaptation data, and DB-Singing-1 as the testing data. The identification accuracy then was computed as the percentage of the correctly-identified recordings. Figure 4 shows the SID accuracies obtained with the MAP-adaptation-based system. It can be seen from Figure 4 that, as expected, the SID accuracies increase with the increase in the amount of singing data used.",
"cite_spans": [],
"ref_spans": [
{
"start": 738,
"end": 746,
"text": "Figure 4",
"ref_id": null
},
{
"start": 839,
"end": 847,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Evaluation of 9 Speaker-Identification Systems for Singing Voice Data",
"sec_num": null
},
{
"text": "As the MAP-adaptation-based system uses more voice data than the system using speech data only, it is worth comparing the SID performance of the MAP-adaptation-based system with that of the system trained using both speech data and singing data. We thus generated an SID system using 15 utterances plus J singing recordings per person in Gaussian mixture modeling. Figure 5 shows our experiment results. We can see from Figure 5 that the system trained using both speech data and singing data cannot achieve comparable performance to the MAP-adaptation-based system, especially when the amount of singing data is small. This may be because a GMM trained using a mix of speech and singing data tends to model the common voice characteristics of speech and singing, but overlooks their individual differences. In addition, it is worth examining if the MAP-adaptation-based system is still capable of identifying speech data, since its models have been adapted to handle singing data. Figure 6 shows the SID accuracies of testing speech utterances using the MAP-adaptation-based system. For the purpose of comparison, we also evaluated the SID accuracies obtained with the system trained using both speech and singing data. It can be seen from Figure 6 that both of the systems work well in identifying speech utterances. This indicates that the GMMs in the MAP-adaptation-based system do not lose the essence of covering the speaking voice characteristics after they are adapted to cover the singing voice characteristics. Figure 7 presents the accuracies of identifying all of the speech utterances and singing recordings in our database. We can see from Figure 7 that the MAP-adaptation-based system performs better overall than the system trained using both speech and singing data. ",
"cite_spans": [],
"ref_spans": [
{
"start": 365,
"end": 373,
"text": "Figure 5",
"ref_id": null
},
{
"start": 420,
"end": 428,
"text": "Figure 5",
"ref_id": null
},
{
"start": 982,
"end": 990,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1241,
"end": 1249,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1521,
"end": 1529,
"text": "Figure 7",
"ref_id": null
},
{
"start": 1654,
"end": 1662,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "(a)Testing DB-Singing-1 (b) Testing DB-Singing-2 Figure 4. SID accuracies obtained with the MAP-adaptation-based System.",
"sec_num": null
},
{
"text": "In this study, the problem of speaker identification has been extended from identifying a person's speech utterances to identifying a person's singing recordings. Our experiment found that a standard SID system trained using speech utterances fails to identify most singing data, due to the significant differences between singing and speaking for a majority of people. In order for an SID system to handle both speech and singing data, we examine the feasibility of applying a well-known model-adaptation strategy to enhance the generalization of a standard SID. The basic strategy is to use a small sample of the singing voice to adapt each speech-derived GMM based on MAP estimation. The experiments show that, after the model adaptation, the system can identify a majority of the singing clips, while retaining the capability of identifying speech utterances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Although this study shows that a speech-derived SID system can be improved significantly through the use of a model-adaptation strategy, the system pays the cost of acquiring the singing voice data from each person. In realistic applications, acquiring singing voice data in the training phase may not be feasible. As a result, further investigation on robust audio features invariant to speech and singing would be needed. Our future work will focus on this topic and extend our voice database to a larger scale.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
}
],
"back_matter": [
{
"text": "This research was partially supported by the National Science Council, Taiwan, ROC, under Grant NSC 98-2622-E-027-035-CC3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fundamentals of Speaker Recognition",
"authors": [
{
"first": "H",
"middle": [],
"last": "Beigi",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beigi, H. (2011). Fundamentals of Speaker Recognition. New York: Springer. ISBN 978-0-387-77591-3, 2011.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A tutorial on text-independent speaker verification",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Bimbot",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Bonastre",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fredouille",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Gravier",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Magrin-Chagnolleau",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Meignier",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Merlin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Ortega-Garcia",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Petrovska-Delacretaz",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
}
],
"year": 2004,
"venue": "EURASIP J. Appl. Signal Process",
"volume": "",
"issue": "",
"pages": "430--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bimbot, F. J., Bonastre, F., Fredouille, C., Gravier, G., Magrin-Chagnolleau, I., Meignier, S., Merlin, T., Ortega-Garcia, J., Petrovska-Delacretaz, D., & Reynolds, D. A. (2004). A tutorial on text-independent speaker verification. EURASIP J. Appl. Signal Process., 430-451.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Synthesis of the singing voice by performance sampling and spectral models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Bonada",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Serra",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE Signal Processing Magazine",
"volume": "24",
"issue": "2",
"pages": "67--79",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bonada, J., & Serra, X. (2007). Synthesis of the singing voice by performance sampling and spectral models. IEEE Signal Processing Magazine, 24(2), 67-79.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Speaker recognition: a tutorial",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Campbell",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. IEEE",
"volume": "85",
"issue": "",
"pages": "1437--1462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Campbell, J. P. (1997). Speaker recognition: a tutorial. Proc. IEEE, 85(9), 1437-1462.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences",
"authors": [
{
"first": "S",
"middle": [
"B"
],
"last": "Davis",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Mermelstein",
"suffix": ""
}
],
"year": 1980,
"venue": "IEEE Trans. Acoust., Sspeech, Signal Process",
"volume": "28",
"issue": "",
"pages": "357--366",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Davis, S. B., & Mermelstein, P. (1980). Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. Acoust., Sspeech, Signal Process., 28, 357-366.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Maximum likelihood from incomplete data via the EM algorithm",
"authors": [
{
"first": "A",
"middle": [],
"last": "Dempster",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Rubin",
"suffix": ""
}
],
"year": 1977,
"venue": "J. R. Statist. Soc",
"volume": "39",
"issue": "",
"pages": "1--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dempster, A., Laird, N., & Rubin, D. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc., 39, 1-38.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Pitch-based acoustic feature analysis for the discrimination of speech and monophonic singing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gerhard",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of the Canadian Acoustical Association",
"volume": "30",
"issue": "3",
"pages": "152--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard, D. (2002). Pitch-based acoustic feature analysis for the discrimination of speech and monophonic singing. Journal of the Canadian Acoustical Association, 30(3), 152-153.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Computationally measurable differences between speech and song",
"authors": [
{
"first": "D",
"middle": [],
"last": "Gerhard",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gerhard, D. (2003). Computationally measurable differences between speech and song. Ph.D. dissertation, Simon Fraser University.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "VOCALOID -commercial singing synthesizer based on sample concatenation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Kenmochi",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ohshita",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. Interspeech",
"volume": "",
"issue": "",
"pages": "4011--4010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenmochi, H., & Ohshita, H. (2007). VOCALOID -commercial singing synthesizer based on sample concatenation. In Proc. Interspeech, 4011-4010.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Comparative study of speaker identification methods: DPLRM, SVM and GMM",
"authors": [
{
"first": "T",
"middle": [],
"last": "Matsui",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tanabe",
"suffix": ""
}
],
"year": 2006,
"venue": "IEICE Trans. on Information and Systems",
"volume": "E89-D",
"issue": "3",
"pages": "1066--1073",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matsui, T., & Tanabe, K. (2006). Comparative study of speaker identification methods: DPLRM, SVM and GMM. IEICE Trans. on Information and Systems, E89-D(3), 1066-1073.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Combining evidence from residual phase and MFCC features for speaker verification",
"authors": [
{
"first": "K",
"middle": [
"S R"
],
"last": "Murty",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Yegnanarayana",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Signal Process. Lett",
"volume": "13",
"issue": "1",
"pages": "52--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murty, K. S. R., & Yegnanarayana, B. (2006). Combining evidence from residual phase and MFCC features for speaker verification. IEEE Signal Process. Lett., 13(1), 52-55.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Text-independent speaker recognition by combining speaker specific GMM with speaker adapted syllable-based HMM",
"authors": [
{
"first": "S",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Takahashi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. ICASSP, I",
"volume": "",
"issue": "",
"pages": "81--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakagawa, S., Zhang, W., & Takahashi, M. (2004). Text-independent speaker recognition by combining speaker specific GMM with speaker adapted syllable-based HMM. In Proc. ICASSP, I, 81-84.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Text-independent/text-prompted speaker recognition by combining speaker-specific GMM with speaker adapted syllable-based HMM",
"authors": [
{
"first": "S",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Takahashi",
"suffix": ""
}
],
"year": 2006,
"venue": "IEICE Trans. on Information and Systems",
"volume": "E89-D",
"issue": "3",
"pages": "1058--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nakagawa, S., Zhang, W., & Takahashi, M. (2006). Text-independent/text-prompted speaker recognition by combining speaker-specific GMM with speaker adapted syllable-based HMM. IEICE Trans. on Information and Systems, E89-D(3), 1058-1064.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Speaker identification and verification using Gaussian mixture speaker models",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
}
],
"year": 1995,
"venue": "Speech Commun",
"volume": "17",
"issue": "1-2",
"pages": "91--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D. A. (1995). Speaker identification and verification using Gaussian mixture speaker models. Speech Commun., 17(1-2), 91-108.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Robust text-independent speaker identification using Gaussian mixture speaker models",
"authors": [
{
"first": "D",
"middle": [],
"last": "Reynolds",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 1995,
"venue": "IEEE Trans. Speech Audio Process",
"volume": "3",
"issue": "1",
"pages": "72--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D., & Rose, R. (1995). Robust text-independent speaker identification using Gaussian mixture speaker models. IEEE Trans. Speech Audio Process., 3(1), 72-83.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Speaker verification using adapted Gaussian mixture models",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Reynolds",
"suffix": ""
},
{
"first": "T",
"middle": [
"F"
],
"last": "Quatieri",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Dunn",
"suffix": ""
}
],
"year": 2000,
"venue": "Dig. Signal Process",
"volume": "10",
"issue": "1-3",
"pages": "19--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Reynolds, D. A., Quatieri, T. F., & Dunn, R. (2000). Speaker verification using adapted Gaussian mixture models. Dig. Signal Process., 10(1-3), 19-41.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "An analysis of phonetic differences between German singing and speaking voices",
"authors": [
{
"first": "S",
"middle": [],
"last": "Rosenau",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. 14th Int. Congress of Phonetic Sciences (ICPhS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosenau, S. (1999). An analysis of phonetic differences between German singing and speaking voices. In Proc. 14th Int. Congress of Phonetic Sciences (ICPhS).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic speaker verification: A review",
"authors": [
{
"first": "A",
"middle": [
"E"
],
"last": "Rosenberg",
"suffix": ""
}
],
"year": 1976,
"venue": "Proc. IEEE",
"volume": "64",
"issue": "4",
"pages": "475--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosenberg, A. E. (1976). Automatic speaker verification: A review. Proc. IEEE, 64(4), 475-487.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Speech-to-singing synthesis: vocal conversion from speaking voices to singing voices by controlling acoustic features unique to singing voices",
"authors": [
{
"first": "T",
"middle": [],
"last": "Saitou",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Unoki",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Akagi",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA2007)",
"volume": "",
"issue": "",
"pages": "215--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saitou, T., Goto, M., Unoki, M., & Akagi, M. (2007). Speech-to-singing synthesis: vocal conversion from speaking voices to singing voices by controlling acoustic features unique to singing voices. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA2007), 215-218.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Development of an F0 control model based on F0 dynamic characteristics for singing-voice synthesis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Saitou",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Unoki",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Akagi",
"suffix": ""
}
],
"year": 2005,
"venue": "Speech Comm",
"volume": "46",
"issue": "",
"pages": "405--417",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saitou, T., Unoki, M., & Akagi, M. (2005). Development of an F0 control model based on F0 dynamic characteristics for singing-voice synthesis. Speech Comm., 46, 405-417.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "HMM-based singing voice synthesis system",
"authors": [
{
"first": "K",
"middle": [],
"last": "Saino",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Nankaku",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Tokuda",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. Int. Conf. Spoken Lang. Process. (ICSLP)",
"volume": "",
"issue": "",
"pages": "1141--1144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saino, K., Zen, H., Nankaku, Y., Lee, A., & Tokuda, K. (2006). HMM-based singing voice synthesis system. In Proc. Int. Conf. Spoken Lang. Process. (ICSLP), 1141-1144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The most prevalent SID system.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "An SID system based on MAP adaptation of a speaker GMM to a singer GMM.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "Comparison of the SID performance of the MAP-adaptation-based system with that of the system trained using both speech data and singing data.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "SID accuracy of identifying both speech utterances and singing recordings.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>Testing Data</td><td>SID Accuracy (%)</td></tr><tr><td>DB-Speech-2</td><td>100.0</td></tr><tr><td>DB-Singing-2</td><td>17.7</td></tr><tr><td colspan=\"2\">(b) System trained using DB-Speech-2</td></tr><tr><td>Testing Data</td><td>SID Accuracy (%)</td></tr><tr><td>DB-Speech-1</td><td>100.0</td></tr><tr><td>DB-Singing-1</td><td>16.3</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>Performance Evaluation of</td><td>7</td></tr><tr><td>Speaker-Identification Systems for Singing Voice Data</td><td/></tr></table>",
"type_str": "table"
}
}
}
}