Title: ChildMandarin: A Comprehensive Mandarin Speech Dataset for Young Children Aged 3-5

URL Source: https://arxiv.org/html/2409.18584

Markdown Content:
Jiaming Zhou 1, Shiyao Wang 1, Shiwan Zhao 1, Jiabei He 1, Haoqin Sun 1, Hui Wang 1,

Cheng Liu 1, Aobo Kong 1, Yujie Guo 1, Xi Yang 2, Yequan Wang 2, Yonghua Lin 2, Yong Qin 1

1 College of Computer Science, Nankai University

2 Beijing Academy of Artificial Intelligence, Beijing, China

Correspondence: [zhoujiaming@mail.nankai.edu.cn](mailto:zhoujiaming@mail.nankai.edu.cn), [qinyong@nankai.edu.cn](mailto:qinyong@nankai.edu.cn)

###### Abstract

Automatic speech recognition (ASR) systems have advanced significantly with models like Whisper, Conformer, and self-supervised frameworks such as Wav2vec 2.0 and HuBERT. However, developing robust ASR models for young children's speech remains challenging due to differences in pronunciation, tone, and pace compared to adult speech. In this paper, we introduce a new Mandarin speech dataset focused on children aged 3 to 5, addressing the scarcity of resources in this area. The dataset comprises 41.25 hours of speech with carefully crafted manual transcriptions, collected from 397 speakers across various provinces in China, with balanced gender representation. We provide a comprehensive analysis of speaker demographics, speech duration distribution, and geographic coverage. Additionally, we evaluate ASR performance on models trained from scratch, such as Conformer, as well as fine-tuned pre-trained models like HuBERT and Whisper, where fine-tuning demonstrates significant performance improvements. Furthermore, we assess speaker verification (SV) on our dataset, showing that, despite the challenges posed by the unique vocal characteristics of young children, the dataset effectively supports both ASR and SV tasks. This dataset is a valuable contribution to Mandarin child speech research. The dataset is now open-source and freely available for all academic purposes at [https://github.com/flageval-baai/ChildMandarin](https://github.com/flageval-baai/ChildMandarin).
Table 1: Summary of Chinese child speech datasets: age range, speaker count, duration, and availability. Dur.: duration. Trans.: transcriptions (P: partial). Avail.: availability.

Table 2: Summary of child speech datasets in other languages, where K denotes kindergarten and G denotes grade.
1 Introduction
--------------

Automatic Speech Recognition (ASR) technology has become increasingly prevalent across various applications, ranging from virtual assistants and educational tools to accessibility services for individuals with disabilities (Kennedy et al., [2017](https://arxiv.org/html/2409.18584v3#bib.bib28)). In particular, child speech recognition holds great potential in educational settings, such as language learning applications, reading tutors, and interactive systems. However, despite the rapid advancements in ASR technology, the performance of most systems—whether state-of-the-art or commercial—remains suboptimal when applied to children's speech (Fan et al., [2024](https://arxiv.org/html/2409.18584v3#bib.bib15)).

ASR systems are predominantly trained on adult speech (Zhou et al., [2024](https://arxiv.org/html/2409.18584v3#bib.bib53)), making them highly effective for everyday interactions but ill-suited for children due to physiological differences in vocal tract development, higher pitch, and inconsistent pronunciation (Lee et al., [1997](https://arxiv.org/html/2409.18584v3#bib.bib31); Gerosa et al., [2009](https://arxiv.org/html/2409.18584v3#bib.bib19)). Children's speech also exhibits considerable variability in articulation, speech patterns, and vocabulary, further complicating the recognition process (Benzeghiba et al., [2007](https://arxiv.org/html/2409.18584v3#bib.bib4); Bhardwaj et al., [2022](https://arxiv.org/html/2409.18584v3#bib.bib5)). These challenges are compounded by the lack of sufficient child-specific training data, which is crucial for developing ASR systems that can accurately and reliably understand children's speech across different age groups. However, datasets focused on young children are extremely rare (Graave et al., [2024](https://arxiv.org/html/2409.18584v3#bib.bib20)). Most existing speech datasets either concentrate on adult speakers or cover older children, overlooking the unique linguistic and developmental characteristics of younger children. This gap is critical, as the scarcity of training data limits the ability of ASR systems to perform well on speech from this age group (Zhou et al., [2023](https://arxiv.org/html/2409.18584v3#bib.bib52)).

Although there are a few open-source Mandarin speech datasets for children (Xiangjun and Yip, [2017](https://arxiv.org/html/2409.18584v3#bib.bib47); Gao et al., [2012](https://arxiv.org/html/2409.18584v3#bib.bib16); Yu et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib49); Chen et al., [2016](https://arxiv.org/html/2409.18584v3#bib.bib7)), they are often limited in scope. For instance, the Tong Corpus (Xiangjun and Yip, [2017](https://arxiv.org/html/2409.18584v3#bib.bib47)) records the speech of a single child from ages 1;7 to 3;4, which is useful for certain research areas but insufficient for ASR development due to the lack of speaker diversity. Similarly, while the CASS CHILD corpus (Gao et al., [2012](https://arxiv.org/html/2409.18584v3#bib.bib16)) includes data from 23 children aged 1 to 4 years, of which only about 80 hours are transcribed, it is not publicly available, restricting its use in ASR research. Children's speech poses unique challenges, with frequent mispronunciations, ungrammatical expressions, and child-specific vocabulary. To address these issues, it is essential to collect data from a large number of speakers, ensuring substantial amounts of data per speaker to capture linguistic variability and improve the generalization of ASR models. Existing datasets, such as SingaKids-Mandarin (Chen et al., [2016](https://arxiv.org/html/2409.18584v3#bib.bib7)) and SLT-CSRC (Yu et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib49)), primarily focus on older children (aged 7-12), leaving a gap for younger age groups.

Constructing a dedicated speech dataset for young children is crucial. It addresses a significant gap in existing resources and provides a foundation for developing ASR systems specifically tailored to young children. In this paper, we introduce a Mandarin speech dataset designed for children aged 3 to 5, comprising 41.25 hours of speech from 397 speakers across 22 of China's 34 provincial-level administrative divisions. Our evaluations of ASR models and speaker verification (SV) tasks demonstrate substantial improvements, underscoring the dataset's effectiveness in advancing technology for children's speech. This dataset bridges the gap in age-specific speech data by incorporating a wide range of speakers and extensive regional diversity. It represents a valuable contribution to Mandarin child speech research and holds significant potential for applications in educational technology and child-computer interaction.
2 Related Work
--------------

### 2.1 Child Speech Recognition Corpora in Mandarin Chinese

Publicly available child speech corpora for Mandarin Chinese are highly limited, particularly for younger age groups, as shown in Table [1](https://arxiv.org/html/2409.18584v3#S0.T1 "Table 1"). The few existing datasets are either too small in terms of speakers or lack accessibility, which restricts their utility for developing robust ASR systems.

The Tong Corpus (Xiangjun and Yip, [2017](https://arxiv.org/html/2409.18584v3#bib.bib47)) is a longitudinal dataset that records the speech of a single child, Tong, with one hour of recordings per week from ages 1;7 to 3;4. Although this corpus is valuable for research on language acquisition, its use in ASR development is limited by its single-speaker nature, which cannot provide the diversity needed for model generalization.

Gao et al. ([2012](https://arxiv.org/html/2409.18584v3#bib.bib16)) collected the CASS CHILD dataset, which contains 631 hours of speech from 23 children aged 1 to 4 years. However, only about 80 hours of this dataset are labeled with transcriptions, and, critically, the dataset is not publicly accessible. This restricts its use in ASR experiments and highlights the difficulty of obtaining child speech corpora in Mandarin.

The SingaKids-Mandarin Corpus (Chen et al., [2016](https://arxiv.org/html/2409.18584v3#bib.bib7)) contains 75 hours of speech data from 255 children aged 7 to 12, which is suitable for ASR training, and it encompasses diverse linguistic contexts. However, it focuses exclusively on this older age group and does not address the speech of younger children, leaving a significant gap in Mandarin ASR research.

Another important dataset is SLT-CSRC (Yu et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib49)), which consists of two collections: SLT-CSRC C1 and C2. The former includes 28.6 hours of reading-style speech from 927 children aged 7 to 11, while the latter consists of 29.5 hours of conversational speech from 54 children aged 4 to 11. Although these datasets provide valuable speech data for Mandarin ASR, they were only available to participants of the SLT 2021 challenge and are no longer publicly accessible.

In summary, for Mandarin child speech, only the Tong Corpus and SingaKids-Mandarin datasets are available upon request, and both are limited in terms of speaker diversity and age range coverage. This lack of publicly accessible child speech corpora, particularly for younger children, continues to be a significant challenge in Mandarin ASR development.
### 2.2 Child Speech Corpora in Other Languages

In other languages, especially English, a wider variety of child speech corpora exists, as shown in Table [2](https://arxiv.org/html/2409.18584v3#S0.T2 "Table 2"). These corpora differ significantly in size, age range, and speaker diversity, reflecting various research priorities. However, many still lack sufficient coverage for younger children, a crucial age group for advancing ASR development.

English corpora, in particular, are among the most well-represented. For example, the Providence (Demuth et al., [2006](https://arxiv.org/html/2409.18584v3#bib.bib11)) and Lyon (Demuth and Tremblay, [2008](https://arxiv.org/html/2409.18584v3#bib.bib12)) corpora focus on early childhood speech (ages 1-3), offering 363 and 185 hours of recordings, respectively. Despite their extensive durations, these datasets are limited in the number of speakers, with only 6 and 4 children represented, respectively. On the other hand, larger datasets such as the MyST Corpus (Pradhan et al., [2024](https://arxiv.org/html/2409.18584v3#bib.bib37)) offer 393 hours of conversational speech from virtual tutoring sessions in elementary school science, collected from 1,371 children in grades 3 to 5. This broader speaker diversity is highly advantageous for training robust ASR systems.

Other notable English datasets include the CSLU Kids' Speech Corpus (Shobaki et al., [2007](https://arxiv.org/html/2409.18584v3#bib.bib42)), which features reading recordings of simple words, digits, and sentences from over 1,100 children from kindergarten through grade 10, and the TBALL Corpus (Kazemzadeh et al., [2005](https://arxiv.org/html/2409.18584v3#bib.bib27)), which contains speech from 256 children in kindergarten through grade 4. These datasets contribute valuable resources for developing ASR systems for various childhood age ranges and linguistic styles.

Child speech datasets in other languages are less common and typically smaller. For example, the Demuth Sesotho Corpus (Demuth, [1992](https://arxiv.org/html/2409.18584v3#bib.bib10)) offers 98 hours of speech from 59 children aged 2 to 4, focusing on a non-Indo-European language, while the CHIEDE corpus (Garrote and Moreno Sandoval, [2008](https://arxiv.org/html/2409.18584v3#bib.bib18)) contains around 8 hours of speech from 59 Spanish-speaking children aged 3 to 6. The IESC-Child Corpus (Pérez-Espinosa et al., [2020](https://arxiv.org/html/2409.18584v3#bib.bib36)) provides about 35 hours of Spanish speech from 174 children aged 6 to 11.

For European languages, the JASMIN-CGN Corpus (Cucchiarini et al., [2008](https://arxiv.org/html/2409.18584v3#bib.bib9)) offers 64 hours of Dutch speech from children aged 7 to 16, and the Swedish NICE Corpus (Bell et al., [2005](https://arxiv.org/html/2409.18584v3#bib.bib3)) features data from 5,580 children aged 8 to 15. Although the NICE Corpus stands out for its large number of speakers, the total duration of recordings is relatively short, and similar limitations regarding younger children persist across these corpora.

Although these corpora are valuable, they reveal a significant shortage of publicly accessible child speech datasets for many languages, particularly for younger children and non-European languages. This gap underscores the urgent need for diverse, well-annotated child speech corpora to support ASR systems capable of generalizing across different languages, age ranges, and regions.

Our Mandarin Chinese dataset alleviates this gap by focusing on children aged 3 to 5, a critical yet underrepresented age group in ASR research. With 397 speakers and 41.25 hours of diverse, geographically distributed speech data, it offers a significant contribution to the field, especially given the scarcity of similar datasets for young children in non-European languages.
Table 3: Summary of dataset splits, including the number of speakers (# Spk.) and utterances (# Utt.), total duration (Dur.), and average utterance length (Avg.).

![Figure 1](https://arxiv.org/html/2409.18584v3/x1.png)

Figure 1: Distribution of speakers by age and gender in our dataset
3 Dataset description
---------------------

### 3.1 Dataset details

The dataset consists of 41.25 hours of speech data with carefully crafted character-level manual transcriptions, collected from Mandarin-speaking children aged 3 to 5 years. The gender distribution is balanced across all age groups. To ensure geographic coverage, speakers were selected from different regions of China. A total of 397 speakers participated, representing 22 out of 34 provincial-level administrative divisions. Accents were classified into three categories: heavy (H), moderate (M), and light (L).

Our data collection occurred in a conversational context to promote natural interaction, with parents present throughout the sessions to provide emotional comfort and support for the children. The recording content was unrestricted, focusing on age-appropriate daily communication and ensuring that children engaged in familiar and non-stressful activities.

Prior to data collection, informed consent was obtained from the parents or legal guardians of all participants. The consent process included detailed explanations of the study's purpose, procedures, and the intended use of the data for academic research. Parents were explicitly informed of their right to withdraw consent at any time without any repercussions.

All recordings followed standardized collection and annotation protocols. Speech samples were captured using smartphones, with a nearly even split between Android (216) and iPhone (181) devices. Each session took place in a quiet indoor environment, with minimal background noise tolerated due to the young age of the participants. The recordings are in WAV PCM format, with a 16 kHz sampling rate and 16-bit precision, ensuring high-quality audio without clipping or volume inconsistencies. Silence segments of approximately 0.3 seconds were preserved at the beginning and end of each valid speech segment, and utterances containing fewer than three characters were excluded.
Character-level manual annotations were performed by professional transcribers, who adhered meticulously to the audio content, including stutters, disfluencies, and developmental speech patterns. Regional pronunciation variations were transcribed faithfully. Additionally, numbers were transcribed as pronounced, maintaining consistency with the intended meaning of the speech.

### 3.2 Statistics

![Figure 2](https://arxiv.org/html/2409.18584v3/x2.png)

Figure 2: Utterance-level and speaker-level duration distribution in our dataset

![Figure 3](https://arxiv.org/html/2409.18584v3/x3.png)

Figure 3: Geographic distribution of speakers in our dataset

As shown in Table [3](https://arxiv.org/html/2409.18584v3#S2.T3 "Table 3"), our dataset consists of three subsets: training (317 speakers), validation (39 speakers), and test (41 speakers), with no overlap between speakers across the subsets. We further analyze the distribution of speakers based on age, gender, birthplace, accent, and recording device.
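A speaker-disjoint partition like this is produced by splitting speaker IDs first and only then assigning utterances; a sketch (the helper and its arguments are illustrative, not the authors' actual split tooling):

```python
import random

def speaker_disjoint_split(utts, n_valid_spk, n_test_spk, seed=0):
    """Split (speaker_id, utt_id) pairs so that no speaker crosses subsets."""
    speakers = sorted({spk for spk, _ in utts})
    random.Random(seed).shuffle(speakers)
    test_spk = set(speakers[:n_test_spk])
    valid_spk = set(speakers[n_test_spk:n_test_spk + n_valid_spk])
    splits = {"train": [], "valid": [], "test": []}
    for spk, utt in utts:
        subset = ("test" if spk in test_spk
                  else "valid" if spk in valid_spk
                  else "train")
        splits[subset].append((spk, utt))
    return splits
```

Splitting at the speaker level (rather than the utterance level) is what guarantees the test CERs below reflect generalization to unseen children.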
The age and gender distribution in the dataset, depicted in Figure [1](https://arxiv.org/html/2409.18584v3#S2.F1 "Figure 1"), highlights a decrease in the number of speakers as age decreases, reflecting the challenges of recruiting younger participants. Despite this, the gender distribution remains balanced across all age groups.

The distribution of utterance lengths and total speaking duration per speaker is presented in Figure [2](https://arxiv.org/html/2409.18584v3#S3.F2 "Figure 2"). Most utterances are between 1 and 5 seconds long, with very few exceeding 10 seconds. Additionally, the majority of speakers have a total speaking duration between 200 and 600 seconds, which is essential for developing ASR systems tailored to young children.

The geographic distribution of speakers, spanning 22 of China's 34 provincial-level administrative divisions, is summarized in Figure [3](https://arxiv.org/html/2409.18584v3#S3.F3 "Figure 3"). Despite recruitment challenges, broad regional representation was achieved, with Shanxi contributing the highest number of participants (136), followed by Jiangsu (40) and Henan (39). Provinces such as Shaanxi, Shandong, and Hunan also contribute significantly. Although some regions, including Gansu, Heilongjiang, and Chongqing, have fewer participants, their inclusion enhances the dataset's comprehensive geographic coverage.

Speaker accents and recording devices are analyzed in Figure [4](https://arxiv.org/html/2409.18584v3#S3.F4 "Figure 4"). Accents are categorized into three levels: heavy (H), moderate (M), and light (L), with the majority of speakers exhibiting light accent variation. Only around 4% of speakers are categorized as having moderate or heavy accents. Furthermore, a balanced representation of iPhone and Android devices was achieved to support diverse ASR system requirements.

![Figure 4](https://arxiv.org/html/2409.18584v3/x4.png)

Figure 4: Proportions of accents and recording devices in our dataset
Table 4: Decoding performance (CER, %) of Transformer, Conformer, and Paraformer models trained from scratch

Table 5: Details of pre-trained baseline models. Enc and Dec stand for encoder and decoder, while Sup. and Self-sup. represent supervised and self-supervised learning. (B) and (L) denote the base and large versions.

4 Tasks and baselines
---------------------

In this section, we evaluate our dataset on both ASR and SV tasks.

### 4.1 Speech recognition

For child speech recognition, we trained several baseline models from scratch and fine-tuned pre-trained models to assess performance on our dataset. We use the Character Error Rate (CER, %) as the evaluation metric. Refer to Appendix [A](https://arxiv.org/html/2409.18584v3#A1 "Appendix A") for the complete hyperparameter configurations.
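CER is the character-level Levenshtein distance between hypothesis and reference, normalized by reference length; a minimal sketch:

```python
def cer(ref, hyp):
    """Character error rate: edit distance(ref, hyp) / len(ref)."""
    # d[j] holds the edit distance between the ref prefix seen so far
    # and the first j characters of hyp (single-row DP).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = d[0]
        d[0] = i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,          # deletion
                       d[j - 1] + 1,      # insertion
                       prev + (r != h))   # substitution (free on match)
            prev = cur
    return d[-1] / len(ref)
```

Because the unit is a single Mandarin character, CER for Chinese plays the role WER plays for space-delimited languages.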
#### 4.1.1 Baselines trained from scratch

We utilize the open-source Wenet toolkit (Yao et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib48)) to train ASR models from scratch. Three architectures are chosen: Transformer (Vaswani, [2017](https://arxiv.org/html/2409.18584v3#bib.bib44)), Conformer (Gulati et al., [2020](https://arxiv.org/html/2409.18584v3#bib.bib23)), and Paraformer (Gao et al., [2022](https://arxiv.org/html/2409.18584v3#bib.bib17)). These models incorporate different approaches, including Connectionist Temporal Classification (CTC) (Graves et al., [2006](https://arxiv.org/html/2409.18584v3#bib.bib22)), RNN-Transducer (RNN-T) (Graves, [2012](https://arxiv.org/html/2409.18584v3#bib.bib21)), and attention-based encoder-decoder (AED) (Chorowski et al., [2014](https://arxiv.org/html/2409.18584v3#bib.bib8); Chan et al., [2015](https://arxiv.org/html/2409.18584v3#bib.bib6)).

The following models are considered:

* Transformer: We trained the widely used Transformer model with joint CTC/AED training. The training process follows the recipe and configuration provided by Wenet.
* Conformer: The Conformer (Gulati et al., [2020](https://arxiv.org/html/2409.18584v3#bib.bib23)) integrates convolutions with self-attention for ASR. We trained two models, using CTC and RNN-T loss functions respectively, following the Wenet recipe.
* Paraformer: Proposed by Gao et al. ([2022](https://arxiv.org/html/2409.18584v3#bib.bib17)), Paraformer is a fast and accurate parallel transformer model.
#### 4.1.2 Results of training models from scratch

Table [4](https://arxiv.org/html/2409.18584v3#S4.T4 "Table 4") presents the results of models trained from scratch on our dataset, evaluated using various decoding methods provided by Wenet (Yao et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib48)). For Transformer and Conformer models with joint CTC and AED training (Kim et al., [2017](https://arxiv.org/html/2409.18584v3#bib.bib29)), we report CTC greedy and beam search decoding results. For Conformer models with RNN-T and attention loss, we include RNN-T greedy and beam search decoding results. All beam searches use a beam size of 10. Attention decoding and attention rescoring results are also reported for Transformer and Conformer.

Conformer with CTC-AED performs best overall, achieving the lowest CER of 27.38% with attention rescoring. Its CTC greedy and beam search methods yield nearly identical results (28.73% and 28.72%). In contrast, the Transformer model performs worse, with its best result being 32.15% CER from attention rescoring, while Paraformer achieves competitive results, particularly with beam search (28.94%). RNN-T for Conformer performs less effectively, with no significant improvement from attention rescoring. Overall, Conformer with CTC-AED provides the most reliable performance, especially with attention rescoring.
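The CTC greedy decoding reported above takes the argmax label at each frame, collapses consecutive repeats, and drops blanks; a toy sketch (the per-frame scores are made up for illustration):

```python
def ctc_greedy_decode(logits, blank=0):
    """logits: per-frame score lists. Returns the collapsed label sequence."""
    # 1. Best label per frame.
    frames = [max(range(len(f)), key=f.__getitem__) for f in logits]
    # 2. Merge consecutive repeats, then remove the blank symbol.
    out, prev = [], None
    for t in frames:
        if t != prev and t != blank:
            out.append(t)
        prev = t
    return out
```

Beam search differs only in keeping the top-k partial hypotheses per frame instead of a single argmax path, which is why the two scores in Table 4 are so close.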
#### 4.1.3 Pre-trained baselines

We evaluate our dataset using a range of pre-trained baselines, including both supervised and self-supervised models. The details of these baselines are summarized in Table [5](https://arxiv.org/html/2409.18584v3#S3.T5 "Table 5"). For the supervised baselines, we include Conformer pre-trained on WenetSpeech (Zhang et al., [2022](https://arxiv.org/html/2409.18584v3#bib.bib50)) and Whisper (Radford et al., [2023](https://arxiv.org/html/2409.18584v3#bib.bib39)). For the self-supervised models, we utilize Wav2vec 2.0 (Baevski et al., [2020](https://arxiv.org/html/2409.18584v3#bib.bib1)) and HuBERT (Hsu et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib25)), integrating a CTC decoder with the encoder to perform the ASR task.

* Whisper: Whisper (Radford et al., [2023](https://arxiv.org/html/2409.18584v3#bib.bib39)) is a Transformer-based multilingual ASR model trained on 680,000 hours of labeled speech data by OpenAI. We include various versions of Whisper, ranging from tiny to large, with model sizes from 39M to 1550M parameters ([https://github.com/openai/whisper](https://github.com/openai/whisper)).
Table 6: CER (%) of self-supervised pre-trained baselines with greedy and beam search decoding

Table 7: CER (%) of supervised pre-trained baselines in zero-shot and fine-tuned settings

#### 4.1.4 Results of fine-tuning pre-trained models

Table [6](https://arxiv.org/html/2409.18584v3#S4.T6 "Table 6") shows the CER for fine-tuning various self-supervised pre-trained models, including Wav2vec 2.0 and HuBERT, using both greedy and beam search decoding methods. HuBERT consistently outperforms Wav2vec 2.0, which is consistent with recent research (wen Yang et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib46)). Additionally, HuBERT (L) demonstrates better performance compared to its smaller counterpart, HuBERT (B). However, Wav2vec 2.0 (L) underperforms relative to Wav2vec 2.0 (B), likely due to overfitting, given the limited data size.

Table [7](https://arxiv.org/html/2409.18584v3#S4.T7 "Table 7") presents the CER results for Conformer-WenetSpeech (CW) and Whisper models in zero-shot and fine-tuning settings. Fine-tuning results in substantial CER improvements for all supervised models. Despite Whisper's large parameter size and extensive training data, the limited size of our dataset causes Whisper-medium to perform slightly worse than Whisper-small after fine-tuning. Overall, CW achieves the best performance in both zero-shot and fine-tuned settings, highlighting its robust ASR capabilities learned from WenetSpeech.
![Figure 5](https://arxiv.org/html/2409.18584v3/x5.png)

Figure 5: CER (%) comparison of zero-shot and fine-tuning methods using the CW model across different age-gender groups

Table 8: In-depth comparison of different error types (S: Substitutions, D: Deletions, I: Insertions) between zero-shot and fine-tuning methods using the CW model across different age-gender groups

Table 9: Results of fine-tuning baselines on the speaker verification task, where Dim indicates the dimension of the extracted embeddings and Dev represents the accuracy on the validation set.
#### 4.1.5 Performance Analysis

Figure [5](https://arxiv.org/html/2409.18584v3#S4.F5 "Figure 5") shows ASR performance across age and gender groups for the CW model. CER decreases with age, with 3-year-olds exhibiting higher error rates than 5-year-olds, reflecting the greater variability in younger children's speech. Fine-tuning significantly reduces CER across all age groups, demonstrating its effectiveness in adapting models to children's speech.

Gender Trends: Male speakers consistently exhibit higher CER than female speakers of the same age. This disparity may arise from greater pitch and articulation variability in young male children.

Error Types: We further investigate error types in Table [8](https://arxiv.org/html/2409.18584v3#S4.T8 "Table 8"). Substitutions dominate, followed by deletions and insertions. Younger children, particularly 3-year-olds, exhibit higher substitution and deletion rates, reflecting the challenges their speech poses for recognition.

In summary, age and gender notably influence ASR performance, with younger and male speakers posing greater challenges. Fine-tuning mitigates these issues, highlighting the importance of targeted adaptation strategies. Detailed utterance analysis can be found in Appendix [B](https://arxiv.org/html/2409.18584v3#A2 "Appendix B").
### 4.2 Speaker verification

In this section, we evaluate our dataset on the SV task. The evaluation is organized into three parts: dataset repartition, baselines, and results.

#### 4.2.1 Dataset repartition

For the speaker verification task, the training and validation sets were merged, resulting in a total of 356 speakers. This combined data was then split into new training and validation sets with a 9:1 ratio for each speaker, while the test set remained unchanged. Although the training and validation sets share speakers, their speech samples are distinct. Verification trials were generated entirely from the test set, consisting of 20,000 trials and 41 speakers, with positive and negative trials evenly distributed (50% each). The trials uniformly covered same-speaker pairs $(spk_a, spk_a)$ and different-speaker pairs $(spk_a, spk_b)$.
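A balanced trial list of this kind can be sampled from the test-set utterances; a sketch with hypothetical speaker and utterance IDs (not the paper's actual trial list):

```python
import random

def make_trials(utts_by_spk, n_trials, seed=0):
    """Sample 50% same-speaker and 50% different-speaker utterance pairs.

    utts_by_spk: dict mapping speaker id -> list of utterance ids.
    Returns a list of (label, utt_a, utt_b), where label 1 = same speaker.
    """
    rng = random.Random(seed)
    speakers = [s for s, u in utts_by_spk.items() if len(u) >= 2]
    trials = []
    for i in range(n_trials):
        if i % 2 == 0:  # positive trial (spk_a, spk_a)
            spk = rng.choice(speakers)
            a, b = rng.sample(utts_by_spk[spk], 2)
            trials.append((1, a, b))
        else:           # negative trial (spk_a, spk_b)
            sa, sb = rng.sample(list(utts_by_spk), 2)
            trials.append((0, rng.choice(utts_by_spk[sa]),
                              rng.choice(utts_by_spk[sb])))
    return trials
```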
191
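The repartition and trial generation described above can be sketched as follows; this is a minimal illustration under our own assumptions (the speaker/utterance IDs and the fixed seed are hypothetical), not the paper's actual data-preparation script:

```python
import random

def split_per_speaker(utts_by_spk, train_frac=0.9, seed=0):
    """Split each speaker's utterances 9:1 into train/valid, so both
    sets share speakers but contain distinct speech samples."""
    rng = random.Random(seed)
    train, valid = {}, {}
    for spk, utts in utts_by_spk.items():
        utts = sorted(utts)        # sort first so the shuffle is deterministic
        rng.shuffle(utts)
        k = int(len(utts) * train_frac)
        train[spk], valid[spk] = utts[:k], utts[k:]
    return train, valid

def make_trials(test_by_spk, n_trials=20000, seed=0):
    """Generate verification trials: 50% positive (same-speaker) and
    50% negative (different-speaker) pairs drawn from the test set.
    Assumes every speaker has at least two utterances."""
    rng = random.Random(seed)
    spks = list(test_by_spk)
    trials = []
    for _ in range(n_trials // 2):
        a = rng.choice(spks)
        u1, u2 = rng.sample(test_by_spk[a], 2)    # positive pair (spk_a, spk_a)
        trials.append((1, u1, u2))
        a, b = rng.sample(spks, 2)                # negative pair (spk_a, spk_b)
        trials.append((0, rng.choice(test_by_spk[a]),
                          rng.choice(test_by_spk[b])))
    return trials
```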
#### 4.2.2 Speaker verification baselines

In this study, three popular speaker embedding extractors, pre-trained on VoxCeleb (Nagrani et al., [2017](https://arxiv.org/html/2409.18584v3#bib.bib34)), were fine-tuned on our dataset: [x-vector](https://huggingface.co/speechbrain/spkrec-xvect-voxceleb) (Snyder et al., [2018](https://arxiv.org/html/2409.18584v3#bib.bib43)), [ECAPA-TDNN](https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb) (Desplanques et al., [2020](https://arxiv.org/html/2409.18584v3#bib.bib13)), and [ResNet-TDNN](https://huggingface.co/speechbrain/spkrec-resnet-voxceleb) (Villalba et al., [2020](https://arxiv.org/html/2409.18584v3#bib.bib45)). These models were implemented using the SpeechBrain (Ravanelli et al., [2021](https://arxiv.org/html/2409.18584v3#bib.bib41)) toolkit and fine-tuned for 40 epochs. The embeddings extracted from the verification trials were then used to evaluate the models’ performance on the SV task. Refer to Appendix [A](https://arxiv.org/html/2409.18584v3#A1) for the complete hyperparameter configurations.
#### 4.2.3 Results of speaker verification

For evaluation, two scoring methods were applied: Probabilistic Linear Discriminant Analysis (PLDA) (Prince and Elder, [2007](https://arxiv.org/html/2409.18584v3#bib.bib38)) and Cosine Similarity. Performance was measured using two metrics: Equal Error Rate (EER) and Minimum Detection Cost Function (minDCF). EER is computed by finding the verification threshold at which the false rejection rate $p_{miss}$ and the false acceptance rate $p_{fa}$ are equal, such that EER $= p_{fa} = p_{miss}$. The detection cost function (DCF) is calculated as:

$$C_\delta = c_{miss} \cdot p_{miss} \cdot p_{target} + c_{fa} \cdot p_{fa} \cdot (1 - p_{target})$$

where $c_{miss}$ is the cost of a false rejection, $c_{fa}$ is the cost of a false acceptance, and $p_{target}$ is the prior probability that the target speaker appears in the verification set. In this case, $c_{miss} = c_{fa} = 1$ and $p_{target} = 10^{-2}$.
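Given a list of trial scores and ground-truth labels, both metrics can be computed by sweeping the decision threshold. A minimal sketch under the costs stated above ($c_{miss} = c_{fa} = 1$, $p_{target} = 10^{-2}$), not SpeechBrain's implementation:

```python
import numpy as np

def eer_and_mindcf(scores, labels, p_target=0.01, c_miss=1.0, c_fa=1.0):
    """Sweep thresholds over the trial scores; return (EER, minDCF).

    scores: similarity scores, higher = more likely same speaker.
    labels: 1 for same-speaker (target) trials, 0 for different-speaker.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    thresholds = np.sort(np.unique(scores))
    # Miss rate: targets scored below the threshold; false-alarm rate:
    # non-targets scored at or above it.
    p_miss = np.array([(scores[labels == 1] < t).mean() for t in thresholds])
    p_fa = np.array([(scores[labels == 0] >= t).mean() for t in thresholds])
    # EER: the point where the two error rates cross.
    idx = np.argmin(np.abs(p_miss - p_fa))
    eer = (p_miss[idx] + p_fa[idx]) / 2
    # minDCF: minimum of the detection cost over all thresholds.
    dcf = c_miss * p_miss * p_target + c_fa * p_fa * (1 - p_target)
    return eer, dcf.min()
```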
Table [9](https://arxiv.org/html/2409.18584v3#S4.T9) summarizes the performance of the models on the dataset, with both PLDA and Cosine Similarity evaluated using the EER and minDCF metrics. Two key insights emerge from the results: First, the dataset proves to be well-suited for speaker-related tasks, as indicated by the strong performance of the three fine-tuned baseline models. However, the underdeveloped vocal characteristics of young children present challenges, potentially masking gender-related features and other distinguishing attributes. Second, due to the relatively small size of the dataset, the larger ECAPA-TDNN model underperformed ResNet-TDNN and x-vector, likely due to overfitting.
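For reference, Cosine Similarity scoring of two speaker embeddings reduces to a normalized dot product (PLDA scoring is more involved and omitted here); a minimal sketch, with any real embedding vectors standing in for the arguments:

```python
import numpy as np

def cosine_score(emb1, emb2):
    """Cosine similarity between two speaker embeddings in [-1, 1]."""
    emb1 = np.asarray(emb1, dtype=float)
    emb2 = np.asarray(emb2, dtype=float)
    return float(emb1 @ emb2 / (np.linalg.norm(emb1) * np.linalg.norm(emb2)))
```

Scores produced this way over the 20,000 trials are exactly the inputs the EER/minDCF evaluation consumes.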
5 Conclusion
------------

In conclusion, this paper introduces a valuable Mandarin speech dataset specifically designed for young children aged 3 to 5, addressing a crucial gap in ASR resources for this age group. Comprising 41.25 hours of speech data from 397 speakers across diverse provinces in China, the dataset ensures balanced gender representation and broad geographic coverage. Our evaluations of ASR models and speaker verification show significant improvements, highlighting the dataset’s effectiveness in advancing children’s speech technology. This work represents a significant contribution to Mandarin child speech research and holds great promise for applications in educational technology and child-computer interaction.
Limitations
-----------

Despite the dataset comprising 41.25 hours of speech data, it remains relatively small compared to adult speech datasets, which typically encompass much larger volumes. Additionally, while the dataset covers 22 provinces across China, the geographic distribution is not fully balanced, and expanding representation from underrepresented regions could improve diversity. Overfitting can occur when fine-tuning pre-trained models with a large number of parameters, particularly on smaller datasets. To address this, parameter-efficient fine-tuning methods like LoRA (Hu et al., [2022](https://arxiv.org/html/2409.18584v3#bib.bib26)) could be explored to enhance model performance.
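To make the LoRA idea concrete: the pre-trained weight stays frozen and only two small low-rank matrices are trained, so the number of trainable parameters drops sharply. A minimal numpy sketch of a LoRA-adapted linear layer (our own illustration, not tied to any specific ASR model; the class name and defaults are hypothetical):

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update (alpha/r) * B @ A."""

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # (out, in), frozen
        out_dim, in_dim = W.shape
        self.A = rng.normal(0, 0.01, (r, in_dim))    # trainable, small init
        self.B = np.zeros((out_dim, r))              # trainable, zero init
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + scale * B (A x); because B starts at zero, the layer
        # reproduces the pre-trained output exactly at initialization.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Zero-initializing `B` is the standard LoRA trick: fine-tuning starts from the pre-trained model's behavior and learns only the low-rank correction.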
Ethics Statement
----------------

This study adhered to strict ethical standards to safeguard the well-being and rights of participants. Recordings were conducted in a conversational context to encourage natural interaction, with parents present to provide emotional support. The content was unrestricted, focusing on age-appropriate, familiar communication to ensure a stress-free environment.

Informed consent was obtained from parents or legal guardians, who were fully briefed on the study’s purpose, procedures, and data use for academic research. They were informed of their right to withdraw at any time. Each child received a fair compensation of 150 RMB (about $20 USD), carefully calibrated to avoid undue influence.

To protect privacy, all data was anonymized, removing personal identifiers and replacing them with coded labels. The dataset is securely stored, with access restricted to authorized researchers for academic purposes. The publicly available dataset will be licensed to prohibit commercial use and ensure compliance with ethical research practices.

While the study posed minimal risks, measures such as parental presence and familiar settings were implemented to ensure children’s psychological comfort. This dataset aims to advance automatic speech recognition (ASR) technologies for underrepresented groups like young children. However, we recognize the potential misuse of ASR technologies and have taken steps to mitigate risks by restricting dataset use to academic research and promoting ethical applications.

In summary, this study emphasizes informed consent, privacy protection, fair compensation, and ethical use of the data, ensuring respect for participants’ rights and well-being while contributing responsibly to the research community.
References
----------

* Baevski et al. (2020) Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. _Advances in neural information processing systems_, 33:12449–12460.
* Batliner et al. (2005) Anton Batliner, Mats Blomberg, Shona D’Arcy, Daniel Elenius, Diego Giuliani, Matteo Gerosa, Christian Hacker, Martin Russell, Stefan Steidl, and Michael Wong. 2005. [The pf_star children’s speech corpus](https://doi.org/10.21437/Interspeech.2005-705). In _Interspeech 2005_, pages 2761–2764.
* Bell et al. (2005) Linda Bell, Johan Boye, Joakim Gustafson, Mattias Heldner, Anders Lindström, and Mats Wirén. 2005. The swedish nice corpus–spoken dialogues between children and embodied characters in a computer game scenario. In _Interspeech 2005-Eurospeech, 9th European Conference on Speech Communication and Technology, Lisbon, Portugal, September 4-8, 2005_, pages 2765–2768. ISCA.
* Benzeghiba et al. (2007) Mohamed Benzeghiba, Renato De Mori, Olivier Deroo, Stephane Dupont, Teodora Erbes, Denis Jouvet, Luciano Fissore, Pietro Laface, Alfred Mertins, Christophe Ris, et al. 2007. Automatic speech recognition and speech variability: A review. _Speech communication_, 49(10-11):763–786.
* Bhardwaj et al. (2022) Vivek Bhardwaj, Mohamed Tahar Ben Othman, Vinay Kukreja, Youcef Belkhier, Mohit Bajaj, B Srikanth Goud, Ateeq Ur Rehman, Muhammad Shafiq, and Habib Hamam. 2022. Automatic speech recognition (asr) systems for children: A systematic literature review. _Applied Sciences_, 12(9):4419.
* Chan et al. (2015) William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. 2015. Listen, attend and spell. _arXiv preprint arXiv:1508.01211_.
* Chen et al. (2016) Nancy F Chen, Rong Tong, Darren Wee, Pei Xuan Lee, Bin Ma, and Haizhou Li. 2016. Singakids-mandarin: Speech corpus of singaporean children speaking mandarin chinese. In _Interspeech_, pages 1545–1549.
* Chorowski et al. (2014) Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. End-to-end continuous speech recognition using attention-based recurrent nn: First results. _arXiv preprint arXiv:1412.1602_.
* Cucchiarini et al. (2008) Catia Cucchiarini, Joris Driesen, H Van Hamme, and EP Sanders. 2008. Recording speech of children, non-natives and elderly people for hlt applications: the jasmin-cgn corpus.
* Demuth (1992) Katherine Demuth. 1992. Acquisition of sesotho. In _The Cross-Linguistic Study of Language Acquisition_, pages 557–638. Lawrence Erlbaum Associates.
* Demuth et al. (2006) Katherine Demuth, Jennifer Culbertson, and Jennifer Alter. 2006. Word-minimality, epenthesis and coda licensing in the early acquisition of english. _Language and speech_, 49(2):137–173.
* Demuth and Tremblay (2008) Katherine Demuth and Annie Tremblay. 2008. Prosodically-conditioned variability in children’s production of french determiners. _Journal of child language_, 35(1):99–127.
* Desplanques et al. (2020) Brecht Desplanques, Jenthe Thienpondt, and Kris Demuynck. 2020. Ecapa-tdnn: Emphasized channel attention, propagation and aggregation in tdnn based speaker verification. _arXiv preprint arXiv:2005.07143_.
* Eskenazi et al. (1997) Maxine Eskenazi, Jack Mostow, and David Graff. 1997. The cmu kids corpus. _Linguistic Data Consortium_, 11.
* Fan et al. (2024) Ruchao Fan, Natarajan Balaji Shankar, and Abeer Alwan. 2024. [Benchmarking children’s asr with supervised and self-supervised speech foundation models](https://doi.org/10.21437/Interspeech.2024-1353). In _Interspeech 2024_, pages 5173–5177.
* Gao et al. (2012) Jun Gao, Aijun Li, and Ziyu Xiong. 2012. Mandarin multimedia child speech corpus: Cass_child. In _2012 International Conference on Speech Database and Assessments_, pages 7–12. IEEE.
* Gao et al. (2022) Zhifu Gao, ShiLiang Zhang, Ian McLoughlin, and Zhijie Yan. 2022. [Paraformer: Fast and accurate parallel transformer for non-autoregressive end-to-end speech recognition](https://doi.org/10.21437/Interspeech.2022-9996). In _Interspeech 2022_, pages 2063–2067.
* Garrote and Moreno Sandoval (2008) Marta Garrote and A Moreno Sandoval. 2008. Chiede, a spontaneous child language corpus of spanish. In _Proceedings of the 3rd International LABLITA Workshop in Corpus Linguistics_.
* Gerosa et al. (2009) Matteo Gerosa, Diego Giuliani, Shrikanth Narayanan, and Alexandros Potamianos. 2009. A review of asr technologies for children’s speech. In _Proceedings of the 2nd Workshop on Child, Computer and Interaction_, pages 1–8.
* Graave et al. (2024) Thomas Graave, Zhengyang Li, Timo Lohrenz, and Tim Fingscheidt. 2024. [Mixed children/adult/childrenized fine-tuning for children’s asr: How to reduce age mismatch and speaking style mismatch](https://doi.org/10.21437/Interspeech.2024-499). In _Interspeech 2024_, pages 5188–5192.
* Graves (2012) Alex Graves. 2012. Sequence transduction with recurrent neural networks. _arXiv preprint arXiv:1211.3711_.
* Graves et al. (2006) Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In _ICML_, pages 369–376.
* Gulati et al. (2020) Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, and Ruoming Pang. 2020. [Conformer: Convolution-augmented transformer for speech recognition](https://doi.org/10.21437/Interspeech.2020-3015). In _Interspeech 2020_, pages 5036–5040.
* Hagen et al. (2003) Andreas Hagen, Bryan Pellom, and Ronald Cole. 2003. Children’s speech recognition with application to interactive books and tutors. In _2003 IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE Cat. No. 03EX721)_, pages 186–191. IEEE.
* Hsu et al. (2021) Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. _IEEE/ACM transactions on audio, speech, and language processing_, 29:3451–3460.
* Hu et al. (2022) Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. [LoRA: Low-rank adaptation of large language models](https://openreview.net/forum?id=nZeVKeeFYf9). In _International Conference on Learning Representations_.
* Kazemzadeh et al. (2005) Abe Kazemzadeh, Hong You, Markus Iseli, Barbara Jones, Xiaodong Cui, Margaret Heritage, Patti Price, Elaine Andersen, Shrikanth S Narayanan, and Abeer Alwan. 2005. Tball data collection: the making of a young children’s speech corpus. In _Interspeech_, pages 1581–1584.
* Kennedy et al. (2017) James Kennedy, Séverin Lemaignan, Caroline Montassier, Pauline Lavalade, Bahar Irfan, Fotios Papadopoulos, Emmanuel Senft, and Tony Belpaeme. 2017. Child speech recognition in human-robot interaction: evaluations and recommendations. In _Proceedings of the 2017 ACM/IEEE international conference on human-robot interaction_, pages 82–90.
* Kim et al. (2017) Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2017. Joint ctc-attention based end-to-end speech recognition using multi-task learning. In _2017 IEEE international conference on acoustics, speech and signal processing (ICASSP)_, pages 4835–4839. IEEE.
* Kruyt et al. (2024) Joanna Kruyt, Róbert Sabo, Katarína Polónyiová, Daniela Ostatníková, and Štefan Beňuš. 2024. The slovak autistic and non-autistic child speech corpus: Task-oriented child-adult interactions. In _Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)_, pages 16094–16099.
* Lee et al. (1997) Sungbok Lee, Alexandros Potamianos, and Shrikanth Narayanan. 1997. Analysis of children’s speech: Duration, pitch and formants. In _Fifth European Conference on Speech Communication and Technology_.
* Lee et al. (1999) Sungbok Lee, Alexandros Potamianos, and Shrikanth Narayanan. 1999. Acoustics of children’s speech: Developmental changes of temporal and spectral parameters. _The Journal of the Acoustical Society of America_, 105(3):1455–1468.
* Leonard and Doddington (1993) R. Gary Leonard and George Doddington. 1993. Tidigits ldc93s10. _Linguistic Data Consortium_.
* Nagrani et al. (2017) Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. 2017. [Voxceleb: A large-scale speaker identification dataset](https://doi.org/10.21437/Interspeech.2017-950). In _Interspeech 2017_, pages 2616–2620.
* Pascual and Guevara (2012) Ronald M Pascual and Rowena Cristina L Guevara. 2012. Developing a children’s filipino speech corpus for application in automatic detection of reading miscues and disfluencies. In _TENCON 2012 IEEE Region 10 Conference_, pages 1–6. IEEE.
* Pérez-Espinosa et al. (2020) Humberto Pérez-Espinosa, Juan Martínez-Miranda, Ismael Espinosa-Curiel, Josefina Rodríguez-Jacobo, Luis Villaseñor-Pineda, and Himer Avila-George. 2020. Iesc-child: an interactive emotional children’s speech corpus. _Computer Speech & Language_, 59:55–74.
* Pradhan et al. (2024) Sameer Pradhan, Ronald Cole, and Wayne Ward. 2024. My science tutor (myst)–a large corpus of children’s conversational speech. In _Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)_, pages 12040–12045.
* Prince and Elder (2007) Simon JD Prince and James H Elder. 2007. Probabilistic linear discriminant analysis for inferences about identity. In _2007 IEEE 11th international conference on computer vision_, pages 1–8. IEEE.
* Radford et al. (2023) Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In _International conference on machine learning_, pages 28492–28518. PMLR.
* Radha and Bansal (2022) Kodali Radha and Mohan Bansal. 2022. Audio augmentation for non-native children’s speech recognition through discriminative learning. _Entropy_, 24(10):1490.
* Ravanelli et al. (2021) Mirco Ravanelli, Titouan Parcollet, Peter Plantinga, Aku Rouhe, Samuele Cornell, Loren Lugosch, Cem Subakan, Nauman Dawalatabad, Abdelwahab Heba, Jianyuan Zhong, et al. 2021. Speechbrain: A general-purpose speech toolkit. _arXiv preprint arXiv:2106.04624_.
* Shobaki et al. (2007) Khaldoun Shobaki, John-Paul Hosom, and Ronald Cole. 2007. Cslu: Kids’ speech version 1.1. In _Linguistic Data Consortium_.
* Snyder et al. (2018) David Snyder, Daniel Garcia-Romero, Gregory Sell, Daniel Povey, and Sanjeev Khudanpur. 2018. X-vectors: Robust dnn embeddings for speaker recognition. In _2018 IEEE international conference on acoustics, speech and signal processing (ICASSP)_, pages 5329–5333. IEEE.
* Vaswani (2017) A Vaswani. 2017. Attention is all you need. _Advances in Neural Information Processing Systems_.
* Villalba et al. (2020) Jesús Villalba, Nanxin Chen, David Snyder, Daniel Garcia-Romero, Alan McCree, Gregory Sell, Jonas Borgstrom, Leibny Paola García-Perera, Fred Richardson, Réda Dehak, Pedro A. Torres-Carrasquillo, and Najim Dehak. 2020. [State-of-the-art speaker recognition with neural network embeddings in nist sre18 and speakers in the wild evaluations](https://doi.org/10.1016/j.csl.2019.101026). _Computer Speech & Language_, 60:101026.
* wen Yang et al. (2021) Shu-wen Yang, Po-Han Chi, Yung-Sung Chuang, Cheng-I Jeff Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu-Hsien Huang, Wei-Cheng Tseng, Ko-tik Lee, Da-Rong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdelrahman Mohamed, and Hung-yi Lee. 2021. [Superb: Speech processing universal performance benchmark](https://doi.org/10.21437/Interspeech.2021-1775). In _Interspeech 2021_, pages 1194–1198.
* Xiangjun and Yip (2017) Deng Xiangjun and Virginia Yip. 2017. A multimedia corpus of child mandarin: The tong corpus. _Journal of Chinese Linguistics_.
* Yao et al. (2021) Zhuoyuan Yao, Di Wu, Xiong Wang, Binbin Zhang, Fan Yu, Chao Yang, Zhendong Peng, Xiaoyu Chen, Lei Xie, and Xin Lei. 2021. [Wenet: Production oriented streaming and non-streaming end-to-end speech recognition toolkit](https://doi.org/10.21437/Interspeech.2021-1983). In _Interspeech 2021_, pages 4054–4058.
* Yu et al. (2021) Fan Yu, Zhuoyuan Yao, Xiong Wang, Keyu An, Lei Xie, Zhijian Ou, Bo Liu, Xiulin Li, and Guanqiong Miao. 2021. The slt 2021 children speech recognition challenge: Open datasets, rules and baselines. In _2021 IEEE Spoken Language Technology Workshop (SLT)_, pages 1117–1123. IEEE.
* Zhang et al. (2022) Binbin Zhang, Hang Lv, Pengcheng Guo, Qijie Shao, Chao Yang, Lei Xie, Xin Xu, Hui Bu, Xiaoyu Chen, Chenchen Zeng, et al. 2022. Wenetspeech: A 10000+ hours multi-domain mandarin corpus for speech recognition. In _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 6182–6186. IEEE.
* Zhang et al. (2021) Junbo Zhang, Zhiwen Zhang, Yongqing Wang, Zhiyong Yan, Qiong Song, Yukai Huang, Ke Li, Daniel Povey, and Yujun Wang. 2021. [speechocean762: An open-source non-native english speech corpus for pronunciation assessment](https://doi.org/10.21437/Interspeech.2021-1259). In _Interspeech 2021_, pages 3710–3714.
* Zhou et al. (2023) Jiaming Zhou, Shiwan Zhao, Ning Jiang, Guoqing Zhao, and Yong Qin. 2023. Madi: Inter-domain matching and intra-domain discrimination for cross-domain speech recognition. In _ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 1–5. IEEE.
* Zhou et al. (2024) Jiaming Zhou, Shiwan Zhao, Yaqi Liu, Wenjia Zeng, Yong Chen, and Yong Qin. 2024. knn-ctc: Enhancing asr via retrieval of ctc pseudo labels. In _ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_, pages 11006–11010. IEEE.
Appendix A Experimental configurations
--------------------------------------

Table A.1: Hyperparameters for training ASR models from scratch.

Table A.2: Hyperparameters for fine-tuning pre-trained ASR models.

Table A.3: Hyperparameters for training speaker verification models.

This section provides detailed configurations and hyperparameters used for training and fine-tuning the ASR and speaker verification (SV) models discussed in the paper. All experiments were conducted using four RTX 3090 or RTX 4090 GPUs over several hours.
### A.1 ASR model training from scratch

The hyperparameters for training ASR models from scratch are summarized in Table [A.1](https://arxiv.org/html/2409.18584v3#A1.T1). These models were trained using the Wenet toolkit with the configurations shown below.

### A.2 ASR model fine-tuning

Table [A.2](https://arxiv.org/html/2409.18584v3#A1.T2) presents the fine-tuning hyperparameters for pre-trained ASR models, including Wav2vec 2.0, HuBERT, Whisper, and Conformer-WenetSpeech. Fine-tuning was performed using the training subset of our dataset.

### A.3 Speaker verification (SV) model training

Table [A.3](https://arxiv.org/html/2409.18584v3#A1.T3) provides the training configurations for speaker verification models, including ECAPA-TDNN, ResNet-TDNN, and x-vector. These models were trained and evaluated on our dataset for speaker verification tasks.
Appendix B Analysis of fine-tuning performance on specific utterances
---------------------------------------------------------------------

As presented in Figure [B.1](https://arxiv.org/html/2409.18584v3#A2.F1), the fine-tuning process significantly improved the ASR model’s performance across various utterances, with a clear reduction in character error rate (CER). In general, fine-tuning allowed the model to adapt to specific child speech variations, addressing common issues such as phoneme substitutions and mispronunciations. Despite these improvements, some residual errors were still observed, particularly for more complex or longer utterances. Overall, the results demonstrate the effectiveness of fine-tuning for enhancing ASR performance on child speech, though further optimization is necessary to fully address all challenges.

![Figure B.1](https://arxiv.org/html/2409.18584v3/x6.png)

Figure B.1: Performance comparison of zero-shot and fine-tuned models on specific utterances.