---
license: cc-by-4.0
language:
- zh
- en
tags:
- audio
- speech
---

### Dataset Description

This dataset consists of 50 short audio clips of Mandarin Chinese sentences recorded by a 21-year-old female native speaker. The sentences cover everyday topics such as greetings, weather, numbers, and travel inquiries.

The total duration of the dataset is approximately 4 minutes. All audio was recorded in a quiet environment, segmented into sentence-level clips, and saved as 16-bit PCM WAV files at 44.1 kHz. The dataset is intended for educational use in computational linguistics and for Text-to-Speech (TTS) model training.

### Issues Encountered & Solutions

During data preparation, I encountered two main technical issues:

1. Handling Breath Sounds: I noticed that my breathing sounds were quite audible between phrases. Including them would introduce unnecessary noise into the dataset.

   Solution: Instead of overwriting the breaths with digital silence, which might sound unnatural, I adopted a precise segmentation strategy. When selecting the start and end points of each clip in Praat, I carefully adjusted the boundaries to exclude the audible inhalations, ensuring each file begins and ends cleanly with the speech signal.
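   The same boundary-trimming idea can be approximated automatically with a simple energy threshold. A rough sketch in Python (the boundaries in this dataset were set by hand in Praat; the frame size and threshold here are illustrative guesses that would need tuning per recording setup):

   ```python
   def trim_to_speech(samples, rate, frame_ms=20, threshold=500):
       """Drop low-energy frames (room tone, soft inhalations) from both
       ends of a sequence of 16-bit PCM samples.

       threshold is the mean absolute amplitude (on the -32768..32767
       scale) below which a frame counts as non-speech.
       """
       frame = max(1, int(rate * frame_ms / 1000))
       energies = [
           sum(abs(s) for s in samples[i:i + frame]) / min(frame, len(samples) - i)
           for i in range(0, len(samples), frame)
       ]
       active = [i for i, e in enumerate(energies) if e >= threshold]
       if not active:
           return []
       return samples[active[0] * frame:(active[-1] + 1) * frame]
   ```

   Manual inspection remains more reliable for a small 50-clip dataset, since a fixed threshold can clip soft speech onsets.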

2. Inconsistent Amplitude: Initially, some sentences were recorded louder than others because my distance from the microphone varied slightly.

   Solution: I re-recorded the sentences that were too quiet, keeping a fixed distance (about 15 cm) from the microphone. I also visually checked the waveforms in Praat to confirm a healthy volume level without clipping (peaks staying within the 0.3 to 0.7 range).
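   That waveform check can also be scripted. A small sketch, assuming 16-bit PCM samples and treating the 0.3 to 0.7 window as the target peak range on a normalized 0..1 scale (function names are illustrative):

   ```python
   def peak_level(samples):
       """Peak amplitude of 16-bit PCM samples, scaled to 0..1."""
       return max(abs(s) for s in samples) / 32768.0

   def healthy_level(samples, lo=0.3, hi=0.7):
       """True if the clip peaks inside the target window: loud enough
       to be usable, with comfortable headroom before clipping (1.0)."""
       return lo <= peak_level(samples) <= hi
   ```

   Flagging clips that fall outside the window makes it easy to decide which sentences need re-recording rather than judging each waveform by eye.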