- split: validated
  path: data/validated-*
---
# Improving CommonVoice 17 Turkish Dataset

I recently worked on enhancing the Mozilla CommonVoice 17 Turkish dataset to create a higher-quality training set for speech recognition models. Here's an overview of my process and findings.

## Initial Analysis and Split Organization

My first step was analyzing the dataset's organization to understand its structure. Using filename stems as unique keys, I documented an important aspect of CommonVoice's design that may not be immediately clear to all users:

- The validated set (113,699 total files) completely contained all samples from:
  - Train split (35,035 files)
  - Test split (11,290 files)
  - Validation split (11,247 files)
- Additionally, the validated set had ~56K unique samples not present in any other split

This design follows CommonVoice's documentation, where dev/test/train are carefully reviewed subsets of the validated data. However, this structure needs to be clearly understood to avoid potential data leakage when working with the dataset. For example, using the validated set for training while evaluating on the test split would be problematic, since the test data is already included in the validated set.

To create a clean dataset without overlaps, I:

1. Identified all overlapping samples, using filename stems as unique keys
2. Removed samples already present in the train/test/validation splits from the validated set
3. Created a clean, non-overlapping validated split containing only unique samples
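
The de-duplication step above can be sketched in a few lines. This is a minimal illustration, assuming each split is available as a list of audio file paths; the function and variable names are my own, not from the actual processing scripts:

```python
from pathlib import Path

def stem_set(paths):
    """Unique filename stems (e.g. 'clip_123' from 'clips/clip_123.mp3')."""
    return {Path(p).stem for p in paths}

def dedupe_validated(validated, train, test, validation):
    """Keep only validated samples whose stems appear in no other split."""
    reserved = stem_set(train) | stem_set(test) | stem_set(validation)
    return [p for p in validated if Path(p).stem not in reserved]

# Toy example with hypothetical paths:
validated = ["clips/a.mp3", "clips/b.mp3", "clips/c.mp3"]
clean = dedupe_validated(validated, ["clips/a.mp3"], ["clips/b.mp3"], [])
# 'clean' keeps only clips/c.mp3
```

Using filename stems (rather than full paths) makes the comparison robust to clips living in different directories or having different extensions across splits.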

This approach ensures that researchers can either:

- Use the original train/test/dev splits as curated by CommonVoice, OR
- Use my cleaned validated set with their own custom splits

Both approaches are valid, but mixing them could lead to evaluation issues.

## Audio Processing and Quality Improvements

### Silence Trimming

I processed all audio files to remove unnecessary silence and noise:

- Used Silero VAD with a threshold of 0.6 to detect speech segments
- Trimmed leading and trailing silences
- Removed microphone noise and clicks at clip boundaries
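
The boundary trimming can be sketched as follows. This assumes speech segments have already been detected by Silero VAD's `get_speech_timestamps` (a list of `{'start', 'end'}` dicts in sample indices); the trimming itself is then plain slicing. The function is an illustration, not the exact code used:

```python
def trim_to_speech(samples, speech_timestamps, padding=0):
    """Cut leading/trailing non-speech given VAD speech segments.

    samples: sequence of audio samples
    speech_timestamps: [{'start': int, 'end': int}, ...] in sample indices,
        in the format returned by Silero VAD's get_speech_timestamps
    """
    if not speech_timestamps:
        return samples[:0]  # no speech detected: drop the clip entirely
    start = max(speech_timestamps[0]["start"] - padding, 0)
    end = min(speech_timestamps[-1]["end"] + padding, len(samples))
    return samples[start:end]

# Toy example: fake audio of 100 samples, speech between samples 10 and 90
audio = list(range(100))
trimmed = trim_to_speech(audio, [{"start": 10, "end": 90}])
```

A small `padding` can be kept around the detected speech to avoid clipping soft onsets; clips with no detected speech at all are dropped.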

### Duration Filtering and Analysis

I analyzed each split separately after trimming silences. Here are the detailed findings per split:

| Split | Files Before | Files After | Short Files | Duration Before (hrs) | Duration After (hrs) | Duration Reduction % | Short Files Duration (hrs) | Files Reduction % |
|---|--:|--:|--:|--:|--:|--:|--:|--:|
| Train | 35,035 | 26,501 | 8,633 | 35.49 | 19.84 | 44.1% | 2.00 | 24.4% |
| Validation | 11,247 | 8,640 | 2,609 | 11.17 | 6.27 | 43.9% | 0.60 | 23.2% |
| Test | 11,290 | 9,651 | 1,626 | 13.01 | 7.34 | 43.6% | 0.37 | 14.5% |
| Validated | 56,127 | 46,348 | 9,991 | 56.71 | 32.69 | 42.4% | 2.29 | 17.4% |
| **Total** | **113,699** | **91,140** | **22,859** | **116.38** | **66.14** | **43.2%** | **5.26** | **19.8%** |

Note: Files shorter than 1.0 seconds were removed from the dataset.

#### Validation Split Analysis (formerly Eval)
- Original files: 11,247
- Found 2,609 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 36.26 minutes
  - Average duration: 0.83 seconds
  - Shortest file: 0.65 seconds
  - Longest file: 0.97 seconds

#### Train Split Analysis
- Original files: 35,035
- Found 8,633 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 2.00 hours
  - Average duration: 0.82 seconds
  - Shortest file: 0.08 seconds
  - Longest file: 0.97 seconds

#### Test Split Analysis
- Original files: 11,290
- Found 1,626 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 0.37 hours
  - Average duration: 0.85 seconds
  - Shortest file: 0.65 seconds
  - Longest file: 0.97 seconds

#### Validated Split Analysis
- Original files: 56,127
- Found 9,991 files shorter than 1.0s
- Statistics for short files:
  - Total duration: 2.29 hours
  - Average duration: 0.83 seconds
  - Shortest file: 0.65 seconds
  - Longest file: 0.97 seconds

All short clips were removed from the dataset to ensure consistent quality. The final dataset contains only clips longer than 1.0 seconds, with average durations between 2.54 and 2.69 seconds across splits.
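
The duration filter itself is simple once per-clip durations are known (however they are measured, e.g. with torchaudio or soundfile). A sketch over hypothetical `(path, duration_seconds)` pairs:

```python
def filter_short_clips(clips, min_duration=1.0):
    """Partition clips into (kept, removed) by a minimum duration in seconds.

    clips: iterable of (path, duration_seconds) pairs; durations are
    assumed to have been measured after silence trimming.
    """
    kept, removed = [], []
    for path, duration in clips:
        (kept if duration >= min_duration else removed).append((path, duration))
    return kept, removed

# Toy example:
clips = [("a.mp3", 2.4), ("b.mp3", 0.8), ("c.mp3", 1.0)]
kept, removed = filter_short_clips(clips)
# kept: a.mp3 and c.mp3; removed: b.mp3
```

Keeping the removed list around (rather than discarding it) is what makes the per-split short-file statistics above easy to report.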

### Final Split Statistics

The cleaned dataset was organized into the following splits:

| Split | Files | Duration (hours) | Avg Duration (s) | Min Duration (s) | Max Duration (s) |
|------------|--------|------------------|------------------|------------------|------------------|
| Train | 26,501 | 19.84 | 2.69 | 1.04 | 9.58 |
| Test | 9,650 | 7.33 | 2.74 | 1.08 | 9.29 |
| Validation | 8,639 | 6.27 | 2.61 | 1.04 | 9.18 |
| Validated | 46,345 | 32.69 | 2.54 | 1.04 | 9.07 |

- Total files processed: 91,135
- Valid entries created: 91,135
- Files skipped: 0
- Total dataset duration: 66.13 hours
- Average duration across all splits: 2.61 seconds

The dataset was processed in the following order:

1. Train split (26,501 files)
2. Test split (9,650 files)
3. Validation split (8,639 files)
4. Validated split (46,348 files)

Note: The validation split (sometimes referred to as the "eval" split in CommonVoice documentation) serves the same purpose: it's a held-out set for model validation during training. I've standardized the naming to "validation" throughout this documentation for consistency with common machine learning terminology.

One text file in the validated split was flagged for being too short (2 characters) but was still included in the final dataset.

The processed dataset was saved as `commonvoice_17_tr_fixed`, with the corresponding split metrics in JSON format.

### Detailed Split Metrics (JSON)

```json
{
  "train": {
    "file_count": 26501,
    "total_duration_hours": 19.84,
    "min_duration_seconds": 1.04,
    "max_duration_seconds": 9.58,
    "avg_duration_seconds": 2.69
  },
  "test": {
    "file_count": 9650,
    "total_duration_hours": 7.33,
    "min_duration_seconds": 1.08,
    "max_duration_seconds": 9.29,
    "avg_duration_seconds": 2.74
  },
  "validation": {
    "file_count": 8639,
    "total_duration_hours": 6.27,
    "min_duration_seconds": 1.04,
    "max_duration_seconds": 9.18,
    "avg_duration_seconds": 2.61
  },
  "validated": {
    "file_count": 46345,
    "total_duration_hours": 32.69,
    "min_duration_seconds": 1.04,
    "max_duration_seconds": 9.07,
    "avg_duration_seconds": 2.54
  }
}
```

This JSON format makes it easy to use these metrics programmatically in other tools and analyses.
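
Metrics in this shape can be regenerated from a list of per-clip durations for each split. A sketch, using the same field names as the JSON above (the helper name is my own):

```python
import json

def split_metrics(durations_seconds):
    """Compute the metric fields used above from a list of clip durations."""
    return {
        "file_count": len(durations_seconds),
        "total_duration_hours": round(sum(durations_seconds) / 3600, 2),
        "min_duration_seconds": round(min(durations_seconds), 2),
        "max_duration_seconds": round(max(durations_seconds), 2),
        "avg_duration_seconds": round(sum(durations_seconds) / len(durations_seconds), 2),
    }

# Toy example with three fake clips:
metrics = {"train": split_metrics([1.5, 2.0, 2.5])}
print(json.dumps(metrics, indent=2))
```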

## Text Processing and Standardization

### Character Set Optimization

- Created a comprehensive charset from all text labels
- Simplified the character set by:
  - Standardizing quotation marks
  - Removing infrequently used special characters
  - Normalizing Turkish-specific characters
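
The charset pass can be sketched as below, assuming the labels are available as a list of strings. The quote-replacement map is illustrative, not the exact one used, and the rare-character threshold is an assumed parameter:

```python
from collections import Counter

# Illustrative mapping: curly quotes to their straight equivalents.
QUOTE_MAP = str.maketrans({"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"})

def build_charset(labels, min_count=10):
    """Count characters across all labels after quote standardization;
    characters rarer than min_count are flagged for review/removal."""
    counts = Counter()
    for text in labels:
        counts.update(text.translate(QUOTE_MAP))
    charset = {ch for ch, n in counts.items() if n >= min_count}
    rare = {ch for ch, n in counts.items() if n < min_count}
    return charset, rare
```

Splitting the result into a kept charset and a rare set makes it easy to inspect the rare characters by hand before deciding which to strip.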

### Text Quality Improvements

- Generated word frequency metrics to identify potential issues
- Corrected common Turkish typos and grammar errors
- Standardized punctuation and spacing
- Fixed inconsistent letter casing
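
The word-frequency pass behind the first bullet can be sketched as follows; rare words in the resulting counts are good candidates for typo review. This is a simplified illustration:

```python
import re
from collections import Counter

def word_frequencies(labels):
    """Lowercased word counts across all labels.

    Note: str.lower() is an approximation for Turkish; a real pass would
    need locale-aware casefolding for the dotted/dotless I distinction.
    """
    counts = Counter()
    for text in labels:
        counts.update(re.findall(r"\w+", text.lower()))
    return counts

freqs = word_frequencies(["merhaba dünya", "merhaba"])
# 'merhaba' counted twice, 'dünya' once
```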

## Results

The final dataset shows significant improvements:

- Clean, non-overlapping splits that prevent data leakage
- Audio with unnecessary silence and noise removed
- Consistent clip durations above 1.0 seconds
- Standardized text with corrected Turkish grammar and typography
- Original metadata (age, upvotes, etc.) preserved

These improvements make the dataset more suitable for training speech recognition models while maintaining the diversity and richness of the original CommonVoice collection.