MEscriva committed 23ee427 (verified) · 1 parent: 534a8c2

Add complete README with transcription methodology and dataset documentation

Files changed (1): README.md (+235, -41)
---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- audio-classification
language:
- fr
tags:
- speech
- education
- french
- whisper
- asr
- speech-recognition
- fine-tuning
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: duration
    dtype: float32
  - name: category
    dtype: string
  - name: quality
    dtype: string
  - name: source
    dtype: string
  - name: speaker_role
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 1397393290.44
    num_examples: 3720
  - name: validation
    num_bytes: 81247861.0
    num_examples: 213
  download_size: 1433246274
  dataset_size: 1478641151.44
---

# French Education Speech - Transcribed Dataset

A high-quality French educational speech dataset transcribed with the OpenAI Whisper API, prepared for training automatic speech recognition (ASR) models.

## Dataset Summary

This dataset contains 3,933 transcribed audio segments from the French educational domain, totaling approximately 12.82 hours of audio. All transcriptions were produced with the OpenAI Whisper API (the `whisper-1` model) for maximum accuracy, especially on educational terminology and acronyms.

- **Total segments**: 3,933 (3,720 train + 213 validation)
- **Total duration**: 12.82 hours (12.12 h train + 0.70 h validation)
- **Average segment duration**: 11.7 seconds
- **Language**: French
- **Domain**: Education (conferences, podcasts, courses, interviews)
- **Transcription quality**: high-precision commercial API (OpenAI Whisper)

## Dataset Structure

### Splits

- **train**: 3,720 segments (12.12 hours)
- **validation**: 213 segments (0.70 hours)

### Features

Each example contains:

- `id` (string): unique segment identifier
- `audio` (Audio): audio at a 16 kHz sampling rate
- `text` (string): transcribed text
- `duration` (float32): duration in seconds
- `category` (string): segment category (`conferences`, `podcasts`, `cours`, `interviews`)
- `quality` (string): audio quality (`clean`, `medium`)
- `source` (string): original source
- `speaker_role` (string): speaker role (teacher, student, etc.)
- `domain` (string): educational domain

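As a quick sanity check, the schema above can be validated on plain Python dicts; a minimal sketch (the example values and the `validate_example` helper are illustrative, not part of the dataset tooling):

```python
# Minimal schema check for one example, matching the feature list above.
# The example dict at the bottom is illustrative, not a real dataset entry.
REQUIRED_FIELDS = {
    "id": str,
    "text": str,
    "duration": float,
    "category": str,
    "quality": str,
    "source": str,
    "speaker_role": str,
    "domain": str,
}

def validate_example(example: dict) -> list[str]:
    """Return a list of schema problems (an empty list means the example is valid)."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in example:
            problems.append(f"missing field: {field}")
        elif not isinstance(example[field], expected_type):
            problems.append(f"wrong type for {field}: {type(example[field]).__name__}")
    # When decoded, the audio feature is a dict with 'array' and 'sampling_rate'.
    audio = example.get("audio")
    if not isinstance(audio, dict) or audio.get("sampling_rate") != 16000:
        problems.append("audio must be a decoded dict with sampling_rate == 16000")
    return problems

example = {
    "id": "abc123", "text": "Bonjour à tous.", "duration": 4.95,
    "category": "conferences", "quality": "clean", "source": "unknown",
    "speaker_role": "teacher", "domain": "education",
    "audio": {"path": None, "array": [0.0] * 16, "sampling_rate": 16000},
}
assert validate_example(example) == []
```
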
## Dataset Creation Methodology

### Source Dataset

The dataset is based on `MEscriva/french-education-speech`, which contains 13,711 audio segments (12,988 train + 723 validation) totaling 16.57 hours of audio from the French educational domain.

### Transcription Process

#### 1. Model Selection

After evaluating multiple transcription options, the OpenAI Whisper API was selected:

- **Reason**: highest precision available, an optimized version of Whisper
- **Advantages**:
  - Superior accuracy for French
  - Excellent handling of educational terminology and acronyms
  - Commercial-grade reliability
  - Better performance than open-source Whisper models

#### 2. Quality Filtering

To ensure transcription quality and minimize hallucinations, a minimum-duration filter was applied:

- **Filter**: keep segments >= 4.0 seconds
- **Rationale**: shorter segments (< 4 s) showed higher hallucination rates (e.g., YouTube-style end-of-video subtitle phrases)
- **Result**: 3,990 segments >= 4.0 s selected from the original 13,711

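The duration filter is simple enough to sketch directly; a minimal stand-alone version over plain dicts (with the `datasets` library, `dataset.filter(lambda ex: ex["duration"] >= 4.0)` would be the analogous call):

```python
MIN_DURATION_S = 4.0  # segments shorter than this showed higher hallucination rates

def filter_by_duration(segments, min_duration=MIN_DURATION_S):
    """Keep only segments at least `min_duration` seconds long."""
    return [seg for seg in segments if seg["duration"] >= min_duration]

# Illustrative segments (not real dataset entries).
segments = [
    {"id": "a", "duration": 2.1},   # dropped: below the threshold
    {"id": "b", "duration": 4.0},   # kept: boundary case, >= 4.0 s
    {"id": "c", "duration": 11.7},  # kept
]
kept = filter_by_duration(segments)
assert [s["id"] for s in kept] == ["b", "c"]
```
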
#### 3. Transcription Execution

The transcription was executed systematically:

- **Tool**: custom Python script (`transcribe_premium.py`)
- **API**: OpenAI Whisper API (model: `whisper-1`)
- **Language**: French (`fr`)
- **Process**:
  - Automatic resumption: the script could be stopped and resumed without data loss
  - Periodic saving: results saved every 50 transcriptions to prevent data loss
  - Error handling: robust handling of API failures
  - Progress tracking: real-time progress monitoring

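The resumption and periodic-saving behaviour can be sketched as a checkpointed loop. This is a hypothetical sketch, not the actual `transcribe_premium.py`: `transcribe` is a stub standing in for the OpenAI API call, and the JSON checkpoint layout is an assumption:

```python
import json
import os
import tempfile

def transcribe(segment_id: str) -> str:
    """Stub standing in for the OpenAI Whisper API call."""
    return f"transcript of {segment_id}"

def run_with_checkpoints(segment_ids, checkpoint_path, save_every=50):
    """Transcribe segments, skipping any already in the checkpoint file,
    and persist results every `save_every` new transcriptions."""
    done = {}
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done = json.load(f)  # resume: segment_id -> text
    since_save = 0
    for seg_id in segment_ids:
        if seg_id in done:
            continue  # automatic resumption: skip completed work
        done[seg_id] = transcribe(seg_id)
        since_save += 1
        if since_save >= save_every:
            with open(checkpoint_path, "w") as f:
                json.dump(done, f)  # periodic save to prevent data loss
            since_save = 0
    with open(checkpoint_path, "w") as f:
        json.dump(done, f)  # final save
    return done

ckpt = os.path.join(tempfile.mkdtemp(), "transcriptions.json")
first = run_with_checkpoints(["seg1", "seg2"], ckpt)
resumed = run_with_checkpoints(["seg1", "seg2", "seg3"], ckpt)  # only seg3 is new work
assert len(resumed) == 3
```

Stopping between calls and re-running re-reads the checkpoint, so completed segments are never re-submitted to the API.
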
#### 4. Quality Control

##### Hallucination Detection

A systematic hallucination detection system was implemented:

- **Detection keywords**: common YouTube-style phrases ("Sous-titres réalisés", "Merci d'avoir regardé", "n'oubliez pas de vous abonner", etc.)
- **Monitoring**: real-time detection during transcription
- **Logging**: all detected hallucinations logged for analysis
- **Rate**: 0.93% (37 of 3,970 segments)

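Keyword-based detection of this kind amounts to a case-insensitive substring check; a minimal sketch (the keyword list below covers only the phrases quoted above, not the full production list):

```python
# Case-insensitive substring check against known YouTube-style phrases.
# This keyword list is illustrative; the production list may be larger.
HALLUCINATION_KEYWORDS = [
    "sous-titres réalisés",
    "merci d'avoir regardé",
    "n'oubliez pas de vous abonner",
]

def is_hallucination(text: str) -> bool:
    lowered = text.lower()
    return any(kw in lowered for kw in HALLUCINATION_KEYWORDS)

def remove_hallucinations(segments):
    """Drop segments whose transcription matches a known hallucination phrase."""
    return [seg for seg in segments if not is_hallucination(seg["text"])]

# Illustrative segments (not real dataset entries).
segments = [
    {"id": "a", "text": "Aujourd'hui, nous parlons d'accessibilité."},
    {"id": "b", "text": "Sous-titres réalisés par la communauté"},
]
assert [s["id"] for s in remove_hallucinations(segments)] == ["a"]
```
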
##### Hallucination Removal

All detected hallucinations were removed from the final dataset:

- **Removed**: 35 hallucinations from the train set, 2 from the validation set
- **Final count**: 3,720 train segments, 213 validation segments
- **Quality assurance**: manual verification confirmed removal of all hallucinated content

### Data Cleaning Pipeline

1. **Duration filtering**: segments < 4.0 s excluded
2. **Transcription**: OpenAI Whisper API transcription
3. **Hallucination detection**: automated keyword-based detection
4. **Hallucination removal**: all detected hallucinations removed
5. **Validation**: final dataset verified for quality

### Statistics

#### Original Dataset
- Total segments: 13,711
- Segments >= 4.0 s: 3,990 (29.1%)
- Total duration: 16.57 hours

#### Final Dataset
- Total segments: 3,933 (28.7% of the original)
- Minimum segment duration: 4.0 s (every retained segment passes the duration filter)
- Total duration: 12.82 hours (77.4% of the original duration)
- Hallucination rate: 0.93% (removed)

#### Quality Metrics
- Average segment duration: 11.7 seconds
- Average transcription length: 159 characters
- Audio quality distribution: 47.4% clean, 52.6% medium, 0% noisy
- Category distribution: 67.4% conferences, 30.0% podcasts, 2.6% courses

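The headline figures can be cross-checked from the counts stated in this README; a small sanity-check script:

```python
# Cross-check the README's headline statistics from its own counts.
train, validation = 3720, 213
total = train + validation
assert total == 3933

# Hallucination rate: 35 + 2 = 37 removed out of 3,970 transcribed segments.
removed = 35 + 2
assert removed == 37
assert round(removed / 3970 * 100, 2) == 0.93

# Share of the 13,711-segment source dataset that survived.
assert round(total / 13711 * 100, 1) == 28.7

# Average segment duration: 12.82 hours over 3,933 segments is about 11.7 s.
avg_s = 12.82 * 3600 / total
assert round(avg_s, 1) == 11.7
```
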
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("MEscriva/french-education-speech-transcribed")

# Access train and validation splits
train = dataset['train']
validation = dataset['validation']

# Example usage
print(train[0])
# {
#     'id': 'f7bc61a3091c0886646b4f80a388114f',
#     'audio': {'path': '...', 'array': [...], 'sampling_rate': 16000},
#     'text': "d'accessibilité.",
#     'duration': 4.95,
#     'category': 'conferences',
#     'quality': 'clean',
#     ...
# }
```

### Training an ASR Model

```python
from datasets import load_dataset

dataset = load_dataset("MEscriva/french-education-speech-transcribed")

# Use with transformers or other ASR training frameworks;
# the dataset is ready for fine-tuning Whisper or other ASR models.
```

## Dataset Characteristics

### Audio Quality
- **Sampling rate**: 16 kHz
- **Format**: WAV
- **Quality**: 47.4% clean, 52.6% medium
- **No noisy segments**: all segments are clean or medium quality

### Content Distribution
- **Conferences**: 67.4% (primary category)
- **Podcasts**: 30.0%
- **Courses**: 2.6%
- **Interviews**: <0.1%

### Speaker Roles
- Teachers, students, and educational professionals
- Various educational contexts and domains

## Limitations and Considerations

1. **Duration filter**: only segments >= 4.0 s are included; shorter segments were excluded to minimize hallucinations.

2. **Domain specificity**: the dataset focuses on educational content; performance may vary in other domains.

3. **Hallucination removal**: while the hallucination rate is low (0.93%), some false positives may have been removed. Manual verification confirmed high quality.

4. **Audio paths**: the original audio files must be accessible; the dataset references local file paths that may need adjustment in different environments.

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{french_education_speech_transcribed_2024,
  title={French Education Speech - Transcribed Dataset},
  author={MEscriva},
  year={2024},
  url={https://huggingface.co/datasets/MEscriva/french-education-speech-transcribed},
  note={Transcribed with OpenAI Whisper API}
}
```

## Acknowledgments

- Source dataset: `MEscriva/french-education-speech`
- Transcription: OpenAI Whisper API
- Quality assurance: systematic hallucination detection and removal

## License

CC-BY-4.0 (Creative Commons Attribution 4.0 International)

This dataset is derived from `MEscriva/french-education-speech`, which is released under the CC-BY-4.0 license. The transcriptions are original work created with the OpenAI Whisper API, but the audio content follows the same licensing terms as the source dataset.

## Contact

For questions or issues, please open an issue on the Hugging Face dataset repository.