---
license: mit
task_categories:
  - automatic-speech-recognition
  - audio-classification
language:
  - fr
tags:
  - speech
  - education
  - french
  - whisper
  - asr
  - speech-recognition
  - fine-tuning
size_categories:
  - 1K<n<10K
---

French Education Speech - Transcribed Dataset

High-quality French educational speech dataset transcribed with the OpenAI Whisper API, prepared for training automatic speech recognition (ASR) models.

Dataset Summary

This dataset contains 3,933 transcribed audio segments from the French educational domain, totaling approximately 12.82 hours of audio. All transcriptions were produced with the OpenAI Whisper API (whisper-1 model) for high accuracy, especially on educational terminology and acronyms.

  • Total segments: 3,933 (3,720 train + 213 validation)
  • Total duration: 12.82 hours (12.12h train + 0.70h validation)
  • Average segment duration: 11.7 seconds
  • Language: French
  • Domain: Education (conferences, podcasts, courses, interviews)
  • Transcription quality: High-precision commercial API (OpenAI Whisper)

Dataset Structure

Splits

  • train: 3,720 segments (12.12 hours)
  • validation: 213 segments (0.70 hours)

Features

Each example contains:

  • id (string): Unique segment identifier
  • audio (Audio): Audio file at 16kHz sampling rate
  • text (string): Transcribed text
  • duration (float32): Duration in seconds
  • category (string): Segment category (conferences, podcasts, cours, interviews)
  • quality (string): Audio quality (clean, medium)
  • source (string): Original source
  • speaker_role (string): Speaker role (teacher, student, etc.)
  • domain (string): Educational domain

Dataset Creation Methodology

Source Dataset

The dataset is based on MEscriva/french-education-speech, which contains 13,711 audio segments (12,988 train + 723 validation) totaling 16.57 hours of audio from the French educational domain.

Transcription Process

1. Model Selection

After evaluating multiple transcription options, OpenAI Whisper API was selected for transcription:

  • Reason: Highest accuracy among the options evaluated (hosted, optimized version of Whisper)
  • Advantages:
    • Superior accuracy on French
    • Strong handling of educational terminology and acronyms
    • Commercial-grade reliability
    • Better results than the open-source Whisper checkpoints in this evaluation

2. Quality Filtering

To ensure transcription quality and minimize hallucinations, a minimum duration filter was applied:

  • Filter: Segments >= 4.0 seconds
  • Rationale: Shorter segments (< 4 s) showed higher hallucination rates (e.g., YouTube-style end-of-video phrases such as subtitle credits)
  • Result: 3,990 segments >= 4.0s selected from original 13,711 segments
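
The duration filter above amounts to a simple predicate over segment metadata. A minimal sketch (the field names follow this dataset's features; the helper itself is illustrative, not the script actually used to build the dataset):

```python
# Sketch of the >= 4.0 s duration filter described above.
MIN_DURATION_S = 4.0

def filter_short_segments(segments, min_duration=MIN_DURATION_S):
    """Keep only segments long enough to transcribe reliably."""
    return [s for s in segments if s["duration"] >= min_duration]

segments = [
    {"id": "a", "duration": 2.1},   # too short: prone to hallucinations
    {"id": "b", "duration": 4.95},  # kept
    {"id": "c", "duration": 11.7},  # kept
]
kept = filter_short_segments(segments)
print([s["id"] for s in kept])  # ['b', 'c']
```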

3. Transcription Execution

The transcription process was executed systematically:

  • Tool: Custom Python script (transcribe_premium.py)
  • API: OpenAI Whisper API (model: whisper-1)
  • Language: French (fr)
  • Process:
    • Automatic resumption: Script could be stopped and resumed without data loss
    • Periodic saving: Every 50 transcriptions to prevent data loss
    • Error handling: Robust error handling for API failures
    • Progress tracking: Real-time progress monitoring
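
The resumption and periodic-saving behavior described above can be sketched as follows. This is not the published transcribe_premium.py script; the transcription call is injected as `transcribe_fn` standing in for the actual OpenAI Whisper API request (model "whisper-1", language "fr"), and the file names are placeholders:

```python
import json
import os

def transcribe_all(segments, transcribe_fn, out_path, save_every=50):
    """Resumable transcription loop: skips segments already present in
    out_path and writes results to disk every `save_every` items, so the
    process can be stopped and restarted without data loss.

    `transcribe_fn(segment) -> str` stands in for the real API call.
    """
    results = {}
    if os.path.exists(out_path):
        with open(out_path) as f:
            results = json.load(f)  # resume without re-transcribing

    done_since_save = 0
    for seg in segments:
        if seg["id"] in results:
            continue  # already transcribed in a previous run
        try:
            results[seg["id"]] = transcribe_fn(seg)
        except Exception as e:
            print(f"API error on {seg['id']}: {e}")  # log and keep going
            continue
        done_since_save += 1
        if done_since_save % save_every == 0:  # periodic checkpoint
            with open(out_path, "w") as f:
                json.dump(results, f, ensure_ascii=False)

    with open(out_path, "w") as f:  # final save
        json.dump(results, f, ensure_ascii=False)
    return results
```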

4. Quality Control

Hallucination Detection

A systematic hallucination detection system was implemented:

  • Detection keywords: Common YouTube-style phrases ("Sous-titres réalisés" ["subtitles produced by ..."], "Merci d'avoir regardé" ["thanks for watching"], "n'oubliez pas de vous abonner" ["don't forget to subscribe"], etc.)
  • Monitoring: Real-time detection during transcription
  • Logging: All detected hallucinations logged for analysis
  • Rate: 0.93% hallucination rate detected (37 out of 3,970 segments)
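
The keyword-based check can be sketched as a case-insensitive substring match. The keyword list below is only the subset of phrases quoted in this card; the full list used to build the dataset is not published here:

```python
# Sketch of the keyword-based hallucination detector described above.
HALLUCINATION_KEYWORDS = [
    "sous-titres réalisés",           # "subtitles produced by ..."
    "merci d'avoir regardé",          # "thanks for watching"
    "n'oubliez pas de vous abonner",  # "don't forget to subscribe"
]

def is_hallucination(text):
    """Flag a transcription containing a known YouTube-style phrase."""
    lower = text.lower()
    return any(keyword in lower for keyword in HALLUCINATION_KEYWORDS)
```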

Hallucination Removal

All detected hallucinations were removed from the final dataset:

  • Removed: 35 hallucinations from train set, 2 from validation set
  • Final count: 3,720 train segments, 213 validation segments
  • Quality assurance: Manual verification confirmed removal of all hallucinated content

Data Cleaning Pipeline

  1. Duration filtering: Segments < 4.0s excluded
  2. Transcription: OpenAI Whisper API transcription
  3. Hallucination detection: Automated keyword-based detection
  4. Hallucination removal: All detected hallucinations removed
  5. Validation: Final dataset verified for quality

Statistics

Original Dataset

  • Total segments: 13,711
  • Segments >= 4.0s: 3,990 (29.1%)
  • Total duration: 16.57 hours

Final Dataset

  • Total segments: 3,933 (28.7% of original)
  • Segments >= 4.0s: 3,933 (every retained segment meets the duration filter)
  • Total duration: 12.82 hours (77.4% of original duration)
  • Hallucination rate: 0.93% (removed)

Quality Metrics

  • Average segment duration: 11.7 seconds
  • Average transcription length: 159 characters
  • Audio quality distribution: 47.4% clean, 52.6% medium, 0% noisy
  • Category distribution: 67.4% conferences, 30.0% podcasts, 2.6% courses

Usage

Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("MEscriva/french-education-speech-transcribed")

# Access train and validation splits
train = dataset['train']
validation = dataset['validation']

# Example usage
print(train[0])
# {
#     'id': 'f7bc61a3091c0886646b4f80a388114f',
#     'audio': {'path': '...', 'array': [...], 'sampling_rate': 16000},
#     'text': "d'accessibilité.",
#     'duration': 4.95,
#     'category': 'conferences',
#     'quality': 'clean',
#     ...
# }
```

Training an ASR Model

```python
from datasets import load_dataset

dataset = load_dataset("MEscriva/french-education-speech-transcribed")

# Use with transformers or other ASR training frameworks
# The dataset is ready for fine-tuning Whisper or other ASR models
```
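
A fine-tuned model is typically scored on the validation split with word error rate (WER). Libraries such as `jiwer` provide this metric; a minimal, dependency-free sketch for illustration:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn ref[:i] into hyp[:j] (Levenshtein DP)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("le chat dort ici", "le chien dort ici"))  # 0.25 (one substitution)
```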

Dataset Characteristics

Audio Quality

  • Sampling rate: 16kHz
  • Format: WAV
  • Quality: 47.4% clean, 52.6% medium quality
  • No noisy segments: All segments are clean or medium quality

Content Distribution

  • Conferences: 67.4% (primary domain)
  • Podcasts: 30.0%
  • Courses: 2.6%
  • Interviews: <0.1%

Speaker Roles

  • Teachers, students, and educational professionals
  • Various educational contexts and domains

Limitations and Considerations

  1. Duration filter: Only segments >= 4.0s are included. Shorter segments were excluded to minimize hallucinations.

  2. Domain specificity: The dataset is focused on educational content. Performance may vary for other domains.

  3. Hallucination removal: While hallucination rate is low (0.93%), some false positives may have been removed. Manual verification confirmed high quality.

  4. Audio paths: Original audio files must be accessible. The dataset references local file paths that may need adjustment for different environments.
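
If the referenced audio paths do not resolve in your environment, they can be remapped to a local copy of the files. A sketch, where `new_root` and the flat file layout beneath it are assumptions about your local setup:

```python
import os

def remap_audio_path(example, new_root):
    """Point an example's audio path at a local copy of the files,
    keeping only the original file name."""
    filename = os.path.basename(example["audio"]["path"])
    example["audio"]["path"] = os.path.join(new_root, filename)
    return example

# With Hugging Face `datasets`, this would typically be applied as:
# dataset = dataset.map(lambda ex: remap_audio_path(ex, "/data/audio"))
```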

Citation

If you use this dataset, please cite:

```bibtex
@dataset{french_education_speech_transcribed_2024,
  title={French Education Speech - Transcribed Dataset},
  author={MEscriva},
  year={2024},
  url={https://huggingface.co/datasets/MEscriva/french-education-speech-transcribed},
  note={Transcribed with OpenAI Whisper API}
}
```

Acknowledgments

  • Source dataset: MEscriva/french-education-speech
  • Transcription: OpenAI Whisper API
  • Quality assurance: Systematic hallucination detection and removal

License

MIT License

Contact

For questions or issues, please open an issue on the Hugging Face dataset repository.