Tasks: Audio Classification
Modalities: Audio
Formats: soundfolder
Languages: English
Size: 10K - 100K
Preview: `audio` (duration 0.16 – 164 s) with `label` (class label, 2 classes); all previewed rows belong to class `fakes`.
# VoxGuard Synthetic Speech Dataset
A dataset of 10,000+ AI-generated (deepfake) speech samples created for training and evaluating deepfake speech detection models.
## Dataset Description

| | |
|---|---|
| Samples | ~10,040 synthetic WAV files |
| Format | WAV, 16 kHz mono |
| Generation | Voice cloning via Qwen3-TTS (Replicate API) |
| Source speakers | 280+ unique speakers from LibriSpeech |
| Source subsets | clean (train.100, train.360, validation, test) + other (train.500, validation, test) |
## Generation Process

- Downloaded ~10,040 real speech samples from LibriSpeech (2-30 s duration, diverse speakers)
- Transcribed each sample using OpenAI Whisper (medium)
- Generated voice clones with Qwen3-TTS in voice-cloning mode:
  - Reference audio: original LibriSpeech sample
  - Reference text: Whisper transcription
  - Target text: one of 100 diverse sentences (news, conversation, instructions, etc.)
- Result: synthetic speech that mimics each speaker's voice saying a different sentence
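The pairing step above can be sketched in plain Python. The sentence pool, sample records, and `build_clone_jobs` helper below are illustrative stand-ins (the actual TTS call goes through the Replicate API and is omitted here):

```python
import itertools

# Illustrative stand-ins for the 100-sentence target pool.
TARGET_SENTENCES = [
    "The committee will meet again on Thursday.",
    "Please place the box on the lower shelf.",
    "Local markets opened slightly higher this morning.",
]

def build_clone_jobs(samples, sentences):
    """Pair each transcribed source sample with a target sentence,
    cycling through the sentence pool."""
    return [
        {
            "reference_audio": s["path"],          # original LibriSpeech clip
            "reference_text": s["transcription"],  # Whisper (medium) output
            "target_text": target,                 # what the cloned voice will say
        }
        for s, target in zip(samples, itertools.cycle(sentences))
    ]

# Hypothetical source records (paths follow LibriSpeech naming conventions).
samples = [
    {"path": "LibriSpeech/train-clean-100/19/198/19-198-0001.flac",
     "transcription": "example transcription one"},
    {"path": "LibriSpeech/train-clean-100/26/495/26-495-0002.flac",
     "transcription": "example transcription two"},
]
jobs = build_clone_jobs(samples, TARGET_SENTENCES)
print(len(jobs), jobs[0]["target_text"])
```

Each job record holds exactly the three inputs listed above; feeding them to the TTS backend yields one synthetic clip per source sample.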
## Contents

- ~10,040 synthetic WAV files, split across an original batch (4-digit file naming) and an extended batch (5-digit file naming)
- Whisper transcriptions for each source sample
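A quick way to sanity-check that downloaded files match the stated format (16 kHz mono WAV) using only the Python standard library; the demo file below is a synthetic stand-in for a dataset clip:

```python
import os
import struct
import tempfile
import wave

def check_wav(path, rate=16000, channels=1):
    """Return True if the file is a mono WAV at the expected sample rate."""
    with wave.open(path, "rb") as wf:
        return wf.getframerate() == rate and wf.getnchannels() == channels

# Demo on a synthetic one-second silent clip (stand-in for a dataset file).
path = os.path.join(tempfile.mkdtemp(), "demo.wav")
with wave.open(path, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)          # 16-bit PCM
    wf.setframerate(16000)
    wf.writeframes(struct.pack("<h", 0) * 16000)
print(check_wav(path))  # True for a 16 kHz mono file
```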
## Usage
This dataset is used to train the VoxGuard deepfake detection LoRA adapter, which fine-tunes the DF Arena 1B base model.
Note: Only synthetic (fake) speech is included. The corresponding real speech samples are from LibriSpeech and should be obtained separately.
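Because only the fakes ship here, building a balanced detection set means pairing them with real LibriSpeech audio. A minimal sketch of that pairing, with hypothetical directory names and a label scheme that keeps this dataset's `fakes` class at index 0:

```python
import tempfile
from pathlib import Path

def build_manifest(fake_dir, real_dir, exts=(".wav", ".flac")):
    """List synthetic clips (label 0, matching this dataset's `fakes`
    class) followed by real LibriSpeech clips (label 1)."""
    rows = []
    for label, root in ((0, fake_dir), (1, real_dir)):
        for p in sorted(Path(root).rglob("*")):
            if p.suffix.lower() in exts:
                rows.append({"path": str(p), "label": label})
    return rows

# Demo with empty placeholder files standing in for audio.
root = Path(tempfile.mkdtemp())
(root / "fakes").mkdir()
(root / "real").mkdir()
(root / "fakes" / "a.wav").touch()
(root / "real" / "b.flac").touch()
rows = build_manifest(root / "fakes", root / "real")
print(rows)
```

The resulting manifest can be fed to any audio-classification training loop; the label assignment is one reasonable choice, not something this card prescribes.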
## Related
- Model: gereon/voxguard-lora - LoRA adapter for deepfake detection
- Base model: Speech-Arena-2025/DF_Arena_1B_V_1
- Code: gereonelvers/voxguard