# Multispeaker_libri Dataset Manifest
## Overview
The `manifest.json` file provides a structured mapping of all audio files in the Multispeaker_libri dataset, following the same format as the Bilingual_uedin dataset (MF1.json).
## Key Features
- **800 total entries** - one entry per mixture file (each SNR/interferer combination gets its own entry)
- **100 unique groundtruth files** - clean reference audios (10 per target speaker)
- **8 mixtures per groundtruth** - each groundtruth is shared across:
  - 4 SNR levels: -5dB, 0dB, +5dB, +10dB
  - 2 interferers: 121 (female), 672 (male)
## Structure
Each JSON entry contains:
```json
{
  "ori_pth": "target_61/interferer_121/snr_-5dB/61-70970-0003_mixture.wav",
  "ori_spk": "61",
  "ori_lang": "EN",
  "ori_text": "IF FOR A WHIM YOU BEGGAR YOURSELF I CANNOT STAY YOU",
  "ori_phonemes": "",
  "ori_tone": "",
  "ori_word2ph": "",
  "gt_pth": "target_61/groundtruth/61-70970-0037.wav",
  "gt_spk": "61",
  "gt_lang": "EN",
  "gt_text": "INDEED HE IS INFORMED ON THESE POINTS...",
  "gt_phonemes": "",
  "gt_tone": "",
  "gt_word2ph": "",
  "snr": "-5dB",
  "interferer_id": "121"
}
```
### Fields Explanation
**Original (Mixture) Audio:**
- `ori_pth`: Path to the mixture file (target + interferer at specific SNR)
- `ori_spk`: Target speaker ID
- `ori_lang`: Language (always "EN" for LibriSpeech)
- `ori_text`: Transcription of the target speaker's utterance
- `ori_phonemes`, `ori_tone`, `ori_word2ph`: Empty (can be filled with phonemizer)
**Groundtruth (Clean Reference) Audio:**
- `gt_pth`: Path to clean groundtruth file (no interference)
- `gt_spk`: Same as target speaker ID
- `gt_lang`: Language (always "EN")
- `gt_text`: Transcription of the groundtruth utterance
- `gt_phonemes`, `gt_tone`, `gt_word2ph`: Empty (can be filled with phonemizer)
**Additional Metadata:**
- `snr`: Signal-to-Noise Ratio level (-5dB, 0dB, +5dB, +10dB)
- `interferer_id`: ID of interfering speaker (121 or 672)
## Use Cases
### 1. Voice Cloning with Noisy References
Use mixture files (`ori_pth`) as enrollment audio with controlled interference:
```python
# Index this utterance's mixture entries by their SNR label
entries_by_snr = {e['snr']: e for e in manifest if e['gt_pth'] == target_gt}
mixture_5db = load_audio(entries_by_snr['+5dB']['ori_pth'])
mixture_0db = load_audio(entries_by_snr['0dB']['ori_pth'])

# Use the same clean groundtruth for comparison
clean_ref = load_audio(entries_by_snr['0dB']['gt_pth'])
```
### 2. Studying SNR Impact
Compare voice cloning quality across SNR levels using the same groundtruth:
```python
# Get all SNR versions of same utterance (share same gt_pth)
entries_for_utterance = [e for e in manifest if e['gt_pth'] == target_gt]
# entries will have -5dB, 0dB, +5dB, +10dB versions
```
### 3. Multi-Speaker Interference Analysis
Test how different interferers affect the same target speaker:
```python
# Compare female interferer (121) vs male interferer (672)
female_interferer = [e for e in manifest if e['interferer_id'] == '121']
male_interferer = [e for e in manifest if e['interferer_id'] == '672']
```
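Because every (groundtruth, SNR) pair exists for both interferers, entries can be matched one-to-one for a controlled comparison. A minimal sketch on a two-entry stand-in manifest (the paths and field values are illustrative, not taken from the real file):

```python
from collections import defaultdict

# Illustrative entries following the manifest schema (paths are placeholders)
manifest = [
    {"ori_pth": "target_61/interferer_121/snr_-5dB/a_mixture.wav",
     "gt_pth": "target_61/groundtruth/b.wav", "snr": "-5dB", "interferer_id": "121"},
    {"ori_pth": "target_61/interferer_672/snr_-5dB/a_mixture.wav",
     "gt_pth": "target_61/groundtruth/b.wav", "snr": "-5dB", "interferer_id": "672"},
]

# Group entries by (groundtruth, SNR); each group should hold one entry per interferer
pairs = defaultdict(dict)
for e in manifest:
    pairs[(e["gt_pth"], e["snr"])][e["interferer_id"]] = e

for (gt, snr), by_interferer in pairs.items():
    female = by_interferer.get("121")  # female interferer
    male = by_interferer.get("672")    # male interferer
    if female and male:
        print(snr, female["ori_pth"], male["ori_pth"])
```

With the full manifest, every group should contain exactly two entries, one per interferer.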
## Dataset Statistics
- **Target Speakers**: 10 (5 male, 5 female)
- Male: 61, 908, 2300, 2830, 7729
- Female: 237, 1221, 1284, 4970, 6829
- **Interferer Speakers**: 2
- Female: 121
- Male: 672
- **SNR Levels**: 4 (-5dB, 0dB, +5dB, +10dB)
- **Audio Files**:
- 800 mixture files
- 800 target files (clean segments used in mixtures)
- 100 groundtruth files (separate clean references)
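The counts above follow from the combinatorics: 10 speakers × 10 utterances × 4 SNRs × 2 interferers = 800. A sanity-check sketch using a synthetically built manifest with the same structure (speaker and utterance IDs are placeholders, not the real LibriSpeech IDs):

```python
from itertools import product

SNRS = ["-5dB", "0dB", "+5dB", "+10dB"]
INTERFERERS = ["121", "672"]

# Stand-in manifest: 10 speakers x 10 utterances x 4 SNRs x 2 interferers
manifest = [
    {"gt_pth": f"target_spk{s}/groundtruth/utt{u}.wav",
     "snr": snr, "interferer_id": itf}
    for s, u, snr, itf in product(range(10), range(10), SNRS, INTERFERERS)
]

n_entries = len(manifest)                              # expect 800
n_groundtruths = len({e["gt_pth"] for e in manifest})  # expect 100
n_per_gt = n_entries // n_groundtruths                 # expect 8
print(n_entries, n_groundtruths, n_per_gt)  # → 800 100 8
```

Running the same three checks against the real `manifest.json` is a quick way to verify an intact download.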
## Example: Multiple SNRs → Same Groundtruth
```
Groundtruth: target_61/groundtruth/61-70970-0037.wav
├── Mixture 1: target_61/interferer_121/snr_-5dB/61-70970-0003_mixture.wav
├── Mixture 2: target_61/interferer_121/snr_+0dB/61-70970-0003_mixture.wav
├── Mixture 3: target_61/interferer_121/snr_+5dB/61-70970-0003_mixture.wav
├── Mixture 4: target_61/interferer_121/snr_+10dB/61-70970-0003_mixture.wav
├── Mixture 5: target_61/interferer_672/snr_-5dB/61-70970-0003_mixture.wav
├── Mixture 6: target_61/interferer_672/snr_+0dB/61-70970-0003_mixture.wav
├── Mixture 7: target_61/interferer_672/snr_+5dB/61-70970-0003_mixture.wav
└── Mixture 8: target_61/interferer_672/snr_+10dB/61-70970-0003_mixture.wav
```
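A grouping like the tree above can be recovered from the manifest by collecting `ori_pth` values per `gt_pth`. A minimal sketch on two illustrative entries (the real file has 800):

```python
from collections import defaultdict

# Two illustrative entries following the manifest schema
manifest = [
    {"gt_pth": "target_61/groundtruth/61-70970-0037.wav",
     "ori_pth": "target_61/interferer_121/snr_-5dB/61-70970-0003_mixture.wav"},
    {"gt_pth": "target_61/groundtruth/61-70970-0037.wav",
     "ori_pth": "target_61/interferer_672/snr_+10dB/61-70970-0003_mixture.wav"},
]

# Collect all mixture paths sharing each groundtruth
mixtures_by_gt = defaultdict(list)
for e in manifest:
    mixtures_by_gt[e["gt_pth"]].append(e["ori_pth"])

for gt, mixtures in mixtures_by_gt.items():
    print(f"Groundtruth: {gt}")
    for m in sorted(mixtures):
        print(f"  Mixture: {m}")
```

On the full manifest, each of the 100 groundtruths should map to exactly 8 mixtures.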
## Loading the Manifest
```python
import json

with open('manifest.json', 'r') as f:
    manifest = json.load(f)

# Access entries
for entry in manifest:
    mixture_path = entry['ori_pth']
    groundtruth_path = entry['gt_pth']
    snr_level = entry['snr']
    interferer = entry['interferer_id']
    # Your processing code here
```
## Generation Script
The manifest was generated using `scripts/generate_multispeaker_manifest.py`.
To regenerate:
```bash
python scripts/generate_multispeaker_manifest.py
```
## Notes
- Phoneme fields are currently empty. You can populate them using tools like [phonemizer](https://github.com/bootphon/phonemizer).
- All audio files are WAV format, 16kHz sample rate, mono.
- Transcriptions are extracted from LibriSpeech's `.trans.txt` files.
- Groundtruth files are different utterances from the mixture utterances (they serve as separate clean references for testing).
## Version
- Generated: November 20, 2025
- Dataset: Multispeaker_libri v1.0
- Based on: LibriSpeech test-clean subset