*Data preview:* columns are `file_id` (string), `dataset` (string), `speaker_id` (int32), `speaker_name` (string), `duration` (float32, 1.11–10.1 s), `text` (string, 12–187 characters), `mel_spectrogram` (2D array), and `audio` (audio, 1.11–10.1 s); the first rows shown in the viewer are LJSpeech utterances `ljspeech_LJ001-0001` through `ljspeech_LJ001-0010`.
# Unified Vocoder Training Dataset

This dataset combines LJSpeech, VCTK, and LibriTTS for training neural vocoders. Each dataset is available as a separate configuration, with a default `all` configuration that combines all of them. You can train on individual datasets or use the combined dataset as needed.
## Dataset Statistics

- Total files: 43,691
- Total duration: 71.2 hours
- Total speakers: 326

Per-dataset breakdown (a sketch for recomputing these figures follows the list):

- libritts:
  - Files: 43,691
  - Duration: 71.2 hours
  - Speakers: 326
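These figures can be recomputed directly from the loaded data. A minimal sketch, assuming the `duration` and `speaker_id` columns shown in the preview; it covers the `train` split only, so the result may differ from the totals above if those include validation data:

```python
from datasets import load_dataset

# Recompute file count, total duration, and speaker count for one split
ds = load_dataset("consulted-graphs/rhea-tts-vocoder", split="train")
total_hours = sum(ds['duration']) / 3600.0
n_speakers = len(set(ds['speaker_id']))
print(f"Files: {len(ds)}, Duration: {total_hours:.1f} h, Speakers: {n_speakers}")
```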
## Audio Processing Parameters

All audio files have been processed with the following parameters (a feature-extraction sketch follows the list):
- Sample rate: 22,050 Hz
- Mel-spectrogram bins: 80
- FFT size: 1024
- Hop length: 256
- Window length: 1024
- Frequency range: 0.0-8000.0 Hz
- Normalization: [-4, 4]
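These settings correspond to a standard log-mel pipeline. The sketch below reproduces it with `librosa`; the exact normalization used to generate the stored features is not documented beyond the [-4, 4] target range, so natural-log compression with clipping is an assumption here:

```python
import librosa
import numpy as np

def extract_mel(path):
    # Load and resample to the dataset's 22,050 Hz sample rate
    audio, sr = librosa.load(path, sr=22050)
    mel = librosa.feature.melspectrogram(
        y=audio,
        sr=sr,
        n_fft=1024,       # FFT size
        hop_length=256,   # Hop length
        win_length=1024,  # Window length
        n_mels=80,        # Mel-spectrogram bins
        fmin=0.0,         # Frequency range, lower bound
        fmax=8000.0,      # Frequency range, upper bound
    )
    # Assumed normalization: natural-log compression clipped to [-4, 4]
    log_mel = np.log(np.maximum(mel, np.exp(-4.0)))
    return np.clip(log_mel, -4.0, 4.0)
```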
## Usage

The dataset is organized by source dataset (`ljspeech`, `vctk`, `libritts`), with each stored as a separate configuration. The default `all` configuration automatically combines all datasets.

### Loading Combined Dataset (Default)

```python
from datasets import load_dataset

# Load all datasets combined (default configuration)
all_data = load_dataset("consulted-graphs/rhea-tts-vocoder", split="train")
all_data_val = load_dataset("consulted-graphs/rhea-tts-vocoder", split="validation")

# Or explicitly specify the 'all' configuration
all_data = load_dataset("consulted-graphs/rhea-tts-vocoder", "all", split="train")

# Access a sample
sample = all_data[0]
audio = sample['audio']['array']             # Audio waveform
mel_spectrogram = sample['mel_spectrogram']  # Mel-spectrogram
speaker_id = sample['speaker_id']            # Global speaker ID
dataset_name = sample['dataset']             # Source dataset (ljspeech, vctk, or libritts)
```
### Loading Individual Datasets

```python
from datasets import load_dataset

# Load individual datasets - Hugging Face will automatically load all parquet files
ljspeech = load_dataset("consulted-graphs/rhea-tts-vocoder", "ljspeech", split="train")
vctk = load_dataset("consulted-graphs/rhea-tts-vocoder", "vctk", split="train")
libritts = load_dataset("consulted-graphs/rhea-tts-vocoder", "libritts", split="train")

# Each dataset has its own train/validation splits where applicable
libritts_val = load_dataset("consulted-graphs/rhea-tts-vocoder", "libritts", split="validation")

# Access a sample
sample = ljspeech[0]
audio = sample['audio']['array']             # Audio waveform
mel_spectrogram = sample['mel_spectrogram']  # Mel-spectrogram
speaker_id = sample['speaker_id']            # Global speaker ID
```
### Manual Dataset Combination

```python
from datasets import load_dataset, concatenate_datasets

# Method 1: Load and concatenate manually
all_train_datasets = []
for ds_name in ['ljspeech', 'vctk', 'libritts']:
    try:
        ds = load_dataset("consulted-graphs/rhea-tts-vocoder", ds_name, split="train")
        all_train_datasets.append(ds)
    except ValueError:
        print(f"Skipping {ds_name} (not found)")

# Combine all datasets
combined_train = concatenate_datasets(all_train_datasets)
print(f"Total training samples: {len(combined_train)}")

# Method 2: Load with specific speaker filtering
def load_with_speaker_filter(max_speakers_per_dataset=10):
    filtered_datasets = []
    for ds_name in ['ljspeech', 'vctk', 'libritts']:
        ds = load_dataset("consulted-graphs/rhea-tts-vocoder", ds_name, split="train")
        # Keep only the first N unique speakers per dataset
        unique_speakers = set(ds.unique('speaker_id')[:max_speakers_per_dataset])
        # Filter the dataset to those speakers
        filtered_ds = ds.filter(lambda x: x['speaker_id'] in unique_speakers)
        filtered_datasets.append(filtered_ds)
    return concatenate_datasets(filtered_datasets)

# Load with limited speakers for faster experimentation
small_train = load_with_speaker_filter(max_speakers_per_dataset=5)
```
### Streaming Mode
For large-scale training without downloading the entire dataset:
```python
from datasets import load_dataset, interleave_datasets

# Stream all datasets combined (default configuration)
all_stream = load_dataset("consulted-graphs/rhea-tts-vocoder", split="train", streaming=True)

# Or stream individual datasets
ljspeech_stream = load_dataset("consulted-graphs/rhea-tts-vocoder", "ljspeech", split="train", streaming=True)
vctk_stream = load_dataset("consulted-graphs/rhea-tts-vocoder", "vctk", split="train", streaming=True)
libritts_stream = load_dataset("consulted-graphs/rhea-tts-vocoder", "libritts", split="train", streaming=True)

# Method 1: Interleave streams with custom probabilities
combined_stream = interleave_datasets(
    [ljspeech_stream, vctk_stream, libritts_stream],
    probabilities=[0.2, 0.4, 0.4]  # More weight on multi-speaker datasets
)

# Method 2: Stream specific datasets only
multi_speaker_stream = interleave_datasets(
    [vctk_stream, libritts_stream],
    probabilities=[0.3, 0.7]  # Focus on LibriTTS
)

# Use in a training loop
for batch in combined_stream.batch(batch_size=16):
    # batch['audio'] - audio arrays
    # batch['mel_spectrogram'] - mel spectrograms
    # batch['speaker_id'] - speaker IDs
    ...  # Process batch
```
## Training Examples

### Single Dataset Training
```python
import torch
from torch.utils.data import DataLoader
from datasets import load_dataset

# Load a single dataset for focused training
ljspeech = load_dataset("consulted-graphs/rhea-tts-vocoder", "ljspeech", split="train")

# Create a PyTorch dataset wrapper
class VocoderDataset(torch.utils.data.Dataset):
    def __init__(self, hf_dataset):
        self.dataset = hf_dataset

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        sample = self.dataset[idx]
        return {
            'audio': torch.tensor(sample['audio']['array']),
            'mel': torch.tensor(sample['mel_spectrogram']).T,  # (frames, n_mels)
            'speaker_id': sample['speaker_id']
        }

# Single-speaker training (LJSpeech)
train_dataset = VocoderDataset(ljspeech)
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
```
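Note that clips range from roughly 1.1 to 10.1 seconds, so the default collate will fail for `batch_size > 1`. A minimal padding `collate_fn` sketch (`pad_collate` is an illustrative helper, not part of this dataset's tooling):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

def pad_collate(batch):
    # Zero-pad waveforms and mel frames to the longest item in the batch
    audio = pad_sequence([b['audio'] for b in batch], batch_first=True)  # (B, T)
    mel = pad_sequence([b['mel'] for b in batch], batch_first=True)      # (B, frames, 80)
    speaker_id = torch.tensor([b['speaker_id'] for b in batch])
    return {'audio': audio, 'mel': mel, 'speaker_id': speaker_id}

train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, collate_fn=pad_collate)
```

In practice, vocoder training often crops fixed-length segments instead of padding; padding is shown here only as the simplest working option.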
### Multi-Dataset Training
```python
from datasets import load_dataset, concatenate_datasets
from torch.utils.data import DataLoader, WeightedRandomSampler

# Load all datasets
datasets = []
for config in ['ljspeech', 'vctk', 'libritts']:
    try:
        ds = load_dataset("consulted-graphs/rhea-tts-vocoder", config, split="train")
        datasets.append(ds)
        print(f"Loaded {config}: {len(ds)} samples")
    except ValueError:
        print(f"Skipping {config}")

# Combine for multi-speaker training
combined_dataset = concatenate_datasets(datasets)
print(f"Total samples: {len(combined_dataset)}")

# Create a balanced sampler for equal speaker representation
def create_balanced_sampler(dataset):
    # Count samples per speaker (column access avoids decoding audio per row)
    speaker_counts = {}
    for spk_id in dataset['speaker_id']:
        speaker_counts[spk_id] = speaker_counts.get(spk_id, 0) + 1
    # Weight each sample inversely to its speaker's frequency
    weights = [1.0 / speaker_counts[spk_id] for spk_id in dataset['speaker_id']]
    return WeightedRandomSampler(weights, len(weights))

# Multi-speaker training with balanced sampling
train_dataset = VocoderDataset(combined_dataset)  # wrapper class from the previous example
sampler = create_balanced_sampler(combined_dataset)
train_loader = DataLoader(train_dataset, batch_size=16, sampler=sampler)
```
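The same variable-length caveat applies here. Reusing the illustrative `pad_collate` from the single-dataset example, a training loop might look like:

```python
# Balanced multi-speaker loader with padding collate (pad_collate defined above)
train_loader = DataLoader(train_dataset, batch_size=16, sampler=sampler, collate_fn=pad_collate)

for batch in train_loader:
    mel, audio = batch['mel'], batch['audio']  # model input / target waveform
    # Vocoder forward/backward step goes here (model- and loss-specific)
```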
## Citation

Please cite the original datasets:

- LJSpeech: https://keithito.com/LJ-Speech-Dataset/
- VCTK: https://datashare.ed.ac.uk/handle/10283/3443
- LibriTTS: https://openslr.org/60/

## License

Please refer to the individual dataset licenses.