--- |
|
|
language: |
|
|
- en |
|
|
- zh |
|
|
license: other |
|
|
license_name: mixed-cc-by-4.0-apache-2.0 |
|
|
task_categories: |
|
|
- text-to-speech |
|
|
pretty_name: Multilingual TTS Dataset (LJSpeech Format) |
|
|
size_categories: |
|
|
- 10K<n<100K |
|
|
tags: |
|
|
- audio |
|
|
- speech |
|
|
- tts |
|
|
- text-to-speech |
|
|
- multilingual |
|
|
- english |
|
|
- chinese |
|
|
--- |
|
|
|
|
|
# Multilingual TTS Dataset (LJSpeech Format) |
|
|
|
|
|
A high-quality multilingual Text-to-Speech dataset combining English and Chinese speech data, optimized for TTS training and suitable for commercial use. |
|
|
|
|
|
## 🎯 Quick Start
|
|
|
|
|
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("ayousanz/multi-dataset-v2")

# Access data
for item in dataset["train"]:
    audio = item["audio"]          # 22050Hz mono audio
    text = item["transcription"]   # Original text
    speaker = item["speaker_id"]   # Speaker identifier
    language = item["language"]    # "en" or "zh"
```
|
|
|
|
|
## 📊 Dataset Statistics
|
|
|
|
|
| **Metric** | **Value** |
|------------|-----------|
| **Total Duration** | 97.2 hours |
| **Total Utterances** | 95,568 |
| **Languages** | English, Chinese |
| **Speakers** | 421 unique speakers |
| **Audio Format** | 22050Hz, 16-bit, mono WAV |
|
|
|
|
|
### Language Breakdown |
|
|
|
|
|
| **Language** | **Hours** | **Speakers** | **Utterances** |
|--------------|-----------|--------------|----------------|
| English | 48.6 | 247 | 32,310 |
| Chinese | 48.6 | 174 | 63,258 |
|
|
|
|
|
### Duration Distribution |
|
|
|
|
|
| **Range** | **Count** | **Percentage** |
|-----------|-----------|----------------|
| 0-2s | 28,555 | 29.9% |
| 2-5s | 48,261 | 50.5% |
| 5-10s | 14,167 | 14.8% |
| 10-15s | 3,417 | 3.6% |
| 15-20s | 1,168 | 1.2% |
| 20s+ | 0 | 0.0% |
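
The buckets above can be reproduced from the raw audio, since duration is just sample count divided by the 22050 Hz sampling rate. A minimal sketch; `duration_bucket` is an illustrative helper, not part of the dataset tooling:

```python
import numpy as np

SR = 22050  # dataset sampling rate

def duration_bucket(audio_array, sr=SR):
    """Map one clip to the duration ranges used in the table above."""
    seconds = len(audio_array) / sr
    for upper, label in [(2, "0-2s"), (5, "2-5s"), (10, "5-10s"),
                         (15, "10-15s"), (20, "15-20s")]:
        if seconds < upper:
            return label
    return "20s+"

# A synthetic 3-second clip; real items expose the same array under
# item["audio"]["array"].
clip = np.zeros(3 * SR, dtype=np.float32)
print(duration_bucket(clip))  # → 2-5s
```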
|
|
|
|
|
## 📁 Repository Structure
|
|
|
|
|
```
├── audio/                         # Audio files (ZIP compressed)
│   ├── train_english.zip          # English training audio
│   ├── train_chinese.zip          # Chinese training audio
│   ├── validation_english.zip     # English validation audio
│   ├── validation_chinese.zip     # Chinese validation audio
│   ├── test_english.zip           # English test audio
│   └── test_chinese.zip           # Chinese test audio
├── metadata/                      # Metadata files
│   ├── train.csv                  # Training metadata (all languages)
│   ├── validation.csv             # Validation metadata (all languages)
│   └── test.csv                   # Test metadata (all languages)
├── dataset_info.json              # Dataset statistics and info
├── multilingual_tts_ljspeech.py   # Dataset loader script
└── README.md                      # This file
```
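
The per-split CSVs can also be read directly, without the `datasets` library. A minimal sketch using only the standard library; the column name in the commented usage is an assumption based on the fields listed under Data Format, so check the real header in `metadata/train.csv`:

```python
import csv

def load_metadata(path):
    """Read one split's metadata CSV into a list of row dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# rows = load_metadata("multilingual-tts/metadata/train.csv")
# print(rows[0]["transcription"])  # assumed column name
```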
|
|
|
|
|
## 💾 Download Instructions
|
|
|
|
|
### Option 1: Using Hugging Face CLI (Recommended) |
|
|
|
|
|
```bash
# Install the Hugging Face Hub CLI
pip install huggingface-hub

# Download the entire dataset
huggingface-cli download ayousanz/multi-dataset-v2 --repo-type dataset --local-dir ./multilingual-tts

# Download specific files only
huggingface-cli download ayousanz/multi-dataset-v2 audio/train_english.zip metadata/train.csv --repo-type dataset --local-dir ./multilingual-tts
```
|
|
|
|
|
### Option 2: Using Python |
|
|
|
|
|
```python
from huggingface_hub import snapshot_download

# Download the entire dataset
snapshot_download(
    repo_id="ayousanz/multi-dataset-v2",
    repo_type="dataset",
    local_dir="./multilingual-tts",
)
```
|
|
|
|
|
### Extracting Audio Files |
|
|
|
|
|
After downloading, extract the ZIP files: |
|
|
|
|
|
```bash
cd multilingual-tts
for zip_file in audio/*.zip; do
    unzip "$zip_file" -d audio_extracted/
done
```
|
|
|
|
|
## 🚀 Usage Examples
|
|
|
|
|
### Basic Usage |
|
|
|
|
|
```python
from datasets import load_dataset

dataset = load_dataset("ayousanz/multi-dataset-v2")

# Filter by language
english_data = dataset["train"].filter(lambda x: x["language"] == "en")
chinese_data = dataset["train"].filter(lambda x: x["language"] == "zh")

# Filter by speaker
speaker_data = dataset["train"].filter(lambda x: x["speaker_id"] == "en_1234")
```
|
|
|
|
|
### For TTS Training |
|
|
|
|
|
```python
# Example with a PyTorch DataLoader
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("ayousanz/multi-dataset-v2")

def collate_fn(batch):
    audios = [item["audio"]["array"] for item in batch]
    texts = [item["transcription"] for item in batch]
    speakers = [item["speaker_id"] for item in batch]
    return audios, texts, speakers

dataloader = DataLoader(
    dataset["train"],
    batch_size=32,
    collate_fn=collate_fn,
    shuffle=True,
)
```
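
The collate function above returns Python lists, while most TTS models expect a dense batch tensor of equal-length clips. One possible next step is zero-padding, sketched here with NumPy; `pad_audio` is an illustrative helper, not part of the dataset:

```python
import numpy as np

def pad_audio(audios):
    """Zero-pad variable-length 1-D clips into a (batch, max_len) array."""
    max_len = max(len(a) for a in audios)
    batch = np.zeros((len(audios), max_len), dtype=np.float32)
    for i, a in enumerate(audios):
        batch[i, : len(a)] = a
    return batch

batch = pad_audio([np.ones(3), np.ones(5)])
print(batch.shape)  # (2, 5)
```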
|
|
|
|
|
## 📝 Data Format
|
|
|
|
|
Each sample contains: |
|
|
|
|
|
- **`audio_id`**: Unique identifier for the audio file |
|
|
- **`audio`**: Audio data (22050Hz, 16-bit, mono) |
|
|
- **`transcription`**: Original text transcription |
|
|
- **`normalized_text`**: Normalized text for TTS training |
|
|
- **`speaker_id`**: Speaker identifier with language prefix (`en_*` or `zh_*`) |
|
|
- **`language`**: Language code (`en` for English, `zh` for Chinese) |
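
Put together, a decoded sample looks roughly like the dict below. The concrete values are illustrative, not taken from the dataset; the shape of the `audio` field is what the `datasets` Audio feature yields after decoding:

```python
import numpy as np

sample = {
    "audio_id": "en_0001_000001",  # illustrative ID, not a real one
    "audio": {
        "array": np.zeros(22050, dtype=np.float32),  # 1 s of silence
        "sampling_rate": 22050,
    },
    "transcription": "Hello world.",
    "normalized_text": "hello world",
    "speaker_id": "en_0001",
    "language": "en",
}

# Speaker IDs carry a language prefix, so the two fields agree.
assert sample["speaker_id"].startswith(sample["language"] + "_")
```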
|
|
|
|
|
## 📄 License
|
|
|
|
|
This dataset combines data from multiple sources: |
|
|
|
|
|
- **English data (LibriTTS-R)**: CC BY 4.0 - requires attribution |
|
|
- **Chinese data (AISHELL-3)**: Apache 2.0 |
|
|
|
|
|
### Attribution Requirements |
|
|
|
|
|
When using this dataset, please cite: |
|
|
|
|
|
```bibtex
@dataset{multilingual_tts_ljspeech,
  title={Multilingual TTS Dataset in LJSpeech Format},
  year={2024},
  note={English: LibriTTS-R (CC BY 4.0), Chinese: AISHELL-3 (Apache 2.0)}
}
```
|
|
|
|
|
## 🔗 Source Datasets
|
|
|
|
|
- **LibriTTS-R**: https://openslr.org/141/ |
|
|
- **AISHELL-3**: https://openslr.org/93/ |
|
|
|
|
|
## ⚡ Performance Notes
|
|
|
|
|
- Audio files are stored in ZIP format for faster download |
|
|
- Use `datasets` library's built-in caching for optimal performance |
|
|
- Consider using `streaming=True` for large-scale training to save memory |
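
The streaming tip can be sketched as below; `first_n` is an illustrative helper, and the commented usage is left inactive because it fetches data over the network:

```python
from itertools import islice

def first_n(stream, n):
    """Materialize the first n examples from a (possibly lazy) iterable."""
    return list(islice(stream, n))

# Usage (downloads examples lazily instead of the whole dataset):
# from datasets import load_dataset
# ds = load_dataset("ayousanz/multi-dataset-v2", split="train", streaming=True)
# for item in first_n(ds, 5):
#     print(item["language"], item["transcription"][:40])
```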
|
|
|
|
|
## 🤝 Contributing
|
|
|
|
|
Found an issue? Please report it on the [repository issues page](https://huggingface.co/datasets/ayousanz/multi-dataset-v2/discussions). |
|
|
|