Gigaspeech Part 4
This is Part 4 of 8 of a large-scale speech dataset, split to accommodate HuggingFace's repository size limits.
Multi-Part Dataset
This dataset is split across multiple repositories:
- Part 1: shahdsaf/gigaspeech-part-1
- Part 2: shahdsaf/gigaspeech-part-2
- Part 3: shahdsaf/gigaspeech-part-3
- Part 4 (current): shahdsaf/gigaspeech-part-4
- Part 5: shahdsaf/gigaspeech-part-5
- Part 6: shahdsaf/gigaspeech-part-6
- Part 7: shahdsaf/gigaspeech-part-7
- Part 8: shahdsaf/gigaspeech-part-8
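Because the eight part repositories follow a simple numbered naming scheme, their IDs can be generated programmatically. A minimal sketch (the helper name is illustrative, not part of any library):

```python
def part_repo_ids(base="shahdsaf/gigaspeech-part-", n_parts=8):
    # Build the Hugging Face repo ID for each part of the split dataset.
    return [f"{base}{i}" for i in range(1, n_parts + 1)]

repos = part_repo_ids()
# repos[3] is "shahdsaf/gigaspeech-part-4", i.e. this repository
```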
This Repository (Part 4)
- Total parquet files: 272
- Total size: 289.78 GB
- Failed uploads: 0
- Subfolders: 3
Files by Subfolder:
- xl/train: 270 files (287.66 GB)
- dev/validation: 1 file (1.06 GB)
- xl/validation: 1 file (1.06 GB)
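As a quick arithmetic check, the per-subfolder sizes listed above account for the stated repository total:

```python
# Per-subfolder sizes in GB, as listed above.
sizes_gb = {"xl/train": 287.66, "dev/validation": 1.06, "xl/validation": 1.06}

# Rounded sum matches the stated total of 289.78 GB.
total_gb = round(sum(sizes_gb.values()), 2)
```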
Usage
```python
from datasets import load_dataset

# Load this part of the dataset
dataset = load_dataset("shahdsaf/gigaspeech-part-4")

# Load a specific subfolder from this part
dataset = load_dataset("shahdsaf/gigaspeech-part-4",
                       data_files="data/xl/train/*.parquet")
```
```python
# To load the complete multi-part dataset:
from datasets import load_dataset, concatenate_datasets

parts = []
for i in range(1, 9):
    part_repo = f"shahdsaf/gigaspeech-part-{i}"
    part_data = load_dataset(part_repo)
    parts.append(part_data["train"])

# Concatenate all parts
complete_dataset = concatenate_datasets(parts)
```
File Organization
All parquet files are stored under the data/ directory, maintaining the original subfolder structure.
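Since every part keeps the same `data/<subfolder>/` layout, the `data_files` glob for any subfolder can be built uniformly. A small sketch (the function name is illustrative):

```python
def subfolder_glob(subfolder):
    # Parquet glob pattern for one subfolder under the repo's data/ directory,
    # suitable for the data_files argument of load_dataset().
    return f"data/{subfolder}/*.parquet"

pattern = subfolder_glob("xl/train")
# pattern is "data/xl/train/*.parquet"
```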
Important Notes
- This is part of a multi-repository dataset due to size constraints
- Each part maintains the original folder structure
- Use the concatenation approach above to work with the complete dataset
- Files are distributed to balance repository sizes (max 290 GB per repo)