# GigaSpeech Part 1
This is Part 1 of 8 of a large-scale speech dataset, split to accommodate Hugging Face's repository size limits.
## Multi-Part Dataset
This dataset is split across multiple repositories:
- Part 1 (current): shahdsaf/gigaspeech-part-1
- Part 2: shahdsaf/gigaspeech-part-2
- Part 3: shahdsaf/gigaspeech-part-3
- Part 4: shahdsaf/gigaspeech-part-4
- Part 5: shahdsaf/gigaspeech-part-5
- Part 6: shahdsaf/gigaspeech-part-6
- Part 7: shahdsaf/gigaspeech-part-7
- Part 8: shahdsaf/gigaspeech-part-8
## This Repository (Part 1)
- Total parquet files: 262
- Total size: 288.94 GB
- Failed uploads: 0
- Subfolders: 5
### Files by Subfolder
- dev/validation: 1 file (1.59 GB)
- xl/validation: 1 file (1.59 GB)
- test/test: 6 files (8.00 GB)
- xl/test: 6 files (8.00 GB)
- xl/train: 248 files (269.76 GB)
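As a quick sanity check, the per-subfolder figures above can be summed in a few lines of Python (the counts and sizes below are copied directly from the list):

```python
# Per-subfolder (file count, size in GB), as listed above
subfolders = {
    "dev/validation": (1, 1.59),
    "xl/validation": (1, 1.59),
    "test/test": (6, 8.00),
    "xl/test": (6, 8.00),
    "xl/train": (248, 269.76),
}

total_files = sum(count for count, _ in subfolders.values())
total_size_gb = round(sum(size for _, size in subfolders.values()), 2)

print(total_files)     # 262 parquet files in this part
print(total_size_gb)   # 288.94 GB, matching the stated total size
```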
## Usage
```python
from datasets import load_dataset
import datasets

# Load this part of the dataset
dataset = load_dataset("shahdsaf/gigaspeech-part-1")

# Load a specific subfolder from this part
dataset = load_dataset(
    "shahdsaf/gigaspeech-part-1",
    data_files="data/xl/train/*.parquet",
)

# To load the complete multi-part dataset:
parts = []
for i in range(1, 9):
    part_repo = f"shahdsaf/gigaspeech-part-{i}"
    part_data = load_dataset(part_repo)
    parts.append(part_data["train"])

# Concatenate all parts into a single dataset
complete_dataset = datasets.concatenate_datasets(parts)
```
## File Organization
All parquet files are stored under the data/ directory, maintaining the original subfolder structure.
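Because every part keeps the same `data/<subset>/<split>/` layout, a `data_files` mapping for `load_dataset` can be built programmatically. The helper below is purely illustrative (the function name and the choice of split keys are not part of this repository):

```python
# Build a split-name -> glob-pattern mapping suitable for load_dataset's
# data_files argument, assuming the data/<subset>/<split>/*.parquet layout
# described above. This helper is illustrative, not part of the dataset.
def data_files_for(subfolders):
    return {
        subfolder.replace("/", "_"): f"data/{subfolder}/*.parquet"
        for subfolder in subfolders
    }

patterns = data_files_for(["xl/train", "xl/validation", "xl/test"])
# e.g. {"xl_train": "data/xl/train/*.parquet", ...}
```

The resulting dict can then be passed as `load_dataset("shahdsaf/gigaspeech-part-1", data_files=patterns)`.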
## Important Notes
- This is part of a multi-repository dataset due to size constraints
- Each part maintains the original folder structure
- Use the concatenation approach above to work with the complete dataset
- Files are distributed to balance repository sizes (max 290 GB per repo)
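The 290 GB cap mentioned above suggests a simple greedy packing scheme. A minimal sketch of that idea (the file sizes are hypothetical, and this is not the actual script used to split the dataset):

```python
# Greedy packing: assign each file to the first repo that still has room
# under the cap, opening a new repo when none does. Sizes are in GB and
# purely illustrative; this is a sketch, not the real distribution logic.
def pack_files(file_sizes_gb, cap_gb=290.0):
    repos = []   # each repo is a list of file sizes
    totals = []  # running size of each repo
    for size in sorted(file_sizes_gb, reverse=True):
        for i, total in enumerate(totals):
            if total + size <= cap_gb:
                repos[i].append(size)
                totals[i] += size
                break
        else:
            repos.append([size])
            totals.append(size)
    return repos

repos = pack_files([150.0, 120.0, 100.0, 90.0, 60.0], cap_gb=290.0)
# -> two repos, each under the 290 GB cap
```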