--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- automatic-speech-recognition |
|
|
- text-to-speech |
|
|
language: |
|
|
- en |
|
|
dataset_info: |
|
|
config_name: default |
|
|
--- |
|
|
|
|
|
# Gigaspeech Part 2 |
|
|
|
|
|
This is **Part 2 of 8** of a large-scale speech dataset, split across multiple repositories to stay within Hugging Face's per-repository size limits.
|
|
|
|
|
## Multi-Part Dataset |
|
|
|
|
|
This dataset is split across multiple repositories: |
|
|
|
|
|
- Part 1: [shahdsaf/gigaspeech-part-1](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-1) |
|
|
- **Part 2** (current): [shahdsaf/gigaspeech-part-2](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-2) |
|
|
- Part 3: [shahdsaf/gigaspeech-part-3](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-3) |
|
|
- Part 4: [shahdsaf/gigaspeech-part-4](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-4) |
|
|
- Part 5: [shahdsaf/gigaspeech-part-5](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-5) |
|
|
- Part 6: [shahdsaf/gigaspeech-part-6](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-6) |
|
|
- Part 7: [shahdsaf/gigaspeech-part-7](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-7) |
|
|
- Part 8: [shahdsaf/gigaspeech-part-8](https://huggingface.co/datasets/shahdsaf/gigaspeech-part-8) |
|
|
|
|
|
|
|
|
## This Repository (Part 2) |
|
|
|
|
|
- **Total parquet files**: 269
|
|
- **Total size**: 289.84 GB |
|
|
- **Failed uploads**: 0 |
|
|
- **Subfolders**: 1 |
|
|
|
|
|
### Files by Subfolder: |
|
|
- **xl/train**: 269 files (289.84 GB) |
|
|
|
|
|
|
|
|
## Usage |
|
|
|
|
|
```python
from datasets import load_dataset, concatenate_datasets

# Load this part of the dataset
dataset = load_dataset("shahdsaf/gigaspeech-part-2")

# Load a specific subfolder from this part
dataset = load_dataset(
    "shahdsaf/gigaspeech-part-2",
    data_files="data/xl/train/*.parquet",
)

# Load the complete multi-part dataset
parts = []
for i in range(1, 9):
    part_repo = f"shahdsaf/gigaspeech-part-{i}"
    part_data = load_dataset(part_repo)
    parts.append(part_data["train"])

# Concatenate all parts into a single dataset
complete_dataset = concatenate_datasets(parts)
```
|
|
|
|
|
## File Organization |
|
|
|
|
|
All parquet files are stored under the `data/` directory, maintaining the original subfolder structure. |
|
|
|
|
|
## Important Notes |
|
|
|
|
|
- This is part of a multi-repository dataset due to size constraints |
|
|
- Each part maintains the original folder structure |
|
|
- Use the concatenation approach above to work with the complete dataset |
|
|
- Files are distributed to balance repository sizes (max 290 GB per repo) |
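The actual distribution scheme is not documented here, but a 290 GB cap naturally suggests a greedy first-fit split: walk the files in order and start a new repository whenever the next file would push the current one over the limit. A purely illustrative sketch:

```python
def balance_into_repos(file_sizes_gb, max_repo_gb=290.0):
    # Greedy split: open a new repo whenever adding the next file
    # would exceed the per-repo cap. (Illustrative only; the real
    # scheme used for this dataset is not documented on this card.)
    repos, current, current_size = [], [], 0.0
    for size in file_sizes_gb:
        if current and current_size + size > max_repo_gb:
            repos.append(current)
            current, current_size = [], 0.0
        current.append(size)
        current_size += size
    if current:
        repos.append(current)
    return repos
```

With this policy each repository stays at or under the cap while keeping files in their original order, which preserves the subfolder grouping seen across the eight parts.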
|
|
|