---
license: apache-2.0
task_categories:
  - automatic-speech-recognition
  - text-to-speech
language:
  - en
dataset_info:
  config_name: default
---

# Gigaspeech Part 2

This is Part 2 of 8 of a large-scale speech dataset, split to accommodate Hugging Face's repository size limits.

## Multi-Part Dataset

This dataset is split across multiple repositories:

### This Repository (Part 2)

- Total parquet files: 269
- Total size: 289.84 GB
- Failed uploads: 0
- Subfolders: 1

**Files by Subfolder:**

- `xl/train`: 269 files (289.84 GB)

## Usage

```python
from datasets import load_dataset

# Load this part of the dataset
dataset = load_dataset("shahdsaf/gigaspeech-part-2")

# Load a specific subfolder from this part
dataset = load_dataset(
    "shahdsaf/gigaspeech-part-2",
    data_files="data/xl/train/*.parquet",
)

# To load the complete multi-part dataset:
import datasets

parts = []
for i in range(1, 9):
    part_repo = f"shahdsaf/gigaspeech-part-{i}"
    part_data = load_dataset(part_repo)
    parts.append(part_data["train"])

# Concatenate all parts
complete_dataset = datasets.concatenate_datasets(parts)
```

## File Organization

All parquet files are stored under the `data/` directory, maintaining the original subfolder structure.
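Because the `data/` layout mirrors the original subfolders, the `data_files` glob for any subfolder can be built mechanically. A minimal sketch (the `subfolder_pattern` helper is hypothetical, not part of this dataset):

```python
def subfolder_pattern(subfolder: str) -> str:
    """Return the parquet glob for one subfolder of this part.

    Hypothetical helper: assumes all files live under data/ with the
    original subfolder structure preserved, as described above.
    """
    return f"data/{subfolder}/*.parquet"


# Example: pass the result as data_files= to load_dataset, e.g.
# load_dataset("shahdsaf/gigaspeech-part-2",
#              data_files=subfolder_pattern("xl/train"))
```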

## Important Notes

- This is part of a multi-repository dataset due to size constraints
- Each part maintains the original folder structure
- Use the concatenation approach above to work with the complete dataset
- Files are distributed to balance repository sizes (max 290 GB per repo)
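Since each part is close to 290 GB, downloading all eight parts up front may be impractical. A sketch of a streaming alternative, assuming the repository naming pattern used above holds for all parts (`part_repos` and `stream_all_parts` are hypothetical helpers, and `concatenate_datasets` on streaming datasets requires a reasonably recent `datasets` release):

```python
def part_repos(base: str = "shahdsaf/gigaspeech-part-", n_parts: int = 8) -> list:
    """Build the repository ids for all parts (hypothetical helper)."""
    return [f"{base}{i}" for i in range(1, n_parts + 1)]


def stream_all_parts():
    """Iterate over all parts without downloading them in full."""
    # Imported lazily so part_repos() can be used without the
    # datasets library installed.
    from datasets import load_dataset, concatenate_datasets

    # streaming=True returns IterableDataset objects, so samples are
    # fetched on the fly instead of materializing ~2.3 TB on disk.
    parts = [
        load_dataset(repo, split="train", streaming=True)
        for repo in part_repos()
    ]
    return concatenate_datasets(parts)
```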