---
license: apache-2.0
language:
- en
---

### Dataset Summary

This dataset is a modified version of the [gigaspeech](https://huggingface.co/datasets/speechcolab/gigaspeech/tree/main/data) corpus, converted to parquet format to enable optimized I/O in high-performance and distributed computing environments.

GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high-quality labeled audio suitable for supervised training. The transcribed audio is collected from audiobooks, podcasts, and YouTube, covering both read and spontaneous speaking styles and a variety of topics such as arts, science, and sports.

---

### Source Data

- **Original Dataset**: [gigaspeech](https://huggingface.co/datasets/speechcolab/gigaspeech)
- **License**: This derived dataset is shared under the same Apache-2.0 license, with modifications limited to the storage format.

### Modifications

- **Data Format**: Converted to parquet format to enhance I/O performance for distributed training, reducing latency during data loading and retrieval.
- **Efficiency Optimization**: Restructured for a reduced storage footprint and faster I/O on high-performance clusters by leveraging parquet's efficient compression and columnar storage.

### Dataset Structure

- **File Format**: Parquet files.
- **Languages**: English.
- **Audio Sampling Rate**: Matches the original dataset specifications for high-fidelity speech data.

### Usage

This dataset is suited to large-scale automatic speech recognition (ASR) tasks, especially in distributed and high-performance computing environments. The parquet format minimizes I/O overhead, making the dataset well-suited for high-throughput training.

### Attribution

This dataset is based on the original [gigaspeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset, modified only by conversion to parquet format for I/O optimization. Please cite the original GigaSpeech dataset in any publications or projects using this dataset.

### Citation Information

Please cite this paper if you find this work useful:

```bibtex
@inproceedings{GigaSpeech2021,
  title={GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio},
  booktitle={Proc. Interspeech 2021},
  year=2021,
  author={Guoguo Chen, Shuzhou Chai, Guanbo Wang, Jiayu Du, Wei-Qiang Zhang, Chao Weng, Dan Su, Daniel Povey, Jan Trmal, Junbo Zhang, Mingjie Jin, Sanjeev Khudanpur, Shinji Watanabe, Shuaijiang Zhao, Wei Zou, Xiangang Li, Xuchen Yao, Yongqing Wang, Yujun Wang, Zhao You, Zhiyong Yan}
}
```