---
datasets:
- freococo/rohingya_asr_audio
language:
- rhg
tags:
- speech
- audio
- voa
- rohingya
- self-supervised
- webdataset
- public-domain
pretty_name: VOA Rohingya ASR
license: pddl
task_categories:
- automatic-speech-recognition
- audio-to-audio
- audio-classification
language_creators:
- found
source_datasets:
- original
---
**This is the first public Rohingya language ASR dataset in AI history.**
## Overview
This dataset contains broadcast audio recordings from the **Voice of America (VOA) Rohingya Service**. Each file represents a daily news segment, typically 30 minutes in length, automatically segmented into chunks of 5–15 seconds for use in **self-supervised ASR**, **pretraining**, **language identification**, and more.
The content was aired publicly as part of VOA’s Rohingya-language radio program and is therefore released under a **public domain dedication** (U.S. Government speech, [17 U.S.C. § 105](https://www.govinfo.gov/content/pkg/USCODE-2011-title17/html/USCODE-2011-title17-chap1-sec105.htm)).
The dataset is stored in **WebDataset format**, with each `.tar` archive containing paired `.audio` (MP3) and `.json` metadata files for each segment.
## Acknowledgments
This dataset would not exist without the dedication and professionalism of the **Voice of America Rohingya Service** — especially the **journalists, editors, producers, and engineers** who continue broadcasting trusted news and public service content to marginalized communities.
Special gratitude goes to:
- VOA multilingual teams who **created, edited, and voiced** this content
- The **American people**, whose hard-earned taxpayer contributions make public media like VOA possible
- The open-source, low-resource, and humanitarian tech community — for tools, models, and continued support
This dataset is released in the hope that it will:
- Advance multilingual speech technology
- Empower access to information
- Amplify underrepresented voices across the world
## Metrics
| Metric | Value |
|-------------------|--------------|
| Total audio hours | **357.55 h** |
| Audio chunks | **131,860** |
| Shard count | **14** |
| Chunk duration    | 6–15 sec     |
| Format | WebDataset |
| License | Public Domain (VOA / U.S. Gov) |
## Quick-start
You can load and stream the dataset from Hugging Face using the `datasets` library:
from datasets import load_dataset
dataset = load_dataset(
"freococo/rohingya_asr_audio",
split="train",
streaming=True
)
for sample in dataset:
print(sample["audio"]) # Audio object
print(sample["file_name"]) # Chunk file name
print(sample["download_url"]) # Original source URL
print(sample["duration"]) # Duration in seconds
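Because streaming avoids downloading the full shards, it can be useful to pre-filter samples on the `duration` metadata field before any heavier processing. A minimal sketch of that idea, using hypothetical in-memory records with the same fields as the dataset's samples:

```python
# Hypothetical records mirroring the dataset's metadata fields;
# in practice these would come from the streamed dataset above.
samples = [
    {"file_name": "20250310_0001.audio", "duration": 3.2},
    {"file_name": "20250310_0002.audio", "duration": 8.7},
    {"file_name": "20250310_0003.audio", "duration": 14.9},
]

# Keep only chunks long enough to be useful for pretraining,
# e.g. at least 5 seconds.
MIN_DURATION = 5.0
kept = [s for s in samples if s["duration"] >= MIN_DURATION]

print([s["file_name"] for s in kept])
```

With the streamed dataset itself, the same predicate can be passed to `dataset.filter(...)`, which `datasets` supports on streaming (iterable) datasets.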
## Known Limitations
This dataset was created through automatic chunking of full-length VOA Rohingya news broadcasts. As a result, developers should be aware of the following limitations:
- **No transcriptions** are included, so the dataset is unsuitable for supervised training unless transcribed independently.
- Some chunks may contain **non-speech segments** such as:
- Music intros and outros
- Jingles or filler transitions
- Background crowd noise or environmental sounds
- Silent or low-audio intervals
- **No speaker labeling** is provided. Voice diversity, accents, and gender variation exist, but are unlabeled.
- **Broadcast mixing artifacts** may affect ASR performance in noisy conditions (e.g., overlaid music, crossfades, background hum).
Despite these challenges, the dataset is suitable for:
- Pretraining ASR models (wav2vec2-style)
- Unsupervised learning
- Language ID and diarization
- Synthetic data generation
We recommend applying **speech detection filters**, **VAD**, or **manual quality control** for downstream supervised tasks.
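As a starting point before reaching for a dedicated VAD (e.g., WebRTC VAD or Silero VAD), even a crude energy-based filter can reject mostly-silent chunks. The sketch below is an assumption-laden toy, not a production VAD; the function names, thresholds, and frame length are all invented for illustration:

```python
import numpy as np

def frame_energy(signal: np.ndarray, frame_len: int = 400) -> np.ndarray:
    """Mean squared energy per non-overlapping frame."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return (frames ** 2).mean(axis=1)

def is_mostly_speechlike(signal, energy_threshold=1e-4, min_active_ratio=0.5):
    """Crude VAD: fraction of frames whose energy exceeds a threshold."""
    energies = frame_energy(signal)
    active = (energies > energy_threshold).mean()
    return active >= min_active_ratio

# Synthetic check: a loud tone passes, near-silence does not.
sr = 16_000
t = np.linspace(0, 1, sr, endpoint=False)
tone = 0.1 * np.sin(2 * np.pi * 220 * t)
silence = 0.0001 * np.random.randn(sr)
print(is_mostly_speechlike(tone), is_mostly_speechlike(silence))  # → True False
```

For real broadcast audio with music beds and crossfades, a trained VAD will be far more reliable than a fixed energy threshold.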
## Dataset Details
Each training sample is stored as:
- `.audio` — MP3 audio content (~5–15 seconds)
- `.json` — metadata with:
- `file_name`: full chunk filename (e.g., `20250310_0001.audio`)
- `original_file`: e.g., `20250310`
- `publish_date`: ISO 8601 format (e.g., `2025-03-10`)
- `download_url`: original VOA source URL
- `duration`: chunk duration in seconds
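Since the chunk filename encodes the broadcast date plus a sequence number, it can be cross-checked against `publish_date` when validating metadata. A small stdlib-only sketch (the record below is hypothetical and the URL is a placeholder, not a real VOA link):

```python
import json
from datetime import date

# Hypothetical metadata record following the fields listed above.
raw = """{
  "file_name": "20250310_0001.audio",
  "original_file": "20250310",
  "publish_date": "2025-03-10",
  "download_url": "https://example.org/placeholder.mp3",
  "duration": 7.4
}"""

meta = json.loads(raw)

# The date stem of the filename should match publish_date.
stem = meta["file_name"].split("_")[0]
parsed = date.fromisoformat(meta["publish_date"])
assert stem == parsed.strftime("%Y%m%d")
print(meta["original_file"], meta["duration"])
```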
These files are stored in `.tar` archives, split into ~10,000-sample shards named like:
```
rohingya-00000.tar
rohingya-00001.tar
...
```
Each archive follows [WebDataset format](https://github.com/webdataset/webdataset), making it easy to use with PyTorch and Hugging Face streaming.
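The WebDataset convention is simply a tar archive whose members are grouped by shared basename ("key"), which is how each `.audio` file is paired with its `.json`. A self-contained sketch using only the standard library (the sample name and bytes are fake placeholders, and real shards should be read with the `webdataset` or `datasets` libraries):

```python
import io
import json
import tarfile

# Write a tiny in-memory shard with one paired sample, mimicking
# the member layout of rohingya-00000.tar.
shard = io.BytesIO()
with tarfile.open(fileobj=shard, mode="w") as tar:
    for name, payload in [
        ("20250310_0001.audio", b"fake-mp3-bytes"),
        ("20250310_0001.json", json.dumps({"duration": 7.4}).encode()),
    ]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Read it back, grouping members by their shared basename -- the
# same pairing rule WebDataset applies to .audio and .json files.
shard.seek(0)
samples = {}
with tarfile.open(fileobj=shard, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        samples.setdefault(key, {})[ext] = tar.extractfile(member).read()

meta = json.loads(samples["20250310_0001"]["json"])
print(meta["duration"], len(samples["20250310_0001"]["audio"]))
```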
## License & Reuse
All content is in the **public domain** under U.S. law:
> U.S. Government speech recordings (VOA staff broadcasts) are public domain under [17 U.S.C. § 105](https://www.govinfo.gov/content/pkg/USCODE-2011-title17/html/USCODE-2011-title17-chap1-sec105.htm).
Some broadcasts may contain music or third-party clips. Please verify manually if using for commercial purposes.
## Citation
If you use this dataset in research, please cite:
> **Freococo (2025).**
> *VOA Rohingya ASR*
> Hugging Face: [https://huggingface.co/datasets/freococo/rohingya_asr_audio](https://huggingface.co/datasets/freococo/rohingya_asr_audio)
> Public-domain speech segments from VOA Rohingya news programming.
> Released under `pddl`.