---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: speaker
    dtype: string
  - name: nano_layer_1
    list: int64
  - name: nano_layer_2
    list: int64
  - name: nano_layer_3
    list: int64
  - name: nano_layer_4
    list: int64
  - name: encoded_len
    dtype: int64
  splits:
  - name: train
    num_bytes: 58048206
    num_examples: 9949
  download_size: 8076967
  dataset_size: 58048206
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
task_categories:
- text-to-speech
language:
- hi
---
# 🗣️ tts-quantized-dataset
This dataset contains **quantized Hindi Text-to-Speech (TTS)** samples generated using NVIDIA’s [`nemo-nano-codec-22khz-0.6kbps-12.5fps`](https://huggingface.co/nvidia/nemo-nano-codec-22khz-0.6kbps-12.5fps) neural audio codec.
It is designed for **training lightweight speech synthesis models**, such as token-based TTS models, audio language models, or text-to-codec models.
## 📚 Dataset Summary
| Field | Description |
|--------|--------------|
| **text** | The transcription (Hindi text) corresponding to each audio sample. |
| **speaker** | Speaker identity (derived from the file name). |
| **nano_layer_1–4** | Discrete audio tokens from four quantization layers of the NeMo Nano Codec model. |
| **encoded_len** | Length of each encoded token sequence. |
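The fields are tied together by one invariant: each `nano_layer_*` list holds exactly `encoded_len` tokens for a given sample. A quick sanity check (token values below are made up for illustration):

```python
# Illustrative record; in the real dataset every nano_layer_* list
# has exactly `encoded_len` entries.
record = {
    "text": "यह एक परीक्षण वाक्य है।",
    "speaker": "sample_001",
    "nano_layer_1": [3, 18, 94],
    "nano_layer_2": [11, 45, 67],
    "nano_layer_3": [32, 98, 76],
    "nano_layer_4": [25, 14, 89],
    "encoded_len": 3,
}

layer_lengths = [len(record[f"nano_layer_{i}"]) for i in range(1, 5)]
assert all(n == record["encoded_len"] for n in layer_lengths)
```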
## 🧠 Source Dataset
- **Original dataset:** [`SayantanJoker/original_data_hindi_tts`](https://huggingface.co/datasets/SayantanJoker/original_data_hindi_tts)
- **Conversion tool:** [NVIDIA NeMo AudioCodecModel](https://docs.nvidia.com/nemo-framework/user-guide/docs/en/stable/tts/overview.html#audio-codec-model)
- **Sampling rate:** 22.05 kHz
- **Compression:** 0.6 kbps (4 quantization layers, 12.5 fps)
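These settings pin down the token budget: at 12.5 frames per second with 4 quantizer layers, one second of audio costs 50 tokens, and 0.6 kbps works out to 48 bits per frame. Assuming the bitrate is split evenly across layers (an assumption, not stated by the codec card), that is 12 bits, i.e. a 4096-entry codebook, per layer:

```python
frame_rate = 12.5   # codec frames per second
num_layers = 4      # quantizer layers
bitrate = 600       # 0.6 kbps, in bits per second

tokens_per_second = frame_rate * num_layers   # 50.0 tokens per second
bits_per_frame = bitrate / frame_rate         # 48.0 bits per frame
bits_per_layer = bits_per_frame / num_layers  # 12.0 bits -> 2**12 = 4096 codes
```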
## ⚙️ Data Preparation Pipeline
The dataset was preprocessed using the following steps:
1. Load original Hindi TTS dataset from Hugging Face.
2. Resample all audio files to **22,050 Hz**.
3. Encode audio waveforms using `AudioCodecModel` from NVIDIA NeMo.
4. Store the resulting discrete codec tokens into 4 quantizer layers.
5. Save as Hugging Face `DatasetDict` and push to Hub.
The exact preprocessing script used: [`data_prep.py`](https://huggingface.co/ArunKr/tts-quantized-dataset/blob/main/data_prep.py)
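Step 4 above boils down to splitting the codec's `(num_codebooks, frames)` token matrix into one column per quantizer layer. A minimal sketch with a mocked token matrix (in the real pipeline this matrix is produced by `AudioCodecModel.encode`):

```python
# Mocked codec output: 4 codebooks x 5 frames of discrete tokens.
tokens = [
    [3, 18, 94, 105, 78],
    [11, 45, 67, 53, 21],
    [32, 98, 76, 12, 43],
    [25, 14, 89, 64, 72],
]

# One dataset column per quantizer layer, plus the sequence length.
record = {f"nano_layer_{i + 1}": layer for i, layer in enumerate(tokens)}
record["encoded_len"] = len(tokens[0])
```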
## 🧩 Example Entry
```json
{
  "text": "यह एक परीक्षण वाक्य है।",
  "speaker": "sample_001",
  "nano_layer_1": [3, 18, 94, 105, 78, ...],
  "nano_layer_2": [11, 45, 67, 53, 21, ...],
  "nano_layer_3": [32, 98, 76, 12, 43, ...],
  "nano_layer_4": [25, 14, 89, 64, 72, ...],
  "encoded_len": 120
}
```
## 🧰 Usage Example
```python
from datasets import load_dataset
ds = load_dataset("ArunKr/tts-quantized-dataset", split="train")
print(ds[0]["text"])
# "यह एक परीक्षण वाक्य है।"
codes = [
    ds[0]["nano_layer_1"],
    ds[0]["nano_layer_2"],
    ds[0]["nano_layer_3"],
    ds[0]["nano_layer_4"],
]
```
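For token-based TTS training, the four per-layer streams are often merged into a single sequence; frame-wise interleaving is one common scheme. `interleave` below is a hypothetical helper for illustration, not part of this dataset or NeMo:

```python
def interleave(layers):
    # Frame-wise interleaving: all four layers' tokens for frame 0,
    # then all four for frame 1, and so on.
    return [tok for frame in zip(*layers) for tok in frame]

demo = [[3, 18], [11, 45], [32, 98], [25, 14]]
print(interleave(demo))  # [3, 11, 32, 25, 18, 45, 98, 14]
```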
You can convert the tokens back to a waveform with the same NeMo codec. `AudioCodecModel.decode` expects a `(batch, num_codebooks, time)` token tensor together with per-item token lengths, and returns the audio plus its lengths:
```python
from nemo.collections.tts.models import AudioCodecModel
import torch

codec = AudioCodecModel.from_pretrained(
    "nvidia/nemo-nano-codec-22khz-0.6kbps-12.5fps"
).eval()

# Stack the four layers into a (batch=1, num_codebooks=4, time) tensor.
tokens = torch.tensor([codes])
tokens_len = torch.tensor([tokens.shape[-1]])
with torch.no_grad():
    audio, audio_len = codec.decode(tokens=tokens, tokens_len=tokens_len)
```
## 📦 Dataset Structure
```
tts_quantized_dataset/
├── data_prep.py
├── README.md
└── data/
```
## 🧾 License
* **Source Dataset License:** As provided by the original dataset ([`SayantanJoker/original_data_hindi_tts`](https://huggingface.co/datasets/SayantanJoker/original_data_hindi_tts))
* **Generated Dataset License:** CC BY 4.0
(Attribution required for use and redistribution)
## 🙏 Acknowledgments
* [NVIDIA NeMo](https://github.com/NVIDIA/NeMo) for the neural codec model.
* [Hugging Face Datasets](https://huggingface.co/docs/datasets) for dataset hosting and preprocessing utilities.
* [SayantanJoker/original_data_hindi_tts](https://huggingface.co/datasets/SayantanJoker/original_data_hindi_tts) for providing the original Hindi speech dataset.
## 💬 Citation
If you use this dataset, please cite:
```bibtex
@dataset{arunkr_tts_quantized_dataset_2025,
  author = {Arun Kumar Tiwary},
  title  = {Hindi TTS Quantized Dataset using NeMo Nano Codec},
  year   = {2025},
  url    = {https://huggingface.co/datasets/ArunKr/tts-quantized-dataset}
}
```