---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: speaker
      dtype: string
    - name: nano_layer_1
      list: int64
    - name: nano_layer_2
      list: int64
    - name: nano_layer_3
      list: int64
    - name: nano_layer_4
      list: int64
    - name: encoded_len
      dtype: int64
  splits:
    - name: train
      num_bytes: 58048206
      num_examples: 9949
  download_size: 8076967
  dataset_size: 58048206
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: mit
task_categories:
  - text-to-speech
language:
  - hi
---

# 🗣️ tts-quantized-dataset

This dataset contains quantized Hindi Text-to-Speech (TTS) samples generated with NVIDIA's `nemo-nano-codec-22khz-0.6kbps-12.5fps` neural audio codec.
It is intended for training lightweight speech-synthesis models such as token-based TTS models, audio language models, or text-to-codec models.

## 📚 Dataset Summary

| Field | Description |
|---|---|
| `text` | The transcription (Hindi text) corresponding to each audio sample. |
| `speaker` | Speaker identity (derived from the file name). |
| `nano_layer_1`–`nano_layer_4` | Discrete audio tokens from the four quantization layers of the NeMo Nano Codec model. |
| `encoded_len` | Length of each encoded token sequence. |
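Since the codec's name advertises 12.5 frames per second, `encoded_len` maps directly to audio duration. A quick sanity check in plain Python (the 12.5 fps figure is taken from the model name, not verified against the codec config):

```python
# Approximate audio duration from the codec frame count.
# 12.5 fps (per the codec name) means each frame covers 80 ms.
FRAME_RATE_HZ = 12.5

def duration_seconds(encoded_len: int) -> float:
    """Convert a codec token-sequence length to seconds of audio."""
    return encoded_len / FRAME_RATE_HZ

print(duration_seconds(120))  # 120 frames -> 9.6 s
```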

## 🧠 Source Dataset

The source audio and transcriptions come from the original Hindi TTS dataset `SayantanJoker/original_data_hindi_tts` on the Hugging Face Hub.

## ⚙️ Data Preparation Pipeline

The dataset was preprocessed using the following steps:

  1. Load the original Hindi TTS dataset from the Hugging Face Hub.
  2. Resample all audio to 22,050 Hz (the codec's native sample rate).
  3. Encode the audio waveforms with `AudioCodecModel` from NVIDIA NeMo.
  4. Store the resulting discrete codec tokens as four quantizer-layer columns.
  5. Save the result as a Hugging Face `DatasetDict` and push it to the Hub.

The exact preprocessing script used:

https://huggingface.co/ArunKr/tts-quantized-dataset/blob/main/data_prep.py
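Step 4 above can be sketched without NeMo. `tokens_to_columns` below is a hypothetical helper (not taken from `data_prep.py`) that splits a `[num_layers, T]` grid of codec tokens, simulated here with nested lists, into the dataset's per-layer columns:

```python
def tokens_to_columns(tokens, encoded_len):
    """Split a [num_layers, T] grid of codec tokens into the dataset's
    per-layer list columns, trimming any padding beyond encoded_len."""
    cols = {
        f"nano_layer_{i + 1}": layer[:encoded_len]
        for i, layer in enumerate(tokens)
    }
    cols["encoded_len"] = encoded_len
    return cols

# Simulated codec output: 4 quantizer layers, 5 valid frames padded to 6.
grid = [
    [3, 18, 94, 105, 78, 0],
    [11, 45, 67, 53, 21, 0],
    [32, 98, 76, 12, 43, 0],
    [25, 14, 89, 64, 72, 0],
]
row = tokens_to_columns(grid, encoded_len=5)
print(row["nano_layer_1"])  # [3, 18, 94, 105, 78]
```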

## 🧩 Example Entry

```json
{
  "text": "यह एक परीक्षण वाक्य है।",
  "speaker": "sample_001",
  "nano_layer_1": [3, 18, 94, 105, 78, ...],
  "nano_layer_2": [11, 45, 67, 53, 21, ...],
  "nano_layer_3": [32, 98, 76, 12, 43, ...],
  "nano_layer_4": [25, 14, 89, 64, 72, ...],
  "encoded_len": 120
}
```

## 🧰 Usage Example

```python
from datasets import load_dataset

ds = load_dataset("ArunKr/tts-quantized-dataset", split="train")

print(ds[0]["text"])
# "यह एक परीक्षण वाक्य है।"

# Collect the four quantizer layers for one example.
codes = [
    ds[0]["nano_layer_1"],
    ds[0]["nano_layer_2"],
    ds[0]["nano_layer_3"],
    ds[0]["nano_layer_4"],
]
```
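Each `nano_layer_*` list should have length `encoded_len` (assuming, as the schema suggests, that all quantizer layers share the codec's frame count). A small consistency check, run here on a simulated row so it works offline (real rows come from `ds`):

```python
def check_row(row):
    """Verify that all four quantizer layers match the stored encoded_len."""
    layers = [row[f"nano_layer_{i}"] for i in range(1, 5)]
    return all(len(layer) == row["encoded_len"] for layer in layers)

# Simulated row mirroring the dataset schema.
row = {
    "nano_layer_1": [3, 18, 94],
    "nano_layer_2": [11, 45, 67],
    "nano_layer_3": [32, 98, 76],
    "nano_layer_4": [25, 14, 89],
    "encoded_len": 3,
}
print(check_row(row))  # True
```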

You can convert the tokens back to a waveform with the same NeMo codec. Note that `decode` expects a `[batch, num_layers, T]` token tensor along with the token lengths, and returns both the audio and its lengths:

```python
from nemo.collections.tts.models import AudioCodecModel
import torch

codec = AudioCodecModel.from_pretrained(
    "nvidia/nemo-nano-codec-22khz-0.6kbps-12.5fps"
).eval()

tokens = torch.tensor([codes])                 # shape: [1, 4, T]
tokens_len = torch.tensor([tokens.shape[-1]])  # valid length per batch item
with torch.no_grad():
    audio, audio_len = codec.decode(tokens=tokens, tokens_len=tokens_len)
```

## 📦 Dataset Structure

```
tts_quantized_dataset/
├── data_prep.py
├── README.md
└── data/
```

## 🧾 License

- **Source dataset:** as provided by the original dataset (SayantanJoker/original_data_hindi_tts).
- **Generated dataset:** CC BY 4.0 (attribution required for use and redistribution).

## 🙏 Acknowledgments

- NVIDIA NeMo team for the `nemo-nano-codec-22khz-0.6kbps-12.5fps` audio codec.
- SayantanJoker for the original Hindi TTS dataset.

## 💬 Citation

If you use this dataset, please cite:

```bibtex
@dataset{arunkr_tts_quantized_dataset_2025,
  author = {Arun Kumar Tiwary},
  title  = {Hindi TTS Quantized Dataset using NeMo Nano Codec},
  year   = {2025},
  url    = {https://huggingface.co/datasets/ArunKr/tts-quantized-dataset}
}
```