---
language:
- en
license: cc-by-4.0
configs:
- config_name: default
data_files:
- split: emilia
path: emilia/*
- split: hifitts2
path: hifitts2/*
splits:
- name: emilia
num_examples: 1693423
- name: hifitts2
num_examples: 1220574
pipeline_tag: text-to-speech
tags:
- voxtream
- text-to-speech
task_categories:
- text-to-speech
---
# Dataset Card for the VoXtream training dataset
This repository contains the training dataset for the [VoXtream](https://huggingface.co/herimor/voxtream) TTS model.
The dataset contains 9k hours of speech:
- 4.5k hours sampled from the [Emilia](https://huggingface.co/datasets/amphion/Emilia-Dataset) dataset. We applied additional diarization to remove multi-speaker utterances, discarded utterances with invalid automatic transcripts, and used the [NISQA](https://github.com/gabrielmittag/NISQA) model to remove low-quality utterances.
- 4.5k hours sampled from the [HiFiTTS2](https://huggingface.co/datasets/nvidia/hifitts-2) dataset (22 kHz subset). We selected only single-speaker utterances and filtered the dataset by word error rate (WER).

All utterances are 25 seconds long; shorter audio clips were concatenated with other utterances from the same speaker to reach this length. Sampling rate: 24 kHz.
### Data fields
- **mimi_codes_16cb** - Audio tokens extracted with the [Mimi](https://huggingface.co/kyutai/mimi) audio codec (16 codebooks).
- **phone_emb_indices** - Alignment of phoneme tokens to Mimi audio frames, extracted with [MFA](https://montreal-forced-aligner.readthedocs.io).
- **phone_tokens** - Phoneme tokens.
- **sem_label_shifts** - Monotonic phoneme alignment labels.
- **spk_templates** - Speaker templates extracted from the first 3 seconds of audio with the [ReDimNet](https://github.com/IDRnD/redimnet) model.
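As a rough sketch of how these fields relate in size, the following assumes Mimi's published 12.5 Hz frame rate and the 24 kHz sampling rate stated above; the array layout is illustrative only and is not taken from the actual files:

```python
import numpy as np

# Assumed constants (frame rate from the Mimi model card; the rest from this card)
SAMPLE_RATE = 24_000   # Hz
FRAME_RATE = 12.5      # Mimi frames per second
UTT_SECONDS = 25       # every utterance is 25 seconds long
N_CODEBOOKS = 16       # mimi_codes_16cb uses 16 codebooks

# Number of Mimi frames in one utterance
n_frames = int(UTT_SECONDS * FRAME_RATE)  # 312

# Hypothetical placeholders showing the expected per-utterance shapes:
# one token per codebook per frame, and one phoneme index per frame.
mimi_codes_16cb = np.zeros((N_CODEBOOKS, n_frames), dtype=np.int64)
phone_emb_indices = np.zeros(n_frames, dtype=np.int64)

print(mimi_codes_16cb.shape)  # (16, 312)
```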
### Sources
- **Repository:** [repo](https://github.com/herimor/voxtream)
- **Paper:** [paper](https://arxiv.org/pdf/2509.15969)
- **Demo:** [demo](https://herimor.github.io/voxtream)
## Get started
To download the dataset, use the following code:
```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download('herimor/voxtream-train-9k', repo_type='dataset')
# To fetch a single split only, pass e.g. allow_patterns=['emilia/*']
```
Clone our [repo](https://github.com/herimor/voxtream) and follow the instructions in the README file.
## Sample Usage
The following examples demonstrate how to use the VoXtream model (trained on this dataset) for output streaming and full streaming.
### Installation
```bash
pip install voxtream
```
### Output streaming
```bash
voxtream \
--prompt-audio assets/audio/male.wav \
--prompt-text "The liquor was first created as 'Brandy Milk', produced with milk, brandy and vanilla." \
--text "In general, however, some method is then needed to evaluate each approximation." \
--output "output_stream.wav"
```
* Note: the initial run may take some time while the model weights are downloaded.
### Full streaming
```bash
voxtream \
--prompt-audio assets/audio/female.wav \
--prompt-text "Betty Cooper helps Archie with cleaning a store room, when Reggie attacks her." \
--text "Staff do not always do enough to prevent violence." \
--output "full_stream.wav" \
--full-stream
```
## Citation
```bibtex
@article{torgashov2025voxtream,
  author  = {Torgashov, Nikita and Henter, Gustav Eje and Skantze, Gabriel},
  title   = {Vo{X}tream: Full-Stream Text-to-Speech with Extremely Low Latency},
  journal = {arXiv preprint arXiv:2509.15969},
  year    = {2025}
}
```