---
language:
- en
license: cc-by-4.0
configs:
- config_name: default
  data_files:
  - split: emilia
    path: emilia/*
  - split: hifitts2
    path: hifitts2/*
splits:
- name: emilia
  num_examples: 1693423
- name: hifitts2
  num_examples: 1220574
pipeline_tag: text-to-speech
tags:
- voxtream
- text-to-speech
task_categories:
- text-to-speech
---

# Dataset Card for the VoXtream training dataset

This repository contains the training dataset for the [VoXtream](https://huggingface.co/herimor/voxtream) TTS model.

The dataset contains 9k hours of speech:

- 4.5k hours sampled from the [Emilia](https://huggingface.co/datasets/amphion/Emilia-Dataset) dataset. We applied additional diarization to remove multi-speaker utterances, discarded utterances with invalid automatic transcripts, and used the [NISQA](https://github.com/gabrielmittag/NISQA) model to remove low-quality utterances.
- 4.5k hours sampled from the [HiFiTTS2](https://huggingface.co/datasets/nvidia/hifitts-2) dataset (22 kHz subset). We selected only single-speaker utterances and filtered the dataset by WER.
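The WER filtering mentioned above could be sketched as follows. This is an illustrative assumption: the paper's exact ASR system and WER threshold are not specified here, so the `0.1` cutoff and the plain word-level edit distance are placeholders.

```python
# Hypothetical sketch of WER-based filtering; the actual ASR model and
# threshold used for this dataset are assumptions, not documented values.

def word_error_rate(ref: str, hyp: str) -> float:
    """Word-level edit distance divided by the reference length."""
    r, h = ref.split(), hyp.split()
    # Standard dynamic-programming edit distance over word sequences.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def keep_utterance(transcript: str, asr_hypothesis: str,
                   threshold: float = 0.1) -> bool:
    """Keep an utterance only if its transcript matches the ASR output well."""
    return word_error_rate(transcript, asr_hypothesis) <= threshold
```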

All utterances are 25 seconds long; shorter audio clips were concatenated with other clips from the same speaker to reach this length. Sampling rate: 24 kHz.
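The same-speaker concatenation step could be sketched as a greedy packing pass. The exact strategy used to build this dataset is not documented, so treat this first-fit approach as an assumption.

```python
# Hypothetical sketch: greedily pack clips from the same speaker into
# groups of at most 25 seconds. Clips are never mixed across speakers.

TARGET_SEC = 25.0

def pack_clips(clips_by_speaker):
    """clips_by_speaker: {speaker_id: [duration_sec, ...]}.

    Returns a list of (speaker_id, [durations]) groups whose totals
    stay at or below TARGET_SEC (except for single over-length clips).
    """
    groups = []
    for spk, durations in clips_by_speaker.items():
        current, total = [], 0.0
        for dur in sorted(durations, reverse=True):
            # Flush the current group when adding this clip would overflow.
            if current and total + dur > TARGET_SEC:
                groups.append((spk, current))
                current, total = [], 0.0
            current.append(dur)
            total += dur
        if current:
            groups.append((spk, current))
    return groups
```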

### Description

- **mimi_codes_16cb** - Tokens extracted by the [Mimi](https://huggingface.co/kyutai/mimi) audio codec (16 codebooks).
- **phone_emb_indices** - Alignment of phoneme tokens to Mimi audio frames extracted by [MFA](https://montreal-forced-aligner.readthedocs.io).
- **phone_tokens** - Phoneme tokens.
- **sem_label_shifts** - Monotonic phoneme alignment labels.
- **spk_templates** - Speaker templates extracted from the first 3 seconds of audio by the [ReDimNet](https://github.com/IDRnD/redimnet) model.
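For orientation, a dummy sample with these fields might look like the following. All shapes here are assumptions: Mimi encodes at 12.5 frames per second, so a 25-second utterance yields about 312 frames with 16 codebooks each; the phoneme count and the speaker-template dimension are placeholders, not documented values.

```python
import numpy as np

# Assumed constants: Mimi frame rate is 12.5 Hz; codebook count (16) and
# utterance length (25 s) come from the descriptions above.
MIMI_FPS = 12.5
N_CODEBOOKS = 16
UTT_SEC = 25

n_frames = int(UTT_SEC * MIMI_FPS)  # 312 Mimi frames per utterance

sample = {
    # Audio tokens: one code per codebook per frame.
    "mimi_codes_16cb": np.zeros((n_frames, N_CODEBOOKS), dtype=np.int64),
    # Per-utterance phoneme sequence (length 40 is a placeholder).
    "phone_tokens": np.zeros(40, dtype=np.int64),
    # One phoneme index per Mimi frame (MFA alignment).
    "phone_emb_indices": np.zeros(n_frames, dtype=np.int64),
    # Monotonic alignment labels, one per frame.
    "sem_label_shifts": np.zeros(n_frames, dtype=np.int64),
    # Speaker embedding (dimension 192 is a placeholder, not the
    # actual ReDimNet output size).
    "spk_templates": np.zeros(192, dtype=np.float32),
}

# Frame-aligned fields must agree in length.
assert sample["phone_emb_indices"].shape[0] == sample["mimi_codes_16cb"].shape[0]
```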

### Sources 

- **Repository:** [repo](https://github.com/herimor/voxtream/tree/voxtream) 
- **Paper:** [paper](https://arxiv.org/pdf/2509.15969) 
- **Demo:** [demo](https://herimor.github.io/voxtream) 

## Get started

To download the dataset, use the following code:

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download('herimor/voxtream-train-9k', repo_type='dataset')
```

Clone our [repo](https://github.com/herimor/voxtream) and follow the instructions in the README file.

## Sample Usage

The following examples demonstrate how to use the VoXtream model (trained on this dataset) for output streaming and full streaming.


### Installation
```bash
pip install voxtream==0.1.5
```

### Output streaming
```bash
voxtream \
    --prompt-audio assets/audio/male.wav \
    --prompt-text "The liquor was first created as 'Brandy Milk', produced with milk, brandy and vanilla." \
    --text "In general, however, some method is then needed to evaluate each approximation." \
    --output "output_stream.wav"
```
* Note: the initial run may take some time while the model weights are downloaded.

### Full streaming
```bash
voxtream \
    --prompt-audio assets/audio/female.wav \
    --prompt-text "Betty Cooper helps Archie with cleaning a store room, when Reggie attacks her." \
    --text "Staff do not always do enough to prevent violence." \
    --output "full_stream.wav" \
    --full-stream
```

## Citation 

```
@inproceedings{torgashov2026voxtream,
  title={Vo{X}tream: Full-Stream Text-to-Speech with Extremely Low Latency},
  author={Torgashov, Nikita and Henter, Gustav Eje and Skantze, Gabriel},
  booktitle={Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2026},
  note={to appear},
  url={https://arxiv.org/abs/2509.15969}
}
```