---
configs:
- config_name: zh_single
  data_files:
  - split: test
    path: data/zh_single/test.parquet
- config_name: zh_multi
  data_files:
  - split: test
    path: data/zh_multi/test.parquet
- config_name: en_single
  data_files:
  - split: test
    path: data/en_single/test.parquet
- config_name: en_multi
  data_files:
  - split: test
    path: data/en_multi/test.parquet
language:
- zh
- en
license: cc-by-nc-sa-4.0
task_categories:
- text-to-speech
tags:
- nonverbal-vocalization
- expressive-tts
- benchmark
- speech-synthesis
pretty_name: NV-Bench
size_categories:
- 1K<n<10K
---

# NV-Bench: Benchmarking Nonverbal Vocalization Synthesis in Expressive Text-to-Speech Models

[Project Page](https://nvbench.github.io/)
[GitHub](https://github.com/nvbench/NV-Bench)

## Abstract

While recent text-to-speech (TTS) systems increasingly integrate nonverbal vocalizations (NVVs), their evaluation lacks standardized metrics and reliable ground-truth references. To bridge this gap, we propose **NV-Bench**, the first benchmark grounded in a functional taxonomy that treats NVVs as communicative acts rather than acoustic artifacts. NV-Bench comprises **1,651 multilingual, in-the-wild utterances** with paired human reference audio, balanced across **14 categories**. We introduce a dual-dimensional evaluation protocol:

1. **Instruction Alignment** — uses our proposed Paralinguistic Character Error Rate (PCER) to assess controllability.
2. **Acoustic Fidelity** — quantifies the distributional gap between synthesized and real speech.

Experimental results demonstrate a strong correlation between our objective metrics and human perception, establishing NV-Bench as a standardized evaluation framework.

## Dataset Overview

### Subsets

| Subset | Language | Description | Label Type |
|---|---|---|---|
| `zh_single` | Chinese | Single nonverbal event per utterance | Single-label |
| `zh_multi` | Chinese | Multiple nonverbal events per utterance | Multi-label |
| `en_single` | English | Single nonverbal event per utterance | Single-label |
| `en_multi` | English | Multiple nonverbal events per utterance | Multi-label |

- **Single-label Subset**: Strictly balanced — exactly one NVV event per utterance (50 samples per category). Isolates fundamental generation capabilities.
- **Multi-label Subset**: Challenging utterances with 2+ NVV events — tests robustness under dense paralinguistic conditions with relative balance.

### Data Fields

| Field | Type | Description |
|---|---|---|
| `text` | `string` | Target text with inline NVV tags (e.g. `[Laughter]`, `[Cough]`) |
| `prompt_text` | `string` | Prompt text for zero-shot speaker cloning |
| `category` | `string` | NVV category label (one of 14 categories) |
| `type` | `string` | Subset identifier (`zh_single`, `zh_multi`, `en_single`, `en_multi`) |
| `wav` | `Audio` | Ground-truth reference audio (MP3) |
| `prompt_wav` | `Audio` | Speaker prompt audio for zero-shot cloning (MP3) |
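
Since NVV tags appear inline in `text` as bracketed tokens, they can be pulled out with a simple regular expression. A minimal sketch (the helper names and the example sentence are illustrative, not part of the dataset):

```python
import re

# NVV tags appear inline as bracketed tokens, e.g. [Laughter] or [Surprise-ah].
NVV_TAG = re.compile(r"\[([A-Za-z]+(?:-[a-z]+)?)\]")

def extract_nvv_tags(text: str) -> list[str]:
    """Return all NVV tags in a transcript, in order of appearance."""
    return NVV_TAG.findall(text)

def strip_nvv_tags(text: str) -> str:
    """Remove NVV tags, leaving only the verbal content."""
    return NVV_TAG.sub("", text).strip()

print(extract_nvv_tags("Oh no [Sigh] not again [Laughter]"))  # ['Sigh', 'Laughter']
```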

### Functional Taxonomy

NVVs are organized into **three functional levels** based on communicative intent:

| Name | Description | Categories |
|---|---|---|
| Vegetative Sounds | Biological reflexes grounding speech in physical realism | `[Cough]`, `[Sigh]`, `[Breathing]` |
| Affect Bursts | Valenced vocalizations conveying emotion or instant reactions | `[Laughter]`, `[Surprise-ah]`, `[Surprise-oh]`, `[Dissatisfaction-hnn]` |
| Conversational Grunts | Interaction-management cues — filled pauses and prosodic particles | `[Uhm]`, `[Confirmation-en]`, `[Question-ei]`, `[Question-ah]`, `[Question-en]`, `[Question-oh]`, `[Question-huh]` |
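
For programmatic filtering, the taxonomy can be transcribed into a flat category-to-level mapping. A sketch, assuming the `category` field stores the bare tag name without brackets:

```python
# Category -> functional level, transcribed from the taxonomy table above.
TAXONOMY = {
    "Cough": "Vegetative Sounds",
    "Sigh": "Vegetative Sounds",
    "Breathing": "Vegetative Sounds",
    "Laughter": "Affect Bursts",
    "Surprise-ah": "Affect Bursts",
    "Surprise-oh": "Affect Bursts",
    "Dissatisfaction-hnn": "Affect Bursts",
    "Uhm": "Conversational Grunts",
    "Confirmation-en": "Conversational Grunts",
    "Question-ei": "Conversational Grunts",
    "Question-ah": "Conversational Grunts",
    "Question-en": "Conversational Grunts",
    "Question-oh": "Conversational Grunts",
    "Question-huh": "Conversational Grunts",
}

def level_of(category: str) -> str:
    """Look up the functional level for an NVV category label."""
    return TAXONOMY[category]

print(len(TAXONOMY))         # 14 categories
print(level_of("Laughter"))  # Affect Bursts
```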

## Usage

```python
from datasets import load_dataset

# Load a specific subset
dataset = load_dataset("AnonyData/NV-Bench", "zh_single", split="test")

# Access a sample
sample = dataset[0]
print(sample["text"])         # Target text with NVV tags
print(sample["category"])     # NVV category
print(sample["wav"])          # Ground-truth audio
print(sample["prompt_wav"])   # Speaker prompt audio
print(sample["prompt_text"])  # Prompt text
```
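
With the `datasets` library, an `Audio` field decodes to a dict carrying the waveform and sampling rate. A sketch of inspecting a clip and saving it as a WAV file using only the standard library (the audio dict here is fabricated for illustration; with the real dataset it would come from `sample["wav"]`):

```python
import wave
import numpy as np

# A decoded Audio field looks like {"array": np.ndarray, "sampling_rate": int, "path": str}.
# Fabricated 1-second silent clip standing in for sample["wav"]:
audio = {"array": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000}

duration_s = len(audio["array"]) / audio["sampling_rate"]
print(duration_s)  # 1.0

# Convert float waveform to 16-bit PCM and write a mono WAV file.
pcm16 = (np.clip(audio["array"], -1.0, 1.0) * 32767).astype(np.int16)
with wave.open("reference.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(audio["sampling_rate"])
    f.writeframes(pcm16.tobytes())
```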

## Evaluation Protocol

### Instruction Alignment

Measures whether the model can generate the specified NVV events at the correct positions.

| Metric | Description |
|---|---|
| **CER** | Character Error Rate over the verbal content |
| **PCER** | Paralinguistic Character Error Rate over the NVV events (proposed) |
| **OCER** | Overall Character Error Rate over verbal and NVV content combined |
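
One way a PCER-style score can be sketched: transcribe the synthesized audio with an NVV-aware ASR model, extract the tag sequences from reference and hypothesis, and compute an edit-distance rate over them. A minimal illustration (NV-Bench's exact formulation may differ; `edit_distance` here is plain Levenshtein over tag tokens):

```python
def edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Levenshtein distance between two token sequences."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev = dp[0]
        dp[0] = i
        for j, h in enumerate(hyp, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # deletion
                        dp[j - 1] + 1,    # insertion
                        prev + (r != h))  # substitution (or match)
            prev = cur
    return dp[-1]

def pcer(ref_tags: list[str], hyp_tags: list[str]) -> float:
    """Edit-distance rate over NVV tag sequences (a PCER-like score)."""
    if not ref_tags:
        return 0.0 if not hyp_tags else 1.0
    return edit_distance(ref_tags, hyp_tags) / len(ref_tags)

print(pcer(["Laughter", "Sigh"], ["Laughter"]))  # 0.5 (one missed event out of two)
```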

### Acoustic Fidelity

Measures how realistic synthesized speech sounds compared to real recordings.

| Metric | Description |
|---|---|
| **FAD / FD / KL** | Distributional distance between synthesized and real speech embeddings |
| **SIM** | Speaker similarity score |
| **DNSMOS** | Reference-free perceptual quality score |
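
As an illustration of the distributional metrics, Fréchet-style distances compare Gaussian fits of embedding sets from real and synthesized audio. A sketch with NumPy (the embeddings below are random stand-ins; real use would extract them with a pretrained audio encoder):

```python
import numpy as np

def frechet_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Fréchet distance between Gaussian fits of two embedding sets (rows = samples)."""
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    diff = mu1 - mu2
    # tr(sqrt(S1 @ S2)) via the eigenvalues of S1 @ S2 (real and >= 0 for PSD inputs)
    eigvals = np.linalg.eigvals(s1 @ s2)
    covmean_trace = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return float(diff @ diff + np.trace(s1) + np.trace(s2) - 2 * covmean_trace)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))  # stand-in for real-speech embeddings
fake = rng.normal(0.5, 1.0, size=(500, 8))  # stand-in for synthesized embeddings
print(frechet_distance(real, fake) > frechet_distance(real, real))  # True
```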

## Pipeline

1. **Data Processing** — 565K clips (~1,560 hrs) filtered via Emilia-Pipeline and MiMo-Audio for single-speaker verification.
2. **Multilingual NVASR** — SenseVoice-Small fine-tuned on 6 datasets with a unified label taxonomy.
3. **Human Verification** — 1,651 prompt-GT pairs (7.9 hrs).

## Citation

```bibtex
Coming soon
```

## License

This dataset is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.