---
license: cc-by-nc-4.0
language:
  - en
pretty_name: VIPBench
size_categories:
  - 100K<n<1M
tags:
  - speaker-recognition
  - voice-identity
  - voice-cloning
  - human-perception
  - benchmark
  - audio
task_categories:
  - audio-classification
  - other
configs:
  - config_name: judgments
    data_files: data/participant_responses.csv
    default: true
  - config_name: stimuli
    data_files: data/stimuli.csv
  - config_name: speakers
    data_files: data/speakers.csv
  - config_name: morph_metadata
    data_files: data/stimuli_interpol.csv
---

VIPBench: A Human-Aligned Benchmark for Voice Identity Perception in the Age of Voice Cloning

VIPBench is a benchmark of 124,876 same/different identity judgments from 1,290 English-speaking listeners on 9,800 voice pairs spanning 100 demographically stratified speakers. Pairs cover three stimulus families: real recordings, AI voice clones generated by a state-of-the-art TTS system, and continuously morphed voices.

The benchmark evaluates whether speaker-embedding and speech-representation models align with human voice-identity perception, providing a perceptual evaluation target distinct from metadata-label speaker identification.

Anonymized release for NeurIPS 2026 Evaluations & Datasets Track double-blind review. Author identities and permanent URLs will be added at camera-ready.


Dataset summary

| Item | Count |
| --- | --- |
| Speakers | 100 (50 M / 50 F, 5 sociophonetic groups, 2 age brackets) |
| Reference audio clips | 100 (one per speaker) |
| Comparison audio clips | 9,800 (98 per speaker) |
| Voice pairs | 9,800 |
| Listener judgments | 124,876 |
| Listeners | 1,290 |
| Median judgments per pair | 10 (range 8 to 92) |
| Stimulus types | 6 (real same/different, AI clones, voice morphs) |
| Pre-extracted speaker embeddings | 10 models |
| Per-layer SSL embeddings | 5 models |

Supported tasks

The benchmark defines four evaluation tasks:

  1. Predict listener agreement rate (continuous regression). Predict P(same) per pair. Metrics: Pearson r and R^2 against the human consensus, bounded by the Spearman-Brown noise ceiling rho_SB = 0.705.
  2. Human-aligned binary verification. Classify pairs against the human majority vote. Metrics: AUC (ranking) and Platt-calibrated ECE (calibration).
  3. Representational similarity (RSA). Spearman correlation between human and model representational dissimilarity matrices, with a Mantel permutation test.
  4. Real-to-synthetic transfer. Whether a predictor fit on real-speech pairs still works on voice clones and morphs.

A 10-fold gender-balanced speaker-level cross-validation protocol prevents speaker leakage.
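
A minimal sketch of how such folds can be built, assuming speakers.csv exposes a speaker_id and a gender column (the exact names may differ; see docs/data_dictionary.md) and assigning every pair to its reference speaker's fold. This illustrates the idea; the official fold assignment lives in code/.

import numpy as np, pandas as pd

spk = pd.read_csv('data/speakers.csv')                     # one row per speaker
rng = np.random.default_rng(0)

# Shuffle speakers within each gender, then deal them into 10 folds so that
# every fold holds roughly 5 male and 5 female speakers.
fold_of = {}
for _, grp in spk.groupby('gender'):                       # 'gender' is an assumed column name
    ids = rng.permutation(grp['speaker_id'].to_numpy())    # 'speaker_id' likewise assumed
    for fold, chunk in enumerate(np.array_split(ids, 10)):
        fold_of.update({s: fold for s in chunk})

# Pairs inherit the fold of their reference speaker.
stim = pd.read_csv('data/stimuli.csv')
stim['fold'] = stim['reference'].map(fold_of)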

Dataset structure

data/
  speakers.csv                   # 100 rows: speaker id, name, group, gender, age
  stimuli.csv                    # 9,800 rows: per-pair aggregates (P(same), votes, type)
  participant_responses.csv      # 124,876 rows: per-judgment records
  stimuli_interpol.csv           # 8,100 rows: morph-trajectory metadata for Type 6
  audio/
    reference/                   # 100 *R.wav (16 kHz mono)
    comparison/                  # 9,800 *.wav
  embeddings/
    rawnet3.npz, ecapa_tdnn.npz, titanet.npz, xvector.npz, resemblyzer.npz,
    wav2vec2.npz, hubert.npz, wavlm.npz, xlsr.npz, whisper.npz
    layers/                      # per-layer (mean-pooled) for SSL models
      wav2vec2.npz, hubert.npz, wavlm.npz, xlsr.npz, whisper.npz
samples/                         # 5-speaker quick-look subset (~150 MB)
code/                            # 10 extraction scripts + analysis notebook + reproduce.sh
docs/                            # annotation protocol, schemas, model table, reproduction

For column-level dictionaries see docs/data_dictionary.md. For the six stimulus types see docs/stimulus_types.md. For the listening-study protocol see docs/annotation_protocol.md.
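
A quick way to get oriented is to load the two tabular configs and look at listener consensus by stimulus type. The columns same_vote, num_response, reference, and id appear in the examples below; the stimulus-type column is assumed here to be called type (check docs/data_dictionary.md for the exact schema).

import pandas as pd

stim = pd.read_csv('data/stimuli.csv')                  # per-pair aggregates
resp = pd.read_csv('data/participant_responses.csv')    # per-judgment records
print(len(stim), 'pairs,', len(resp), 'judgments')

# Listener consensus per pair: fraction of "same identity" votes.
stim['p_same'] = stim['same_vote'] / stim['num_response']

# Mean consensus and pair counts per stimulus type ('type' is an assumed name).
print(stim.groupby('type')['p_same'].agg(['mean', 'count']))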

Embedding format

Each .npz is a key-value store keyed by audio basename without the .wav extension (e.g., M01R, 1_F01, 4_M12_M15B). Values are numpy arrays of shape (embedding_dim,) for the 10 main embeddings and (num_layers, embedding_dim) for the per-layer bundles. The 9,900 keys cover 100 references plus 9,800 comparisons.
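
For example, loading one speaker-embedding file and one per-layer bundle and checking shapes (paths follow the tree above; the shapes in the comments restate the format described here):

import numpy as np

emb = np.load('data/embeddings/ecapa_tdnn.npz')
print(len(emb.files))                   # 9,900 keys: 100 references + 9,800 comparisons
print(emb['M01R'].shape)                # (embedding_dim,) for the reference clip of speaker M01

layers = np.load('data/embeddings/layers/wavlm.npz')
print(layers['M01R'].shape)             # (num_layers, embedding_dim), mean-pooled per layer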

Pairing reference and comparison

Each row of data/stimuli.csv represents one voice pair. The reference column gives the reference speaker ID (e.g., M01) and the id column gives the stimulus identifier of the comparison clip (e.g., 1_M01, 4_M12_M15B). The pairing rule is:

| You want | Reference clip | Comparison clip |
| --- | --- | --- |
| Audio file | data/audio/reference/{row.reference}R.wav | data/audio/comparison/{row.id}.wav |
| Embedding key | {row.reference}R (e.g., M01R) | {row.id} (e.g., 1_M01) |

Cosine-similarity scoring against P(same):

import numpy as np, pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

stim = pd.read_csv('data/stimuli.csv')
emb  = dict(np.load('data/embeddings/ecapa_tdnn.npz'))

stim['cos'] = stim.apply(
    lambda r: cosine_similarity(
        emb[f'{r.reference}R'].reshape(1, -1),
        emb[r.id].reshape(1, -1)
    )[0, 0],
    axis=1,
)
stim['p_same'] = stim['same_vote'] / stim['num_response']
print(stim[['cos', 'p_same']].corr())   # Pearson r against listener consensus
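
Continuing from the snippet above (stim now holds cos and p_same), the same per-pair scores feed the Task 2 metrics. The sketch below takes the human majority vote by thresholding consensus at 0.5 (ties counted as "different"), computes AUC, and estimates ECE after a logistic Platt fit over 15 equal-width bins; these are illustrative choices, not necessarily those of the official evaluation code under code/.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

y = (stim['p_same'] > 0.5).astype(int).to_numpy()   # human majority vote per pair
scores = stim['cos'].to_numpy()

print('AUC:', roc_auc_score(y, scores))             # ranking against the majority vote

# Platt calibration: map raw cosine scores to probabilities with a logistic fit.
prob = LogisticRegression().fit(scores.reshape(-1, 1), y).predict_proba(scores.reshape(-1, 1))[:, 1]

# Expected calibration error over 15 equal-width bins
# (the open top bin edge is a harmless simplification here).
edges = np.linspace(0.0, 1.0, 16)
ece = 0.0
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (prob >= lo) & (prob < hi)
    if in_bin.any():
        ece += in_bin.mean() * abs(prob[in_bin].mean() - y[in_bin].mean())
print('ECE:', ece)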

Quick start

pip install -r requirements.txt
cd code && bash reproduce.sh        # ~10 min from cached embeddings

To re-extract embeddings from the audio (~24 CPU-hours plus ~1 GPU-hour for Whisper), see docs/reproduction.md.

Source data and collection

  • Speakers. 100 English-speaking US celebrities stratified across 5 sociophonetic groups (1 = New York City English, 2 = Southern American English, 3 = African American English, 4 = Latino English, 5 = Asian American English) x 2 genders x 2 age brackets (1 = under 45, 2 = 55 or older), 5 speakers per cell.
  • Reference audio. Clips selected from publicly available recordings (interviews, podcasts).
  • Voice clones. Generated with Cartesia (a state-of-the-art TTS system) seeded from a natural source clip of the speaker being cloned. The variant letter in the stimulus ID identifies the seed: a Type 3 clone shares its seed clip with the comparison clip of the matched Type 2 pair, and a Type 5 clone shares its seed with the matched Type 4 pair (e.g., the clone in 3_F01B is seeded from the same F01B source clip used as the comparison in 2_F01B).
  • Voice morphs. For each of the 100 reference speakers, the latent voice representation of the reference speaker is interpolated toward each of 4 within-group comparison speakers (matched on sociophonetic group, age group, and gender), at 2 distinct recordings per comparison speaker, sampled at 10 morph scales between 0 and 1, plus 1 anchor at scale 1. This yields 4 x 2 x 10 + 1 = 81 Type-6 stimuli per reference speaker (8,100 total). Generated using the voice-morphing feature of the same Cartesia TTS system; a short usage sketch follows this list.
  • Listeners. 1,290 adult English-speaking participants recruited via the Centaur AI platform under an IRB-approved protocol. Consent followed the platform's standard pipeline.
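
A sketch of one way to use the morph metadata: join the morph rows to the per-pair consensus and average P(same) at each interpolation weight. The column names id and morph_scale in stimuli_interpol.csv are hypothetical stand-ins used for illustration; consult docs/data_dictionary.md for the real schema.

import pandas as pd

stim   = pd.read_csv('data/stimuli.csv')
morphs = pd.read_csv('data/stimuli_interpol.csv')        # 8,100 Type-6 rows

stim['p_same'] = stim['same_vote'] / stim['num_response']

# Join per-pair consensus onto the morph metadata; 'id' and 'morph_scale' are
# assumed names for the stimulus identifier and the interpolation weight.
traj = morphs.merge(stim[['id', 'p_same']], on='id')

# Mean perceived-identity agreement at each morph scale (scale 0 ~ reference
# voice, scale 1 ~ the comparison speaker's end of the trajectory).
print(traj.groupby('morph_scale')['p_same'].mean())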

Each pair received at least 8 judgments; real-speech pairs (Types 1, 2, 4) received more coverage than synthetic pairs to give tighter consensus estimates on the real-speech reference distribution.

Considerations for use

Personally identifying information

The dataset names public-figure speakers because the celebrity-stratified design is integral to the benchmark and the source recordings are already public. Listener identifiers in participant_responses.csv are pseudonymized integers not linked to any external account.

Biases and limitations

  • English-speaking listener pool, US-dialect speakers. Cross-language perception is not measured.
  • The 100-speaker roster limits statistical power for some subgroup contrasts (20 speakers per sociophonetic group).
  • Studio-quality audio. In-the-wild conditions (noise, codec compression, telephony) are not represented.
  • The operational target is a population consensus, appropriate for ambiguous stimuli where any absolute identity label would itself be probabilistic.

Responsible use

The benchmark measures model-human alignment at the evaluation level. We do not release clone-generation recipes or adversarial training targets. Knowledge of how to make voice clones more perceptually convincing could inform adversarial use; the same alignment knowledge also strengthens defenses (perception-aligned identity models can flag clones that metadata-based verification accepts).

License

  • Dataset (audio, judgments, metadata, embeddings): Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC 4.0). See LICENSE.
  • Code (scripts, notebook): MIT License. See LICENSE-CODE.
  • Pretrained model weights (loaded by extraction scripts): each baseline retains its original license; see docs/model_table.md.

Commercial use of the audio, judgments, or derived embeddings is not permitted under this license.

Citation

To be filled in at camera-ready.

@inproceedings{vipbench2026,
  title  = {VIPBench: A Human-Aligned Benchmark for Voice Identity Perception in the Age of Voice Cloning},
  author = {Anonymous},
  booktitle = {Advances in Neural Information Processing Systems Datasets and Benchmarks},
  year   = {2026},
  note   = {Anonymized for double-blind review.}
}

Files

  • README.md (this file): dataset card.
  • LICENSE: CC-BY-NC 4.0 full text.
  • LICENSE-CODE: MIT full text for scripts.
  • croissant.json: MLCommons Croissant 1.0 metadata (core + Responsible AI fields).
  • DATASHEET.md: Datasheet for Datasets (Gebru et al. 2021).
  • CHANGELOG.md: version history.
  • CITATION.cff: machine-readable citation.
  • requirements.txt: pinned Python dependencies.
  • data/, samples/, code/, docs/: see structure section above.