
# Stimulus types

VIPBench contains 9,800 voice pairs in 6 stimulus types, designed to span the perceptual identity landscape from "obviously same" to "obviously different" with controlled ambiguous regions in between.

## The six types

| Type | Description | Metadata label | Pair count | P(same) shape |
|------|-------------|----------------|------------|---------------|
| 1 | Same recording (reference compared with itself, segmented differently) | Same | 100 | concentrated near 1.0 |
| 2 | Same speaker, different recording | Same | 400 | high but spread |
| 3 | Same speaker, AI voice clone | Same | 400 | spreads across the full range |
| 4 | Different speakers, real recordings | Different | 400 | concentrated near 0.0 |
| 5 | Different speakers, AI voice clones | Different | 400 | concentrated near 0.0 |
| 6 | Continuously morphed voices | none (no clean metadata label) | 8,100 | sweeps the full range across the morph trajectory |

Total: 9,800 pairs, of which 1,700 carry a clean metadata same/different label (Types 1-5) and 8,100 are morph trajectories.

Note: Type 6's 8,100 pairs are 81 stimuli per reference speaker x 100 reference speakers. The 81 stimuli per reference speaker decompose as 4 within-group comparison speakers (matched on sociophonetic group, age group, and gender) x 2 distinct recordings per comparison x 10 morph scales between 0 and 1, plus 1 shared anchor at scale 1 (4 x 2 x 10 + 1 = 81). Per-stimulus trajectory metadata is in data/stimuli_interpol.csv.
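
The count arithmetic above can be sanity-checked with a few lines of Python:

```python
# Pair-count arithmetic for the dataset, as described above.
within_group_speakers = 4    # matched comparison speakers per reference
recordings_per_speaker = 2   # distinct seed recordings per comparison speaker
morph_scales = 10            # scales sampled between 0 and 1
shared_anchor = 1            # one shared anchor stimulus at scale 1

per_reference = (within_group_speakers * recordings_per_speaker
                 * morph_scales + shared_anchor)
type6_total = per_reference * 100               # 100 reference speakers
labeled_total = 100 + 400 + 400 + 400 + 400     # Types 1-5

print(per_reference, type6_total, labeled_total + type6_total)
# 81 8100 9800
```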

## Naming convention

Stimulus IDs in `data/stimuli.csv` and the audio filenames in `data/audio/comparison/` follow the pattern:

- Types 1-5: `1_<reference_speaker>` for Type 1 same-recording pairs; `<type>_<reference_speaker><variant>` for Types 2-3, where the variant letter (A-E) identifies the comparison clip; `<type>_<reference_speaker>_<comparison_speaker><variant>` for Types 4-5, which name both speakers.
  - Examples: `1_M01.wav` (Type 1, M01), `2_M01B.wav` (Type 2, M01 with variant B), `4_F03_F09B.wav` (Type 4, reference F03 paired with F09 variant B).
- Type 6 morphs: `6_<source_speaker><variant>_<target_speaker><variant>_<scale>.wav`, where the variant letter (A-E) identifies the seed clip used for each speaker and `<scale>` is the interpolation level, encoded as a three-digit integer mapping the 0-1 morph scale to 0-100. Example: `6_M05A_M03A_065.wav` is a morph between M05's clip A and M03's clip A at scale 0.65.

The stimulus ID matches the comparison-audio basename (without .wav) and is the key into the embedding .npz files.
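
To make the patterns concrete, here is a hypothetical parser. The function name, the regexes, and the assumption that speaker IDs look like `M01`/`F03` are ours, not part of the dataset tooling:

```python
import re

# Hypothetical parser for the stimulus-ID patterns described above.
# Assumes IDs like "1_M01", "2_M01B", "4_F03_F09B", "6_M05A_M03A_065",
# and that speaker IDs match [MF] followed by digits.
PATTERNS = [
    # Type 6 morph: 6_<source><variant>_<target><variant>_<scale>
    (re.compile(r"^6_([MF]\d+)([A-E])_([MF]\d+)([A-E])_(\d{3})$"),
     lambda m: {"type": 6, "source": m[1], "source_variant": m[2],
                "target": m[3], "target_variant": m[4],
                "scale": int(m[5]) / 100}),
    # Types 4-5: <type>_<reference>_<comparison><variant>
    (re.compile(r"^([45])_([MF]\d+)_([MF]\d+)([A-E])$"),
     lambda m: {"type": int(m[1]), "reference": m[2],
                "comparison": m[3], "variant": m[4]}),
    # Types 2-3: <type>_<reference><variant>
    (re.compile(r"^([23])_([MF]\d+)([A-E])$"),
     lambda m: {"type": int(m[1]), "reference": m[2], "variant": m[3]}),
    # Type 1: 1_<reference>
    (re.compile(r"^1_([MF]\d+)$"),
     lambda m: {"type": 1, "reference": m[1]}),
]

def parse_stimulus_id(stimulus_id):
    """Return a dict of fields for a stimulus ID (audio basename without .wav)."""
    for pattern, build in PATTERNS:
        m = pattern.match(stimulus_id)
        if m:
            return build(m)
    raise ValueError(f"unrecognized stimulus ID: {stimulus_id}")

print(parse_stimulus_id("6_M05A_M03A_065"))
```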

## Voice cloning

Voice clones (Types 3 and 5) were generated with Cartesia (a state-of-the-art TTS system) seeded from a natural source clip of the speaker being cloned. The variant letter in the stimulus ID identifies the seed: a Type 3 clone shares its seed clip with the comparison clip of the matched Type 2 pair, and a Type 5 clone shares its seed with the matched Type 4 pair. For example, 3_F01B is seeded from the same F01B source clip that appears as the comparison in 2_F01B; 5_M01_F09B is seeded from the same F09B source clip that appears as the comparison in 4_M01_F09B. The reference clip itself was not used as the seed. The clone shares the metadata identity of the source speaker by construction; whether listeners hear the clone as that speaker is the per-pair question that the benchmark measures via P(same).
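
Because the clone and its matched real pair share everything after the type prefix, the seed-sharing relationship is a pure ID rewrite. A minimal sketch (hypothetical helper, not dataset tooling):

```python
# Hypothetical helper: given a Type 3 or Type 5 clone stimulus ID, return the
# ID of the real-recording pair (Type 2 or Type 4) that shares its seed clip.
def matched_real_pair(clone_id):
    stim_type, rest = clone_id.split("_", 1)
    if stim_type == "3":
        return f"2_{rest}"   # e.g. 3_F01B -> 2_F01B
    if stim_type == "5":
        return f"4_{rest}"   # e.g. 5_M01_F09B -> 4_M01_F09B
    raise ValueError("not a clone stimulus (expected Type 3 or 5)")

print(matched_real_pair("3_F01B"))     # 2_F01B
print(matched_real_pair("5_M01_F09B")) # 4_M01_F09B
```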

## Voice morphing

Type 6 pairs were generated using the voice-morphing feature of the same Cartesia TTS system, interpolating the latent voice representation of the reference speaker toward each of 4 within-group comparison speakers (matched on sociophonetic group, age group, and gender). For each reference speaker x comparison speaker x recording (2 distinct recordings per comparison speaker), 10 morph scales between 0 and 1 were sampled, plus 1 shared anchor at scale 1. Per reference speaker: 4 x 2 x 10 + 1 = 81 stimuli, totaling 8,100 across 100 reference speakers. Stimulus IDs encode the two endpoints and the scale (e.g., `6_M05A_M03A_065` = morph between M05 and M03 with seed recordings A from each, at scale 0.65). Morphs have no clean metadata speaker label: at scale 0 the audio matches one endpoint speaker, at scale 1 (encoded 100) the other, and intermediate scales sweep a perceptual continuum. This is the largest category in the dataset (8,100 of 9,800 pairs) and is designed to probe identity perception at fine resolution. Per-stimulus trajectory metadata (source speakers, recording variants, scale) is in `data/stimuli_interpol.csv`.
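
As an illustration of how the 81 stimuli per reference speaker are laid out, the sketch below enumerates placeholder morph IDs. The comparison speakers and exact scale values here are invented; the real trajectories are recorded in data/stimuli_interpol.csv:

```python
# Illustrative Type 6 ID layout for one reference speaker. The comparison
# speakers and scale values below are placeholders -- the actual sampled
# scales and seed variants are recorded in data/stimuli_interpol.csv.
reference = "M05"
comparisons = ["M03", "M07", "M11", "M15"]         # 4 hypothetical matched speakers
recordings = ["A", "B"]                            # 2 seed recordings per comparison
scales = [5, 15, 25, 35, 45, 55, 65, 75, 85, 95]   # 10 hypothetical scales (x100)

morph_ids = [f"6_{reference}A_{comp}{rec}_{scale:03d}"
             for comp in comparisons
             for rec in recordings
             for scale in scales]

# Plus the single shared anchor at scale 1 (its exact ID is in the metadata CSV):
print(len(morph_ids) + 1)  # 81
```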

## Why this design

The six types reflect three orthogonal axes:

  1. Metadata identity (same vs different speaker): Types 1-2-3 vs 4-5; Type 6 sweeps.
  2. Synthesis (real vs AI-generated): Types 1-2-4 vs 3-5; Type 6 is morphed.
  3. Ambiguity (concentrated vs spread P(same)): Types 1, 4, 5 are concentrated; Types 3 and 6 sweep, exposing where listener perception diverges from the metadata label.

Types 3 and 6 are where perceptual and metadata identity most often disagree, making them the centerpiece of the benchmark's perception-vs-metadata contrast.