
# Data dictionary

Column-level schemas for every CSV in `data/` and `samples/`.

## speakers.csv (100 rows)

| Column | Type | Description | Valid values |
|--------|------|-------------|--------------|
| `id` | text | Speaker identifier. Joins to `stimuli.reference` and to `<id>R` keys in embedding files. | `F01`-`F50`, `M01`-`M50` |
| `name` | text | Speaker name (public figure). | celebrity name |
| `group` | int | Sociophonetic group code. | 1-5 (see below) |
| `gender` | int | Speaker gender. | 1=male, 2=female |
| `age` | int | Speaker age bracket. | 1=under 45, 2=55 or older |

### Sociophonetic group mapping

The 5 sociophonetic groups partition the 100 speakers into 20 speakers each:

| Code | Group |
|------|-------|
| 1 | New York City English |
| 2 | Southern American English |
| 3 | African American English |
| 4 | Latino English |
| 5 | Asian American English |

The benchmark protocol uses `group` as a stratification variable for cross-validation folds (gender-balanced, speaker-level) and for fairness analyses.
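A minimal sketch of one way to build such folds with scikit-learn, assuming 5 folds and stratification on group × gender; the canonical split lives in the benchmark code, so treat this as an illustration:

```python
import pandas as pd
from sklearn.model_selection import StratifiedKFold

spk = pd.read_csv('data/speakers.csv')

# Stratify on group x gender so each fold stays balanced on both.
# Splitting speakers (not trials) makes folds speaker-level by construction.
strata = spk['group'].astype(str) + '_' + spk['gender'].astype(str)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

fold_of = {}
for fold, (_, test_idx) in enumerate(cv.split(spk, strata)):
    fold_of.update({s: fold for s in spk.loc[test_idx, 'id']})

# Propagate the speaker-level fold assignment to the trial level.
stim = pd.read_csv('data/stimuli.csv')
stim['fold'] = stim['reference'].map(fold_of)
```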

## stimuli.csv (9,800 rows)

| Column | Type | Description |
|--------|------|-------------|
| `id` | text | Stimulus identifier (matches the comparison-audio basename). Key into embedding files. |
| `stimuli_type` | int | Stimulus type, 1-6. See `stimulus_types.md`. |
| `reference` | text | Reference speaker ID. Joins to `speakers.id`. |
| `comparison` | text | Comparison speaker ID for non-Type-6 pairs (NaN for Type 6). |
| `voice_clone` | int | 1 if the comparison clip is an AI voice clone, 0 otherwise. |
| `correct_answer` | int | Metadata same/different label. 1=same speaker by metadata, 0=different. |
| `scale` | int | For Type 6 morphs, interpolation level in [0, 100]. 100 for non-morph pairs. |
| `num_response` | int | Number of listener judgments on this pair. |
| `same_vote` | int | Listeners who answered "same speaker". |
| `diff_vote` | int | Listeners who answered "different speaker". |
| `correct_vote` | int | Listeners whose answer matches the metadata label. |
| `incorrect_vote` | int | Listeners whose answer disagrees with the metadata label. |
| `accuracy` | float | `correct_vote / num_response`. |
| `group` | int | Reference speaker's sociophonetic group. |
| `gender` | int | Reference speaker's gender. |
| `age` | int | Reference speaker's age bracket. |

P(same) is computed downstream as `same_vote / num_response`.
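For example (a short pandas sketch; the vote-count invariants asserted here follow from the column definitions above but should be verified against the release):

```python
import pandas as pd

stim = pd.read_csv('data/stimuli.csv')
stim['p_same'] = stim['same_vote'] / stim['num_response']

# Invariants implied by the schema (assumed, not guaranteed):
assert (stim['same_vote'] + stim['diff_vote'] == stim['num_response']).all()
assert ((stim['correct_vote'] / stim['num_response'])
        .sub(stim['accuracy']).abs().lt(1e-6).all())
```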

## participant_responses.csv (124,876 rows)

| Column | Type | Description |
|--------|------|-------------|
| `user_id` | int | Pseudonymized listener identifier. Tied to no external account. |
| `stimuli_id` | text | Stimulus identifier. Joins to `stimuli.id`. |
| `stimuli_type` | int | Stimulus type, 1-6. |
| `answer` | int | Listener's binary judgment. 1=same speaker, 0=different. |
| `correct` | int | 1 if `answer` matches `correct_answer` in stimuli.csv, 0 otherwise. |
| `know_speaker` | int | Listener-recognition probe. 1 if the listener identified the reference speaker, 0 otherwise. May be missing for early-trial responses. |
| `age` | float | Listener age band (categorical, encoded as float). |
| `gender` | float | Listener gender. |
| `first_language` | float | Listener first-language flag. 0=English first, 1=other. |
| `num_stimuli_seen` | float | Cumulative stimulus count for this listener at the time of the response. |
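As an illustration, the per-trial responses can be joined back to the aggregates, e.g. to recompute vote counts or to score only listeners who did not recognize the reference speaker (a sketch, assuming `know_speaker == 0` marks non-recognition as described above):

```python
import pandas as pd

resp = pd.read_csv('data/participant_responses.csv')

# Per-listener accuracy, restricted to trials where the listener did
# not recognize the reference speaker (know_speaker may be NaN early on).
unfamiliar = resp[resp['know_speaker'] == 0]
per_listener = unfamiliar.groupby('user_id')['correct'].mean()

# Recompute the aggregate vote counts to cross-check stimuli.csv.
votes = resp.groupby('stimuli_id')['answer'].agg(
    same_vote='sum', num_response='count')
```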

## stimuli_interpol.csv (8,100 rows)

Per-stimulus metadata for Type 6 morphs.

| Column | Type | Description |
|--------|------|-------------|
| `id` | text | Stimulus identifier. Joins to `stimuli.id`. |
| `source` | text | Source speaker A (one endpoint of the morph trajectory). |
| `target` | text | Source speaker B (the other endpoint). |
| `scale` | int | Interpolation level in [0, 100]. |

(Other columns may be present and are described in their headers; the four above are the schema-stable subset used by the analysis notebook.)
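For example, the morph endpoints can be attached to the Type 6 rows of stimuli.csv to trace P(same) across the interpolation level (a minimal sketch using only the schema-stable columns):

```python
import pandas as pd

stim   = pd.read_csv('data/stimuli.csv')
interp = pd.read_csv('data/stimuli_interpol.csv')

# Attach morph endpoints to the Type 6 rows, then average P(same)
# at each interpolation level.
morphs = (stim[stim['stimuli_type'] == 6]
          .merge(interp[['id', 'source', 'target']], on='id'))
morphs['p_same'] = morphs['same_vote'] / morphs['num_response']
print(morphs.groupby('scale')['p_same'].mean())
```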

## Embedding .npz files

Each file in `data/embeddings/` is a key-value store:

- **Keys** (`.files` attribute): audio basenames without `.wav`. 9,900 keys total: 100 references like `M01R`, `F03R` plus 9,800 comparisons like `1_F01`, `4_M12_M15B`, `6_F03_F09_50`.
- **Values**: 1-D `np.float32` arrays of shape `(embedding_dim,)`. Dim depends on the model (see `model_table.md`).

Per-layer files (`data/embeddings/layers/`) use the same keys; values have shape `(num_layers, embedding_dim)` (mean-pooled across time per layer).
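A short sketch of per-layer similarity for one pair (the layers filename below is a placeholder, not a file guaranteed to exist in the release; the keys `M01R` and `1_M01` follow the pairing rule in the next section):

```python
import numpy as np

# Placeholder filename -- substitute a real file from data/embeddings/layers/.
emb = np.load('data/embeddings/layers/some_model.npz')

ref, cmp = emb['M01R'], emb['1_M01']     # each: (num_layers, embedding_dim)

# Per-layer cosine similarity between reference and comparison.
cos = (ref * cmp).sum(axis=1) / (
    np.linalg.norm(ref, axis=1) * np.linalg.norm(cmp, axis=1))
```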

## How to pair reference and comparison

Every voice pair in VIPBench is one row of stimuli.csv. The pairing rule is:

| Asset | Reference clip | Comparison clip |
|-------|----------------|-----------------|
| CSV column | `reference` (e.g., `M01`) | `id` (e.g., `1_M01`, `4_M12_M15B`) |
| Audio file | `data/audio/reference/{reference}R.wav` | `data/audio/comparison/{id}.wav` |
| Embedding key | `{reference}R` (e.g., `M01R`) | `{id}` (e.g., `1_M01`) |

Equivalently: the reference clip's basename is the speaker ID with `R` appended; the comparison clip's basename is exactly the stimulus `id`. The 100 reference embeddings (`*R`) and 9,800 comparison embeddings (`id`) together make up the 9,900 keys present in every `.npz`.

Reference example: scoring a model against P(same).

```python
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

stim = pd.read_csv('data/stimuli.csv')
emb  = dict(np.load('data/embeddings/ecapa_tdnn.npz'))

def cos_pair(row):
    # Reference key is the speaker ID plus 'R'; comparison key is the stimulus id.
    ref = emb[f"{row['reference']}R"].reshape(1, -1)
    cmp = emb[row['id']].reshape(1, -1)
    return cosine_similarity(ref, cmp)[0, 0]

stim['cos']    = stim.apply(cos_pair, axis=1)
stim['p_same'] = stim['same_vote'] / stim['num_response']
print(stim[['cos', 'p_same']].corr())
```

The same pattern, with `librosa.load(...)` in place of the dictionary lookup, loads the corresponding audio, as sketched below.
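A minimal sketch for one stimulus row (`sr=None` preserves each clip's native sampling rate; paths follow the pairing table above):

```python
import librosa
import pandas as pd

stim = pd.read_csv('data/stimuli.csv')
row  = stim.iloc[0]

ref_wav, sr = librosa.load(f"data/audio/reference/{row['reference']}R.wav", sr=None)
cmp_wav, _  = librosa.load(f"data/audio/comparison/{row['id']}.wav", sr=None)
```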