# VIPBench reviewer-friendly sample subset
This directory holds a 5-speaker slice of the full release so reviewers can inspect the dataset without downloading the full 5.8 GB bundle. The sample bundle is **115 MB** (audio + subsetted embeddings).

The format is identical to the full release; any analysis that works on `data/` works on `samples/` by changing one path.
## How this sample was created
Selection criterion: one speaker per sociophonetic group, as gender-balanced as 5 cells allow (3M + 2F). Within each of the 5 groups, the first speaker (by speaker ID order in `data/speakers.csv`) of the target gender was selected, as sketched below. The 5 chosen speakers are listed in the Speaker subset table; their group codes (1–5) cover the full 5-group stratification scheme of the benchmark.
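A minimal sketch of that selection, assuming `data/speakers.csv` has `group` and `gender` columns (hypothetical names; see `docs/data_dictionary.md` for the actual schema):

```python
import pandas as pd

# Hypothetical column names `group` and `gender`; check docs/data_dictionary.md.
speakers = pd.read_csv('data/speakers.csv')

# Target gender per group, giving the 3M + 2F split across the 5 cells.
target_gender = {1: 'M', 2: 'F', 3: 'M', 4: 'F', 5: 'M'}

chosen = []
for grp, gender in target_gender.items():
    cell = speakers[(speakers['group'] == grp) & (speakers['gender'] == gender)]
    # First speaker by ID order within the group/gender cell.
    chosen.append(cell.sort_values('id')['id'].iloc[0])

print(chosen)  # expected: ['M01', 'F06', 'M11', 'F16', 'M21']
```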
For each chosen speaker, the sample includes:
- Their reference audio clip (1 file per speaker, e.g. `M01R.wav`).
- All 98 comparison clips paired with that speaker as the reference (covers stimulus types 1, 2, 3, and Type 6 morphs anchored on this speaker; Type 4 and 5 different-speaker pairs that use one of the 5 speakers as a comparison are also retained because their `reference` field is one of the 5).
- All listener judgments on those 490 pairs (6,401 judgments).
Filtering logic (reproducible from the full release):

1. `samples/speakers.csv` = rows of `data/speakers.csv` where `id ∈ {M01, F06, M11, F16, M21}`.
2. `samples/stimuli.csv` = rows of `data/stimuli.csv` where `reference ∈ {M01, F06, M11, F16, M21}`. Yields 490 rows.
3. `samples/participant_responses.csv` = rows of `data/participant_responses.csv` where `stimuli_id` appears in (2). Yields 6,401 rows.
4. `samples/audio/reference/` and `samples/audio/comparison/` = audio files corresponding to the IDs in (1) and (2).
5. `samples/embeddings/<model>.npz` = same 10 main + 5 layer-bundle embeddings as the full release, with each `.npz` reduced to the 495 keys (5 references + 490 comparisons) in the sample. Embedding dimensions are unchanged.
The sample was constructed by deterministic filtering of the released CSVs and embedding files (no re-extraction; `code/run_all_extractions.sh` is not used here). All sampled audio and embeddings are bit-identical to their counterparts in `data/`.
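A minimal reproduction sketch of steps (1)–(3) and (5), assuming the column names above and the `<id>R` embedding-key convention used in the Quickstart:

```python
import numpy as np
import pandas as pd

IDS = {'M01', 'F06', 'M11', 'F16', 'M21'}

# Steps (1)-(3): row filters on the released CSVs.
spk = pd.read_csv('data/speakers.csv')
stim = pd.read_csv('data/stimuli.csv')
resp = pd.read_csv('data/participant_responses.csv')

spk[spk['id'].isin(IDS)].to_csv('samples/speakers.csv', index=False)   # 5 rows
stim = stim[stim['reference'].isin(IDS)]                               # 490 rows
stim.to_csv('samples/stimuli.csv', index=False)
resp[resp['stimuli_id'].isin(stim['id'])].to_csv(
    'samples/participant_responses.csv', index=False)                  # 6,401 rows

# Step (5): subset one embedding file to the 495 sample keys.
keys = set(stim['id']) | {f'{s}R' for s in IDS}   # 490 comparisons + 5 references
full = np.load('data/embeddings/ecapa_tdnn.npz')
np.savez('samples/embeddings/ecapa_tdnn.npz',
         **{k: full[k] for k in full.files if k in keys})
```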
## What's included
| Item | Count |
|---|---|
| Speakers | 5 (M01, F06, M11, F16, M21; one per sociophonetic group, 3M+2F) |
| Reference audio (`*R.wav`) | 5 |
| Comparison audio | 490 (98 per speaker, all stimulus types) |
| Listener judgments | 6,401 |
| Pre-extracted embeddings | 10 models, subsetted to 495 keys each |
| Per-layer SSL embeddings | 5 models, subsetted to 495 keys each |
| Stimulus types covered | All 6 |
## Layout
```
samples/
  README.md                  # this file
  speakers.csv               # 5 rows (subset of data/speakers.csv)
  stimuli.csv                # 490 rows (subset of data/stimuli.csv)
  participant_responses.csv  # 6,401 rows (subset of data/participant_responses.csv)
  audio/
    reference/               # 5 *R.wav (symlinks to ../../../exp_2)
    comparison/              # 490 *.wav (symlinks to ../../../output)
  embeddings/
    rawnet3.npz, ecapa_tdnn.npz, ...  # 10 models
    layers/wav2vec2.npz, ...          # 5 SSL models
```
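A quick sanity check that a downloaded copy matches the counts above (row and key totals are taken from this README):

```python
import numpy as np
import pandas as pd

assert len(pd.read_csv('samples/speakers.csv')) == 5
assert len(pd.read_csv('samples/stimuli.csv')) == 490
assert len(pd.read_csv('samples/participant_responses.csv')) == 6401

emb = np.load('samples/embeddings/ecapa_tdnn.npz')
assert len(emb.files) == 495  # 5 references + 490 comparisons
```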
## Quickstart
```python
import numpy as np
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

stim = pd.read_csv('samples/stimuli.csv')
emb = dict(np.load('samples/embeddings/ecapa_tdnn.npz'))

# Cosine similarity between each pair's reference and comparison embedding
cos = []
for _, row in stim.iterrows():
    ref, comp = emb[f"{row['reference']}R"], emb[row['id']]
    cos.append(cosine_similarity([ref], [comp])[0][0])
stim['cos_ecapa'] = cos

# P(same) target: fraction of listeners judging the pair "same speaker"
stim['p_same'] = stim['same_vote'] / stim['num_response']

# Pearson r between model similarity and listener behavior
print(stim[['cos_ecapa', 'p_same']].corr())
```
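The per-layer SSL bundles under `samples/embeddings/layers/` can be probed the same way. Their internal key scheme and array shapes are not documented in this README, so inspect one bundle before building on it; a minimal sketch:

```python
import numpy as np

layers = np.load('samples/embeddings/layers/wav2vec2.npz')
print(len(layers.files))   # should be 495 keys, matching the main bundles
print(layers.files[:5])    # inspect the key naming scheme
arr = layers[layers.files[0]]
print(arr.shape)           # per-layer vectors may be stacked along one axis (assumption)
```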
## Speaker subset
| ID | Group | Gender | Age bracket |
|---|---|---|---|
| M01 | 1 (New York City English) | M | 1 (under 45) |
| F06 | 2 (Southern American English) | F | 1 (under 45) |
| M11 | 3 (African American English) | M | 1 (under 45) |
| F16 | 4 (Latino English) | F | 1 (under 45) |
| M21 | 5 (Asian American English) | M | 1 (under 45) |
See `docs/data_dictionary.md` for the full integer-to-group mapping and column schemas.
## License

CC-BY-NC 4.0 for audio + judgments + embeddings; same as the full release. See `../LICENSE`.