# Code: extraction + analysis
This directory contains everything needed to (a) regenerate the pre-extracted embeddings from audio, and (b) reproduce the figures and tables in the accompanying NeurIPS 2026 paper.
## Layout

```
code/
  extraction_utils.py       # shared audio loading and save logic
  extract_*.py              # 10 per-model extraction scripts
  extract_ssl_layers.py     # per-transformer-layer extraction (5 SSL models)
  run_all_extractions.sh    # master runner
  benchmark_analysis.ipynb  # main analysis notebook (80 cells)
  reproduce.sh              # end-to-end reproduction (default: analysis only)
  README.md                 # this file
```
## Quick start

```bash
pip install -r ../requirements.txt
cd code
bash reproduce.sh
```
`reproduce.sh` defaults to analysis-only: it executes the notebook against the embeddings already shipped in `../data/embeddings/`. This takes ~10 minutes on a laptop.
To re-extract embeddings from the audio:

```bash
bash reproduce.sh --extract   # ~24 CPU-hours + ~1 GPU-hour
```
The notebook auto-resolves paths via `Path.cwd().parent`, so just open it from `code/` (`jupyter notebook benchmark_analysis.ipynb`) or run it via the command above.
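The same `Path.cwd().parent` convention can be written out directly for scripts of your own that sit in `code/`; the variable names below are illustrative, not taken from the repository:

```python
from pathlib import Path

# When executed from code/, the parent directory is <VIPBENCH_ROOT>.
ROOT = Path.cwd().parent
EMB_DIR = ROOT / "data" / "embeddings"
AUDIO_DIR = ROOT / "data" / "audio"
```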
## Path resolution

All scripts and the notebook expect the release directory layout:

```
<VIPBENCH_ROOT>/
  code/                                 <-- you are here
  data/audio/reference/*.wav
  data/audio/comparison/*.wav
  data/embeddings/<model>.npz           <-- output of extraction
  data/embeddings/layers/<model>.npz    <-- per-layer SSL output
```
`extraction_utils._resolve_root()` picks the root via:

1. the `VIPBENCH_ROOT` environment variable (if set), else
2. the parent of the script's directory.
Override with `VIPBENCH_ROOT=/some/other/path bash reproduce.sh`.
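The resolution order above can be sketched as follows. `resolve_root` is a hypothetical stand-in for `extraction_utils._resolve_root()`, whose actual implementation may differ:

```python
import os
from pathlib import Path

def resolve_root(script_path: str) -> Path:
    """Return the benchmark root: VIPBENCH_ROOT if set,
    else the parent of the directory containing the script."""
    env = os.environ.get("VIPBENCH_ROOT")
    if env:
        return Path(env).resolve()
    return Path(script_path).resolve().parent.parent
```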
## Models

| Model | Script | HF checkpoint | Dim | Type |
|---|---|---|---|---|
| RawNet3 | `extract_rawnet3_embeddings.py` | `espnet/voxcelebs12_rawnet3` | 192 | Supervised |
| ECAPA-TDNN | `extract_ecapa_tdnn.py` | `speechbrain/spkrec-ecapa-voxceleb` | 192 | Supervised |
| TitaNet | `extract_titanet.py` | `nvidia/speakerverification_en_titanet_large` | 192 | Supervised |
| x-vector | `extract_xvector.py` | `speechbrain/spkrec-xvect-voxceleb` | 512 | Supervised |
| Resemblyzer | `extract_resemblyzer.py` | (bundled with package) | 256 | Supervised |
| wav2vec 2.0 | `extract_wav2vec2.py` | `facebook/wav2vec2-base` | 768 | SSL |
| HuBERT | `extract_hubert.py` | `facebook/hubert-base-ls960` | 768 | SSL |
| WavLM | `extract_wavlm.py` | `microsoft/wavlm-base-plus` | 768 | SSL |
| XLS-R | `extract_xlsr.py` | `facebook/wav2vec2-xls-r-300m` | 1024 | SSL |
| Whisper | `extract_whisper.py` | `openai/whisper-base` (encoder) | 512 | Weakly supervised |
Per-layer mean-pooled embeddings for the 5 SSL models are produced by `extract_ssl_layers.py` and saved to `data/embeddings/layers/<model>.npz`.
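Conceptually, a per-layer bundle holds one time-averaged vector per transformer layer. A minimal sketch of that pooling step, independent of any particular model (the helper name is ours, not the repository's):

```python
import numpy as np

def pool_layers(hidden_states):
    """Mean-pool each layer's (time, dim) activations over time.

    hidden_states: sequence of (time, dim) arrays, one per layer.
    Returns a (num_layers, dim) float32 array, matching the
    layers/<model>.npz value shape described below.
    """
    return np.stack([h.mean(axis=0) for h in hidden_states]).astype(np.float32)
```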
## Output format

Each `data/embeddings/<model>.npz` is a key-value store keyed by audio basename without `.wav` (e.g. `M01R`, `1_F01`). Values are 1-D `np.float32` arrays of shape `(embedding_dim,)`. The 9,900 keys cover 100 references plus 9,800 comparisons.

Per-layer bundles (`layers/<model>.npz`) use the same keys; values have shape `(num_layers, embedding_dim)`.
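A round-trip through this format can be sketched as follows; `demo.npz` and the placeholder vectors are ours, so substitute a real bundle such as `data/embeddings/<model>.npz` when working with the release:

```python
import numpy as np

dim = 192  # e.g. a 192-dim supervised speaker model
embeddings = {
    "M01R": np.zeros(dim, dtype=np.float32),   # a reference key
    "1_F01": np.ones(dim, dtype=np.float32),   # a comparison key
}
np.savez("demo.npz", **embeddings)

# np.load on an .npz returns a lazy key-value mapping.
with np.load("demo.npz") as store:
    vec = store["M01R"]
    assert vec.shape == (dim,) and vec.dtype == np.float32
```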
## Notes

- TitaNet requires NVIDIA NeMo (`nemo_toolkit[asr]`); the install is heavy (~5 GB), so the line is commented out in `requirements.txt`.
- The notebook caches expensive computations under `code/cache/`. Delete `code/cache/` to force a recompute.
- Models are downloaded from Hugging Face on first run; subsequent runs use the local cache.
## License

Code in this directory is MIT-licensed (see `../LICENSE-CODE`). The dataset (audio, judgments, embeddings) is CC-BY-NC 4.0 (see `../LICENSE`).