
## Baseline models

The table below lists the 10 speech representations benchmarked in the accompanying paper. Each is loaded from a publicly available pretrained checkpoint; the weights are downloaded by the corresponding extraction script in `code/` (a minimal loading sketch follows the table).

| Model | Paradigm | Training data | Dim | HF checkpoint | Extraction script |
|---|---|---|---|---|---|
| x-vector | Supervised classification | VoxCeleb 1+2 | 512 | `speechbrain/spkrec-xvect-voxceleb` | `extract_xvector.py` |
| ECAPA-TDNN | AAM-Softmax | VoxCeleb 1+2 | 192 | `speechbrain/spkrec-ecapa-voxceleb` | `extract_ecapa_tdnn.py` |
| RawNet3 | AAM-Softmax | VoxCeleb 1+2 | 192 | `espnet/voxcelebs12_rawnet3` | `extract_rawnet3_embeddings.py` |
| TitaNet (large) | AAM-Softmax | VoxCeleb 1+2, Fisher, SWB, LibriSpeech, NIST SRE | 192 | `nvidia/speakerverification_en_titanet_large` | `extract_titanet.py` |
| resemblyzer | GE2E loss, 3-layer LSTM | LibriSpeech + VoxCeleb 1+2 | 256 | bundled with the `resemblyzer` package | `extract_resemblyzer.py` |
| wav2vec 2.0 | Contrastive masked prediction | LibriSpeech 960 h | 768 | `facebook/wav2vec2-base` | `extract_wav2vec2.py` |
| HuBERT | Masked prediction | LibriSpeech 960 h | 768 | `facebook/hubert-base-ls960` | `extract_hubert.py` |
| WavLM | Masked prediction + denoising | 94K h mixed | 768 | `microsoft/wavlm-base-plus` | `extract_wavlm.py` |
| XLS-R | Contrastive multilingual | 436K h, 128 languages | 1024 | `facebook/wav2vec2-xls-r-300m` | `extract_xlsr.py` |
| Whisper (encoder, base) | Multitask weakly supervised ASR | 680K h web audio | 512 | `openai/whisper-base` | `extract_whisper.py` |
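
For orientation, here is a minimal sketch of loading one of these checkpoints and computing an embedding. It is not the repo's actual `extract_ecapa_tdnn.py`; the audio path is a placeholder, and the import path assumes SpeechBrain ≥ 1.0 (older versions use `speechbrain.pretrained`).

```python
import torch
import torchaudio
from speechbrain.inference.speaker import EncoderClassifier

# Downloads speechbrain/spkrec-ecapa-voxceleb from the Hub on first use.
classifier = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb"
)

# Placeholder path; the speaker models expect 16 kHz mono audio.
signal, sr = torchaudio.load("example.wav")

with torch.no_grad():
    # encode_batch returns (batch, 1, 192) for ECAPA-TDNN; flatten to 1-D.
    embedding = classifier.encode_batch(signal).squeeze().cpu().numpy()

print(embedding.shape)  # (192,)
```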

## Output format

Each script saves `<model>.npz` to `<VIPBENCH_ROOT>/data/embeddings/`. The file is a key-value store with 9,900 keys (audio basenames without `.wav`) mapping to 1-D `np.float32` arrays of shape `(embedding_dim,)`.
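
Reading a bundle back needs nothing beyond NumPy. A sketch, with a hypothetical model filename:

```python
import numpy as np

# Hypothetical filename; the convention is <model>.npz under
# <VIPBENCH_ROOT>/data/embeddings/.
bundle = np.load("data/embeddings/ecapa_tdnn.npz")

print(len(bundle.files))  # expected: 9900 keys (audio basenames, no .wav)

key = bundle.files[0]
vec = bundle[key]
print(key, vec.shape, vec.dtype)  # e.g. <basename> (192,) float32
```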

For the frame-level models (wav2vec 2.0, HuBERT, WavLM, XLS-R, and Whisper), `extract_ssl_layers.py` additionally produces a per-layer mean-pooled bundle saved to `<VIPBENCH_ROOT>/data/embeddings/layers/<model>.npz`. Values there have shape `(num_layers, embedding_dim)`.
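
Indexing a single layer out of the per-layer bundle works the same way, again with a hypothetical filename:

```python
import numpy as np

# Hypothetical filename under <VIPBENCH_ROOT>/data/embeddings/layers/.
layers = np.load("data/embeddings/layers/wavlm.npz")

key = layers.files[0]
per_layer = layers[key]  # (num_layers, embedding_dim)
layer_7 = per_layer[7]   # mean-pooled embedding from one transformer layer
print(per_layer.shape, layer_7.shape)
```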

## Pooling

Self-supervised models (wav2vec 2.0, HuBERT, WavLM, XLS-R) and Whisper produce frame-level outputs; we use mean pooling across time for the utterance-level embedding. Speaker-specialized models (x-vector, ECAPA-TDNN, RawNet3, TitaNet, resemblyzer) produce a single utterance-level vector by design and are passed through unchanged.
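
As a concrete illustration of the pooling step (a sketch, not the exact extraction code; the random waveform is a stand-in for real audio):

```python
import torch
from transformers import AutoFeatureExtractor, AutoModel

extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = AutoModel.from_pretrained("facebook/wav2vec2-base").eval()

waveform = torch.randn(16000)  # stand-in for 1 s of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    frames = model(**inputs).last_hidden_state  # (1, num_frames, 768)

# Mean pooling across the time axis gives the utterance-level embedding.
embedding = frames.mean(dim=1).squeeze(0)       # (768,)
```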

## Best-layer protocol

For SSL models the final layer is known to underperform on speaker tasks (SUPERB, 2021). The notebook therefore implements a SUPERB-style best-layer protocol with nested speaker cross-validation: for each held-out speaker fold, the layer that maximizes Pearson r against P(same) on the training speakers is selected and then applied to the test speakers, so no pair contributes to selecting its own layer. Per-layer Pearson r curves are reported in the appendix figure.
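
A sketch of the selection loop under assumed inputs; `layer_scores`, `p_same`, and `folds` are hypothetical names, and the real notebook may organize the data differently:

```python
import numpy as np
from scipy.stats import pearsonr

def best_layer_predictions(layer_scores, p_same, folds):
    """layer_scores: (num_layers, n_pairs) per-layer similarity scores;
    p_same: (n_pairs,) human P(same) judgments;
    folds: (n_pairs,) held-out-speaker fold id per pair."""
    preds = np.empty_like(p_same)
    for fold in np.unique(folds):
        test = folds == fold
        train = ~test
        # Select the layer maximizing Pearson r on training-speaker pairs...
        r = [pearsonr(s[train], p_same[train])[0] for s in layer_scores]
        best = int(np.argmax(r))
        # ...and apply that layer's scores to the held-out speakers' pairs.
        preds[test] = layer_scores[best][test]
    return preds
```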

## Licenses

Each baseline retains its original license. As of release time:

- SpeechBrain checkpoints: Apache 2.0
- ESPnet checkpoints: Apache 2.0
- NVIDIA NeMo: NVIDIA Source Code License (research permitted)
- Hugging Face-hosted `facebook`/`microsoft` checkpoints: respective lab licenses (typically MIT / Apache 2.0)
- OpenAI Whisper: MIT
- resemblyzer: MIT

Verify the license at the linked checkpoint page before redistribution. The CC-BY-NC 4.0 license on VIPBench's audio, judgments, and derived embeddings does not extend to these third-party model weights.