{ "cells": [ { "cell_type": "markdown", "id": "cell_000", "metadata": {}, "source": [ "# Voice Identity Perception Benchmark\n", "\n", "## Abstract\n", "\n", "This notebook accompanies a proposed benchmark for **evaluating speaker-embedding models against human voice-identity perception**. Unlike existing speaker-verification benchmarks (e.g., VoxCeleb) that compare model outputs to ground-truth speaker labels, our benchmark targets the **population-consensus human judgment** of whether two voice clips belong to the same person. We benchmark 10 publicly available speech representation models across five evaluation tasks and report results on 9,800 voice pairs judged by 175 qualified human listeners (92,239 individual judgments).\n", "\n", "## Dataset\n", "\n", "- **9,800 voice pairs** across 100 celebrity speakers (50M/50F, 5 sociophonetic groups)\n", "- **124,876 total human judgments** (92,239 after preregistered participant filtering)\n", "- Six stimulus types: same audio, same speaker/different recording, voice clones, different-speaker pairs, different-speaker clones, and **8,100 continuously interpolated (blended) voices**\n", "\n", "## Models benchmarked (10 total)\n", "\n", "**Supervised speaker embeddings (5):** RawNet3, ECAPA-TDNN, TitaNet, Resemblyzer (GE2E), x-vector\n", "\n", "**Self-supervised speech representations (4):** wav2vec 2.0, HuBERT, WavLM, XLS-R\n", "\n", "**Weakly-supervised speech representation (1):** Whisper\n", "\n", "## Notebook Structure\n", "\n", "### Part I \u2014 Setup (Sections 1-2)\n", "- **Section 1**: Dataset description and human-perception reliability\n", "- **Section 2**: Model embeddings, cosine similarities, inter-model agreement\n", "\n", "### Part II \u2014 Core Benchmark Tasks (Sections 3-5)\n", "- **Section 3**: Task 1 \u2014 Predict human P(same) [primary regression metric]\n", "- **Section 4**: Task 2 \u2014 Human-aligned verification with Platt-calibrated ECE\n", "- **Section 5**: Task 3 \u2014 Speaker-level representational similarity (RDM + Mantel test)\n", "\n", "### Part III \u2014 Stimulus-Level Analyses (Sections 6-7)\n", "- **Section 6**: Per-stimulus-type analysis and real\u2192synthetic transfer\n", "- **Section 7**: Individual differences and human disagreement prediction\n", "\n", "### Part IV \u2014 Representation Analyses (Sections 8, 13)\n", "- **Section 8**: Mahalanobis metric learning (does per-dimension reweighting help?)\n", "- **Section 13**: SSL layer-wise alignment diagnostic (per-layer curves and layer-selection sensitivity)\n", "\n", "### Part V \u2014 Benchmark Validation (Sections 9-12)\n", "- **Section 9**: Pairwise model significance (paired bootstrap + Benjamini-Hochberg FDR correction)\n", "- **Section 10**: Individual human baseline (leave-one-out consensus agreement)\n", "- **Section 11**: Type-6 ablation (rank stability without blended stimuli)\n", "- **Section 12**: Demographic fairness (gender \u00d7 sociophonetic group disparities)\n", "\n", "### Part VI \u2014 Summary\n", "- **Section 14**: Grand comparison table, noise-ceiling normalization, key findings\n", "\n", "## Key Methodological Notes\n", "\n", "1. **Two data subsets are reported throughout**: the full dataset (all 1,290 participants) and the preregistered qualified subset (175 participants). Results are nearly identical, validating that noise from low-engagement participants does not distort model comparisons.\n", "2. 
**SSL models' main-benchmark cosine similarities use best-layer selection via nested speaker-level cross-validation** (10 folds, gender-balanced). For each fold, the best transformer layer is chosen on training-fold speakers and applied to held-out test speakers. This follows the SUPERB-style convention (Yang et al., 2021) and is the fair protocol for comparing SSL models to supervised speaker embeddings -- the last (`last_hidden_state`) layer is known to underperform for speaker tasks because masked-prediction and contrastive SSL objectives optimize deeper layers away from speaker identity. Last-layer values are preserved in `{model}_cosim_lastlayer` columns and reported in Section 13 for comparison.\n", "3. **Speaker-level cross-validation** is used wherever a learned model is evaluated (Mahalanobis, Platt calibration, layer selection) to prevent speaker leakage.\n", "4. **Noise ceiling**: split-half reliability gives $R^2_\\text{ceiling} \\approx 0.69$ (Section 1). The best model reaches ~60% of this ceiling (Section 3); Section 10 shows the gap is driven by individual-listener variability rather than representational inadequacy." ] }, { "cell_type": "code", "execution_count": null, "id": "cell_001", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:00.986511Z", "iopub.status.busy": "2026-04-22T16:21:00.986208Z", "iopub.status.idle": "2026-04-22T16:21:02.819526Z", "shell.execute_reply": "2026-04-22T16:21:02.817481Z" } }, "outputs": [], "source": [ "import numpy as np\n", "import pandas as pd\n", "import matplotlib.pyplot as plt\n", "import seaborn as sns\n", "from scipy import stats\n", "from scipy.stats import pearsonr, spearmanr\n", "from scipy.optimize import minimize\n", "from sklearn.metrics.pairwise import cosine_similarity\n", "from sklearn.metrics import roc_curve, auc, roc_auc_score, accuracy_score\n", "from sklearn.linear_model import LogisticRegression\n", "from sklearn.model_selection import GroupKFold\n", "from sklearn.decomposition import PCA\n", "from sklearn.preprocessing import StandardScaler\n", "from pathlib import Path\n", "import pickle\n", "import hashlib\n", "import warnings\n", "warnings.filterwarnings('ignore')\n", "\n", "sns.set_style('white')\n", "plt.rcParams['figure.figsize'] = (12, 6)\n", "plt.rcParams['font.size'] = 11\n", "\n", "# Path resolution: assume notebook lives at /code/benchmark_analysis.ipynb\n", "NB_DIR = Path.cwd()\n", "RELEASE_DIR = NB_DIR.parent if NB_DIR.name == 'code' else NB_DIR\n", "DATA_DIR = RELEASE_DIR / 'data'\n", "EMB_DIR = DATA_DIR / 'embeddings'\n", "LAYER_EMB_DIR = EMB_DIR / 'layers'\n", "# Backwards-compatible aliases used elsewhere in the notebook\n", "BASE_DIR = RELEASE_DIR\n", "BENCH_DIR = EMB_DIR\n", "CACHE_DIR = NB_DIR / 'cache'\n", "CACHE_DIR.mkdir(exist_ok=True)\n", "\n", "# Set CACHE_USE = False to force recomputation of all cached results.\n", "# Delete the cache/ folder to invalidate caches if input files change.\n", "CACHE_USE = True\n", "\n", "def cached(cache_key, compute_fn, description=''):\n", " \"\"\"Compute or load a cached result. 
cache_key is a filename prefix.\"\"\"\n", " cache_file = CACHE_DIR / f'{cache_key}.pkl'\n", " if CACHE_USE and cache_file.exists():\n", " print(f'[cache HIT] {cache_key}{(\": \" + description) if description else \"\"}')\n", " with open(cache_file, 'rb') as f:\n", " return pickle.load(f)\n", " print(f'[cache MISS] {cache_key}{(\": \" + description) if description else \"\"} -- computing')\n", " result = compute_fn()\n", " if CACHE_USE:\n", " with open(cache_file, 'wb') as f:\n", " pickle.dump(result, f)\n", " print(f'[cache SAVED] {cache_key} -> {cache_file.name}')\n", " return result\n", "\n", "# Preregistered filtering criteria\n", "GOLD_TYPES = [1, 2, 4]\n", "MIN_GOLD_STIMULI = 25\n", "MIN_ACCURACY = 0.60\n", "SEED = 42\n", "rng = np.random.default_rng(SEED)" ] }, { "cell_type": "markdown", "id": "part_I_divider", "metadata": {}, "source": [ "---\n", "# Part I \u2014 Setup\n", "\n", "This part describes the dataset and loads the 10 models' pre-extracted embeddings. It establishes the human-perception target (P(same)) that all subsequent tasks try to predict, and the inter-model agreement structure." ] }, { "cell_type": "markdown", "id": "cell_002", "metadata": {}, "source": [ "---\n", "## Section 1: Dataset Description\n", "\n", "This section describes the structure of the voice identity perception dataset.\n", "\n", "**Experiment design:** Participants listened to pairs of voice clips and judged whether the two clips belonged to the same speaker (\"same\") or different speakers (\"different\"). Each pair consists of a fixed reference clip for a given speaker and one of 98 comparison clips. The comparison clips span six stimulus types:\n", "\n", "| Type | Description | Expected answer | N |\n", "|------|-------------|-----------------|---|\n", "| 1 | Same audio (identical clip played twice) | Same | 100 |\n", "| 2 | Same speaker, different recording | Same | 400 |\n", "| 3 | Voice clone of the same speaker | Same | 400 |\n", "| 4 | Different speaker, real recording | Different | 400 |\n", "| 5 | Different speaker, voice clone | Different | 400 |\n", "| 6 | Interpolated/blended voice (continuous mix between two speakers) | Different | 8,100 |\n", "\n", "**Participant filtering:** Following the preregistered protocol, we define a \"qualified\" subset of participants who completed at least 25 gold-standard trials (Types 1, 2, 4) with at least 60% accuracy. We report results on both the full dataset (all participants) and the qualified subset.\n", "\n", "**Key quantity: P(same).** For each pair, we compute the proportion of participants who judged the pair as \"same speaker.\" This continuous measure (0 to 1) serves as the primary human perception target throughout the benchmark.\n", "\n", "**Inter-rater reliability** is assessed via split-half correlation: randomly split participants into two halves, compute P(same) for each half independently, correlate, and apply the Spearman-Brown prophecy formula:\n", "\n", "$$r_{SB} = \\frac{2 r_{half}}{1 + r_{half}}$$\n", "\n", "This estimates the reliability of the full-sample P(same) estimates." 
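] }, { "cell_type": "markdown", "id": "cell_002b_sb_note", "metadata": {}, "source": [ "*Worked illustration (not part of the benchmark pipeline):* a toy Spearman-Brown computation with a made-up split-half correlation, to make the prophecy step above concrete. The dataset's actual split-half reliability is computed later in this section.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_002b_sb_toy", "metadata": {}, "outputs": [], "source": [ "# Toy Spearman-Brown check. r_half_toy is a hypothetical value for\n", "# illustration only, NOT a statistic of this dataset.\n", "r_half_toy = 0.60\n", "r_sb_toy = 2 * r_half_toy / (1 + r_half_toy)\n", "print(f'r_half = {r_half_toy:.2f} -> Spearman-Brown full-sample reliability = {r_sb_toy:.3f}')  # 0.750"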
] }, { "cell_type": "code", "execution_count": null, "id": "cell_003", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:02.822471Z", "iopub.status.busy": "2026-04-22T16:21:02.822074Z", "iopub.status.idle": "2026-04-22T16:21:02.990195Z", "shell.execute_reply": "2026-04-22T16:21:02.987571Z" } }, "outputs": [], "source": [ "# Load raw data\n", "responses = pd.read_csv(DATA_DIR / 'participant_responses.csv')\n", "stimuli = pd.read_csv(DATA_DIR / 'stimuli.csv')\n", "speakers = pd.read_csv(DATA_DIR / 'speakers.csv')\n", "\n", "print(f'Total responses: {len(responses):,}')\n", "print(f'Unique participants: {responses[\"user_id\"].nunique():,}')\n", "print(f'Total stimuli (pairs): {len(stimuli):,}')\n", "print(f'Speakers: {len(speakers)}')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_004", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:03.037208Z", "iopub.status.busy": "2026-04-22T16:21:03.036864Z", "iopub.status.idle": "2026-04-22T16:21:03.081239Z", "shell.execute_reply": "2026-04-22T16:21:03.079195Z" } }, "outputs": [], "source": [ "# Participant metadata (computed but NOT used for primary benchmark target).\n", "# The preregistered participant filter (>=25 gold trials, >=60% accuracy) is used\n", "# ONLY in Section 10's per-listener baseline analysis, where individual-level\n", "# accuracy estimates require sufficient trials per participant to be meaningful.\n", "# All other analyses use the full 1,290-participant pool and the 124,876-judgment\n", "# corpus for their P(same) target.\n", "\n", "gold_responses = responses[responses['stimuli_type'].isin(GOLD_TYPES)].copy()\n", "participant_stats = gold_responses.groupby('user_id').agg(\n", " n_gold=('correct', 'count'),\n", " n_correct=('correct', 'sum')).reset_index()\n", "participant_stats['accuracy'] = participant_stats['n_correct'] / participant_stats['n_gold']\n", "\n", "qualified_ids = participant_stats[\n", " (participant_stats['accuracy'] >= MIN_ACCURACY) &\n", " (participant_stats['n_gold'] >= MIN_GOLD_STIMULI)]['user_id'].values\n", "resp_qual = responses[responses['user_id'].isin(qualified_ids)].copy()\n", "\n", "print(f'Full dataset: {len(responses):,} responses from {responses[\"user_id\"].nunique():,} participants')\n", "print(f'Qualified subset: {len(resp_qual):,} responses from {len(qualified_ids):,} participants (reserved for individual-level baseline only)')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_005", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:03.084038Z", "iopub.status.busy": "2026-04-22T16:21:03.083779Z", "iopub.status.idle": "2026-04-22T16:21:03.129947Z", "shell.execute_reply": "2026-04-22T16:21:03.128096Z" } }, "outputs": [], "source": [ "# Compute P(same) from ALL 1,290 participants (primary benchmark target)\n", "def compute_psame(resp_df, stimuli_df):\n", " agg = resp_df.groupby('stimuli_id').agg(\n", " n_responses=('answer', 'count'),\n", " n_same=('answer', 'sum')\n", " ).reset_index()\n", " agg['p_same'] = agg['n_same'] / agg['n_responses']\n", " merged = stimuli_df[['id', 'stimuli_type', 'reference', 'comparison',\n", " 'voice_clone', 'correct_answer', 'scale']].merge(\n", " agg, left_on='id', right_on='stimuli_id', how='left')\n", " return merged\n", "\n", "df_full = compute_psame(responses, stimuli)\n", "\n", "df = stimuli[['id', 'stimuli_type', 'reference', 'comparison',\n", " 'voice_clone', 'correct_answer', 'scale']].copy()\n", "df['p_same_full'] = 
df_full.set_index('id')['p_same'].reindex(df['id'].values).values\n", "df['n_resp_full'] = df_full.set_index('id')['n_responses'].reindex(df['id'].values).values\n", "df['majority_full'] = (df['p_same_full'] > 0.5).astype(int)\n", "\n", "print(f'Pairs with full data: {df[\"p_same_full\"].notna().sum()}')\n", "print(f'Median responses per pair (full): {df[\"n_resp_full\"].median():.0f}')\n", "print(f'Min/max responses per pair: {int(df[\"n_resp_full\"].min())} / {int(df[\"n_resp_full\"].max())}')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_006", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:03.132537Z", "iopub.status.busy": "2026-04-22T16:21:03.132274Z", "iopub.status.idle": "2026-04-22T16:21:03.160353Z", "shell.execute_reply": "2026-04-22T16:21:03.158257Z" } }, "outputs": [], "source": [ "# Stimulus type summary\n", "type_labels = {\n", " 1: 'Same audio (baseline)',\n", " 2: 'Same speaker, diff recording',\n", " 3: 'Voice clone (same speaker)',\n", " 4: 'Different speaker',\n", " 5: 'Different speaker clone',\n", " 6: 'Interpolated/blended'\n", "}\n", "\n", "type_summary = df.groupby('stimuli_type').agg(\n", " count=('id', 'count'),\n", " mean_p_same=('p_same_full', 'mean'),\n", " std_p_same=('p_same_full', 'std'),\n", " mean_n_resp=('n_resp_full', 'mean')\n", ").round(3)\n", "type_summary['description'] = type_summary.index.map(type_labels)\n", "type_summary" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_007", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:03.163410Z", "iopub.status.busy": "2026-04-22T16:21:03.163144Z", "iopub.status.idle": "2026-04-22T16:21:03.185447Z", "shell.execute_reply": "2026-04-22T16:21:03.183530Z" } }, "outputs": [], "source": [ "# Speaker demographics\n", "print('Speaker demographics:')\n", "print(f\" Gender: {speakers['gender'].value_counts().to_dict()}\")\n", "print(f\" Groups: {speakers['group'].nunique()} groups\")\n", "print(f\" Age range: {speakers['age'].min()}-{speakers['age'].max()}\")\n", "speakers.groupby('gender')['age'].describe().round(1)" ] }, { "cell_type": "code", "metadata": {}, "execution_count": null, "outputs": [], "source": [ "# Participant familiarity breakdown.\n", "# A participant was coded as familiar with the reference speaker on a given trial if\n", "# they selected the correct speaker name from the within-group name list. 
This\n", "# confirms the perceptual consensus is not dominated by celebrity recognition.\n", "total = len(responses)\n", "fam = int((responses['know_speaker'] == 1).sum())\n", "unfam = int((responses['know_speaker'] == 0).sum())\n", "print(f'Overall: {unfam:,} / {total:,} judgments from unfamiliar listeners ({unfam/total:.1%})')\n", "print(f' {fam:,} / {total:,} judgments from familiar listeners ({fam/total:.1%})')\n", "print()\n", "print('Per stimulus type:')\n", "for stype in sorted(responses['stimuli_type'].unique()):\n", " sub = responses[responses['stimuli_type'] == stype]\n", " u = int((sub['know_speaker'] == 0).sum())\n", " print(f' Type {stype}: {u:,}/{len(sub):,} unfamiliar ({u/len(sub):.1%})')\n" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_008", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:03.188088Z", "iopub.status.busy": "2026-04-22T16:21:03.187832Z", "iopub.status.idle": "2026-04-22T16:21:06.995895Z", "shell.execute_reply": "2026-04-22T16:21:06.993670Z" } }, "outputs": [], "source": [ "# Inter-rater agreement via split-half reliability on ALL 1,290 participants\n", "N_SPLITS = 100\n", "split_corrs = []\n", "\n", "all_ids = np.array(sorted(responses['user_id'].unique()))\n", "for _ in range(N_SPLITS):\n", " perm = rng.permutation(all_ids)\n", " half1 = set(perm[:len(perm)//2])\n", " half2 = set(perm[len(perm)//2:])\n", " r1 = responses[responses['user_id'].isin(half1)].groupby('stimuli_id')['answer'].mean()\n", " r2 = responses[responses['user_id'].isin(half2)].groupby('stimuli_id')['answer'].mean()\n", " common = r1.index.intersection(r2.index)\n", " if len(common) > 100:\n", " r, _ = pearsonr(r1[common], r2[common])\n", " split_corrs.append(r)\n", "\n", "split_half_r = np.mean(split_corrs)\n", "sb_reliability = 2 * split_half_r / (1 + split_half_r)\n", "print(f'Split-half correlation (mean of {N_SPLITS} splits, all 1,290 participants): r = {split_half_r:.4f}')\n", "print(f'Spearman-Brown corrected reliability: {sb_reliability:.4f}')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_009", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:06.998780Z", "iopub.status.busy": "2026-04-22T16:21:06.998501Z", "iopub.status.idle": "2026-04-22T16:21:07.939285Z", "shell.execute_reply": "2026-04-22T16:21:07.936924Z" } }, "outputs": [], "source": [ "# P(same) distribution by stimulus type (full set, 1,290 participants)\n", "fig, ax = plt.subplots(figsize=(10, 5))\n", "data_plot = df.dropna(subset=['p_same_full'])\n", "sns.violinplot(data=data_plot, x='stimuli_type', y='p_same_full', ax=ax, inner='quartile',\n", " palette='Set2', cut=0)\n", "type_labels = [\n", " 'Same\\nrecording',\n", " 'Same speaker\\ndifferent recording',\n", " 'Same speaker\\nAI voice clone',\n", " 'Different speakers\\nreal',\n", " 'Different speakers\\nvoice clones',\n", " 'Continuously\\nmorphed voices',\n", "]\n", "ax.set_xticklabels(type_labels)\n", "ax.set_xlabel('')\n", "ax.set_ylabel('P(same)')\n", "ax.set_ylim(-0.05, 1.05)\n", "plt.tight_layout()\n", "import os\n", "os.makedirs('manuscript/figures', exist_ok=True)\n", "plt.savefig('manuscript/figures/psame_by_type.png', dpi=200, bbox_inches='tight')\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "cell_010", "metadata": {}, "source": [ "---\n", "## Section 2: Model Embeddings & Distances\n", "\n", "We benchmark 10 speaker embedding models spanning three training paradigms:\n", "\n", "- **Supervised** (5 models): Trained with speaker identity labels using classification 
or metric-learning losses (AAM-Softmax, Softmax, GE2E). These models are explicitly optimized to place same-speaker utterances close together and different-speaker utterances far apart.\n", "- **Self-supervised** (4 models): Trained without any speaker labels, using objectives such as contrastive prediction (wav2vec 2.0, XLS-R) and masked token prediction (HuBERT, WavLM). Speaker identity information emerges as a byproduct of learning general speech structure.\n", "- **Weakly supervised** (1 model): Whisper, trained on 680K hours of audio-transcript pairs for ASR/translation. Not designed for speaker tasks.\n", "\n", "For each model, we extract an embedding vector for every audio clip (100 references + 9,800 comparisons = 9,900 total). For each of the 9,800 stimulus pairs, we compute the **cosine similarity** between the reference embedding and the comparison embedding:\n", "\n", "$$\\text{cosim}(\\mathbf{e}_{\\text{ref}}, \\mathbf{e}_{\\text{comp}}) = \\frac{\\mathbf{e}_{\\text{ref}} \\cdot \\mathbf{e}_{\\text{comp}}}{\\|\\mathbf{e}_{\\text{ref}}\\| \\cdot \\|\\mathbf{e}_{\\text{comp}}\\|}$$\n", "\n", "This is the standard similarity metric used in speaker verification systems. Higher cosine similarity indicates the model considers the two clips more likely to belong to the same speaker.\n", "\n", "The **inter-model correlation heatmap** reveals how much different models agree with each other about which pairs are similar, independent of whether they agree with humans." ] }, { "cell_type": "code", "execution_count": null, "id": "cell_011", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:07.942054Z", "iopub.status.busy": "2026-04-22T16:21:07.941794Z", "iopub.status.idle": "2026-04-22T16:21:42.950156Z", "shell.execute_reply": "2026-04-22T16:21:42.947936Z" } }, "outputs": [], "source": [ "# Load all available embeddings\n", "MODEL_INFO = {\n", " 'rawnet3': {'type': 'Supervised', 'dim': 192, 'training': 'AAM-Softmax on VoxCeleb1+2 (Jung et al., Interspeech 2022)'},\n", " 'ecapa_tdnn': {'type': 'Supervised', 'dim': 192, 'training': 'AAM-Softmax on VoxCeleb1+2 (Desplanques et al., Interspeech 2020)'},\n", " 'titanet': {'type': 'Supervised', 'dim': 192, 'training': 'AAM-Softmax on VoxCeleb+Fisher+SWB (Koluguri et al., ICASSP 2022)'},\n", " 'resemblyzer':{'type': 'Supervised', 'dim': 256, 'training': 'GE2E loss, 3-layer LSTM (Wan et al., ICASSP 2018)'},\n", " 'xvector': {'type': 'Supervised', 'dim': 512, 'training': 'Softmax on VoxCeleb1+2, TDNN (Snyder et al., ICASSP 2018)'},\n", " 'wav2vec2': {'type': 'Self-supervised', 'dim': 768, 'training': 'Contrastive on LibriSpeech 960h (Baevski et al., NeurIPS 2020)'},\n", " 'hubert': {'type': 'Self-supervised', 'dim': 768, 'training': 'Masked prediction on LibriSpeech 960h (Hsu et al., IEEE/ACM TASLP 2021)'},\n", " 'wavlm': {'type': 'Self-supervised', 'dim': 768, 'training': 'Masked prediction + denoising on 94K hrs (Chen et al., JSTSP 2022)'},\n", " 'whisper': {'type': 'Weakly supervised', 'dim': 512, 'training': 'Multitask ASR on 680K hrs web audio (Radford et al., ICML 2023)'},\n", " 'xlsr': {'type': 'Self-supervised', 'dim': 1024, 'training': 'Contrastive on 436K hrs multilingual (Babu et al., Interspeech 2022)'},\n", "}\n", "\n", "embeddings = {}\n", "for name in MODEL_INFO:\n", " npz_path = EMB_DIR / f'{name}.npz'\n", " if npz_path.exists():\n", " data = np.load(npz_path, allow_pickle=True)\n", " embeddings[name] = {k: data[k] for k in data.files}\n", " dim = next(iter(embeddings[name].values())).shape[0]\n", " MODEL_INFO[name]['dim'] = dim\n", " 
print(f' Loaded {name}: {len(embeddings[name])} embeddings, dim={dim}')\n", " else:\n", " print(f' SKIPPED {name}: no .npz at {npz_path}')\n", "\n", "available_models = list(embeddings.keys())\n", "print(f'\\nAvailable models: {available_models}')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_012", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:42.952894Z", "iopub.status.busy": "2026-04-22T16:21:42.952623Z", "iopub.status.idle": "2026-04-22T16:21:46.753326Z", "shell.execute_reply": "2026-04-22T16:21:46.750883Z" } }, "outputs": [], "source": [ "# Compute cosine similarity for each model x each pair (vectorized)\n", "for model_name, emb_dict in embeddings.items():\n", " col = f'{model_name}_cosim'\n", " \n", " # Build embedding matrices aligned with df rows\n", " ref_keys = [f\"{ref}R\" for ref in df['reference']]\n", " stim_keys = df['id'].tolist()\n", " \n", " dim = MODEL_INFO[model_name]['dim']\n", " ref_matrix = np.zeros((len(df), dim))\n", " stim_matrix = np.zeros((len(df), dim))\n", " valid_mask = np.ones(len(df), dtype=bool)\n", " \n", " for i, (rk, sk) in enumerate(zip(ref_keys, stim_keys)):\n", " if rk in emb_dict and sk in emb_dict:\n", " ref_matrix[i] = emb_dict[rk]\n", " stim_matrix[i] = emb_dict[sk]\n", " else:\n", " valid_mask[i] = False\n", " \n", " # Vectorized cosine similarity\n", " ref_norm = ref_matrix / (np.linalg.norm(ref_matrix, axis=1, keepdims=True) + 1e-10)\n", " stim_norm = stim_matrix / (np.linalg.norm(stim_matrix, axis=1, keepdims=True) + 1e-10)\n", " sims = np.sum(ref_norm * stim_norm, axis=1)\n", " sims[~valid_mask] = np.nan\n", " \n", " df[col] = sims\n", " valid_count = valid_mask.sum()\n", " print(f'{model_name}: {valid_count} / {len(df)} pairs with valid cosine similarity')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_012b_bestlayer", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:21:46.756096Z", "iopub.status.busy": "2026-04-22T16:21:46.755824Z", "iopub.status.idle": "2026-04-22T16:22:33.386168Z", "shell.execute_reply": "2026-04-22T16:22:33.384012Z" } }, "outputs": [], "source": [ "# ============================================================\n", "# Nested speaker-level CV: best-layer cosine similarity for SSL models\n", "#\n", "# The cosine similarities computed above use each model's final hidden state.\n", "# For SSL models this is known to underperform for speaker tasks (SUPERB lit.).\n", "# Here we replace the SSL models' cosine similarity with a fair, nested-CV\n", "# best-layer value. 
Results are cached to cache/ssl_bestlayer_cosims.pkl.\n", "# ============================================================\n", "\n", "SSL_MODELS_WITH_LAYERS = ['wav2vec2', 'hubert', 'wavlm', 'xlsr', 'whisper']\n", "\n", "# Load layer-wise embeddings (also used in Section 13)\n", "layer_embs = {}\n", "for m in SSL_MODELS_WITH_LAYERS:\n", " p = LAYER_EMB_DIR / f'{m}.npz'\n", " if p.exists():\n", " data = np.load(p, allow_pickle=True)\n", " layer_embs[m] = {k: data[k] for k in data.files}\n", "\n", "# Build 10 gender-balanced speaker folds (used throughout the notebook)\n", "_all_spk = sorted(df['reference'].unique())\n", "_male = [s for s in _all_spk if s.startswith('M')]\n", "_female = [s for s in _all_spk if s.startswith('F')]\n", "_rng_folds = np.random.default_rng(SEED)\n", "_rng_folds.shuffle(_male); _rng_folds.shuffle(_female)\n", "CV_FOLDS = [[] for _ in range(10)]\n", "for i, s in enumerate(_male): CV_FOLDS[i % 10].append(s)\n", "for i, s in enumerate(_female): CV_FOLDS[i % 10].append(s)\n", "\n", "# Keep a copy of the last-layer cosims for reference/reporting\n", "for m in SSL_MODELS_WITH_LAYERS:\n", " if f'{m}_cosim' in df.columns:\n", " df[f'{m}_cosim_lastlayer'] = df[f'{m}_cosim'].copy()\n", "\n", "def _compute_ssl_bestlayer():\n", " \"\"\"Compute nested-CV best-layer cosine similarity per SSL model.\"\"\"\n", " result = {}\n", " for m in SSL_MODELS_WITH_LAYERS:\n", " if m not in layer_embs:\n", " continue\n", " emb_d = layer_embs[m]\n", " ref_keys = [f'{ref}R' for ref in df['reference']]\n", " stim_keys = df['id'].tolist()\n", " n_pairs = len(df)\n", " sample = next(iter(emb_d.values()))\n", " n_layers, dim = sample.shape\n", " \n", " ref_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " stim_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " valid_mask = np.zeros(n_pairs, dtype=bool)\n", " for i, (rk, sk) in enumerate(zip(ref_keys, stim_keys)):\n", " if rk in emb_d and sk in emb_d:\n", " ref_arr[i] = emb_d[rk]\n", " stim_arr[i] = emb_d[sk]\n", " valid_mask[i] = True\n", " \n", " ref_n = ref_arr / (np.linalg.norm(ref_arr, axis=2, keepdims=True) + 1e-10)\n", " stim_n = stim_arr / (np.linalg.norm(stim_arr, axis=2, keepdims=True) + 1e-10)\n", " all_cosims = np.sum(ref_n * stim_n, axis=2) # (n_pairs, n_layers)\n", " \n", " y = df['p_same_full'].values\n", " speakers = df['reference'].values\n", " best_cosim = np.full(n_pairs, np.nan)\n", " selected_layers = []\n", " \n", " for fold_speakers in CV_FOLDS:\n", " test_in_fold = np.isin(speakers, fold_speakers)\n", " train_mask = (~test_in_fold) & valid_mask & ~np.isnan(y)\n", " test_mask = test_in_fold & valid_mask & ~np.isnan(y)\n", " if train_mask.sum() < 50 or test_mask.sum() < 1:\n", " continue\n", " layer_r = np.zeros(n_layers)\n", " for l in range(n_layers):\n", " xl = all_cosims[train_mask, l]\n", " yl = y[train_mask]\n", " if np.std(xl) > 1e-8:\n", " layer_r[l], _ = pearsonr(xl, yl)\n", " best_l = int(np.argmax(layer_r))\n", " selected_layers.append(best_l)\n", " best_cosim[test_mask] = all_cosims[test_mask, best_l]\n", " result[m] = {'best_cosim': best_cosim, 'selected_layers': selected_layers}\n", " return result\n", "\n", "print('Nested-CV best-layer cosine similarity for SSL models...')\n", "ssl_bestlayer = cached('ssl_bestlayer_cosims', _compute_ssl_bestlayer,\n", " 'Per-fold layer selection for SSL models')\n", "\n", "best_layer_per_fold = {}\n", "for m, d in ssl_bestlayer.items():\n", " df[f'{m}_cosim'] = d['best_cosim']\n", " best_layer_per_fold[m] = d['selected_layers']\n", " n_layers_m = 
layer_embs[m][next(iter(layer_embs[m]))].shape[0]\n", " print(f' {m}: best layer per fold = {d[\"selected_layers\"]} (n_layers={n_layers_m})')\n", "\n", "print('\\nSSL cosine similarity columns now use nested-CV best-layer selection.')\n", "print('Last-layer values preserved in `{model}_cosim_lastlayer` columns.')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_013", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:22:33.389237Z", "iopub.status.busy": "2026-04-22T16:22:33.388920Z", "iopub.status.idle": "2026-04-22T16:22:33.400641Z", "shell.execute_reply": "2026-04-22T16:22:33.398744Z" } }, "outputs": [], "source": [ "# Model info table\n", "info_rows = []\n", "for name in available_models:\n", " info = MODEL_INFO[name]\n", " info_rows.append({\n", " 'Model': name,\n", " 'Type': info['type'],\n", " 'Dim': info['dim'],\n", " 'Training': info['training']\n", " })\n", "pd.DataFrame(info_rows)" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_014", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:22:33.403138Z", "iopub.status.busy": "2026-04-22T16:22:33.402879Z", "iopub.status.idle": "2026-04-22T16:22:34.161169Z", "shell.execute_reply": "2026-04-22T16:22:34.159008Z" } }, "outputs": [], "source": [ "# Inter-model correlation heatmap. Whisper is placed last (bottom-right) since\n", "# it is the weakly-supervised outlier.\n", "DISPLAY_NAME = {'rawnet3': 'RawNet3', 'ecapa_tdnn': 'ECAPA-TDNN',\n", " 'titanet': 'TitaNet', 'resemblyzer': 'resemblyzer',\n", " 'xvector': 'x-vector', 'wav2vec2': 'wav2vec 2.0',\n", " 'hubert': 'HuBERT', 'wavlm': 'WavLM',\n", " 'whisper': 'Whisper', 'xlsr': 'XLS-R'}\n", "\n", "if len(available_models) > 1:\n", " plot_models = [m for m in available_models if m != 'whisper']\n", " if 'whisper' in available_models:\n", " plot_models.append('whisper')\n", " cosim_cols = [f'{m}_cosim' for m in plot_models]\n", " corr_matrix = df[cosim_cols].corr()\n", " corr_matrix.columns = [DISPLAY_NAME[m] for m in plot_models]\n", " corr_matrix.index = [DISPLAY_NAME[m] for m in plot_models]\n", "\n", " fig, ax = plt.subplots(figsize=(8, 6))\n", " sns.heatmap(corr_matrix, annot=True, fmt='.3f', cmap='RdYlBu_r',\n", " vmin=0, vmax=1, ax=ax, square=True)\n", " plt.tight_layout()\n", " plt.savefig('manuscript/figures/intermodel_heatmap.png', dpi=200, bbox_inches='tight')\n", " plt.show()\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Part I.5 \u2014 Metadata-Label Analysis\n", "\n", "Before analyzing alignment with human perception, we evaluate the same models against the **metadata label** (same/different speaker by ground truth). This is the kind of evaluation the ML community is familiar with from VoxCeleb-style benchmarks. We then compare the metadata target to the perception target at both the dataset level (how often does the crowd majority agree with metadata?) and the model level (does a model's metadata score predict its perception score?).\n", "\n", "**Label assignment.** Types 1, 2, 3 \u2192 metadata=same (Type 3 clones were generated *from* the reference speaker, so metadata says same). Types 4, 5 \u2192 metadata=different. Type 6 (morphs) is excluded because mid-morphs have no clean ground-truth speaker identity.\n", "\n", "**Layer selection for SSL.** We select the layer that maximizes AUC against the metadata label on training-fold speakers (SUPERB-style per-task selection), separately from the perception-target selection used in Section 2. 
This gives each SSL model its best attainable metadata score, rather than reusing a layer optimized for a different target.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:22:34.163892Z", "iopub.status.busy": "2026-04-22T16:22:34.163627Z", "iopub.status.idle": "2026-04-22T16:22:34.173491Z", "shell.execute_reply": "2026-04-22T16:22:34.171417Z" } }, "outputs": [], "source": [ "# ============================================================\n", "# M1. Metadata labels (Type 6 excluded)\n", "# ============================================================\n", "# Type 1 (same recording), Type 2 (same spkr, diff rec), Type 3 (clone of\n", "# that speaker) \u2192 metadata = 1 (same).\n", "# Type 4 (diff spkrs, real), Type 5 (diff spkrs, both cloned) \u2192 metadata = 0.\n", "# Type 6 (morphs) is EXCLUDED: mid-morphs have no clean ground-truth label.\n", "metadata_map = {1: 1, 2: 1, 3: 1, 4: 0, 5: 0}\n", "df['metadata_label'] = df['stimuli_type'].map(metadata_map)\n", "\n", "print(f'Pairs with metadata label: {df[\"metadata_label\"].notna().sum()} / {len(df)}')\n", "print(f' Metadata = same (Types 1-3): {(df[\"metadata_label\"] == 1).sum()}')\n", "print(f' Metadata = different (Types 4-5): {(df[\"metadata_label\"] == 0).sum()}')\n", "print(f' Excluded (Type 6): {df[\"metadata_label\"].isna().sum()}')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:22:34.175974Z", "iopub.status.busy": "2026-04-22T16:22:34.175718Z", "iopub.status.idle": "2026-04-22T16:23:26.200573Z", "shell.execute_reply": "2026-04-22T16:23:26.198051Z" } }, "outputs": [], "source": [ "# ============================================================\n", "# Per-fold best SSL layer, selected on the METADATA target (AUC on training-fold\n", "# speakers). Supervised models have no layer choice, so their native cosine\n", "# similarity is reused. 
We build a new\n", "# column df[f'{m}_cosim_meta'] that holds:\n", "# - supervised model: its native cosine similarity (same as _cosim)\n", "# - SSL model: nested-CV best-for-metadata cosine similarity\n", "# The existing df[f'{m}_cosim'] (best-for-perception for SSL) is untouched.\n", "# Cached to cache/ssl_bestlayer_cosims_meta.pkl.\n", "# ============================================================\n", "\n", "def _select_best_layer_on_metadata(model_name):\n", " emb_d = layer_embs[model_name]\n", " ref_keys = [f'{ref}R' for ref in df['reference']]\n", " stim_keys = df['id'].tolist()\n", " sample = next(iter(emb_d.values()))\n", " n_layers, dim = sample.shape\n", " n_pairs = len(df)\n", "\n", " ref_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " stim_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " valid_mask = np.zeros(n_pairs, dtype=bool)\n", " for i, (rk, sk) in enumerate(zip(ref_keys, stim_keys)):\n", " if rk in emb_d and sk in emb_d:\n", " ref_arr[i] = emb_d[rk]; stim_arr[i] = emb_d[sk]; valid_mask[i] = True\n", " rn = ref_arr / (np.linalg.norm(ref_arr, axis=2, keepdims=True) + 1e-10)\n", " sn = stim_arr / (np.linalg.norm(stim_arr, axis=2, keepdims=True) + 1e-10)\n", " all_cosims = np.sum(rn * sn, axis=2) # (n_pairs, n_layers)\n", "\n", " meta = df['metadata_label'].values\n", " speakers = df['reference'].values\n", " best_cosim = np.full(n_pairs, np.nan)\n", " chosen = []\n", " for fold_speakers in CV_FOLDS:\n", " test_mask = np.isin(speakers, fold_speakers) & valid_mask\n", " train_mask = (~np.isin(speakers, fold_speakers)) & valid_mask & ~np.isnan(meta)\n", " if train_mask.sum() < 50:\n", " continue\n", " y_tr = meta[train_mask].astype(int)\n", " if len(np.unique(y_tr)) < 2:\n", " continue\n", " layer_auc = np.zeros(n_layers)\n", " for l in range(n_layers):\n", " xl = all_cosims[train_mask, l]\n", " if np.std(xl) > 1e-8:\n", " layer_auc[l] = roc_auc_score(y_tr, xl)\n", " best_l = int(np.argmax(layer_auc))\n", " chosen.append(best_l)\n", " best_cosim[test_mask] = all_cosims[test_mask, best_l]\n", " return best_cosim, chosen\n", "\n", "def _compute_ssl_meta_cosims():\n", " out = {}\n", " for m in SSL_MODELS_WITH_LAYERS:\n", " if m not in layer_embs:\n", " continue\n", " c, chosen = _select_best_layer_on_metadata(m)\n", " out[m] = {'cosim': c, 'layers': chosen}\n", " return out\n", "\n", "print('Nested-CV best-layer for SSL on the METADATA target (AUC-on-training)...')\n", "ssl_meta = cached('ssl_bestlayer_cosims_meta', _compute_ssl_meta_cosims,\n", " 'Per-fold layer selection on metadata target')\n", "\n", "for m in available_models:\n", " col = f'{m}_cosim_meta'\n", " if m in ssl_meta:\n", " df[col] = ssl_meta[m]['cosim']\n", " n_layers_m = layer_embs[m][next(iter(layer_embs[m]))].shape[0]\n", " print(f' {m}: layers chosen = {ssl_meta[m][\"layers\"]} (of {n_layers_m})')\n", " else:\n", " df[col] = df[f'{m}_cosim'] # supervised: same native embedding\n", "\n", "print('\\nMetadata cosines stored in df[f\"{m}_cosim_meta\"].')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:26.212135Z", "iopub.status.busy": "2026-04-22T16:23:26.211802Z", "iopub.status.idle": "2026-04-22T16:23:26.377639Z", "shell.execute_reply": "2026-04-22T16:23:26.375503Z" } }, "outputs": [], "source": [ "# ============================================================\n", "# M2. 
EER / AUC / Accuracy@EER per model \u00d7 three evaluation sets.\n", "# ============================================================\n", "EVAL_SETS = {\n", " 'A: Type 2 + Type 4 (VoxCeleb-like)': [2, 4],\n", " 'B: Type 3 + Type 5 (clones only)': [3, 5],\n", " 'C: Types 1-5 (extended)': [1, 2, 3, 4, 5],\n", "}\n", "\n", "def compute_eer(y_true, scores):\n", " fpr, tpr, thresholds = roc_curve(y_true, scores)\n", " fnr = 1 - tpr\n", " idx = int(np.nanargmin(np.abs(fpr - fnr)))\n", " eer = (fpr[idx] + fnr[idx]) / 2\n", " thr = thresholds[idx]\n", " acc = accuracy_score(y_true, (scores >= thr).astype(int))\n", " return eer, thr, acc\n", "\n", "metadata_rows = []\n", "for set_name, types in EVAL_SETS.items():\n", " sub = df[df['stimuli_type'].isin(types)].copy()\n", " for model_name in available_models:\n", " col = f'{model_name}_cosim_meta'\n", " v = sub.dropna(subset=[col, 'metadata_label'])\n", " if v['metadata_label'].nunique() < 2 or len(v) < 10:\n", " continue\n", " y = v['metadata_label'].values.astype(int)\n", " s = v[col].values\n", " eer, thr, acc = compute_eer(y, s)\n", " a = roc_auc_score(y, s)\n", " metadata_rows.append({\n", " 'set': set_name, 'model': model_name,\n", " 'type': MODEL_INFO[model_name]['type'],\n", " 'n': len(v), 'EER': eer, 'AUC_meta': a, 'Acc@EER': acc,\n", " })\n", "metadata_df = pd.DataFrame(metadata_rows)\n", "\n", "for set_name in EVAL_SETS:\n", " print(f'\\n=== {set_name} ===')\n", " sub = metadata_df[metadata_df['set'] == set_name].sort_values('EER')\n", " print(sub[['model', 'type', 'n', 'EER', 'AUC_meta', 'Acc@EER']].round(4).to_string(index=False))\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:26.380513Z", "iopub.status.busy": "2026-04-22T16:23:26.380245Z", "iopub.status.idle": "2026-04-22T16:23:26.932646Z", "shell.execute_reply": "2026-04-22T16:23:26.930658Z" } }, "outputs": [], "source": [ "# ============================================================\n", "# M3. 
Majority-vote vs metadata divergence (dataset-level, model-free).\n", "# ============================================================\n", "# Palette taken from PNAS Figure 2A; Type 6 (morphs) gets a complementary purple.\n", "TYPE_PALETTE = {1: '#42598a', 2: '#6b94b4', 3: '#86a95c',\n", " 4: '#d4776e', 5: '#d293b5', 6: '#976fa7'}\n", "\n", "# Wilson 95% CI for a proportion\n", "def _wilson_ci(k, n, z=1.96):\n", " if n == 0: return (np.nan, np.nan)\n", " p = k / n\n", " denom = 1 + z*z/n\n", " center = (p + z*z/(2*n)) / denom\n", " half = z * np.sqrt(p*(1-p)/n + z*z/(4*n*n)) / denom\n", " return (max(0.0, center - half), min(1.0, center + half))\n", "\n", "div_rows = []\n", "for stype in sorted(df['stimuli_type'].unique()):\n", " sub = df[df['stimuli_type'] == stype].dropna(subset=['p_same_full'])\n", " if stype == 6:\n", " div_rows.append({'Type': stype, 'n': len(sub), 'metadata': 'N/A',\n", " 'agree_with_meta': np.nan, 'disagree_with_meta': np.nan,\n", " 'disagree_rate': np.nan, 'ci_lo': np.nan, 'ci_hi': np.nan,\n", " 'median_P(same)': sub['p_same_full'].median()})\n", " continue\n", " meta = metadata_map[stype]\n", " n_ag = int((sub['majority_full'] == meta).sum())\n", " n_dis = int((sub['majority_full'] != meta).sum())\n", " rate = n_dis / len(sub) if len(sub) else np.nan\n", " lo, hi = _wilson_ci(n_dis, len(sub))\n", " div_rows.append({'Type': stype, 'n': len(sub), 'metadata': meta,\n", " 'agree_with_meta': n_ag, 'disagree_with_meta': n_dis,\n", " 'disagree_rate': rate, 'ci_lo': lo, 'ci_hi': hi,\n", " 'median_P(same)': sub['p_same_full'].median()})\n", "div_df_table = pd.DataFrame(div_rows)\n", "print('Majority-vote vs metadata label, per stimulus type (with 95% Wilson CI):')\n", "print(div_df_table.round(3).to_string(index=False))\n", "\n", "sub_same = df[df['metadata_label'] == 1].dropna(subset=['p_same_full'])\n", "sub_diff = df[df['metadata_label'] == 0].dropna(subset=['p_same_full'])\n", "dis_same = (sub_same['majority_full'] == 0).mean()\n", "dis_diff = (sub_diff['majority_full'] == 1).mean()\n", "print(f'\\nOverall divergence (Types 1-5; Type 6 excluded):')\n", "print(f' metadata=same pairs -> majority=different on '\n", " f'{(sub_same[\"majority_full\"]==0).sum()} / {len(sub_same)} = {dis_same:.1%}')\n", "print(f' metadata=diff pairs -> majority=same on '\n", " f'{(sub_diff[\"majority_full\"]==1).sum()} / {len(sub_diff)} = {dis_diff:.1%}')\n", "\n", "# --- Figure: two panels, metadata-perception divergence at the dataset level ---\n", "fig, axes = plt.subplots(1, 2, figsize=(13, 4.5),\n", " gridspec_kw={'width_ratios': [1.5, 1]})\n", "\n", "# Panel A: violin of P(same) by stimulus type, with metadata reference marks\n", "ax = axes[0]\n", "data_plot = df.dropna(subset=['p_same_full']).copy()\n", "type_order = sorted(df['stimuli_type'].unique())\n", "violin_palette = [TYPE_PALETTE[t] for t in type_order]\n", "sns.violinplot(data=data_plot, x='stimuli_type', y='p_same_full',\n", " order=type_order, ax=ax, inner='quartile',\n", " palette=violin_palette, cut=0)\n", "META_LINE = '#333333'\n", "for i, stype in enumerate(type_order):\n", " if stype == 6: continue\n", " meta = metadata_map[stype]\n", " ax.plot([i - 0.38, i + 0.38], [meta, meta], color=META_LINE, linewidth=2.4,\n", " linestyle='--', zorder=10,\n", " label='metadata label' if i == 0 else None)\n", "type_labels_short = ['same\\nrecording', 'same spkr\\ndiff rec',\n", " 'same spkr\\nclone', 'diff spkrs\\nreal',\n", " 'diff spkrs\\nclone', 'morphs\\n(no metadata)']\n", "ax.set_xticks(range(len(type_order)))\n", 
"ax.set_xticklabels(type_labels_short)\n", "ax.set_xlabel('')\n", "ax.set_ylabel('Human P(same)')\n", "ax.set_ylim(-0.05, 1.05)\n", "ax.legend(loc='lower left', frameon=True)\n", "for spine in ('top', 'right'):\n", " ax.spines[spine].set_visible(False)\n", "\n", "# Panel B: disagreement rate with 95% Wilson CI, same per-type colors as Panel A\n", "ax = axes[1]\n", "div_valid = div_df_table[div_df_table['Type'] != 6].copy()\n", "bar_colors = [TYPE_PALETTE[int(t)] for t in div_valid['Type']]\n", "xpos = np.arange(len(div_valid))\n", "err_lo = (div_valid['disagree_rate'] - div_valid['ci_lo']).values\n", "err_hi = (div_valid['ci_hi'] - div_valid['disagree_rate']).values\n", "bars = ax.bar(xpos, div_valid['disagree_rate'].values,\n", " color=bar_colors, edgecolor='#444', linewidth=0.8,\n", " yerr=[err_lo, err_hi], capsize=5,\n", " error_kw={'ecolor': '#333', 'elinewidth': 1.2})\n", "ax.set_xticks(xpos)\n", "ax.set_xticklabels(['same\\nrecording', 'same spkr\\ndiff rec',\n", " 'same spkr\\nclone', 'diff spkrs\\nreal',\n", " 'diff spkrs\\nclone'])\n", "ax.set_xlabel('')\n", "ax.set_ylabel(r'Fraction where majority vote $\\neq$ metadata')\n", "ax.set_ylim(0, max(0.55, (div_valid['ci_hi'].max() if div_valid['ci_hi'].notna().any() else 0) + 0.05))\n", "for i, (val, hi) in enumerate(zip(div_valid['disagree_rate'].values,\n", " div_valid['ci_hi'].values)):\n", " y = (hi if not np.isnan(hi) else val) + 0.015\n", " ax.text(i, y, f'{val:.2f}', ha='center', va='bottom', fontsize=10)\n", "for spine in ('top', 'right'):\n", " ax.spines[spine].set_visible(False)\n", "\n", "plt.tight_layout()\n", "plt.savefig('manuscript/figures/metadata_divergence.png', dpi=200, bbox_inches='tight')\n", "plt.show()\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:26.935960Z", "iopub.status.busy": "2026-04-22T16:23:26.935646Z", "iopub.status.idle": "2026-04-22T16:23:27.628706Z", "shell.execute_reply": "2026-04-22T16:23:27.626510Z" } }, "outputs": [], "source": [ "# ============================================================\n", "# M4. 
Model-metadata (Set C) vs model-perception (Types 1-5).\n", "# 2D plot: X = AUC on metadata, Y = Pearson r against P(same).\n", "# X uses the metadata-selected embedding (df[f'{m}_cosim_meta']); Y uses the\n", "# main-benchmark best-for-perception embedding (df[f'{m}_cosim']), so the\n", "# Y axis matches what Section 3 reports.\n", "# ============================================================\n", "set_c = metadata_df[metadata_df['set'] == 'C: Types 1-5 (extended)'].set_index('model')\n", "\n", "sub_15 = df[df['stimuli_type'].isin([1, 2, 3, 4, 5])]\n", "perception_r = {}\n", "for m in available_models:\n", " col = f'{m}_cosim'\n", " v = sub_15.dropna(subset=[col, 'p_same_full'])\n", " if len(v) < 10:\n", " continue\n", " r, _ = pearsonr(v[col], v['p_same_full'])\n", " perception_r[m] = r\n", "\n", "TYPE_COLOR_M4 = {\n", " 'Supervised': '#2196F3',\n", " 'Self-supervised': '#FF9800',\n", " 'Weakly supervised': '#9C27B0',\n", "}\n", "fig, ax = plt.subplots(figsize=(8.8, 5.8))\n", "for m in available_models:\n", " if m not in set_c.index or m not in perception_r:\n", " continue\n", " x = set_c.loc[m, 'AUC_meta']\n", " y = perception_r[m]\n", " ptype = MODEL_INFO[m]['type']\n", " ax.scatter(x, y, s=140, color=TYPE_COLOR_M4[ptype], edgecolor='black',\n", " linewidth=1.2, zorder=5)\n", " ax.annotate(m, xy=(x, y), xytext=(7, 7), textcoords='offset points', fontsize=10)\n", "\n", "ax.set_xlabel('AUC on metadata labels (Set C, Types 1\u20135)')\n", "ax.set_ylabel('Pearson r with human P(same) (Types 1\u20135)')\n", "from matplotlib.patches import Patch\n", "ax.legend(handles=[Patch(color=TYPE_COLOR_M4[t], label=t) for t in TYPE_COLOR_M4],\n", " loc='lower right', frameon=True)\n", "for spine in ('top', 'right'):\n", " ax.spines[spine].set_visible(False)\n", "plt.tight_layout()\n", "plt.savefig('manuscript/figures/metadata_vs_perception.png', dpi=200, bbox_inches='tight')\n", "plt.show()\n", "\n", "cmp_rows = []\n", "for m in available_models:\n", " if m not in set_c.index or m not in perception_r:\n", " continue\n", " cmp_rows.append({\n", " 'model': m, 'type': MODEL_INFO[m]['type'],\n", " 'EER_C': set_c.loc[m, 'EER'], 'AUC_meta_C': set_c.loc[m, 'AUC_meta'],\n", " 'Pearson_r_perc': perception_r[m],\n", " })\n", "cmp_df = pd.DataFrame(cmp_rows)\n", "print('Model rankings under metadata vs perception:')\n", "print('\\nSorted by metadata AUC (desc):')\n", "print(cmp_df.sort_values('AUC_meta_C', ascending=False).round(4).to_string(index=False))\n", "print('\\nSorted by perception Pearson r (desc):')\n", "print(cmp_df.sort_values('Pearson_r_perc', ascending=False).round(4).to_string(index=False))\n", "rho, _ = spearmanr(cmp_df['AUC_meta_C'], cmp_df['Pearson_r_perc'])\n", "print(f'\\nSpearman rank correlation (metadata AUC vs perception r): {rho:.4f}')\n" ] }, { "cell_type": "markdown", "id": "part_II_divider", "metadata": {}, "source": [ "---\n", "# Part II \u2014 Core Benchmark Tasks\n", "\n", "Three complementary tasks evaluate whether models predict human voice-identity perception: a continuous regression task (P(same) prediction), a binary verification task with calibration (AUC + Platt-scaled ECE), and a representational-structure task at the speaker-pair level (RDM with Mantel test). Together they probe discrimination, confidence calibration, and population-level structure."
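] }, { "cell_type": "markdown", "id": "part_II_mantel_note", "metadata": {}, "source": [ "*Orientation sketch (illustration only):* a minimal Mantel permutation test of the kind named above for Task 3, run on toy matrices. Section 5's actual implementation may differ in its correlation statistic, permutation count, and RDM construction; the helper name `mantel_test_sketch` and the toy inputs below are illustrative, not part of the benchmark code.\n" ] }, { "cell_type": "code", "execution_count": null, "id": "part_II_mantel_sketch", "metadata": {}, "outputs": [], "source": [ "# Minimal Mantel test sketch on toy symmetric 'RDMs' (illustration only).\n", "# Correlate the upper triangles of two distance matrices, then build a null\n", "# distribution by jointly permuting the rows/columns of one matrix.\n", "def mantel_test_sketch(D1, D2, n_perm=999, seed=0):\n", "    _rng = np.random.default_rng(seed)\n", "    iu = np.triu_indices(D1.shape[0], k=1)\n", "    r_obs, _ = pearsonr(D1[iu], D2[iu])\n", "    r_perm = np.empty(n_perm)\n", "    for b in range(n_perm):\n", "        p = _rng.permutation(D1.shape[0])\n", "        r_perm[b], _ = pearsonr(D1[iu], D2[np.ix_(p, p)][iu])\n", "    return r_obs, (np.sum(r_perm >= r_obs) + 1) / (n_perm + 1)\n", "\n", "# Toy 8x8 symmetric matrices with zero diagonals\n", "_t = np.random.default_rng(1).random((8, 8))\n", "A_toy = (_t + _t.T) / 2; np.fill_diagonal(A_toy, 0)\n", "B_toy = A_toy + 0.1 * np.random.default_rng(2).standard_normal((8, 8))\n", "B_toy = (B_toy + B_toy.T) / 2; np.fill_diagonal(B_toy, 0)\n", "r_toy, p_toy = mantel_test_sketch(A_toy, B_toy)\n", "print(f'toy Mantel r = {r_toy:.3f}, permutation p = {p_toy:.3f}')"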
] }, { "cell_type": "markdown", "id": "cell_015", "metadata": {}, "source": [ "---\n", "## Section 3: Task 1 -- Predict Human P(same)\n", "\n", "**Task definition:** For each of the 9,800 pairs, the model provides a cosine similarity score, and humans provide P(same). We ask: how well does the model's continuous similarity score predict the continuous human judgment?\n", "\n", "**Metrics:**\n", "- **Pearson r:** Linear correlation between cosine similarity and P(same). Measures how well a linear function of model similarity predicts human perception.\n", "- **Spearman rho:** Rank correlation. Measures whether the model correctly orders pairs by perceived similarity, regardless of the functional form.\n", "- **R-squared:** Proportion of variance in P(same) explained by cosine similarity: $R^2 = r^2$.\n", "\n", "**Bootstrap confidence intervals** (1,000 resamples of pairs) quantify the precision of each estimate.\n", "\n", "**Per-stimulus-type breakdown** reveals whether models perform uniformly or have systematic blind spots (e.g., good on real speech but poor on clones).\n", "\n", "**Full vs. Qualified comparison** tests whether participant filtering affects the results -- if it does, it suggests that low-engagement participants add noise that inflates or deflates alignment." ] }, { "cell_type": "code", "execution_count": null, "id": "cell_016", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:27.633418Z", "iopub.status.busy": "2026-04-22T16:23:27.633018Z", "iopub.status.idle": "2026-04-22T16:23:27.711885Z", "shell.execute_reply": "2026-04-22T16:23:27.709800Z" } }, "outputs": [], "source": [ "# Overall correlation: each model's cosine similarity vs human P(same)\n", "def compute_correlations(df, model_name, target_col='p_same_full'):\n", " col = f'{model_name}_cosim'\n", " valid = df.dropna(subset=[col, target_col])\n", " if len(valid) < 10:\n", " return {'pearson_r': np.nan, 'spearman_rho': np.nan, 'r_squared': np.nan, 'n': 0}\n", " pr, pp = pearsonr(valid[col], valid[target_col])\n", " sr, sp = spearmanr(valid[col], valid[target_col])\n", " return {'pearson_r': pr, 'spearman_rho': sr, 'r_squared': pr**2, 'n': len(valid)}\n", "\n", "results_overall = []\n", "for model_name in available_models:\n", " corrs = compute_correlations(df, model_name, 'p_same_full')\n", " corrs['model'] = model_name\n", " corrs['model_type'] = MODEL_INFO[model_name]['type']\n", " results_overall.append(corrs)\n", "\n", "results_df = pd.DataFrame(results_overall).sort_values('pearson_r', ascending=False)\n", "print('Overall model-human alignment (full 1,290-participant dataset):')\n", "print(results_df[['model', 'model_type', 'pearson_r', 'spearman_rho', 'r_squared', 'n']].round(4).to_string(index=False))" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_017", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:27.715155Z", "iopub.status.busy": "2026-04-22T16:23:27.714865Z", "iopub.status.idle": "2026-04-22T16:23:32.710602Z", "shell.execute_reply": "2026-04-22T16:23:32.708484Z" } }, "outputs": [], "source": [ "# Bootstrap Pearson r with 95% CIs for each model (full dataset)\n", "N_BOOT = 1000\n", "boot_results = {}\n", "\n", "for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " valid = df.dropna(subset=[col, 'p_same_full'])\n", " x = valid[col].values\n", " y = valid['p_same_full'].values\n", "\n", " boot_r = np.zeros(N_BOOT)\n", " for b in range(N_BOOT):\n", " idx = rng.integers(0, len(x), size=len(x))\n", " boot_r[b], _ = 
pearsonr(x[idx], y[idx])\n", "\n", " ci_lo, ci_hi = np.percentile(boot_r, [2.5, 97.5])\n", " boot_results[model_name] = {'mean': np.mean(boot_r), 'ci_lo': ci_lo, 'ci_hi': ci_hi}\n", "\n", "# Bar chart with CIs -- three paradigm colors\n", "TYPE_COLOR = {\n", " 'Supervised': '#2196F3', # blue\n", " 'Self-supervised': '#FF9800', # orange\n", " 'Weakly supervised': '#9C27B0', # purple\n", "}\n", "fig, ax = plt.subplots(figsize=(10, 5))\n", "models_sorted = sorted(available_models, key=lambda m: boot_results[m]['mean'], reverse=True)\n", "colors = [TYPE_COLOR[MODEL_INFO[m]['type']] for m in models_sorted]\n", "means = [boot_results[m]['mean'] for m in models_sorted]\n", "errs_lo = [boot_results[m]['mean'] - boot_results[m]['ci_lo'] for m in models_sorted]\n", "errs_hi = [boot_results[m]['ci_hi'] - boot_results[m]['mean'] for m in models_sorted]\n", "\n", "ax.barh(range(len(models_sorted)), means, xerr=[errs_lo, errs_hi],\n", " color=colors, capsize=4, edgecolor='white')\n", "ax.set_yticks(range(len(models_sorted)))\n", "DISPLAY_NAME = {'rawnet3': 'RawNet3', 'ecapa_tdnn': 'ECAPA-TDNN',\n", " 'titanet': 'TitaNet', 'resemblyzer': 'resemblyzer',\n", " 'xvector': 'x-vector', 'wav2vec2': 'wav2vec 2.0',\n", " 'hubert': 'HuBERT', 'wavlm': 'WavLM',\n", " 'whisper': 'Whisper', 'xlsr': 'XLS-R'}\n", "ax.set_yticklabels([DISPLAY_NAME[m] for m in models_sorted])\n", "ax.set_xlabel('Pearson r with Human P(same)')\n", "\n", "from matplotlib.patches import Patch\n", "ax.legend(handles=[Patch(color=TYPE_COLOR['Supervised'], label='Supervised'),\n", " Patch(color=TYPE_COLOR['Self-supervised'], label='Self-supervised'),\n", " Patch(color=TYPE_COLOR['Weakly supervised'], label='Weakly supervised')],\n", " loc='lower right')\n", "ax.invert_yaxis()\n", "plt.tight_layout()\n", "import os\n", "os.makedirs('manuscript/figures', exist_ok=True)\n", "plt.savefig('manuscript/figures/pearson_bar.png', dpi=200, bbox_inches='tight')\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_019", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:32.713817Z", "iopub.status.busy": "2026-04-22T16:23:32.713501Z", "iopub.status.idle": "2026-04-22T16:23:32.990689Z", "shell.execute_reply": "2026-04-22T16:23:32.988669Z" } }, "outputs": [], "source": [ "# Per-stimulus-type Pearson r\n", "type_corr_rows = []\n", "for model_name in available_models:\n", " for stype in sorted(df['stimuli_type'].unique()):\n", " sub = df[df['stimuli_type'] == stype]\n", " corrs = compute_correlations(sub, model_name, 'p_same_full')\n", " type_corr_rows.append({\n", " 'model': model_name,\n", " 'type': stype,\n", " 'pearson_r': corrs['pearson_r'],\n", " 'n': corrs['n']\n", " })\n", "\n", "type_corr_df = pd.DataFrame(type_corr_rows)\n", "pivot = type_corr_df.pivot(index='model', columns='type', values='pearson_r').round(3)\n", "pivot.columns = [f'Type {c}' for c in pivot.columns]\n", "print('Pearson r by stimulus type:')\n", "pivot" ] }, { "cell_type": "markdown", "id": "cell_021", "metadata": {}, "source": [ "---\n", "## Section 4: Task 2 -- Human-Aligned Verification (Binary)\n", "\n", "**Task definition:** Convert the continuous P(same) to a binary label via human majority vote: a pair is labeled \"same\" if P(same) > 0.5, and \"different\" otherwise. Pairs with exactly P(same) = 0.5 are excluded. 
We then evaluate each model as a binary classifier using its cosine similarity as the decision score.\n", "\n", "**Metrics:**\n", "- **AUC (Area Under ROC Curve):** The probability that the model assigns a higher cosine similarity to a randomly chosen \"same\" pair than to a randomly chosen \"different\" pair. AUC = 0.5 is chance, 1.0 is perfect.\n", "- **Accuracy at optimal threshold:** Using Youden's J statistic ($J = \\text{TPR} - \\text{FPR}$), we find the cosine similarity threshold that maximizes the sum of sensitivity and specificity.\n", "- **Expected Calibration Error (ECE):** Cosine similarity is not a probability (range [-1, 1], and in practice concentrated near the high end), so comparing it directly to empirical agreement conflates *representation quality* with *scale*. We therefore compute ECE in two ways:\n", " 1. **ECE_raw**: raw $|\\overline{\\text{cosim}}_b - \\overline{P(\\text{same})}_b|$ within cosine-sim bins. Penalizes any scale mismatch.\n", " 2. **ECE_calibrated**: we first fit **Platt scaling** (logistic regression $P(\\text{same}) \\approx \\sigma(a \\cdot \\text{cosim} + b)$) using 10-fold speaker-level cross-validation, then compute standard ECE on the calibrated probabilities. This isolates calibration quality from scale.\n", "\n", "$$\\text{ECE} = \\sum_{b=1}^{B} \\frac{n_b}{N} \\left| \\overline{p}_b - \\overline{y}_b \\right|$$\n", "\n", "where $\\overline{p}_b$ is the mean predicted probability and $\\overline{y}_b$ is the empirical label frequency in bin $b$." ] }, { "cell_type": "code", "execution_count": null, "id": "cell_022", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:32.994050Z", "iopub.status.busy": "2026-04-22T16:23:32.993700Z", "iopub.status.idle": "2026-04-22T16:23:33.675122Z", "shell.execute_reply": "2026-04-22T16:23:33.672501Z" } }, "outputs": [], "source": [ "# ROC curves and AUC\n", "# Exclude pairs where exactly 50% said same (ties)\n", "df_binary = df[(df['p_same_full'] != 0.5) & df['p_same_full'].notna()].copy()\n", "y_true = df_binary['majority_full'].values\n", "\n", "DISPLAY_NAME = {'rawnet3': 'RawNet3', 'ecapa_tdnn': 'ECAPA-TDNN',\n", " 'titanet': 'TitaNet', 'resemblyzer': 'resemblyzer',\n", " 'xvector': 'x-vector', 'wav2vec2': 'wav2vec 2.0',\n", " 'hubert': 'HuBERT', 'wavlm': 'WavLM',\n", " 'whisper': 'Whisper', 'xlsr': 'XLS-R'}\n", "fig, ax = plt.subplots(figsize=(8, 7))\n", "auc_results = {}\n", "\n", "for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " valid = df_binary.dropna(subset=[col])\n", " if len(valid) < 10:\n", " continue\n", " y = valid['majority_full'].values\n", " scores = valid[col].values\n", " fpr, tpr, thresholds = roc_curve(y, scores)\n", " roc_auc = auc(fpr, tpr)\n", " auc_results[model_name] = roc_auc\n", " \n", " # Optimal threshold (Youden's J)\n", " j_scores = tpr - fpr\n", " opt_idx = np.argmax(j_scores)\n", " opt_thresh = thresholds[opt_idx]\n", " y_pred = (scores >= opt_thresh).astype(int)\n", " acc = accuracy_score(y, y_pred)\n", " \n", " label = f'{DISPLAY_NAME[model_name]} (AUC={roc_auc:.3f})'\n", " ax.plot(fpr, tpr, label=label, linewidth=1.5)\n", "\n", "ax.plot([0, 1], [0, 1], 'k--', alpha=0.3)\n", "ax.set_xlabel('False Positive Rate')\n", "ax.set_ylabel('True Positive Rate')\n", "ax.legend(fontsize=8, loc='lower right')\n", "plt.tight_layout()\n", "plt.savefig('manuscript/figures/roc_curves.png', dpi=200, bbox_inches='tight')\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_023", "metadata": { "execution": { "iopub.execute_input": 
"2026-04-22T16:23:33.678588Z", "iopub.status.busy": "2026-04-22T16:23:33.678192Z", "iopub.status.idle": "2026-04-22T16:23:36.655011Z", "shell.execute_reply": "2026-04-22T16:23:36.652827Z" } }, "outputs": [], "source": [ "# Calibration analysis with Platt scaling and standard ECE (cached)\n", "\n", "N_BINS = 10\n", "\n", "# Build the same 10 folds as elsewhere\n", "all_speakers = sorted(df['reference'].unique())\n", "_male = [s for s in all_speakers if s.startswith('M')]\n", "_female = [s for s in all_speakers if s.startswith('F')]\n", "_rng_cal = np.random.default_rng(SEED)\n", "_rng_cal.shuffle(_male); _rng_cal.shuffle(_female)\n", "folds = [[] for _ in range(10)]\n", "for i, s in enumerate(_male): folds[i % 10].append(s)\n", "for i, s in enumerate(_female): folds[i % 10].append(s)\n", "\n", "def platt_calibrate_cv(scores, labels, speakers, folds):\n", " probs = np.full_like(scores, np.nan, dtype=float)\n", " for fold in folds:\n", " test_mask = np.isin(speakers, fold)\n", " train_mask = ~test_mask\n", " if train_mask.sum() < 10 or test_mask.sum() < 1:\n", " continue\n", " lr = LogisticRegression(max_iter=1000)\n", " lr.fit(scores[train_mask].reshape(-1, 1), labels[train_mask])\n", " probs[test_mask] = lr.predict_proba(scores[test_mask].reshape(-1, 1))[:, 1]\n", " return probs\n", "\n", "def _compute_calibration():\n", " ece_r = {}\n", " ece_c = {}\n", " probs_all = {}\n", " for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " valid = df.dropna(subset=[col, 'p_same_full', 'majority_full']).copy()\n", " if len(valid) < 10:\n", " continue\n", " # Raw ECE\n", " valid['bin_raw'] = pd.qcut(valid[col], N_BINS, duplicates='drop')\n", " cal_raw = valid.groupby('bin_raw', observed=True).agg(\n", " mean_score=(col, 'mean'),\n", " mean_psame=('p_same_full', 'mean'),\n", " count=('id', 'count'))\n", " ece_r[model_name] = (cal_raw['count'] / cal_raw['count'].sum() *\n", " (cal_raw['mean_psame'] - cal_raw['mean_score']).abs()).sum()\n", " # Platt\n", " scores = valid[col].values\n", " labels = valid['majority_full'].values.astype(int)\n", " spk_arr = valid['reference'].values\n", " probs = platt_calibrate_cv(scores, labels, spk_arr, folds)\n", " valid['prob_calibrated'] = probs\n", " v2 = valid.dropna(subset=['prob_calibrated'])\n", " probs_all[model_name] = v2[['id', 'prob_calibrated']].set_index('id')\n", " v2 = v2.copy()\n", " v2['bin_cal'] = pd.qcut(v2['prob_calibrated'], N_BINS, duplicates='drop')\n", " cal_std = v2.groupby('bin_cal', observed=True).agg(\n", " mean_prob=('prob_calibrated', 'mean'),\n", " mean_label=('majority_full', 'mean'),\n", " count=('id', 'count'))\n", " ece_c[model_name] = (cal_std['count'] / cal_std['count'].sum() *\n", " (cal_std['mean_label'] - cal_std['mean_prob']).abs()).sum()\n", " return {'ece_raw': ece_r, 'ece_cal': ece_c, 'probs': probs_all}\n", "\n", "cal_result = cached('calibration', _compute_calibration, 'Platt scaling + ECE for all models')\n", "ece_raw_results = cal_result['ece_raw']\n", "ece_results = cal_result['ece_cal']\n", "calibrated_prob = cal_result['probs']\n", "\n", "# Plot calibrated reliability diagrams\n", "n_models = len(available_models)\n", "ncols = min(5, n_models)\n", "nrows = (n_models + ncols - 1) // ncols\n", "DISPLAY_NAME = {'rawnet3': 'RawNet3', 'ecapa_tdnn': 'ECAPA-TDNN',\n", " 'titanet': 'TitaNet', 'resemblyzer': 'resemblyzer',\n", " 'xvector': 'x-vector', 'wav2vec2': 'wav2vec 2.0',\n", " 'hubert': 'HuBERT', 'wavlm': 'WavLM',\n", " 'whisper': 'Whisper', 'xlsr': 'XLS-R'}\n", "fig, axes = plt.subplots(nrows, ncols, 
figsize=(3.5 * ncols, 3.5 * nrows))\n", "axes = np.atleast_1d(axes).flatten()\n", "\n", "for idx, model_name in enumerate(available_models):\n", " ax = axes[idx]\n", " if model_name not in calibrated_prob:\n", " ax.set_visible(False); continue\n", " valid = df.merge(calibrated_prob[model_name], left_on='id', right_index=True, how='inner')\n", " valid = valid.dropna(subset=['prob_calibrated', 'majority_full']).copy()\n", " valid['bin'] = pd.qcut(valid['prob_calibrated'], N_BINS, duplicates='drop')\n", " cal = valid.groupby('bin', observed=True).agg(\n", " mean_prob=('prob_calibrated', 'mean'),\n", " mean_label=('majority_full', 'mean'))\n", " ax.plot([0, 1], [0, 1], 'k--', alpha=0.3)\n", " ax.plot(cal['mean_prob'], cal['mean_label'], 'o-', color='#1565C0', markersize=5)\n", " ax.set_xlabel('Predicted prob (Platt)'); ax.set_ylabel('Empirical freq')\n", " ax.set_xlim(0, 1); ax.set_ylim(0, 1)\n", " ax.set_title(DISPLAY_NAME[model_name], fontsize=10)\n", "\n", "for j in range(len(available_models), len(axes)):\n", " axes[j].set_visible(False)\n", "plt.tight_layout()\n", "plt.savefig('manuscript/figures/calibration_1.png', dpi=200, bbox_inches='tight')\n", "plt.show()\n", "\n", "ece_comp = pd.DataFrame([\n", " {'model': m, 'type': MODEL_INFO[m]['type'],\n", " 'ECE_raw (cosim vs P(same))': ece_raw_results.get(m, np.nan),\n", " 'ECE_calibrated (Platt prob)': ece_results.get(m, np.nan)}\n", " for m in available_models\n", "]).sort_values('ECE_calibrated (Platt prob)')\n", "print('Raw vs Platt-calibrated ECE:')\n", "print(ece_comp.round(4).to_string(index=False))" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_024", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:36.657872Z", "iopub.status.busy": "2026-04-22T16:23:36.657601Z", "iopub.status.idle": "2026-04-22T16:23:36.721538Z", "shell.execute_reply": "2026-04-22T16:23:36.718999Z" } }, "outputs": [], "source": [ "# Summary table: AUC, accuracy, raw ECE, calibrated ECE per model\n", "verification_rows = []\n", "for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " valid = df_binary.dropna(subset=[col])\n", " if len(valid) < 10:\n", " continue\n", " y = valid['majority_full'].values\n", " scores = valid[col].values\n", " \n", " fpr, tpr, thresholds = roc_curve(y, scores)\n", " roc_auc = auc(fpr, tpr)\n", " opt_idx = np.argmax(tpr - fpr)\n", " opt_thresh = thresholds[opt_idx]\n", " y_pred = (scores >= opt_thresh).astype(int)\n", " acc = accuracy_score(y, y_pred)\n", " \n", " verification_rows.append({\n", " 'model': model_name,\n", " 'type': MODEL_INFO[model_name]['type'],\n", " 'AUC': roc_auc,\n", " 'Acc@opt': acc,\n", " 'Threshold': opt_thresh,\n", " 'ECE_raw': ece_raw_results.get(model_name, np.nan),\n", " 'ECE_calibrated': ece_results.get(model_name, np.nan)\n", " })\n", "\n", "verif_df = pd.DataFrame(verification_rows).sort_values('AUC', ascending=False)\n", "print('Human-Aligned Verification Results (Platt-scaled ECE):')\n", "print(verif_df.round(4).to_string(index=False))" ] }, { "cell_type": "markdown", "id": "cell_025", "metadata": {}, "source": [ "---\n", "## Section 5: Task 3 -- Speaker-Level Representational Similarity Analysis\n", "\n", "**Motivation:** Pair-level Spearman rank correlation (Section 3) is mathematically identical to a naive RSA over the same 9,800 pairs, because `spearmanr(1-y, 1-x) = spearmanr(y, x)`. 
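A quick numerical check of this identity on synthetic data:\n", "\n", "```python\n", "import numpy as np\n", "from scipy.stats import spearmanr\n", "\n", "rng = np.random.default_rng(0)\n", "y, x = rng.random(100), rng.random(100)\n", "r1, _ = spearmanr(1 - y, 1 - x)\n", "r2, _ = spearmanr(y, x)\n", "assert np.isclose(r1, r2)  # reflecting both variables preserves the rank correlation\n", "```\n", "\n", "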
To get a genuinely different analysis, we aggregate to the **speaker-pair level**: for each ordered pair of speakers $(A, B)$ we compute a mean P(same) and mean cosine similarity across trials. The benchmark only contains between-speaker pairs within the same sociophonetic group, so off-diagonal cells are populated for within-group (A, B) pairs only. Diagonal cells (ref = comp) are excluded because same-speaker cells (dissim \u2248 0.1\u20130.4) sit in a different range from different-speaker cells (dissim \u2248 0.7\u20130.9) in both the human and the model RDM; including them would inflate the Spearman correlation via this cluster gap and mask the between-speaker geometry we want to measure.\n", "\n", "**Stimulus-type selection.** Off-diagonal cells can be populated from Types 4, 5, and/or 6. We use **only Types 4 and 5** (real and cloned different-speaker, 1 stimulus each per cell = 2 stimuli per cell). Type 6 (morphed) stimuli are excluded because a morph at scale 0.5 is neither $A$ nor $B$, so the \"comp speaker $B$\" label does not cleanly apply; aggregating Type 6 into cell $[A, B]$ would conflate speaker-pair geometry with morph-trajectory geometry. Morph-trajectory alignment is characterized separately via the scale-by-scale Type 6 curves (supplementary figure).\n", "\n", "**Human RDM:** $\\text{RDM}_{\\text{human}}[A, B] = 1 - \\text{mean}_{\\text{Types 4-5}} P(\\text{same} \\mid \\text{ref}=A, \\text{comp}=B)$\n", "\n", "**Model RDM:** $\\text{RDM}_{\\text{model}}[A, B] = 1 - \\text{mean}_{\\text{Types 4-5}} \\text{cosim}(\\mathbf{e}_{\\text{ref}}, \\mathbf{e}_{\\text{comp}})$\n", "\n", "Each RDM is a 400-entry vector (within-group off-diagonal cells). We compute the **Spearman correlation** between the human and model RDMs. Significance is assessed with a **Mantel permutation test** (5,000 permutations of the model-side vector; two-sided p-value).\n", "\n", "**Why this is different from Section 3:** Section 3 operates at the level of individual stimulus pairs. Section 5 collapses within-pair variation to the speaker-pair level and asks whether the model captures the coarse population map of within-group speakers. SSL models are expected to be weaker here because their coarse speaker structure was never explicitly trained." ] }, { "cell_type": "code", "execution_count": null, "id": "cell_026", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:36.724546Z", "iopub.status.busy": "2026-04-22T16:23:36.724276Z", "iopub.status.idle": "2026-04-22T16:23:36.901388Z", "shell.execute_reply": "2026-04-22T16:23:36.899533Z" } }, "outputs": [], "source": [ "# Speaker-level RDM from Types 4+5 off-diagonal (different-speaker real + clone).\n", "# Type 6 (morphed) is excluded: at scale 0.5 a morph is neither A nor B, so the\n", "# speaker-pair identity is ill-defined for those stimuli. 
Diagonal cells (ref == comp)\n", "# are excluded because all models trivially recover self-similarity.\n", "\n", "def parse_comp_speaker(row):\n", " \"\"\"Return the comparison speaker for each stimulus.\"\"\"\n", " if row['stimuli_type'] in [1, 2, 3]:\n", " return row['reference']\n", " if row['stimuli_type'] in [4, 5]:\n", " comp = str(row['comparison'])\n", " return comp if comp not in ['nan', 'None', ''] else row['reference']\n", " if row['stimuli_type'] == 6:\n", " parts = str(row['id']).split('_')\n", " if len(parts) >= 3:\n", " return parts[2][:3]\n", " return row['reference']\n", "\n", "df['comp_speaker'] = df.apply(parse_comp_speaker, axis=1)\n", "print(f\"Unique reference speakers: {df['reference'].nunique()}\")\n", "print(f\"Unique comparison speakers: {df['comp_speaker'].nunique()}\")\n", "\n", "cosim_cols = [f'{m}_cosim' for m in available_models]\n", "agg_spec = {'p_same_full': 'mean'}\n", "for c in cosim_cols:\n", " agg_spec[c] = 'mean'\n", "\n", "off_diag_mask = df['reference'] != df['comp_speaker']\n", "rdm_df = (df[off_diag_mask & df['stimuli_type'].isin([4, 5])]\n", " .groupby(['reference', 'comp_speaker'])\n", " .agg(agg_spec)\n", " .reset_index())\n", "\n", "print(f'\\nRDM entries: {len(rdm_df)} (Types 4+5 off-diagonal, within-group)')\n", "rdm_df.head()" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_027", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:23:36.904534Z", "iopub.status.busy": "2026-04-22T16:23:36.904225Z", "iopub.status.idle": "2026-04-22T16:24:02.348391Z", "shell.execute_reply": "2026-04-22T16:24:02.346056Z" } }, "outputs": [], "source": [ "# Mantel test: permute the model-side vector and compute empirical p-value.\n", "\n", "def mantel_test(x, y, n_perm=5000, seed=SEED):\n", " rng_m = np.random.default_rng(seed)\n", " observed, _ = spearmanr(x, y)\n", " null = np.zeros(n_perm)\n", " y_perm = y.copy()\n", " for i in range(n_perm):\n", " rng_m.shuffle(y_perm)\n", " null[i], _ = spearmanr(x, y_perm)\n", " p = (np.abs(null) >= np.abs(observed)).mean()\n", " return observed, p\n", "\n", "# Speaker-level RDM correlations (Types 4+5 off-diagonal only)\n", "speaker_rsa = []\n", "for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " if col not in rdm_df.columns:\n", " continue\n", " valid = rdm_df[col].notna() & rdm_df['p_same_full'].notna()\n", " h = 1 - rdm_df.loc[valid, 'p_same_full'].values\n", " mv = 1 - rdm_df.loc[valid, col].values\n", " rho, p = mantel_test(h, mv, n_perm=5000)\n", " speaker_rsa.append({\n", " 'model': model_name,\n", " 'type': MODEL_INFO[model_name]['type'],\n", " 'rsa_rho': rho,\n", " 'mantel_p': p,\n", " 'n_pairs': int(valid.sum())\n", " })\n", "\n", "rsa_df = pd.DataFrame(speaker_rsa).sort_values('rsa_rho', ascending=False)\n", "print('Speaker-level RDM (Types 4+5, within-group different-speaker geometry):')\n", "print(rsa_df.round(4).to_string(index=False))" ] }, { "cell_type": "markdown", "id": "part_III_divider", "metadata": {}, "source": [ "---\n", "# Part III \u2014 Stimulus-Level Analyses\n", "\n", "These sections examine where model-human alignment varies across the stimulus space. Section 6 asks whether predictors fit on real speech transfer to synthetic speech. Section 7 asks whether models can predict when humans will disagree with each other." ] }, { "cell_type": "markdown", "id": "cell_028", "metadata": {}, "source": [ "---\n", "## Section 6: Per-Stimulus-Type Analysis\n", "\n", "**Motivation:** Current speaker verification benchmarks evaluate only on real speech. 
But voice cloning introduces synthetic stimuli that may behave differently in embedding space. This section asks: **do models generalize from real speech to synthetic speech?**\n", "\n", "**Cross-type transfer experiment:** For each model, we:\n", "1. Fit a linear regression on real-speech pairs only (Types 1, 2, 4): $P(\text{same}) \sim \beta_0 + \beta_1 \cdot \text{cosim}$\n", "2. Use this fitted model to predict P(same) on synthetic pairs (Types 3, 5, 6)\n", "3. Compare the prediction quality to a model fitted on all types\n", "\n", "If the transfer gap is zero, the relationship between model similarity and human perception is the same for real and synthetic speech. A positive gap would mean the model needs exposure to synthetic data to predict human perception of it.\n", "\n", "**Type 6 analysis:** The blended voices provide a unique continuous manipulation. Each blend has a \"scale\" parameter (0 = fully Speaker A, 100 = fully Speaker B). We plot both model cosine similarity and human P(same) as a function of this scale to visualize whether models track the same identity gradient that humans perceive." ] }, { "cell_type": "code", "execution_count": null, "id": "cell_030", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:02.351352Z", "iopub.status.busy": "2026-04-22T16:24:02.351075Z", "iopub.status.idle": "2026-04-22T16:24:02.466448Z", "shell.execute_reply": "2026-04-22T16:24:02.464293Z" } }, "outputs": [], "source": [ "# Cross-stimulus-type transfer: fit a linear predictor on REAL speech, test on SYNTHETIC.\n", "# Evaluate with MSE and raw R^2 (1 - MSE/Var) on predicted values -- NOT Pearson r,\n", "# which is invariant to affine transformations and would trivially show zero transfer gap.\n", "\n", "from sklearn.linear_model import LinearRegression\n", "from sklearn.metrics import mean_squared_error\n", "\n", "real_types = [1, 2, 4]\n", "synth_types = [3, 5, 6]\n", "\n", "transfer_results = []\n", "\n", "for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " valid = df.dropna(subset=[col, 'p_same_full']).copy()\n", " real_data = valid[valid['stimuli_type'].isin(real_types)]\n", " synth_data = valid[valid['stimuli_type'].isin(synth_types)]\n", " \n", " if len(real_data) < 50 or len(synth_data) < 50:\n", " continue\n", " \n", " # Fit linear predictor on REAL, predict P(same) on SYNTHETIC\n", " X_real = real_data[[col]].values\n", " y_real = real_data['p_same_full'].values\n", " X_synth = synth_data[[col]].values\n", " y_synth = synth_data['p_same_full'].values\n", " \n", " lr_real = LinearRegression().fit(X_real, y_real)\n", " pred_synth_from_real = lr_real.predict(X_synth)\n", " \n", " # Also fit on synth itself (oracle) for comparison\n", " lr_synth = LinearRegression().fit(X_synth, y_synth)\n", " pred_synth_from_synth = lr_synth.predict(X_synth)\n", " \n", " # MSE and R^2 on RAW values (not Pearson, so slope/intercept matter)\n", " mse_transfer = mean_squared_error(y_synth, pred_synth_from_real)\n", " mse_oracle = mean_squared_error(y_synth, pred_synth_from_synth)\n", " \n", " # R^2 = 1 - MSE / Var(y)\n", " var_y_synth = np.var(y_synth)\n", " r2_transfer = 1 - mse_transfer / var_y_synth\n", " r2_oracle = 1 - mse_oracle / var_y_synth\n", " \n", " # Also report the fitted parameters to see the actual difference\n", " intercept_real, slope_real = lr_real.intercept_, lr_real.coef_[0]\n", " intercept_synth, slope_synth = lr_synth.intercept_, lr_synth.coef_[0]\n", " \n", " transfer_results.append({\n", " 'model': model_name,\n", " 
'R2_transfer': r2_transfer,\n", " 'R2_oracle': r2_oracle,\n", " 'R2_gap': r2_oracle - r2_transfer,\n", " 'MSE_transfer': mse_transfer,\n", " 'MSE_oracle': mse_oracle,\n", " 'slope_real': slope_real,\n", " 'slope_synth': slope_synth,\n", " 'intercept_real': intercept_real,\n", " 'intercept_synth': intercept_synth\n", " })\n", "\n", "transfer_df = pd.DataFrame(transfer_results).sort_values('R2_gap', ascending=False)\n", "print('Cross-type transfer (REAL->SYNTH) with proper MSE/R^2 metrics:')\n", "print('Columns:')\n", "print(' R2_transfer: R^2 on synthetic using linear map fit on REAL')\n", "print(' R2_oracle: R^2 on synthetic using linear map fit on SYNTHETIC itself (oracle)')\n", "print(' R2_gap: difference -- positive means real->synth transfer is LOSSY')\n", "print(' slope/intercept: reveal whether the real and synth mappings differ')\n", "print()\n", "print(transfer_df[['model', 'R2_transfer', 'R2_oracle', 'R2_gap', 'slope_real', 'slope_synth', 'intercept_real', 'intercept_synth']].round(4).to_string(index=False))" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_031", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:02.468963Z", "iopub.status.busy": "2026-04-22T16:24:02.468705Z", "iopub.status.idle": "2026-04-22T16:24:03.092480Z", "shell.execute_reply": "2026-04-22T16:24:03.090474Z" } }, "outputs": [], "source": [ "# Type 6 special analysis: P(same) vs scale (human) and cosine sim vs scale (models).\n", "# Output two figures:\n", "# - type6_human.png : human P(same) vs scale (standalone)\n", "# - type6_models.png : model cosine vs scale, RAW (left) + min-max normalized (right)\n", "type6 = df[df['stimuli_type'] == 6].dropna(subset=['scale']).copy()\n", "\n", "PARADIGM_COLOR = {'Supervised': '#2196F3', 'Self-supervised': '#FF9800', 'Weakly supervised': '#9C27B0'}\n", "DISPLAY_NAME = {'rawnet3': 'RawNet3', 'ecapa_tdnn': 'ECAPA-TDNN',\n", " 'titanet': 'TitaNet', 'resemblyzer': 'resemblyzer',\n", " 'xvector': 'x-vector', 'wav2vec2': 'wav2vec 2.0',\n", " 'hubert': 'HuBERT', 'wavlm': 'WavLM',\n", " 'whisper': 'Whisper', 'xlsr': 'XLS-R'}\n", "ORDER_BY_PARADIGM = {'Supervised': 0, 'Self-supervised': 1, 'Weakly supervised': 2}\n", "\n", "if len(type6) > 0 and len(available_models) > 0:\n", " bins = np.arange(0, 105, 5)\n", " type6['scale_bin'] = pd.cut(type6['scale'], bins, include_lowest=True)\n", " agg_h = type6.groupby('scale_bin', observed=True)['p_same_full'].agg(['mean', 'sem']).dropna()\n", "\n", " # ----- Figure: human P(same) only -----\n", " fig, ax = plt.subplots(figsize=(7, 4.5))\n", " ax.errorbar(range(len(agg_h)), agg_h['mean'], yerr=1.96*agg_h['sem'],\n", " fmt='o-', color='#2196F3', markersize=4, capsize=2)\n", " ax.set_xlabel('Interpolation Scale (binned)')\n", " ax.set_ylabel('Human P(same)')\n", " ax.set_xticks(range(0, len(agg_h), 4))\n", " ax.set_xticklabels([f'{int(b.left)}' for b in agg_h.index[::4]], rotation=45)\n", " for spine in ('top','right'): ax.spines[spine].set_visible(False)\n", " plt.tight_layout()\n", " plt.savefig('manuscript/figures/type6_human.png', dpi=200, bbox_inches='tight')\n", " plt.show()\n", "\n", " # ----- Figure: model curves, raw (left) + normalized (right) -----\n", " sorted_models = sorted(available_models,\n", " key=lambda m: (ORDER_BY_PARADIGM[MODEL_INFO[m]['type']], m))\n", " LINESTYLES = {'Supervised':['-','--','-.',':',(0,(3,1,1,1))],\n", " 'Self-supervised':['-','--','-.',':'],\n", " 'Weakly supervised':['-']}\n", " fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n", " for ax, mode in zip(axes, ['raw', 
'norm']):\n", " style_idx = {p:0 for p in PARADIGM_COLOR}\n", " for m in sorted_models:\n", " col = f'{m}_cosim'\n", " if col not in type6.columns: continue\n", " agg_m = type6.groupby('scale_bin', observed=True)[col].mean().dropna()\n", " if agg_m.std() < 1e-10: continue\n", " v = agg_m.values\n", " if mode == 'norm':\n", " v = (v - v.min()) / (v.max() - v.min() + 1e-12)\n", " ptype = MODEL_INFO[m]['type']\n", " ls = LINESTYLES[ptype][style_idx[ptype] % len(LINESTYLES[ptype])]\n", " style_idx[ptype] += 1\n", " ax.plot(range(len(agg_m)), v, marker='o', markersize=3, linestyle=ls,\n", " color=PARADIGM_COLOR[ptype], label=DISPLAY_NAME[m], alpha=0.85, linewidth=1.6)\n", " ax.set_xlabel('Interpolation Scale (binned)')\n", " ax.set_ylabel('Cosine similarity' if mode=='raw'\n", " else 'Cosine similarity (min-max normalized per model)')\n", " ax.set_xticks(range(0, len(agg_h), 4))\n", " ax.set_xticklabels([f'{int(b.left)}' for b in agg_h.index[::4]], rotation=45)\n", " for spine in ('top','right'): ax.spines[spine].set_visible(False)\n", " axes[1].legend(fontsize=8, loc='lower right', ncol=2, frameon=True, framealpha=0.9)\n", " plt.tight_layout()\n", " plt.savefig('manuscript/figures/type6_models.png', dpi=200, bbox_inches='tight')\n", " plt.show()\n" ] }, { "cell_type": "markdown", "id": "cell_032", "metadata": {}, "source": [ "---\n", "## Section 7: Individual Differences in Human Perception\n", "\n", "**Motivation:** Not all voice pairs are equally easy to judge. Some pairs elicit strong consensus (nearly all listeners agree), while others provoke maximal disagreement (close to 50/50 split). Can models predict *which pairs humans will disagree on*?\n", "\n", "**Human disagreement** is quantified via the binary entropy of the vote distribution for each pair:\n", "\n", "$$H = -[p \\log_2 p + (1-p) \\log_2 (1-p)]$$\n", "\n", "where $p = P(\\text{same})$. Entropy is 0 when all listeners agree (p=0 or p=1) and 1 when they are maximally split (p=0.5). Human agreement is defined as $1 - H$.\n", "\n", "**Model confidence** is defined as the absolute distance from the model's optimal verification threshold: $|\\text{cosim} - \\theta_{\\text{opt}}|$. A pair far from the threshold is \"easy\" for the model.\n", "\n", "If model confidence correlates with human agreement, it means the model's uncertainty tracks human uncertainty -- pairs that are hard for humans are also ambiguous in the embedding space." 
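, "\n\nFor example, a pair where 80% of listeners vote \"same\" ($p = 0.8$) has $H = -(0.8 \log_2 0.8 + 0.2 \log_2 0.2) \approx 0.72$, i.e. agreement $\approx 0.28$; a 95/5 split gives $H \approx 0.29$, agreement $\approx 0.71$."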
] }, { "cell_type": "code", "execution_count": null, "id": "cell_033", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:03.095890Z", "iopub.status.busy": "2026-04-22T16:24:03.095560Z", "iopub.status.idle": "2026-04-22T16:24:03.136535Z", "shell.execute_reply": "2026-04-22T16:24:03.134575Z" } }, "outputs": [], "source": [ "# Human disagreement: entropy of vote distribution per pair\n", "def binary_entropy(p):\n", " \"\"\"Entropy of Bernoulli distribution.\"\"\"\n", " if p <= 0 or p >= 1:\n", " return 0.0\n", " return -(p * np.log2(p) + (1-p) * np.log2(1-p))\n", "\n", "df['human_entropy'] = df['p_same_full'].apply(lambda p: binary_entropy(p) if pd.notna(p) else np.nan)\n", "df['human_agreement'] = 1 - df['human_entropy']\n", "\n", "print(f'Mean human entropy: {df[\"human_entropy\"].mean():.4f}')\n", "print(f'Median human entropy: {df[\"human_entropy\"].median():.4f}')\n", "print(f'Pairs with high disagreement (entropy > 0.9): {(df[\"human_entropy\"] > 0.9).sum()}')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_034", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:03.139270Z", "iopub.status.busy": "2026-04-22T16:24:03.139002Z", "iopub.status.idle": "2026-04-22T16:24:03.237449Z", "shell.execute_reply": "2026-04-22T16:24:03.235070Z" } }, "outputs": [], "source": [ "# Can models predict human disagreement?\n", "disagree_results = []\n", "\n", "for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " valid = df.dropna(subset=[col, 'human_entropy'])\n", " if len(valid) < 10:\n", " continue\n", " \n", " # Model confidence: distance from optimal threshold\n", " if model_name in auc_results:\n", " # Recompute optimal threshold\n", " valid_b = df_binary.dropna(subset=[col])\n", " fpr, tpr, thresholds = roc_curve(valid_b['majority_full'], valid_b[col])\n", " opt_thresh = thresholds[np.argmax(tpr - fpr)]\n", " else:\n", " opt_thresh = valid[col].median()\n", " \n", " model_confidence = np.abs(valid[col] - opt_thresh)\n", " \n", " # Higher model confidence should predict higher human agreement (lower entropy)\n", " r, p = pearsonr(model_confidence, valid['human_agreement'])\n", " disagree_results.append({'model': model_name, 'r_confidence_agreement': r, 'p': p})\n", "\n", "disagree_df = pd.DataFrame(disagree_results).sort_values('r_confidence_agreement', ascending=False)\n", "print('Model confidence vs human agreement:')\n", "print(disagree_df.round(4).to_string(index=False))" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_035", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:03.240276Z", "iopub.status.busy": "2026-04-22T16:24:03.240006Z", "iopub.status.idle": "2026-04-22T16:24:03.254749Z", "shell.execute_reply": "2026-04-22T16:24:03.252797Z" } }, "outputs": [], "source": [ "# Characterize high-disagreement pairs\n", "high_disagree = df[df['human_entropy'] > 0.9].copy()\n", "print(f'High-disagreement pairs: {len(high_disagree)}')\n", "print(f'\\nBy stimulus type:')\n", "print(high_disagree['stimuli_type'].value_counts().sort_index())\n", "print(f'\\nProportion of each type that is high-disagreement:')\n", "for stype in sorted(df['stimuli_type'].unique()):\n", " total = (df['stimuli_type'] == stype).sum()\n", " high = ((df['stimuli_type'] == stype) & (df['human_entropy'] > 0.9)).sum()\n", " print(f' Type {stype}: {high}/{total} = {high/total:.1%}')" ] }, { "cell_type": "markdown", "id": "part_IV_divider", "metadata": {}, "source": [ "---\n", "# Part IV \u2014 Representation Analyses\n", 
"\n", "These sections ask structural questions about the embedding spaces themselves: Section 8 (Mahalanobis metric learning) tests whether per-dimension reweighting improves prediction \u2014 revealing whether the space is isotropically aligned with perception. Section 13 (at the end of Part V) tests whether SSL models benefit from choosing a non-default transformer layer." ] }, { "cell_type": "markdown", "id": "cell_036", "metadata": {}, "source": [ "---\n", "## Section 8: Mahalanobis Metric Learning\n", "\n", "**Motivation:** Cosine similarity treats all dimensions of the embedding space equally. But human perception may weight certain acoustic/representational dimensions more than others. A **diagonal Mahalanobis metric** learns per-dimension weights that best predict human judgments.\n", "\n", "**Method:** For each pair, compute the embedding difference $\\mathbf{d} = \\mathbf{e}_{\\text{ref}} - \\mathbf{e}_{\\text{comp}}$. The weighted distance is:\n", "\n", "$$d_M(\\mathbf{e}_{\\text{ref}}, \\mathbf{e}_{\\text{comp}}) = \\sum_{i=1}^{D} w_i \\cdot (e_{\\text{ref},i} - e_{\\text{comp},i})^2$$\n", "\n", "where $w_i \\geq 0$ are learnable per-dimension weights. We optimize $\\mathbf{w}$ to minimize MSE between $d_M$ and human dissimilarity $(1 - P(\\text{same}))$, with L2 regularization:\n", "\n", "$$\\min_{\\mathbf{w}} \\frac{1}{N} \\sum_j \\left( d_M^{(j)} - (1 - P_{\\text{same}}^{(j)}) \\right)^2 + \\lambda \\|\\mathbf{w}\\|^2$$\n", "\n", "Weights are parameterized as $w_i = e^{\\alpha_i}$ to ensure positivity, and optimized with L-BFGS-B.\n", "\n", "**Evaluation:** 10-fold cross-validation with speaker-level splits (gender-balanced: 5M/5F per fold). We compare $R^2$ of the learned Mahalanobis distance against isotropic (uniform-weight) distance on held-out speakers.\n", "\n", "**For high-dimensional models** (dim > 256), we first reduce to 50 dimensions with PCA to prevent overfitting (9,800 pairs cannot support learning 768+ independent weights).\n", "\n", "**What this tells us:** If Mahalanobis significantly outperforms isotropic distance, the model's embedding space is **anisotropic** with respect to human perception -- some dimensions matter more than others for identity. If it does not help, the space is already approximately isotropically aligned." 
] }, { "cell_type": "code", "execution_count": null, "id": "cell_037", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:03.258211Z", "iopub.status.busy": "2026-04-22T16:24:03.257879Z", "iopub.status.idle": "2026-04-22T16:24:03.269004Z", "shell.execute_reply": "2026-04-22T16:24:03.267179Z" } }, "outputs": [], "source": [ "# Learn per-dimension weights (diagonal Mahalanobis) via cross-validation\n", "from scipy.special import expit as sigmoid\n", "\n", "def prepare_diffs(df_rows, emb_dict):\n", " \"\"\"Compute embedding differences for all pairs (vectorized).\"\"\"\n", " ref_keys = [f\"{ref}R\" for ref in df_rows['reference']]\n", " stim_keys = df_rows['id'].tolist()\n", " p_same = df_rows['p_same_full'].values\n", " \n", " valid = []\n", " for j, (rk, sk, ps) in enumerate(zip(ref_keys, stim_keys, p_same)):\n", " if rk in emb_dict and sk in emb_dict and not np.isnan(ps):\n", " valid.append(j)\n", " \n", " if not valid:\n", " return np.array([]), np.array([]), []\n", " \n", " dim = next(iter(emb_dict.values())).shape[0]\n", " diffs = np.zeros((len(valid), dim))\n", " targets = np.zeros(len(valid))\n", " for idx, j in enumerate(valid):\n", " diffs[idx] = emb_dict[ref_keys[j]] - emb_dict[stim_keys[j]]\n", " targets[idx] = p_same[j]\n", " return diffs, targets, valid\n", "\n", "def fit_diagonal_mahalanobis(diffs_train, y_train, diffs_test, y_test, reg_lambda=1e-3):\n", " \"\"\"Fit diagonal Mahalanobis: learn per-dimension weights.\"\"\"\n", " dim = diffs_train.shape[1]\n", " sq_diffs_train = diffs_train ** 2\n", " sq_diffs_test = diffs_test ** 2\n", " \n", " dissim_train = 1 - y_train\n", " dissim_test = 1 - y_test\n", " \n", " def objective(log_w):\n", " w = np.exp(log_w)\n", " d_M = sq_diffs_train @ w\n", " residuals = d_M - dissim_train\n", " return np.mean(residuals ** 2) + reg_lambda * np.sum(w ** 2)\n", " \n", " log_w0 = np.zeros(dim)\n", " result = minimize(objective, log_w0, method='L-BFGS-B',\n", " options={'maxiter': 500, 'ftol': 1e-8})\n", " w_opt = np.exp(result.x)\n", " \n", " d_M_test = sq_diffs_test @ w_opt\n", " if np.std(d_M_test) < 1e-10 or np.std(dissim_test) < 1e-10:\n", " return 0.0, 0.0, w_opt\n", " r_mahal, _ = pearsonr(d_M_test, dissim_test)\n", " \n", " d_iso_test = sq_diffs_test @ np.ones(dim)\n", " if np.std(d_iso_test) < 1e-10:\n", " r_iso = 0.0\n", " else:\n", " r_iso, _ = pearsonr(d_iso_test, dissim_test)\n", " \n", " return r_mahal ** 2, r_iso ** 2, w_opt\n", "\n", "print('Mahalanobis metric learning with 10-fold speaker-level CV...')\n", "print('(This may take a few minutes for high-dimensional models)\\n')" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_038", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:03.272481Z", "iopub.status.busy": "2026-04-22T16:24:03.272119Z", "iopub.status.idle": "2026-04-22T16:24:03.294152Z", "shell.execute_reply": "2026-04-22T16:24:03.292117Z" } }, "outputs": [], "source": [ "# Mahalanobis CV: for SSL models, uses the SAME per-fold best layer as Section 2.\n", "# For supervised models, uses the model's native (only) embedding.\n", "# Results are cached; delete cache/mahalanobis.pkl to force recomputation.\n", "\n", "def _prepare_diffs_at_layer(df_rows, emb_dict_layers, layer_idx):\n", " \"\"\"Compute embedding differences using the specified layer.\"\"\"\n", " ref_keys = [f'{ref}R' for ref in df_rows['reference']]\n", " stim_keys = df_rows['id'].tolist()\n", " p_same = df_rows['p_same_full'].values\n", " valid = []\n", " for j, (rk, sk, ps) in enumerate(zip(ref_keys, 
stim_keys, p_same)):\n", " if rk in emb_dict_layers and sk in emb_dict_layers and not np.isnan(ps):\n", " valid.append(j)\n", " if not valid:\n", " return np.array([]), np.array([])\n", " sample = next(iter(emb_dict_layers.values()))\n", " _, dim = sample.shape\n", " diffs = np.zeros((len(valid), dim))\n", " targets = np.zeros(len(valid))\n", " for idx, j in enumerate(valid):\n", " diffs[idx] = emb_dict_layers[ref_keys[j]][layer_idx] - emb_dict_layers[stim_keys[j]][layer_idx]\n", " targets[idx] = p_same[j]\n", " return diffs, targets\n", "\n", "def _compute_mahalanobis():\n", " all_speakers_local = sorted(df['reference'].unique())\n", " male_spk_local = [s for s in all_speakers_local if s.startswith('M')]\n", " female_spk_local = [s for s in all_speakers_local if s.startswith('F')]\n", " _rng_m = np.random.default_rng(SEED)\n", " _rng_m.shuffle(male_spk_local); _rng_m.shuffle(female_spk_local)\n", " folds_m = [[] for _ in range(10)]\n", " for i, s in enumerate(male_spk_local): folds_m[i % 10].append(s)\n", " for i, s in enumerate(female_spk_local): folds_m[i % 10].append(s)\n", " \n", " results_out = []\n", " for model_name in available_models:\n", " is_ssl_with_layers = model_name in SSL_MODELS_WITH_LAYERS and model_name in layer_embs\n", " dim = MODEL_INFO[model_name]['dim']\n", " if is_ssl_with_layers:\n", " sample = next(iter(layer_embs[model_name].values()))\n", " _, dim = sample.shape\n", " use_pca = dim > 256\n", " pca_dim = 50 if use_pca else dim\n", " \n", " fold_r2_mahal = []\n", " fold_r2_iso = []\n", " for fold_idx, fold_speakers in enumerate(folds_m):\n", " test_speakers = set(fold_speakers)\n", " train_speakers = set(all_speakers_local) - test_speakers\n", " train_mask = df['reference'].isin(train_speakers)\n", " test_mask = df['reference'].isin(test_speakers)\n", " \n", " if is_ssl_with_layers:\n", " layer_idx = best_layer_per_fold[model_name][fold_idx] if fold_idx < len(best_layer_per_fold[model_name]) else 0\n", " diffs_train, y_train = _prepare_diffs_at_layer(df[train_mask], layer_embs[model_name], layer_idx)\n", " diffs_test, y_test = _prepare_diffs_at_layer(df[test_mask], layer_embs[model_name], layer_idx)\n", " else:\n", " emb_dict = embeddings[model_name]\n", " diffs_train, y_train, _ = prepare_diffs(df[train_mask], emb_dict)\n", " diffs_test, y_test, _ = prepare_diffs(df[test_mask], emb_dict)\n", " \n", " if len(diffs_train) < 50 or len(diffs_test) < 10:\n", " continue\n", " if use_pca:\n", " pca = PCA(n_components=pca_dim)\n", " diffs_train = pca.fit_transform(diffs_train)\n", " diffs_test = pca.transform(diffs_test)\n", " r2_m, r2_i, _ = fit_diagonal_mahalanobis(diffs_train, y_train, diffs_test, y_test)\n", " fold_r2_mahal.append(r2_m)\n", " fold_r2_iso.append(r2_i)\n", " \n", " if fold_r2_mahal:\n", " results_out.append({\n", " 'model': model_name,\n", " 'type': MODEL_INFO[model_name]['type'],\n", " 'dim': dim,\n", " 'pca': pca_dim if use_pca else 'N/A',\n", " 'R2_isotropic': np.mean(fold_r2_iso),\n", " 'R2_mahalanobis': np.mean(fold_r2_mahal),\n", " 'improvement': np.mean(fold_r2_mahal) - np.mean(fold_r2_iso),\n", " 'fold_R2_iso': list(map(float, fold_r2_iso)),\n", " 'fold_R2_mahal': list(map(float, fold_r2_mahal)),\n", " 'n_folds': len(fold_r2_mahal),\n", " 'used_best_layer': is_ssl_with_layers,\n", " })\n", " return pd.DataFrame(results_out).sort_values('R2_mahalanobis', ascending=False)\n", "\n", "print('Mahalanobis metric learning with 10-fold speaker-level CV')\n", "print(' (SSL models use per-fold best-layer embeddings, consistent with Section 2 / main 
benchmark)')\n", "mahal_df = cached('mahalanobis', _compute_mahalanobis, 'Diagonal Mahalanobis CV for all models')\n", "print('\\nMahalanobis results:')\n", "print(mahal_df.round(4).to_string(index=False))" ] }, { "cell_type": "code", "execution_count": null, "id": "cell_039", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:03.297095Z", "iopub.status.busy": "2026-04-22T16:24:03.296807Z", "iopub.status.idle": "2026-04-22T16:24:03.841265Z", "shell.execute_reply": "2026-04-22T16:24:03.838929Z" } }, "outputs": [], "source": [ "# Visualization: improvement from metric learning (with 95% CI from 10 folds)\n", "if len(mahal_df) > 0:\n", " from scipy.stats import t as _t_dist\n", " DISPLAY_NAME = {'rawnet3': 'RawNet3', 'ecapa_tdnn': 'ECAPA-TDNN',\n", " 'titanet': 'TitaNet', 'resemblyzer': 'resemblyzer',\n", " 'xvector': 'x-vector', 'wav2vec2': 'wav2vec 2.0',\n", " 'hubert': 'HuBERT', 'wavlm': 'WavLM',\n", " 'whisper': 'Whisper', 'xlsr': 'XLS-R'}\n", " _t_crit = _t_dist.ppf(0.975, df=9) # 10-fold CV\n", " def _ci_half(folds):\n", " a = np.asarray(folds, dtype=float)\n", " return _t_crit * a.std(ddof=1) / np.sqrt(len(a))\n", " fig, ax = plt.subplots(figsize=(10, 5))\n", " models_m = [DISPLAY_NAME[m] for m in mahal_df['model']]\n", " x = np.arange(len(models_m))\n", " width = 0.35\n", " ci_iso = mahal_df['fold_R2_iso'].apply(_ci_half).values\n", " ci_mahal = mahal_df['fold_R2_mahal'].apply(_ci_half).values\n", " ax.bar(x - width/2, mahal_df['R2_isotropic'].values, width,\n", " yerr=ci_iso, capsize=4,\n", " label='Isotropic (cosine)', color='#90CAF9',\n", " error_kw={'ecolor':'#333', 'elinewidth':1.0})\n", " ax.bar(x + width/2, mahal_df['R2_mahalanobis'].values, width,\n", " yerr=ci_mahal, capsize=4,\n", " label='Mahalanobis (learned)', color='#1565C0',\n", " error_kw={'ecolor':'#333', 'elinewidth':1.0})\n", " ax.set_xlabel('Model')\n", " ax.set_ylabel(r'$R^2$ (cross-validated)')\n", " ax.set_xticks(x)\n", " ax.set_xticklabels(models_m, rotation=30, ha='right')\n", " ax.legend()\n", " for spine in ('top','right'): ax.spines[spine].set_visible(False)\n", " plt.tight_layout()\n", " plt.savefig('manuscript/figures/mahalanobis_bar.png', dpi=200, bbox_inches='tight')\n", " plt.show()\n" ] }, { "cell_type": "markdown", "id": "part_V_divider", "metadata": {}, "source": [ "---\n", "# Part V \u2014 Benchmark Validation & Robustness\n", "\n", "These sections check that the benchmark's conclusions are robust: pairwise significance tests (Section 9) tell us which rank orderings are real vs. noise; the human baseline (Section 10) anchors model performance to individual listeners; the Type-6 ablation (Section 11) checks that the dominant blended-voice stimuli do not drive the main rankings; fairness analyses (Section 12) check demographic disparities; and Section 13 addresses the SSL layer-choice question." ] }, { "cell_type": "markdown", "id": "sec09_header", "metadata": {}, "source": [ "---\n", "## Section 9: Pairwise Model Significance (Paired Bootstrap)\n", "\n", "**Motivation:** We have reported raw Pearson r for each model, but differences between models (e.g., resemblyzer r=0.645 vs ECAPA-TDNN r=0.633) may or may not be statistically reliable. Because all models are evaluated on the **same** 9,800 pairs, the correlations are dependent. We use a **paired bootstrap** to test pairwise significance:\n", "\n", "1. For each bootstrap replicate, resample pair indices with replacement.\n", "2. Compute Pearson r for both models on the same resampled set.\n", "3. 
Record the difference $r_A - r_B$.\n", "4. The 95% CI of the difference tells us whether it excludes zero.\n", "\n", "This is more appropriate than unpaired comparison (which ignores the within-pair correlation between models) and more flexible than Steiger's Z (which requires Gaussian assumptions).\n", "\n", "We report a 10 x 10 matrix of pairwise p-values, corrected for multiple comparisons (Benjamini-Hochberg FDR control)." ] }, { "cell_type": "code", "execution_count": null, "id": "sec09_code", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:03.844762Z", "iopub.status.busy": "2026-04-22T16:24:03.844430Z", "iopub.status.idle": "2026-04-22T16:24:05.500074Z", "shell.execute_reply": "2026-04-22T16:24:05.498089Z" } }, "outputs": [], "source": [ "# Paired bootstrap test for pairwise model correlation differences (cached).\n", "# Multiple-comparison correction: Benjamini-Hochberg FDR control across the 45 unique pairs.\n", "# Whisper is placed last (bottom-right) in the heatmaps as the weakly-supervised outlier.\n", "N_BOOT_PAIRED = 2000\n", "\n", "valid_rows = df.dropna(subset=[f'{m}_cosim' for m in available_models] + ['p_same_full'])\n", "y = valid_rows['p_same_full'].values\n", "cosim_mat = np.stack([valid_rows[f'{m}_cosim'].values for m in available_models], axis=1)\n", "n = len(y)\n", "M = len(available_models)\n", "\n", "observed_r = np.array([pearsonr(cosim_mat[:, m_idx], y)[0] for m_idx in range(M)])\n", "\n", "def _compute_boot():\n", " rng_pb = np.random.default_rng(SEED)\n", " boot_r_local = np.zeros((N_BOOT_PAIRED, M))\n", " for b in range(N_BOOT_PAIRED):\n", " idx = rng_pb.integers(0, n, n)\n", " yb = y[idx]\n", " xb = cosim_mat[idx]\n", " for m_idx in range(M):\n", " boot_r_local[b, m_idx] = pearsonr(xb[:, m_idx], yb)[0]\n", " return boot_r_local\n", "\n", "boot_r = cached('paired_bootstrap', _compute_boot, f'{N_BOOT_PAIRED} paired bootstrap replicates')\n", "\n", "# Pairwise raw p-values, mean diffs, and 95% bootstrap CIs\n", "p_matrix = np.ones((M, M))\n", "diff_mean = np.zeros((M, M))\n", "ci_low = np.zeros((M, M))\n", "ci_high = np.zeros((M, M))\n", "for i in range(M):\n", " for j in range(M):\n", " if i == j: continue\n", " d = boot_r[:, i] - boot_r[:, j]\n", " diff_mean[i, j] = d.mean()\n", " ci_low[i, j], ci_high[i, j] = np.percentile(d, [2.5, 97.5])\n", " p_matrix[i, j] = 2 * min((d <= 0).mean(), (d >= 0).mean())\n", "\n", "# Benjamini-Hochberg FDR-adjusted q-values across the 45 unique pairs (i < j)\n", "iu = np.triu_indices(M, k=1)\n", "raw_p = p_matrix[iu]\n", "order = np.argsort(raw_p)\n", "bh = raw_p[order] * len(raw_p) / (np.arange(len(raw_p)) + 1)\n", "q_sorted = np.minimum.accumulate(bh[::-1])[::-1]\n", "q_vals = np.empty_like(raw_p)\n", "q_vals[order] = np.clip(q_sorted, 0, 1)\n", "q_matrix = np.ones((M, M))\n", "q_matrix[iu] = q_vals\n", "q_matrix.T[iu] = q_vals\n", "\n", "n_sig = int((q_vals < 0.05).sum())\n", "print(f'{n_sig} of {len(q_vals)} unique model pairs differ significantly (BH FDR q < 0.05)')" ] }, { "cell_type": "markdown", "id": "sec10_header", "metadata": {}, "source": [ "---\n", "## Section 10: Individual Human Baseline (Leave-One-Out Consensus)\n", "\n", "**Motivation:** How much of the remaining model-human gap reflects representational failure versus irreducible listener variability? To anchor the benchmark, we compare each model's agreement with the population majority vote against how well individual listeners agree with the consensus of everyone else.\n", "\n", "**Procedure:**\n", "1. For each participant with >=25 trials, compute the leave-one-out consensus label on each of their judged pairs.\n", "2. Compute the participant's agreement rate with the consensus.\n", "3. For each model, compute its agreement with the full-sample majority using the threshold chosen in Section 4.\n", "4. Compare the distributions.\n", "\n", "**What this analysis does and does not claim.**\n", "- It *does* claim: state-of-the-art supervised embeddings agree with the population consensus more often than the average individual listener does.\n", "- It *does not* claim: models are \"correct\" and low-agreement individuals are \"wrong.\" For high-entropy pairs (voice clones, blends near 50/50), consensus is close to a coin flip, so agreeing with it is not objective correctness." 
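, "\n\nA worked micro-example of the leave-one-out vote, mirroring the logic in the next cell (toy counts):\n", "\n", "```python\n", "n_total, n_same, my_vote = 12, 7, 1   # 7 of 12 listeners said same; this listener voted same\n", "n_minus = n_total - 1                 # 11 other listeners\n", "n_same_minus = n_same - my_vote       # 6 of them said same\n", "loo_majority = int(n_same_minus > n_minus / 2)   # 6 > 5.5 -> consensus = same\n", "```"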
] }, { "cell_type": "code", "execution_count": null, "id": "sec10_code", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:05.503535Z", "iopub.status.busy": "2026-04-22T16:24:05.503172Z", "iopub.status.idle": "2026-04-22T16:24:20.043613Z", "shell.execute_reply": "2026-04-22T16:24:20.041721Z" } }, "outputs": [], "source": [ "# Individual human accuracy vs leave-one-out consensus (full 1,290-participant dataset).\n", "# Minimum-trials filter set to 10 (instead of 25): with <10 trials, per-participant\n", "# accuracy can only take a handful of discrete values (0, 1/n, ..., 1), producing\n", "# artificial spikes at 0 and 1 that are not meaningful.\n", "MIN_TRIALS_FOR_BASELINE = 10\n", "\n", "pair_votes = responses.groupby('stimuli_id').agg(\n", " n_total=('answer', 'count'),\n", " n_same=('answer', 'sum')\n", ").reset_index()\n", "pair_votes_dict = pair_votes.set_index('stimuli_id').to_dict('index')\n", "\n", "trial_counts = responses.groupby('user_id').size()\n", "eligible_ids = trial_counts[trial_counts >= MIN_TRIALS_FOR_BASELINE].index.values\n", "print(f'Participants with >={MIN_TRIALS_FOR_BASELINE} trials: {len(eligible_ids)} / {len(trial_counts)}')\n", "\n", "participant_acc = []\n", "for uid in eligible_ids:\n", " user_resp = responses[responses['user_id'] == uid]\n", " hits = 0\n", " total = 0\n", " for _, r in user_resp.iterrows():\n", " sid = r['stimuli_id']\n", " if sid not in pair_votes_dict: continue\n", " stats_p = pair_votes_dict[sid]\n", " n_minus = stats_p['n_total'] - 1\n", " if n_minus < 1: continue\n", " n_same_minus = stats_p['n_same'] - int(r['answer'])\n", " loo_majority = 1 if n_same_minus > n_minus / 2 else (0 if n_same_minus < n_minus / 2 else np.nan)\n", " if np.isnan(loo_majority): continue\n", " hits += int(r['answer'] == loo_majority)\n", " total += 1\n", " if total >= 1:\n", " participant_acc.append({'user_id': uid, 'acc': hits / total, 'n': total})\n", "\n", "human_acc_df = pd.DataFrame(participant_acc)\n", "print(f'\\nIndividual human accuracy vs LOO consensus (n={len(human_acc_df)}):')\n", "print(f' Mean: {human_acc_df[\"acc\"].mean():.4f} Median: {human_acc_df[\"acc\"].median():.4f}')\n", "print(f' Std: {human_acc_df[\"acc\"].std():.4f} Range: [{human_acc_df[\"acc\"].min():.4f}, {human_acc_df[\"acc\"].max():.4f}]')\n", "\n", "model_accs = {row['model']: row['Acc@opt'] for _, row in verif_df.iterrows()}\n", "\n", "from scipy.stats import gaussian_kde\n", "\n", "fig = plt.figure(figsize=(11, 4.8))\n", "gs = fig.add_gridspec(2, 1, height_ratios=[1.1, 1.4], hspace=0.08)\n", "ax_kde = fig.add_subplot(gs[0])\n", "ax_dot = fig.add_subplot(gs[1], sharex=ax_kde)\n", "\n", "kde = gaussian_kde(human_acc_df['acc'].values, bw_method=0.12)\n", "x_range = np.linspace(0.00, 1.0, 500)\n", "y_kde = kde(x_range)\n", "ax_kde.fill_between(x_range, 0, y_kde, alpha=0.30, color='#888', edgecolor='#555', linewidth=1.2)\n", "ax_kde.plot(x_range, y_kde, color='#555', linewidth=1.2)\n", "ax_kde.set_ylabel(f'Individual listeners\\n(n = {len(human_acc_df)})', fontsize=10)\n", "ax_kde.set_yticks([])\n", "ax_kde.tick_params(axis='x', labelbottom=False)\n", "for spine in ('top', 'right'):\n", " ax_kde.spines[spine].set_visible(False)\n", "\n", "type_palette = {'Supervised': '#2196F3', 'Self-supervised': '#FF9800', 'Weakly supervised': '#9C27B0'}\n", "by_type = {'Supervised': [], 'Self-supervised': [], 'Weakly supervised': []}\n", "for m in available_models:\n", " if m not in model_accs: continue\n", " by_type[MODEL_INFO[m]['type']].append((m, model_accs[m]))\n", "for k in 
by_type:\n", " by_type[k].sort(key=lambda x: -x[1])\n", "\n", "rows_top_down = []\n", "for ptype in ['Supervised', 'Self-supervised', 'Weakly supervised']:\n", " for m, acc in by_type[ptype]:\n", " rows_top_down.append((m, acc, ptype))\n", " rows_top_down.append((None, None, None))\n", "rows_top_down.pop()\n", "rows = rows_top_down[::-1]\n", "\n", "for y_idx, (m, acc, ptype) in enumerate(rows):\n", " if m is None:\n", " continue\n", " color = type_palette[ptype]\n", " ax_dot.plot(acc, y_idx, 'o', color=color, markersize=11, markeredgecolor='black',\n", " markeredgewidth=0.7, zorder=5)\n", "\n", "DISPLAY_NAME = {'rawnet3': 'RawNet3', 'ecapa_tdnn': 'ECAPA-TDNN',\n", " 'titanet': 'TitaNet', 'resemblyzer': 'resemblyzer',\n", " 'xvector': 'x-vector', 'wav2vec2': 'wav2vec 2.0',\n", " 'hubert': 'HuBERT', 'wavlm': 'WavLM',\n", " 'whisper': 'Whisper', 'xlsr': 'XLS-R'}\n", "y_labels = [(DISPLAY_NAME[r[0]] if r[0] is not None else '') for r in rows]\n", "ax_dot.set_yticks(range(len(rows)))\n", "ax_dot.set_yticklabels(y_labels, fontsize=10)\n", "for tick, (m, acc, ptype) in zip(ax_dot.get_yticklabels(), rows):\n", " if ptype is not None:\n", " tick.set_color(type_palette[ptype])\n", "\n", "mean_human = human_acc_df['acc'].mean()\n", "for ax in (ax_kde, ax_dot):\n", " ax.axvline(mean_human, color='black', linestyle=':', linewidth=1.1, alpha=0.65, zorder=1)\n", "ax_kde.text(mean_human, y_kde.max() * 0.95,\n", " f'mean human = {mean_human:.2f}',\n", " ha='center', va='top', fontsize=9,\n", " bbox=dict(boxstyle='round,pad=0.3', facecolor='white', edgecolor='none', alpha=0.9))\n", "\n", "ax_dot.set_xlim(0.00, 1.0)\n", "ax_dot.set_ylim(-1, len(rows))\n", "ax_dot.set_xlabel('Accuracy vs leave-one-out human majority vote', fontsize=11)\n", "for spine in ('top', 'right'):\n", " ax_dot.spines[spine].set_visible(False)\n", "\n", "from matplotlib.patches import Patch\n", "ax_dot.legend(handles=[\n", " Patch(color=type_palette['Supervised'], label='Supervised'),\n", " Patch(color=type_palette['Self-supervised'], label='SSL'),\n", " Patch(color=type_palette['Weakly supervised'], label='Weakly supervised'),\n", "], loc='lower right', frameon=True, framealpha=0.95, fontsize=9)\n", "\n", "plt.savefig('manuscript/figures/human_baseline_2.png', dpi=200, bbox_inches='tight')\n", "plt.show()\n", "\n", "print(f'\\nModel percentile rank in human accuracy distribution:')\n", "for model_name in available_models:\n", " if model_name in model_accs:\n", " acc = model_accs[model_name]\n", " pct = (human_acc_df['acc'] < acc).mean() * 100\n", " print(f' {model_name:15s} acc={acc:.4f} beats {pct:.1f}% of individual humans')\n" ] }, { "cell_type": "markdown", "id": "sec11_header", "metadata": {}, "source": [ "---\n", "## Section 11: Stimulus-Type Ablation (Type 6 Dominance Check)\n", "\n", "**Motivation:** Type 6 (blended voices) accounts for 8,100 of 9,800 pairs (83%). The headline Pearson r and RSA metrics are dominated by this single stimulus type. If model rankings were driven by Type 6 alone, the claimed benchmark would be narrower than it appears.\n", "\n", "We recompute the main metrics on **Types 1-5 only** (1,700 pairs, excluding blended voices) and compare rankings to the full dataset." 
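, "\n\nThe stability check itself is just a rank correlation between the two orderings (hypothetical toy numbers):\n", "\n", "```python\n", "from scipy.stats import spearmanr\n", "\n", "r_full = [0.64, 0.62, 0.41]  # hypothetical per-model r on Types 1-6\n", "r_no6 = [0.58, 0.60, 0.35]   # hypothetical per-model r on Types 1-5\n", "rho, _ = spearmanr(r_full, r_no6)  # rho = 1.0 would mean identical rankings\n", "```"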
] }, { "cell_type": "code", "execution_count": null, "id": "sec11_code", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:20.046849Z", "iopub.status.busy": "2026-04-22T16:24:20.046583Z", "iopub.status.idle": "2026-04-22T16:24:20.162805Z", "shell.execute_reply": "2026-04-22T16:24:20.160647Z" } }, "outputs": [], "source": [ "# Recompute Pearson r and AUC on Types 1-5 only (exclude Type 6 blends)\n", "\n", "df_no6 = df[df['stimuli_type'] != 6].copy()\n", "df_no6_binary = df_no6[(df_no6['p_same_full'] != 0.5) & df_no6['p_same_full'].notna()]\n", "\n", "print(f'Pairs excluding Type 6: {len(df_no6)}')\n", "print(f'Binary-classifiable (exclude ties): {len(df_no6_binary)}')\n", "\n", "ablation_rows = []\n", "for model_name in available_models:\n", " col = f'{model_name}_cosim'\n", " \n", " # Pearson r on Types 1-5\n", " valid = df_no6.dropna(subset=[col, 'p_same_full'])\n", " r_no6, _ = pearsonr(valid[col], valid['p_same_full']) if len(valid) > 10 else (np.nan, None)\n", " \n", " # Pearson r on full (Types 1-6)\n", " valid_all = df.dropna(subset=[col, 'p_same_full'])\n", " r_all, _ = pearsonr(valid_all[col], valid_all['p_same_full']) if len(valid_all) > 10 else (np.nan, None)\n", " \n", " # AUC on Types 1-5 binary\n", " valid_b = df_no6_binary.dropna(subset=[col])\n", " if len(valid_b) > 10:\n", " auc_no6 = roc_auc_score(valid_b['majority_full'], valid_b[col])\n", " else:\n", " auc_no6 = np.nan\n", " \n", " auc_all = auc_results.get(model_name, np.nan)\n", " \n", " ablation_rows.append({\n", " 'model': model_name,\n", " 'r (Types 1-6)': r_all,\n", " 'r (Types 1-5 only)': r_no6,\n", " 'r delta': r_no6 - r_all,\n", " 'AUC (Types 1-6)': auc_all,\n", " 'AUC (Types 1-5 only)': auc_no6,\n", " 'AUC delta': auc_no6 - auc_all\n", " })\n", "\n", "ablation_df = pd.DataFrame(ablation_rows).sort_values('r (Types 1-5 only)', ascending=False)\n", "print('\\nType 6 ablation: metrics with and without blended voices:')\n", "print(ablation_df.round(4).to_string(index=False))\n", "\n", "# Are rankings preserved?\n", "rank_all = ablation_df['r (Types 1-6)'].rank(ascending=False).values\n", "rank_no6 = ablation_df['r (Types 1-5 only)'].rank(ascending=False).values\n", "rank_corr, _ = spearmanr(rank_all, rank_no6)\n", "print(f'\\nSpearman rank correlation between full-set and Type-1-5-only rankings: {rank_corr:.4f}')" ] }, { "cell_type": "markdown", "id": "sec12_header", "metadata": {}, "source": [ "---\n", "## Section 12: Demographic Fairness\n", "\n", "**Motivation:** Benchmarks that ignore fairness risk rewarding models that perform well only on dominant demographics. The dataset has 100 speakers balanced 50M/50F with 5 sociophonetic groups. We measure whether model-human alignment varies by speaker demographics.\n", "\n", "**Metrics per demographic subgroup:**\n", "- Pearson r between cosine similarity and P(same)\n", "- AUC against majority vote\n", "\n", "**Disparity metric:** max - min across subgroups. Large disparity means the model's performance depends on who the speaker is, which is a fairness concern for applications like voice authentication." 
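, "\n\nThe disparity metric is simply the spread of subgroup scores (hypothetical values):\n", "\n", "```python\n", "import pandas as pd\n", "\n", "per_group_r = pd.Series({'group_1': 0.61, 'group_2': 0.55, 'group_3': 0.48})  # hypothetical\n", "disparity = per_group_r.max() - per_group_r.min()  # 0.13\n", "```"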
] }, { "cell_type": "code", "execution_count": null, "id": "sec12_code", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:20.165744Z", "iopub.status.busy": "2026-04-22T16:24:20.165475Z", "iopub.status.idle": "2026-04-22T16:24:21.480448Z", "shell.execute_reply": "2026-04-22T16:24:21.478559Z" } }, "outputs": [], "source": [ "# Demographic fairness: model-human alignment by speaker gender and group\n", "\n", "# Attach speaker metadata to each stimulus via the reference speaker\n", "spk_meta = speakers.set_index('id')[['gender', 'group', 'age']].to_dict('index')\n", "df['ref_gender'] = df['reference'].map(lambda s: spk_meta.get(s, {}).get('gender', np.nan))\n", "df['ref_group'] = df['reference'].map(lambda s: spk_meta.get(s, {}).get('group', np.nan))\n", "df['ref_age'] = df['reference'].map(lambda s: spk_meta.get(s, {}).get('age', np.nan))\n", "\n", "def subgroup_alignment(df_sub, model_name):\n", " col = f'{model_name}_cosim'\n", " valid = df_sub.dropna(subset=[col, 'p_same_full'])\n", " if len(valid) < 30:\n", " return {'r': np.nan, 'n': len(valid)}\n", " r, _ = pearsonr(valid[col], valid['p_same_full'])\n", " return {'r': r, 'n': len(valid)}\n", "\n", "# ---- By gender ----\n", "gender_results = []\n", "for model_name in available_models:\n", " for g, glabel in [(1, 'Female'), (2, 'Male')]:\n", " sub = df[df['ref_gender'] == g]\n", " res = subgroup_alignment(sub, model_name)\n", " gender_results.append({'model': model_name, 'gender': glabel, **res})\n", "gender_df = pd.DataFrame(gender_results)\n", "gender_pivot = gender_df.pivot(index='model', columns='gender', values='r').round(4)\n", "gender_pivot['disparity'] = (gender_pivot['Male'] - gender_pivot['Female']).abs()\n", "gender_pivot = gender_pivot.sort_values('disparity', ascending=False)\n", "gender_pivot.index = [DISPLAY_NAME[m] for m in gender_pivot.index]\n", "print('Alignment (Pearson r) by speaker gender:')\n", "print(gender_pivot.to_string())\n", "\n", "# ---- By sociophonetic group ----\n", "print('\\n\\nAlignment (Pearson r) by sociophonetic group:')\n", "group_results = []\n", "for model_name in available_models:\n", " for g in sorted(df['ref_group'].dropna().unique()):\n", " sub = df[df['ref_group'] == g]\n", " res = subgroup_alignment(sub, model_name)\n", " group_results.append({'model': model_name, 'group': int(g), **res})\n", "group_df = pd.DataFrame(group_results)\n", "group_pivot = group_df.pivot(index='model', columns='group', values='r').round(4)\n", "group_pivot['max-min'] = group_pivot.max(axis=1) - group_pivot.min(axis=1)\n", "group_pivot = group_pivot.sort_values('max-min', ascending=False)\n", "group_pivot.index = [DISPLAY_NAME[m] for m in group_pivot.index]\n", "print(group_pivot.to_string())\n", "\n", "# Visualize\n", "fig, axes = plt.subplots(1, 2, figsize=(16, 5))\n", "sns.heatmap(gender_pivot[['Female', 'Male']], annot=True, fmt='.3f', cmap='RdYlGn',\n", " vmin=0, vmax=0.7, ax=axes[0], cbar_kws={'label': 'Pearson r'})\n", "\n", "group_cols = [c for c in group_pivot.columns if c != 'max-min']\n", "sns.heatmap(group_pivot[group_cols], annot=True, fmt='.3f', cmap='RdYlGn',\n", " vmin=0, vmax=0.7, ax=axes[1], cbar_kws={'label': 'Pearson r'})\n", "plt.tight_layout()\n", "plt.savefig('manuscript/figures/fairness_heatmap_1.png', dpi=200, bbox_inches='tight')\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "sec13_header", "metadata": {}, "source": [ "---\n", "## Section 13: SSL Layer-Wise Alignment (Sensitivity Analysis)\n", "\n", "**Purpose of this section.** The main benchmark already 
uses **nested-CV best-layer selection** for SSL models' cosine similarity (Section 2). This section provides the per-layer diagnostic: for each SSL model, we show how alignment varies across all transformer layers, making the layer-selection behavior transparent and interpretable.\n", "\n", "**What this shows:**\n", "1. **Per-layer curves**: Pearson r against P(same) as a function of layer index, for all five layered models (the four SSL models plus Whisper).\n", "2. **Best-layer position**: which layer the nested-CV procedure selects, and how consistent it is across folds.\n", "3. **Last-layer comparison**: the out-of-the-box (`last_hidden_state`) alignment that the main benchmark would have reported without layer selection.\n", "\n", "**Key methodological disclosure.** The r values reported here are computed using ALL data to fit the per-layer curves (for visualization). The main benchmark's SSL cosine similarities in Section 2 use strictly nested speaker-CV: each pair's cosine similarity comes from the layer chosen on training-fold speakers (excluding that pair's speaker). The nested-CV numbers are slightly lower than the all-data numbers here (small overfitting correction), but the per-layer shape is nearly identical across folds." ] }, { "cell_type": "code", "execution_count": null, "id": "sec13_code", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:21.484092Z", "iopub.status.busy": "2026-04-22T16:24:21.483805Z", "iopub.status.idle": "2026-04-22T16:24:21.501779Z", "shell.execute_reply": "2026-04-22T16:24:21.499697Z" } }, "outputs": [], "source": [ "# Layer-wise alignment for SSL models (reuses layer_embs from Section 2; cached)\n", "\n", "SSL_LAYER_MODELS = ['wav2vec2', 'hubert', 'wavlm', 'xlsr', 'whisper']\n", "\n", "if 'layer_embs' not in dir() or not layer_embs:\n", " # Fallback: load if not already in memory (should not happen in normal flow)\n", " print('layer_embs not loaded; reloading from disk...')\n", " layer_embs = {}\n", " for m in SSL_LAYER_MODELS:\n", " p = LAYER_EMB_DIR / f'{m}.npz'\n", " if p.exists():\n", " data = np.load(p, allow_pickle=True)\n", " layer_embs[m] = {k: data[k] for k in data.files}\n", "else:\n", " for m in SSL_LAYER_MODELS:\n", " if m in layer_embs:\n", " sample = next(iter(layer_embs[m].values()))\n", " print(f'Reused {m}_layers from Section 2: {len(layer_embs[m])} clips, shape {sample.shape}')\n", "\n", "def _compute_layer_alignment():\n", " result = {}\n", " for model_name, emb_dict in layer_embs.items():\n", " ref_keys = [f'{ref}R' for ref in df['reference']]\n", " stim_keys = df['id'].tolist()\n", " sample = next(iter(emb_dict.values()))\n", " n_layers, dim = sample.shape\n", " n_pairs = len(df)\n", " \n", " ref_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " stim_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " valid_mask = np.zeros(n_pairs, dtype=bool)\n", " for i, (rk, sk) in enumerate(zip(ref_keys, stim_keys)):\n", " if rk in emb_dict and sk in emb_dict:\n", " ref_arr[i] = emb_dict[rk]\n", " stim_arr[i] = emb_dict[sk]\n", " valid_mask[i] = True\n", " ref_v = ref_arr[valid_mask]\n", " stim_v = stim_arr[valid_mask]\n", " ref_n = ref_v / (np.linalg.norm(ref_v, axis=2, keepdims=True) + 1e-10)\n", " stim_n = stim_v / (np.linalg.norm(stim_v, axis=2, keepdims=True) + 1e-10)\n", " cosims = np.sum(ref_n * stim_n, axis=2) # (N, L)\n", " y = df.loc[valid_mask, 'p_same_full'].values\n", " finite = np.isfinite(y)\n", " layer_r = []\n", " for l in range(n_layers):\n", " xl = cosims[:, l]\n", " m_fin = np.isfinite(xl) & finite\n", " if m_fin.sum() < 10:\n", " 
{ "cell_type": "code", "execution_count": null, "id": "sec13_code", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:21.484092Z", "iopub.status.busy": "2026-04-22T16:24:21.483805Z", "iopub.status.idle": "2026-04-22T16:24:21.501779Z", "shell.execute_reply": "2026-04-22T16:24:21.499697Z" } }, "outputs": [], "source": [ "# Layer-wise alignment for the four SSL models plus Whisper (reuses layer_embs from Section 2; cached)\n", "\n", "SSL_LAYER_MODELS = ['wav2vec2', 'hubert', 'wavlm', 'xlsr', 'whisper']\n", "\n", "if 'layer_embs' not in globals() or not layer_embs:\n", " # Fallback: load if not already in memory (should not happen in normal flow)\n", " print('layer_embs not loaded; reloading from disk...')\n", " layer_embs = {}\n", " for m in SSL_LAYER_MODELS:\n", " p = LAYER_EMB_DIR / f'{m}.npz'\n", " if p.exists():\n", " data = np.load(p, allow_pickle=True)\n", " layer_embs[m] = {k: data[k] for k in data.files}\n", "else:\n", " for m in SSL_LAYER_MODELS:\n", " if m in layer_embs:\n", " sample = next(iter(layer_embs[m].values()))\n", " print(f'Reused {m}_layers from Section 2: {len(layer_embs[m])} clips, shape {sample.shape}')\n", "\n", "def _compute_layer_alignment():\n", " result = {}\n", " for model_name, emb_dict in layer_embs.items():\n", " ref_keys = [f'{ref}R' for ref in df['reference']]\n", " stim_keys = df['id'].tolist()\n", " sample = next(iter(emb_dict.values()))\n", " n_layers, dim = sample.shape\n", " n_pairs = len(df)\n", " \n", " ref_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " stim_arr = np.full((n_pairs, n_layers, dim), np.nan)\n", " valid_mask = np.zeros(n_pairs, dtype=bool)\n", " for i, (rk, sk) in enumerate(zip(ref_keys, stim_keys)):\n", " if rk in emb_dict and sk in emb_dict:\n", " ref_arr[i] = emb_dict[rk]\n", " stim_arr[i] = emb_dict[sk]\n", " valid_mask[i] = True\n", " ref_v = ref_arr[valid_mask]\n", " stim_v = stim_arr[valid_mask]\n", " ref_n = ref_v / (np.linalg.norm(ref_v, axis=2, keepdims=True) + 1e-10)\n", " stim_n = stim_v / (np.linalg.norm(stim_v, axis=2, keepdims=True) + 1e-10)\n", " cosims = np.sum(ref_n * stim_n, axis=2) # (N, L)\n", " y = df.loc[valid_mask, 'p_same_full'].values\n", " finite = np.isfinite(y)\n", " layer_r = []\n", " for l in range(n_layers):\n", " xl = cosims[:, l]\n", " m_fin = np.isfinite(xl) & finite\n", " if m_fin.sum() < 10:\n", " layer_r.append(np.nan)\n", " else:\n", " r, _ = pearsonr(xl[m_fin], y[m_fin])\n", " layer_r.append(r)\n", " result[model_name] = np.array(layer_r)\n", " return result\n", "\n", "layer_alignment = cached('layer_alignment', _compute_layer_alignment,\n", " 'Per-layer Pearson r for SSL models + Whisper')\n", "\n", "for model_name, layer_r in layer_alignment.items():\n", " n_layers = len(layer_r)\n", " best_l = int(np.nanargmax(layer_r))\n", " last_l = n_layers - 1\n", " print(f'\\n{model_name} ({n_layers} layers):')\n", " print(f' per-layer r: {[round(x, 3) for x in layer_r]}')\n", " print(f' best layer: {best_l} (r={layer_r[best_l]:.4f})')\n", " print(f' last layer (L={last_l}, naive out-of-the-box baseline; NOT used in main benchmark): r={layer_r[last_l]:.4f}')\n", " print(f' improvement from layer selection: {layer_r[best_l] - layer_r[last_l]:+.4f}')" ] }, { "cell_type": "code", "execution_count": null, "id": "sec13_plot", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:21.505102Z", "iopub.status.busy": "2026-04-22T16:24:21.504843Z", "iopub.status.idle": "2026-04-22T16:24:21.984337Z", "shell.execute_reply": "2026-04-22T16:24:21.982008Z" } }, "outputs": [], "source": [ "# Plot: Pearson r vs normalized layer index for the four SSL models.\n", "# Whisper is excluded -- it is weakly supervised, not SSL.\n", "SSL_MODELS = ['wav2vec2', 'hubert', 'wavlm', 'xlsr']\n", "SSL_LABEL = {'wav2vec2': 'wav2vec 2.0', 'hubert': 'HuBERT',\n", " 'wavlm': 'WavLM', 'xlsr': 'XLS-R'}\n", "SSL_COLOR = {'wav2vec2': '#E74C3C', 'hubert': '#3498DB',\n", " 'wavlm': '#9B59B6', 'xlsr': '#16A085'}\n", "\n", "fig, ax = plt.subplots(figsize=(9, 5))\n", "\n", "# Reference line: best supervised model (resemblyzer)\n", "REF_MODEL = 'resemblyzer'\n", "if REF_MODEL in available_models:\n", " r_sup = compute_correlations(df, REF_MODEL, 'p_same_full')['pearson_r']\n", " ax.axhline(r_sup, color='#555', alpha=0.6, linestyle='--', linewidth=1.3, zorder=1)\n", " ax.text(0.99, r_sup + 0.012,\n", " f'best supervised: {REF_MODEL} (r = {r_sup:.3f})',\n", " fontsize=9, color='#333', ha='right', va='bottom')\n", "\n", "for m in SSL_MODELS:\n", " if m not in layer_alignment:\n", " continue\n", " r_vec = layer_alignment[m]\n", " layers_norm = np.linspace(0, 1, len(r_vec))\n", " ax.plot(layers_norm, r_vec, 'o-', color=SSL_COLOR[m],\n", " label=SSL_LABEL[m], linewidth=2.2, markersize=5.5, alpha=0.9)\n",
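" # star = best layer of the ALL-DATA curve (visualization only); the main\n", " # benchmark's nested-CV choice is made per training fold and can differ slightly\n",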
" best_l = int(np.nanargmax(r_vec))\n", " ax.scatter([layers_norm[best_l]], [r_vec[best_l]], color=SSL_COLOR[m],\n", " s=240, marker='*', edgecolor='black', linewidth=1.4, zorder=10)\n", "\n", "ax.set_xlabel('Normalized layer index (0 = first transformer layer, 1 = last)', fontsize=11)\n", "ax.set_ylabel('Pearson r with human P(same)', fontsize=11)\n", "ax.legend(title='SSL model', loc='lower left', frameon=True, framealpha=0.9, fontsize=10)\n", "ax.set_ylim(-0.05, 0.75)\n", "ax.set_xlim(-0.03, 1.03)\n", "for spine in ('top', 'right'):\n", " ax.spines[spine].set_visible(False)\n", "plt.tight_layout()\n", "plt.savefig('manuscript/figures/layer_alignment.png', dpi=200, bbox_inches='tight')\n", "plt.show()\n", "\n", "# Summary table (still include Whisper for completeness in the printout)\n", "summary_layer = []\n", "for m, r_vec in layer_alignment.items():\n", " best_l = int(np.nanargmax(r_vec))\n", " last_l = len(r_vec) - 1\n", " summary_layer.append({\n", " 'model': m,\n", " 'n_layers': len(r_vec),\n", " 'r_last_layer (sensitivity only)': r_vec[last_l],\n", " 'r_best_layer': r_vec[best_l],\n", " 'best_layer_index': best_l,\n", " 'best_layer_norm': round(best_l / (len(r_vec) - 1), 3) if len(r_vec) > 1 else 0,\n", " 'improvement': r_vec[best_l] - r_vec[last_l]\n", " })\n", "layer_summary_df = pd.DataFrame(summary_layer)\n", "print('\\nLast-layer vs best-layer summary (SSL models + Whisper):')\n", "print(layer_summary_df.round(4).to_string(index=False))\n", "\n", "best_sup_r = max(compute_correlations(df, m, 'p_same_full')['pearson_r']\n", " for m in available_models if MODEL_INFO[m]['type'] == 'Supervised')\n", "# SSL-only maxima: exclude Whisper, which is weakly supervised\n", "best_ssl_last_r = max(layer_alignment[m][-1] for m in SSL_MODELS if m in layer_alignment)\n", "best_ssl_best_r = max(np.nanmax(layer_alignment[m]) for m in SSL_MODELS if m in layer_alignment)\n", "print(f'\\nBest supervised model: r = {best_sup_r:.4f}')\n", "print(f'Best SSL (last layer): r = {best_ssl_last_r:.4f} (gap to supervised: {best_sup_r - best_ssl_last_r:.4f})')\n", "print(f'Best SSL (best layer): r = {best_ssl_best_r:.4f} (gap to supervised: {best_sup_r - best_ssl_best_r:.4f})')\n" ] }, { "cell_type": "markdown", "id": "part_VI_divider", "metadata": {}, "source": [ "---\n", "# Part VI \u2014 Summary\n" ] }, { "cell_type": "markdown", "id": "cell_040", "metadata": {}, "source": [ "---\n", "## Section 14: Summary & Comparison Table\n" ] },
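{ "cell_type": "markdown", "id": "sec14_ceiling_md", "metadata": {}, "source": [ "**Illustrative sketch (not the release computation).** The grand table below normalizes each model's $R^2$ by the noise ceiling: the Spearman-Brown-corrected split-half reliability of the human consensus, $r_{sb} = 2r_{half}/(1+r_{half})$. Section 1 computes the released value (`sb_reliability`, used directly below as `R2_CEILING`). The next cell sketches that computation under an assumed long-format schema with columns `pair_id`, `participant`, `response` -- illustrative names, not necessarily those in the release files." ] }, { "cell_type": "code", "execution_count": null, "id": "sec14_ceiling_code", "metadata": {}, "outputs": [], "source": [ "# Hedged sketch: split-half noise ceiling with Spearman-Brown correction.\n", "# Assumes a long-format judgments table with columns\n", "# ['pair_id', 'participant', 'response'] -- illustrative schema only.\n", "\n", "def sketch_split_half_ceiling(judgments, n_splits=50, seed=0):\n", " rng = np.random.default_rng(seed)\n", " ppl = judgments['participant'].unique()\n", " r_halves = []\n", " for _ in range(n_splits):\n", " # Random participant split; correlate the two halves' per-pair means\n", " half = rng.choice(ppl, size=len(ppl) // 2, replace=False)\n", " in_half = judgments['participant'].isin(half)\n", " pa = judgments[in_half].groupby('pair_id')['response'].mean()\n", " pb = judgments[~in_half].groupby('pair_id')['response'].mean()\n", " common = pa.index.intersection(pb.index)\n", " r_halves.append(pearsonr(pa.loc[common], pb.loc[common])[0])\n", " r_half = float(np.mean(r_halves))\n", " # Spearman-Brown: reliability of the full sample from the half-sample r\n", " return 2 * r_half / (1 + r_half)\n", "\n", "# The table below uses the Section 1 value directly:\n", "# R2_CEILING = sb_reliability, and reports R^2 / R2_CEILING per model." ] },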
{ "cell_type": "code", "execution_count": null, "id": "cell_041", "metadata": { "execution": { "iopub.execute_input": "2026-04-22T16:24:21.987254Z", "iopub.status.busy": "2026-04-22T16:24:21.986975Z", "iopub.status.idle": "2026-04-22T16:24:22.090071Z", "shell.execute_reply": "2026-04-22T16:24:22.088245Z" } }, "outputs": [], "source": [ "# Grand comparison table with noise-ceiling normalization\n", "summary_rows = []\n", "\n", "# Noise ceiling: R^2 ceiling = Spearman-Brown-corrected split-half reliability\n", "R2_CEILING = sb_reliability # ~ 0.69\n", "\n", "# Map of SSL models to their last-layer r (for the sensitivity column)\n", "last_layer_r = {}\n", "for model_name in available_models:\n", " last_col = f'{model_name}_cosim_lastlayer'\n", " if last_col in df.columns:\n", " valid = df.dropna(subset=[last_col, 'p_same_full'])\n", " if len(valid) > 10:\n", " r, _ = pearsonr(valid[last_col], valid['p_same_full'])\n", " last_layer_r[model_name] = r\n", "\n", "for model_name in available_models:\n", " row = {\n", " 'Model': DISPLAY_NAME.get(model_name, model_name),\n", " 'Type': MODEL_INFO[model_name]['type'],\n", " 'Dim': MODEL_INFO[model_name]['dim'],\n", " }\n", "\n", " # Task 1: Pearson r (best-layer for SSL via nested-CV; native output for supervised)\n", " corrs = compute_correlations(df, model_name, 'p_same_full')\n", " row['Pearson r'] = corrs['pearson_r']\n", " row['R^2'] = corrs['r_squared']\n", " row['R^2 / ceiling'] = corrs['r_squared'] / R2_CEILING if R2_CEILING > 0 else np.nan\n", "\n", " # Task 2: AUC, raw and calibrated ECE\n", " row['AUC'] = auc_results.get(model_name, np.nan)\n", " row['ECE_raw'] = ece_raw_results.get(model_name, np.nan)\n", " row['ECE_cal'] = ece_results.get(model_name, np.nan)\n", "\n", " # Task 3: speaker-level RDM RSA\n", " rsa_row = rsa_df[rsa_df['model'] == model_name]\n", " row['SpkRDM rho'] = rsa_row['rsa_rho'].values[0] if len(rsa_row) > 0 else np.nan\n", "\n", " # Section 8: Mahalanobis (note: uses last-layer embeddings for SSL)\n", " mahal_row = mahal_df[mahal_df['model'] == model_name] if len(mahal_df) > 0 else pd.DataFrame()\n", " row['Mahal R^2'] = mahal_row['R2_mahalanobis'].values[0] if len(mahal_row) > 0 else np.nan\n", "\n", " # Sensitivity: last-layer r for SSL models (empty for supervised, which have no layer choice)\n", " row['r (last-layer SSL)'] = last_layer_r.get(model_name, np.nan)\n", "\n", " # Significance: which other models this model is NOT significantly different from\n", " # under BH FDR (q > 0.05) on the Pearson r column. Reads q_matrix from the\n", " # paired-bootstrap cell. Empty string means model is significantly different from all others.\n", " if 'q_matrix' in globals():\n", " m_idx = available_models.index(model_name)\n", " ties = [DISPLAY_NAME.get(available_models[j], available_models[j])\n", " for j in range(len(available_models))\n", " if j != m_idx and q_matrix[m_idx, j] > 0.05]\n", " row['r tied with (BH q>.05)'] = ', '.join(ties) if ties else ''\n", "\n", " summary_rows.append(row)\n", "\n", "summary = pd.DataFrame(summary_rows).sort_values('Pearson r', ascending=False)\n", "print('=' * 115)\n", "print('GRAND COMPARISON TABLE')\n", "print()\n", "print(' Pearson r: Main benchmark protocol (best-layer via nested speaker-CV for SSL, native output for supervised)')\n", "print(' R^2 / ceiling: Noise-ceiling-normalized R^2 (ceiling = {:.3f} from split-half reliability)'.format(R2_CEILING))\n", "print(' AUC: Area under ROC curve against human majority vote (Section 4)')\n", "print(' ECE_raw / ECE_cal: Raw vs Platt-calibrated Expected Calibration Error (Section 4)')\n", "print(' SpkRDM rho: Spearman correlation of speaker-pair dissimilarity matrices (Section 5)')\n", "print(' Mahal R^2: Diagonal Mahalanobis R^2 on native embeddings (Section 8)')\n", "print(' r (last-layer SSL): Pearson r using last_hidden_state (out-of-the-box baseline; sensitivity only)')\n", "print(' r tied with (BH q>.05): Other models not significantly different on Pearson r (BH FDR; empty = differs from all)')\n", "print('=' * 115)\n", "print(summary.round(4).to_string(index=False))\n" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.2" } }, "nbformat": 4, "nbformat_minor": 4 }