# SONAR weights

Pretrained checkpoints for SONAR: Spectral-Contrastive Audio Residuals for Generalizable Deepfake Detection (ICML 2026).

| File | ITW EER | Architecture | License |
|---|---|---|---|
| `xlsr2_300m.pt` | — | XLSR-300M backbone (fairseq; derivative of facebookresearch/fairseq) | CC-BY-NC-4.0 (upstream) |
| `baseline_xlsr_aasist.pth` | ~10.5% | Single XLSR + AASIST baseline (paper Table 1, row "XLSR+AASIST") | CC-BY-NC-4.0 |
| `sonar_full_xlsr_aasist_eer6.pth` | 6.0% | SONAR-Full: dual XLSR + RFE + cross-attention + AASIST + JS-alignment loss; matches `guided_model.GuidedModel` | CC-BY-NC-4.0 |
| `sonar_finetune_xlsr_mamba_eer5p5.pth` | 5.5% | SONAR-Finetune: frozen XLSR-Mamba content branch + RFE/NFE + cross-attention + Conformer head + JS-alignment loss | CC-BY-NC-4.0 |
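The ITW EER figures above are equal error rates on the In-the-Wild benchmark. For reference, EER is the operating point where the false-rejection rate (bona fide rejected) equals the false-acceptance rate (spoof accepted). A generic sketch of computing it from per-utterance scores, assuming the convention that higher scores mean bona fide (this is not SONAR's evaluation code):

```python
import numpy as np

def compute_eer(bonafide_scores: np.ndarray, spoof_scores: np.ndarray) -> float:
    """Equal error rate: the threshold where FRR == FAR (higher score = bona fide)."""
    scores = np.concatenate([bonafide_scores, spoof_scores])
    labels = np.concatenate([np.ones_like(bonafide_scores),
                             np.zeros_like(spoof_scores)])
    order = np.argsort(scores)          # ascending: lowest scores rejected first
    labels = labels[order]
    # Sweeping the threshold upward through the sorted scores:
    frr = np.cumsum(labels) / labels.sum()                  # bona fide rejected so far
    far = 1.0 - np.cumsum(1 - labels) / (1 - labels).sum()  # spoof still accepted
    idx = np.argmin(np.abs(frr - far))                      # crossing point
    return float((frr[idx] + far[idx]) / 2)
```

For example, two bona fide trials scoring above two spoof trials gives an EER of 0.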

Code: https://github.com/idonithid/SONAR-Audio-DF-Detection

Project page: https://idonithid.github.io/SONAR-Audio-DF-Detection/

## Loading

```python
import os
from argparse import Namespace

import torch
from huggingface_hub import hf_hub_download

from sonar.guided_model import GuidedModel

# Fetch the SONAR checkpoint and the XLSR backbone it depends on.
ckpt = hf_hub_download(repo_id="idonithid/SONAR-weights",
                       filename="sonar_full_xlsr_aasist_eer6.pth")
xlsr = hf_hub_download(repo_id="idonithid/SONAR-weights",
                       filename="xlsr2_300m.pt")

# Point the model at the XLSR backbone before constructing it.
os.environ["SONAR_XLSR_CKPT"] = xlsr

model = GuidedModel(Namespace(algo=4, batch_size=1, device="cuda"), "cuda").cuda()
# strict=False ignores keys not present in the checkpoint (e.g. the frozen backbone).
model.load_state_dict(torch.load(ckpt, map_location="cuda"), strict=False)
model.eval()
```
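The loaded model consumes raw waveforms. SONAR's exact input length is not documented here (check the repository's data loader), but AASIST-style pipelines conventionally repeat-pad or trim each 16 kHz utterance to a fixed 64600-sample window (~4 s). A sketch under that assumption:

```python
import torch

def pad_or_trim(wave: torch.Tensor, target_len: int = 64600) -> torch.Tensor:
    """Repeat-pad or trim a mono 1-D waveform to a fixed length.

    64600 samples ~= 4 s at 16 kHz, the usual AASIST input size; SONAR's
    actual preprocessing may differ -- verify against the repository.
    """
    n = wave.shape[0]
    if n >= target_len:
        return wave[:target_len]
    reps = (target_len + n - 1) // n      # ceil-divide, then cut to exact length
    return wave.repeat(reps)[:target_len]

# Hypothetical usage (output format is an assumption, not documented here):
# with torch.no_grad():
#     scores = model(pad_or_trim(wave).unsqueeze(0).cuda())  # (1, 64600) batch
```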

## Citation

```bibtex
@inproceedings{hidekel2026sonar,
  title     = {{SONAR}: Spectral-Contrastive Audio Residuals for Generalizable Deepfake Detection},
  author    = {Hidekel, Ido Nitzan and Lifshitz, Gal and Cohen, Khen and Raviv, Dan},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning (ICML)},
  year      = {2026}
}
```