SNAP: Speaker Nulling for Artifact Projection in Speech Deepfake Detection
Abstract
A speaker-nulling framework called SNAP is proposed to reduce speaker entanglement in speech encoders, enabling detectors to focus on artifact-related patterns for improved deepfake detection performance.
Recent advancements in text-to-speech technologies enable the generation of high-fidelity synthetic speech that is nearly indistinguishable from real human voices. While recent studies show the efficacy of self-supervised learning (SSL)-based speech encoders for deepfake detection, these models struggle to generalize across unseen speakers. Our quantitative analysis suggests that these encoder representations are substantially influenced by speaker information, causing detectors to exploit speaker-specific correlations rather than artifact-related cues. We call this phenomenon speaker entanglement. To mitigate this reliance, we introduce SNAP, a speaker-nulling framework. We estimate a speaker subspace and apply orthogonal projection to suppress speaker-dependent components, isolating synthesis artifacts within the residual features. By reducing speaker entanglement, SNAP encourages detectors to focus on artifact-related patterns, leading to state-of-the-art performance.
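The projection step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it assumes the speaker subspace is estimated by PCA over speaker-dependent vectors (e.g., mean-pooled encoder features per speaker), and the function names, the rank `k`, and the input shapes are all hypothetical choices for this sketch.

```python
import numpy as np

def estimate_speaker_subspace(speaker_embeddings: np.ndarray, k: int) -> np.ndarray:
    """Estimate a rank-k speaker subspace via PCA (illustrative choice).

    speaker_embeddings: (n, d) speaker-dependent vectors, one per speaker.
    Returns U: (d, k) orthonormal basis spanning the speaker subspace.
    """
    # Center, then take the top-k right singular vectors as principal directions.
    X = speaker_embeddings - speaker_embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k].T  # (d, k)

def null_speaker_components(features: np.ndarray, U: np.ndarray) -> np.ndarray:
    """Suppress speaker-dependent components by orthogonal projection.

    features: (t, d) frame-level encoder features.
    Computes features @ (I - U U^T), i.e., the residual orthogonal to
    the estimated speaker subspace, where artifact cues are assumed to live.
    """
    return features - features @ U @ U.T
```

Because `U` has orthonormal columns, the residual has exactly zero component along every estimated speaker direction, while everything orthogonal to the subspace passes through unchanged.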
Community
In our paper, we demonstrate that the representations of SSL-based speech encoders, such as WavLM, are heavily dominated by speaker identity rather than by the synthesis artifacts essential for deepfake detection.
Building on this finding, we show that a simple method for nullifying speaker-dependent components isolates synthesis artifacts and achieves state-of-the-art deepfake detection performance.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Unsupervised Domain Adaptation for Audio Deepfake Detection with Modular Statistical Transformations (2026)
- Assessing the Impact of Speaker Identity in Speech Spoofing Detection (2026)
- ZeSTA: Zero-Shot TTS Augmentation with Domain-Conditioned Training for Data-Efficient Personalized Speech Synthesis (2026)
- A SUPERB-Style Benchmark of Self-Supervised Speech Models for Audio Deepfake Detection (2026)
- Audio Deepfake Detection in the Age of Advanced Text-to-Speech models (2026)
- What Counts as Real? Speech Restoration and Voice Quality Conversion Pose New Challenges to Deepfake Detection (2026)
- HierCon: Hierarchical Contrastive Attention for Audio Deepfake Detection (2026)
