SNAP: Speaker Nulling for Artifact Projection in Speech Deepfake Detection
Abstract
Recent advances in text-to-speech technology enable the generation of high-fidelity synthetic speech that is nearly indistinguishable from real human voices. While recent studies show the efficacy of self-supervised learning-based speech encoders for deepfake detection, these models struggle to generalize across unseen speakers. Our quantitative analysis suggests that these encoder representations are substantially influenced by speaker information, causing detectors to exploit speaker-specific correlations rather than artifact-related cues. We call this phenomenon speaker entanglement. To mitigate this reliance, we introduce SNAP, a speaker-nulling framework. We estimate a speaker subspace and apply an orthogonal projection to suppress speaker-dependent components, isolating synthesis artifacts within the residual features. By reducing speaker entanglement, SNAP encourages detectors to focus on artifact-related patterns, leading to state-of-the-art performance.
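The subspace estimation and orthogonal projection described above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the function names, the use of PCA over per-utterance speaker embeddings to estimate the speaker subspace, and the subspace dimension `k` are all assumptions made for the example.

```python
# Minimal sketch of speaker nulling via orthogonal projection.
# Assumes `spk_embs` are per-utterance speaker embeddings used to
# estimate the speaker subspace, and `feats` are encoder features
# (e.g., WavLM hidden states); both are placeholders for illustration.
import numpy as np

def estimate_speaker_subspace(spk_embs: np.ndarray, k: int) -> np.ndarray:
    """Return an orthonormal basis U (d x k) spanning the top-k speaker
    directions, estimated here by PCA over mean-centered embeddings."""
    centered = spk_embs - spk_embs.mean(axis=0, keepdims=True)
    # Rows of vt are the principal directions of the (n x d) matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T  # shape (d, k), orthonormal columns

def snap_project(feats: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Suppress speaker-dependent components by projecting features onto
    the orthogonal complement of the speaker subspace:
        x_null = (I - U U^T) x
    `feats` has shape (..., d); `basis` has orthonormal columns (d, k)."""
    return feats - (feats @ basis) @ basis.T

# Usage sketch: null the speaker subspace in encoder features before
# feeding them to a deepfake detector head (random data as stand-ins).
rng = np.random.default_rng(0)
spk_embs = rng.normal(size=(1000, 768))  # placeholder speaker embeddings
feats = rng.normal(size=(200, 768))      # placeholder encoder features
U = estimate_speaker_subspace(spk_embs, k=32)
residual = snap_project(feats, U)
# The residual has (numerically) zero components along the nulled basis:
assert np.allclose(residual @ U, 0.0, atol=1e-8)
```

Because the residual lies in the orthogonal complement of the estimated speaker subspace, a detector trained on it cannot exploit the suppressed speaker directions and is pushed toward artifact-related cues.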
Community
In our paper, we demonstrate that SSL-based speech encoders, such as WavLM, are heavily dominated by speaker identity rather than by the synthesis artifacts essential for deepfake detection.
Building on this finding, we show that a simple method for nullifying speaker-dependent components isolates synthesis artifacts and achieves state-of-the-art deepfake detection performance.
