NearID: Identity Representation Learning via Near-identity Distractors
Abstract
When evaluating identity-focused tasks such as personalized generation and image editing, existing vision encoders entangle object identity with background context, leading to unreliable representations and metrics. We introduce the first principled framework to address this vulnerability using Near-identity (NearID) distractors, where semantically similar but distinct instances are placed on the exact same background as a reference image, eliminating contextual shortcuts and isolating identity as the sole discriminative signal. Based on this principle, we present the NearID dataset (19K identities, 316K matched-context distractors) together with a strict margin-based evaluation protocol. Under this setting, pre-trained encoders perform poorly: their Sample Success Rate (SSR), a strict margin-based identity-discrimination metric, drops as low as 30.7%, and they often rank distractors above true cross-view matches. We address this by learning identity-aware representations on a frozen backbone using a two-tier contrastive objective that enforces the hierarchy: same identity > NearID distractor > random negative. This improves SSR to 99.2%, enhances part-level discrimination by 28.0%, and yields stronger alignment with human judgments on DreamBench++, a human-aligned benchmark for personalization. Project page: https://gorluxor.github.io/NearID/
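The two-tier objective described above can be sketched as a pair of margin hinge losses over cosine similarities, one per tier of the hierarchy. This is a minimal illustration, not the paper's released implementation; the margin values and the assumption that embeddings come from a frozen backbone with a trainable head are illustrative.

```python
import torch
import torch.nn.functional as F

def two_tier_contrastive_loss(anchor, positive, near_distractor, random_neg,
                              margin_near=0.1, margin_rand=0.2):
    """Sketch of a two-tier margin loss enforcing
    sim(anchor, positive) > sim(anchor, near_distractor) > sim(anchor, random_neg).

    All inputs are (B, D) embedding batches; margins are hypothetical,
    not values from the paper.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(near_distractor, dim=-1)
    r = F.normalize(random_neg, dim=-1)
    sim_ap = (a * p).sum(-1)   # same identity, different view
    sim_an = (a * n).sum(-1)   # NearID distractor, same background
    sim_ar = (a * r).sum(-1)   # random negative
    # Tier 1: the true cross-view match must beat the matched-context distractor.
    tier1 = F.relu(margin_near - (sim_ap - sim_an))
    # Tier 2: the near distractor must still rank above a random negative.
    tier2 = F.relu(margin_rand - (sim_an - sim_ar))
    return (tier1 + tier2).mean()
```

Because the loss is zero only when both ranking constraints hold with their margins, it directly penalizes the failure mode the benchmark measures: distractors scoring above true matches.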
Community
Training code, inference code, and datasets are released. The model is available at https://huggingface.co/Aleksandar/nearid-siglip2.