Visual Comparisons

Which shoe is more sporty?
Problem

Fine-grained visual comparisons require accounting for subtle visual differences specific to each comparison pair.

Status Quo: Learning a Global Ranking Function
[Parikh & Grauman 11, Datta et al. 11, Li et al. 12, Kovashka et al. 12, ...]
o fails to account for subtle differences among closely related images
o each comparison pair exhibits unique visual cues/rationales
o visual comparisons need not be transitive
Our Approach

We propose a local learning approach for fine-grained comparisons.
o learn attribute-specific distance metrics
o identify the top K analogous neighboring pairs w.r.t. each novel pair
o train a local ranking function tailored to the neighborhood statistics

Key Idea: having the right data > having more data

Analogous Neighboring Pairs

Detect analogous pairs based on individual similarity & paired contrast.
o select neighboring pairs that accentuate fine-grained differences
o take the product of the pairwise distances of the individual members
o i.e. a training pair is highly analogous if both query-training couplings are similar
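The selection and local-training steps above can be sketched as follows. This is a minimal illustration, not the authors' implementation: function names are hypothetical, plain Euclidean distance stands in for the learned attribute metric, and a simple perceptron stands in for the ranking function actually trained in practice (e.g. a RankSVM on pair difference vectors).

```python
import numpy as np

def top_k_analogous(query_pair, train_pairs, k=100, dist=None):
    """Pick the K training pairs most analogous to the query pair.

    A training pair (a, b) is highly analogous to the query (q1, q2)
    when both query-training couplings are similar, so each pair is
    scored by the product of member-wise distances (smaller = better).
    """
    if dist is None:
        # Stand-in for the learned per-attribute metric.
        dist = lambda x, y: np.linalg.norm(x - y)
    q1, q2 = query_pair
    scores = [min(dist(q1, a) * dist(q2, b),   # try both couplings,
                  dist(q1, b) * dist(q2, a))   # keep the better one
              for a, b in train_pairs]
    return np.argsort(scores)[:k]

def train_local_ranker(diffs, epochs=50):
    """Learn w with w . (a - b) > 0 for every ordered pair a > b.

    `diffs` holds difference vectors a - b of the selected neighborhood
    pairs only; a perceptron is used here as a simple stand-in for a
    ranking SVM.
    """
    w = np.zeros_like(diffs[0])
    for _ in range(epochs):
        for d in diffs:
            if w @ d <= 0:
                w = w + d  # perceptron update on a violated pair
    return w
```

At test time, the novel pair (q1, q2) is ordered by the sign of w @ (q1 - q2) under its own locally trained ranker.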
Learned Attribute Distance

Learn a Mahalanobis metric per attribute (similarity computation).
o attribute similarity does not rely equally on each dimension of the feature space
o constraints → similar images should be close, dissimilar images far apart
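Concretely, the learned distance takes the Mahalanobis form d_M(x, y) = sqrt((x - y)^T M (x - y)), where M is a positive semi-definite matrix fit per attribute from the similar/dissimilar constraints. The sketch below (illustrative only; the metric-learning step itself is omitted, and M is hand-picked rather than learned) shows how a non-identity M reweights feature dimensions:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Learned attribute distance d_M(x, y) = sqrt((x-y)^T M (x-y)).

    M is a positive semi-definite matrix, so the metric can weight
    feature dimensions unequally; Euclidean distance is the special
    case M = I.
    """
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

# Toy illustration: down-weighting the second dimension makes a large
# gap there almost irrelevant to the attribute distance.
M = np.diag([1.0, 0.01])
x, y = np.array([0.0, 0.0]), np.array([1.0, 10.0])
d = mahalanobis(x, y, M)  # ~1.41, vs. Euclidean distance ~10.05
```

This is why two images far apart in raw feature space can still be close under the attribute metric, and vice versa.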
Observation: The nearest analogous pairs best suited for local learning need not be those closest in raw feature space.

UT Zappos50K Dataset

We introduce UT-Zap50K, a new large shoe dataset consisting of 50,025 catalog images from Zappos.com, with 4 relative attributes (open, pointy, sporty, comfort).
o high-confidence pairwise labels from mTurk workers
o coarse: 6,751 ordered labels + 4,612 "equal" labels
o fine-grained: 4,334 twice-labeled labels (no "equal" option)
Results: UT-Zap50K

o FG-LocalPair: our proposed fine-grained approach
o Global [Parikh & Grauman 11]: status quo of learning a single global ranking function per attribute
o RandPair: local approach with random neighbors
o RelTree [Li et al. 12]: non-linear relative attribute approach
o LocalPair: our approach w/o the learned metric

Accuracy Comparison (10 iterations @ K=100)
o coarser comparisons
o fine-grained comparisons
o accuracy for the 30 hardest test pairs (according to learned metrics)
Observation: We outperform all baselines, demonstrating a strong advantage at detecting subtle differences on the harder comparisons (~20% more accurate).

Results: PubFig & Scenes

We form supervision pairs using the category-wise comparisons → avg. 20,000 ordered labels / attribute.
o Public Figures Face (PubFig): 772 images w/ 11 attributes
o Outdoor Scene Recognition (OSR): 2,688 images w/ 6 attributes
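Expanding category-wise comparisons into image-level supervision can be sketched as below. This is an illustrative reading of that step, not the authors' code: names are hypothetical, and sampling/split details are omitted.

```python
import itertools

def pairs_from_category_orders(images_by_cat, cat_order):
    """Expand category-level attribute comparisons into image-level
    ordered pairs: every image from a stronger category outranks every
    image from a weaker one for that attribute.

    `cat_order` lists categories from weakest to strongest.
    """
    pairs = []
    for weaker, stronger in itertools.combinations(cat_order, 2):
        for a in images_by_cat[stronger]:
            for b in images_by_cat[weaker]:
                pairs.append((a, b))  # a exhibits more of the attribute
    return pairs
```

Each cross-category image pair inherits its categories' ordering, which is how a handful of category comparisons yields tens of thousands of ordered image pairs per attribute.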
Observation: We outperform the current state of the art on 2 popular relative attribute datasets. Our gains are especially dominant on localizable attributes due to the learned metrics.