Visual Comparisons

Which shoe is more sporty?

Problem: Fine-grained visual comparisons require accounting for subtle visual differences specific to each comparison pair.

Status Quo: Learning a Global Ranking Function
[Parikh & Grauman 11, Datta et al. 11, Li et al. 12, Kovashka et al. 12, ...]
o fails to account for subtle differences among closely related images
o each comparison pair exhibits unique visual cues/rationales
o visual comparisons need not be transitive

Our Approach
We propose a local learning approach for fine-grained comparisons.
o learn attribute-specific distance metrics
o identify the top K analogous neighboring pairs w.r.t. each novel pair
o train a local function tailored to the neighborhood statistics
Key Idea: having the right data > having more data

Analogous Neighboring Pairs
Detect analogous pairs based on individual similarity & paired contrast.
o select neighboring pairs that accentuate fine-grained differences
o take the product of the pairwise distances between individual members
o i.e., a training pair is highly analogous if both query-training couplings are similar

Learned Attribute Distance
Learn a Mahalanobis metric per attribute (for similarity computation).
o attribute similarity does not rely equally on each dimension of the feature space
o constrains similar images to be close and dissimilar images to be far
Observation: The nearest analogous pairs best suited for local learning need not be those closest in raw feature space.

UT Zappos50K Dataset
We introduce UT-Zap50K, a new large shoe dataset consisting of 50,025 catalog images from Zappos.com.
o 4 relative attributes (open, pointy, sporty, comfort)
o high-confidence pairwise labels from mTurk workers
o coarse: 6,751 ordered labels + 4,612 "equal" labels
o fine-grained: 4,334 twice-labeled pairs (no "equal" option)

Results: UT-Zap50K
o FG-LocalPair: our proposed fine-grained approach
o Global [Parikh & Grauman 11]: status quo of learning a single global ranking function per attribute
o RandPair: local approach with random neighbors
o RelTree [Li et al.
12]: non-linear relative attribute approach
o LocalPair: our approach without the learned metric
(10 iterations @ K=100)

Accuracy Comparison
o coarser comparisons
o fine-grained comparisons
o accuracy on the 30 hardest test pairs (according to the learned metrics)
Observation: We outperform all baselines, demonstrating a strong advantage in detecting subtle differences on the harder comparisons (~20% more).

Results: PubFig & Scenes
We form supervision pairs using the category-wise comparisons, averaging 20,000 ordered labels per attribute.
o Public Figures Face (PubFig): 772 images w/ 11 attributes
o Outdoor Scene Recognition (OSR): 2,688 images w/ 6 attributes
Observation: We outperform the current state of the art on 2 popular relative attribute datasets. Our gains are especially dominant on localizable attributes due to the learned metrics.
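As an illustrative sketch of the learned attribute distance: the code below trains only a diagonal Mahalanobis metric with a crude hinge loss (function names and the subgradient solver are our own simplifications, not the paper's formulation, which learns a full metric from similarity constraints).

```python
import numpy as np

def learn_diag_metric(sim_pairs, dis_pairs, dim, epochs=300, lr=0.01):
    """Learn a diagonal Mahalanobis metric per attribute:
    pull similar pairs close, push dissimilar pairs beyond
    a unit squared-distance margin (hinge loss)."""
    w = np.ones(dim)                       # diagonal of M, kept >= 0
    for _ in range(epochs):
        grad = np.zeros(dim)
        for x, y in sim_pairs:             # minimize d_M(x, y)^2
            grad += (x - y) ** 2
        for x, y in dis_pairs:             # hinge: want d_M(x, y)^2 >= 1
            if w @ (x - y) ** 2 < 1.0:
                grad -= (x - y) ** 2
        w = np.maximum(w - lr * grad, 0.0) # project onto valid metrics
    return np.diag(w)
```

The learned weights down-weight feature dimensions irrelevant to the attribute, reflecting the point above that attribute similarity does not rely equally on each dimension.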
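The analogous-pair selection can be sketched as follows, assuming a learned metric M is given (names are illustrative). Taking the product of member-wise distances keeps a training pair only when both query-training couplings are similar.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Learned attribute-specific distance: sqrt((x - y)^T M (x - y))."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def pair_distance(query_pair, train_pair, M):
    """Product of member-wise distances: small only if BOTH
    query-training couplings are similar (highly analogous)."""
    (q1, q2), (t1, t2) = query_pair, train_pair
    return mahalanobis_dist(q1, t1, M) * mahalanobis_dist(q2, t2, M)

def top_k_analogous(query_pair, train_pairs, M, K):
    """Indices of the K most analogous training pairs for this query."""
    dists = [pair_distance(query_pair, tp, M) for tp in train_pairs]
    return np.argsort(dists)[:K]
```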
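The local ranking step, training on only the top K analogous neighbors, reduces to classification on feature-difference vectors (a standard RankSVM-style reduction; this minimal subgradient solver is our own sketch, not the authors' solver).

```python
import numpy as np

def train_local_ranker(ordered_pairs, epochs=200, lr=0.1, lam=0.01):
    """Fit w so that w . (x_more - x_less) > 0 for every ordered pair
    in the local neighborhood, via hinge-loss subgradient descent."""
    diffs = np.array([more - less for more, less in ordered_pairs])
    w = np.zeros(diffs.shape[1])
    for _ in range(epochs):
        margins = diffs @ w
        # regularizer gradient minus the sum over margin violations
        grad = lam * w - diffs[margins < 1].sum(axis=0)
        w -= lr * grad
    return w

def compare(w, x1, x2):
    """Predict which image shows more of the attribute."""
    return "first" if w @ x1 > w @ x2 else "second"
```

Because the ranker is refit per novel pair on its analogous neighborhood, it can tailor the decision boundary to that neighborhood's statistics rather than committing to one global ranking.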