Introduction & Motivation

- Here, to help focus research efforts onto the hardest unsolved problems, and to bridge computer and human vision, we define a battery of 5 tests that measure the gap between human and machine performance in several dimensions.
- Cases where machines are superior motivate us to design new experiments to understand the mechanisms of human vision, and to reason about its failures. Cases where humans are better inspire computational researchers to learn from humans.
- In some applications (e.g., human-machine interaction or personal robots), perfect accuracy is not necessarily the goal; rather, having the same type of behavior (e.g., failing in cases where humans fail too) is favorable.

Test 1: Scene Recognition

Figure: A & B) Scene classification accuracy over the 6-, 8-, and 15-CAT datasets. Error bars represent the standard error of the mean over 10 runs. Naive chance is simply set by the size of the largest class. All models work well above chance level. C) Top: animal vs. non-animal (distractor images) classification. Bottom: classification of target images. 4-way classification is only over target scenes (and not distractors).

- A & B: We find that HOG, SSIM, texton, denseSIFT, LBP, and LBPHF outperform the other models (accuracy above 70%). We note that spatial feature integration (i.e., x_pyr for model x) enhances accuracies.
- C: Animal vs. non-animal: All models perform above 70%, except tiny image. Human accuracy here is about 80%. Interestingly, some models exceed human performance here.

Figure: Performances and correlations on the SUN dataset.

- SUN dataset: Models that performed well on the small datasets still rank on top (although they degrade heavily). The GIST model works well here (16.3%) but falls below the top contenders: HOG, texton, SSIM, denseSIFT, and LBP (or their variants). Models ranking at the bottom, in order, are tiny image, line hist, geo color, HMAX, and geo map8x8.
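The evaluation protocol shared by these tests (extract one global descriptor per image, train a classifier on n images per class, test on held-out images) can be sketched as below. This is a minimal stand-in on synthetic data: the "tiny image" baseline descriptor is from the study, but the nearest-centroid classifier replaces the SVM actually used, and the "scenes" are random patterns, not real images.

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_image_feature(img, size=8):
    """Block-average an image down to size x size and flatten it
    (a stand-in for the 'tiny image' baseline descriptor)."""
    h, w = img.shape
    bh, bw = h // size, w // size
    blocks = img[:bh * size, :bw * size].reshape(size, bh, size, bw)
    return blocks.mean(axis=(1, 3)).ravel()

# Synthetic "scenes": each class is a fixed random pattern plus noise.
n_classes, n_train, n_test = 6, 20, 10
prototypes = rng.normal(size=(n_classes, 64, 64))
sample = lambda c: prototypes[c] + 0.5 * rng.normal(size=(64, 64))

X_tr = np.array([tiny_image_feature(sample(c))
                 for c in range(n_classes) for _ in range(n_train)])
y_tr = np.repeat(np.arange(n_classes), n_train)
X_te = np.array([tiny_image_feature(sample(c))
                 for c in range(n_classes) for _ in range(n_test)])
y_te = np.repeat(np.arange(n_classes), n_test)

# Nearest-centroid classifier (a simple stand-in for the linear SVM).
centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in range(n_classes)])
dists = ((X_te[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
acc = (dists.argmin(axis=1) == y_te).mean()
print(f"accuracy: {acc:.2f}  (naive chance: {1 / n_classes:.2f})")
```

The same harness applies to any descriptor in the battery: swap tiny_image_feature for HOG, GIST, texton, etc., and the centroid rule for an SVM.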
On SUN, we randomly chose n = {1, 5, 10, 20, 50} images per class for training and 50 for testing.

Learned Lessons

1) Models outperform humans in rapid categorization tasks, indicating that the discriminative information is in place but humans do not have enough time to extract it. Models outperform humans on jumbled images and score relatively high in the absence of (or with less) global information.
2) We find that some models and edge detection methods are more efficient on line drawings and edge maps. Our analysis helps objectively assess the power of edge detection algorithms to extract meaningful structural features for classification, which hints toward new directions.
3) While models are far from human performance on object and scene recognition over natural scenes, even classic models show high performance and correlation with humans on sketches.
4) Consistent with the literature, we find that some models (e.g., HOG, SSIM, geo/texton, and GIST) perform well. We find that they also resemble humans better.
5) Invariance analysis shows that only sparseSIFT and geo_color are invariant to in-plane rotation, with the former having higher accuracy (our 3rd test). GIST, a model of scene recognition, works better than many models over both the Caltech-256 and Sketch datasets.

Test 2: Recognition of Line Drawings and Edge Maps

Line Drawings

- Scenes were presented to subjects for 17-87 ms in a 6-alternative forced-choice task (human accuracy = 77.3%).
- On color images, geo_color, sparseSIFT, GIST, and SSIM showed the highest correlation (all with classification accuracy ≥ 75%), while tiny images, texton, LBPHF, and LBP showed the least.
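The human-model correlation scores that recur here can be illustrated with a small numpy sketch. The measure below is an assumption for illustration (not necessarily the paper's exact definition): row-normalize the two confusion matrices and take the Pearson correlation over the off-diagonal (error) entries, so a model scores high when it confuses the same class pairs that humans do.

```python
import numpy as np

def agreement(cm_model, cm_human):
    """Pearson correlation between the off-diagonal entries of two
    row-normalized confusion matrices (rows = true class, cols = response).
    Off-diagonal cells capture the error pattern, so a high score means
    the model confuses the same class pairs that humans do."""
    a = cm_model / cm_model.sum(axis=1, keepdims=True)
    b = cm_human / cm_human.sum(axis=1, keepdims=True)
    mask = ~np.eye(a.shape[0], dtype=bool)
    x, y = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Toy 3-class example: one model mimics the human error pattern, one inverts it.
human      = np.array([[80., 15.,  5.], [10., 75., 15.], [ 5., 20., 75.]])
similar    = np.array([[70., 20., 10.], [12., 70., 18.], [ 8., 25., 67.]])
dissimilar = np.array([[70.,  5., 25.], [20., 70., 10.], [25.,  5., 70.]])
print(agreement(similar, human), agreement(dissimilar, human))
```

Note that two models can have identical accuracy yet very different agreement, which is why accuracy and human-correlation are reported separately throughout.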
- Over the SUN dataset, HOG, denseSIFT, and texton showed high correlation with the human confusion matrix.
- It seems that models that take advantage of regional histograms of features (e.g., denseSIFT, GIST, geo_x; x = map or color) or rely heavily on edge histograms (texton and HOG) show higher correlation with humans on color images (although low in magnitude).
- Over line drawings: As with images, geo_color, SSIM, and sparseSIFT correlate with humans. To our surprise, geo_color worked well.

Figure: Human-model agreement on the 6-CAT dataset. See our paper and its supplement for confusion matrices of the models.
Figure: Geometric map: ground, porous, sky, and vertical regions. Edge maps for a sample image.

Edge Maps

Figure: Scene classification results using edge-detected images over the 6-CAT dataset. The Canny edge detector leads to the best accuracies, followed by the LoG and gPb methods.

- A majority of models perform > 70% on line drawings, which is higher than human performance (a similar pattern holds on images, with humans at 77.3% and models > 80%).
- SVM trained on images and tested on line drawings: Some models (e.g., line hists, GIST, geo map, sparseSIFT) generalize better to drawings.

Figure: Top, training an SVM on color photographs and testing on line drawings, gPb edge maps, and inverted (FL) images.

- SVM trained on line drawings and tested on edge maps: Surprisingly, averaged over all models, Sobel and Canny perform better than gPb. GIST, line hists, and HMAX were the most successful models across all edge detection methods; sparseSIFT, LBP, geo_color, and geo_texton were the most affected.
- Models using the Canny technique achieved the best scene classification accuracy.
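To make the edge-map inputs concrete, here is a minimal numpy sketch of a Sobel gradient-magnitude edge map, one of the detectors compared above (Canny builds on the same gradients, adding Gaussian smoothing, non-maximum suppression, and hysteresis thresholding). The 0.5 threshold is an arbitrary illustrative choice.

```python
import numpy as np

def sobel_edge_map(img, thresh=0.5):
    """Binary edge map from the Sobel gradient magnitude.
    Correlates the image with the two 3x3 Sobel kernels (valid region
    only), then thresholds the normalized magnitude."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    return (mag / (mag.max() + 1e-12) > thresh).astype(np.uint8)

# A vertical step edge is recovered along the boundary columns.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = sobel_edge_map(img)
```

Feeding such maps, instead of photographs, into the classification harness is exactly the train-on-drawings / test-on-edge-maps comparison reported above.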
Figure: Bottom, an SVM trained on line drawings and applied to edge maps.

Test 3: Invariance Analysis

Figure: d' values over original, 90°, and 180° rotated animal images.

- A majority of models are invariant to scaling, while a few are drastically affected by a large amount of scaling (e.g., siagianItti07, SSIM, line hists, and sparseSIFT).
- Interestingly, LBP here shows a similar pattern to humans across the four stimulus categories (i.e., max for head, min for close body).
- Some models show higher similarity to human disruption over the four categories of the animal dataset: sparseSIFT, SSIM, and HOG.

Test 4: Local vs. Global Information

Figure: Correlation and classification accuracy over jumbled images.

- As expected, models based on histograms are less influenced (e.g., geo color, line hist, HOG, texton, and LBP).
- Models correlate higher with humans over scenes (OSR and ISR) than objects, and better on outdoor scenes than indoor ones.
- Some models, which use global feature statistics, show high correlation only on scenes but very low on objects (e.g., GIST, texton, geo map, and LBP), since they do not capture object shape or structure.

Test 5: Object Recognition

- On Caltech-256, HOG achieves the highest accuracy, about 33.28%, followed by SSIM, texton, and denseSIFT. GIST, which is specifically designed for scene categorization, achieves 27.4% accuracy, better than some models specialized for object recognition (e.g., HMAX).

Figure: Left, object recognition performance on the Caltech-256 dataset. Right, recognition rate and correlations on the Sketch dataset.

- On sketch images, the shogSmooth model, specially designed for recognizing sketches, outperforms the others (accuracy = 57.2%). Texton histogram and SSIM ranked second and fourth, respectively. HMAX did very well (in contrast to Caltech-256), perhaps due to its success in capturing edges, corners, etc.
- Overall, models did much better on sketches than on natural objects (results are almost 2 times higher than on Caltech-256).
- Here, similar to Caltech-256, features relying on geometry (e.g., geo_map) did not perform well.

Summary

Table: Classification results corresponding to 50 training images per class, with 50 testing images per class over SUN and the remaining images over Caltech-256 and Sketch. Animal vs. non-animal corresponds to classification of 600 target vs. 600 distractor images. The top three models on each dataset are highlighted in red.
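Returning to Test 4, the jumbling manipulation, and why histogram-based models shrug it off, can be sketched as follows; a minimal numpy illustration in which a plain intensity histogram stands in for orderless descriptors such as LBP or texton histograms.

```python
import numpy as np

rng = np.random.default_rng(1)

def jumble(img, n=4):
    """Cut the image into an n x n grid of tiles and reassemble them in
    random order: local statistics survive, global layout is destroyed."""
    h, w = img.shape
    th, tw = h // n, w // n
    tiles = [img[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(n) for c in range(n)]
    order = rng.permutation(n * n)
    return np.vstack([np.hstack([tiles[k] for k in order[r*n:(r+1)*n]])
                      for r in range(n)])

def hist_feature(img, bins=16):
    """Orderless global intensity histogram: invariant to tile shuffling."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

img = rng.random((64, 64))
jumbled = jumble(img)
# Identical histogram feature, despite the scrambled layout.
print(np.allclose(hist_feature(img), hist_feature(jumbled)))
```

A layout-sensitive descriptor (e.g., GIST, which pools features over a fixed spatial grid) computed on the same pair would differ, which is exactly the local-vs-global dissociation Test 4 measures.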