Introduction & Motivation
Here, to help focus research efforts onto the hardest unsolved problems, and to bridge computer and human vision, we define a battery of 5 tests that measure the gap between human and machine performance in several dimensions. Cases where machines are superior motivate us to design new experiments to understand the mechanisms of human vision and to reason about its failures. Cases where humans are better inspire computational researchers to learn from humans. In some applications (e.g., human-machine interaction or personal robots), perfect accuracy is not necessarily the goal; rather, having the same type of behavior (e.g., failing in cases where humans fail too) is favorable.
Test 1: Scene Recognition
A & B) Scene classification accuracy over the 6-, 8-, and 15-CAT datasets. Error bars represent the standard error of the mean over 10 runs. Naive chance is simply set by the size of the largest class. All models work well above chance level. C) Top: animal vs. non-animal (distractor images) classification. Bottom: classification of target images. 4-way classification is only over target scenes (and not distractors).
- A & B: We find that HOG, SSIM, texton, denseSIFT, LBP, and LBHPF outperform the other models (accuracy above 70%). We note that spatial feature integration (i.e., x_pyr for a model x) enhances accuracies.
- C: Animal vs. non-animal: All models perform above 70%, except tiny image. Human accuracy here is about 80%. Interestingly, some models exceed human performance here.
- SUN dataset: Models that performed well on the small datasets still rank on top (although their accuracies degrade heavily). GIST works well here (16.3%) but below the top contenders: HOG, texton, SSIM, denseSIFT, and LBP (or their variants). Models ranking at the bottom, in order, are tiny image, line hist, geo color, HMAX, and geo map8x8.
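The spatial feature integration mentioned above (x_pyr, i.e., a spatial pyramid built on top of a base feature x) can be sketched as follows. This is a minimal numpy illustration under our own assumptions, not the benchmark's actual feature code: it pools a per-pixel codeword map into histograms over a 1x1 and a 2x2 grid and concatenates them, so the classifier sees coarse spatial layout as well as global statistics.

```python
import numpy as np

def spatial_pyramid_hist(codeword_map, n_words, levels=(1, 2)):
    """Concatenate codeword histograms pooled over successively finer grids.

    codeword_map: 2-D integer array of per-pixel codeword indices.
    n_words: size of the codebook.
    levels: grid sizes, e.g. (1, 2) -> one global + four regional histograms.
    (Illustrative sketch; function name and normalization are our choices.)
    """
    h, w = codeword_map.shape
    feats = []
    for g in levels:
        for i in range(g):
            for j in range(g):
                cell = codeword_map[i * h // g:(i + 1) * h // g,
                                    j * w // g:(j + 1) * w // g]
                hist = np.bincount(cell.ravel(), minlength=n_words).astype(float)
                feats.append(hist / max(hist.sum(), 1))  # per-cell L1 norm
    return np.concatenate(feats)

# Toy example: a 4x4 codeword map with a 3-word codebook.
cw = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 1, 1],
               [2, 2, 1, 1]])
vec = spatial_pyramid_hist(cw, n_words=3)
# 1x1 level gives 3 dims, 2x2 level gives 4*3 = 12 dims -> 15 total.
```

The concatenated vector is what would be fed to the SVM in place of the plain global histogram.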
Performances and correlations on the SUN dataset. We randomly chose n = {1, 5, 10, 20, 50} images per class for training and 50 for testing.

Learned Lessons
1) Models outperform humans in rapid categorization tasks, indicating that discriminative information is in place but humans do not have enough time to extract it. Models outperform humans on jumbled images and score relatively high in the absence of (or with less) global information.
2) We find that some models and edge detection methods are more efficient on line drawings and edge maps. Our analysis helps objectively assess the power of edge detection algorithms to extract meaningful structural features for classification, which hints toward new directions.
3) While models are far from human performance on object and scene recognition over natural scenes, even classic models show high performance and correlation with humans on sketches.
4) Consistent with the literature, we find that some models (e.g., HOG, SSIM, geo/texton, and GIST) perform well. We find that they also resemble humans better.
5) Invariance analysis shows that only sparseSIFT and geo_color are invariant to in-plane rotation, with the former having higher accuracy (our 3rd test). GIST, a model of scene recognition, works better than many models over both the Caltech-256 and Sketch datasets.

Test 2: Recognition of Line Drawings and Edge Maps
Line Drawings
- Scenes were presented to subjects for 17-87 ms in a 6-alternative forced-choice task (human accuracy = 77.3%).
- On color images, geo_color, sparseSIFT, GIST, and SSIM showed the highest correlation with humans (all with classification accuracy ≥ 75%), while tiny images, texton, LBHF, and LBP showed the least. Over the SUN dataset, HOG, denseSIFT, and texton showed high correlation with the human confusion matrix (CM).
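The n-images-per-class protocol above can be sketched as a simple split routine. This is an illustrative numpy version under our own assumptions (function name, seeding, and index handling are ours, not the paper's): for each class, draw n_train training and n_test disjoint test indices at random.

```python
import numpy as np

def per_class_split(labels, n_train, n_test, seed=0):
    """Randomly pick n_train training and n_test test indices per class.

    Mirrors the protocol above (n in {1, 5, 10, 20, 50} training images
    per class, 50 test); this sketch is illustrative, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:n_train])
        test_idx.extend(idx[n_train:n_train + n_test])
    return np.array(train_idx), np.array(test_idx)

labels = np.repeat(np.arange(3), 100)   # 3 toy classes, 100 samples each
tr, te = per_class_split(labels, n_train=10, n_test=50)
```

Repeating such splits over several seeds is what produces the error bars (standard error over runs) reported in the figures.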
- It seems that models that take advantage of regional histograms of features (e.g., denseSIFT, GIST, geo_x; x = map or color) or rely heavily on edge histograms (texton and HOG) show higher correlation with humans on color images (although low in magnitude).
- Over line drawings: As with color images, geo_color, SSIM, and sparseSIFT correlate with humans. To our surprise, geo_color worked well.
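Human-model agreement of the kind reported above amounts to correlating a model's confusion matrix with the human one. A minimal numpy sketch follows; whether the diagonal is excluded and how rows are normalized are our assumptions here, not details taken from the text.

```python
import numpy as np

def cm_correlation(cm_model, cm_human, skip_diagonal=True):
    """Pearson correlation between two row-normalized confusion matrices.

    Off-diagonal entries are compared by default, since the diagonal
    (overall accuracy) otherwise dominates; this choice is an assumption
    of the sketch, not a detail stated in the poster.
    """
    a = cm_model / cm_model.sum(axis=1, keepdims=True)
    b = cm_human / cm_human.sum(axis=1, keepdims=True)
    if skip_diagonal:
        mask = ~np.eye(a.shape[0], dtype=bool)
        a, b = a[mask], b[mask]
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Toy 3-class confusion counts for a model and for human subjects.
model = np.array([[8, 1, 1], [2, 7, 1], [1, 2, 7]])
human = np.array([[9, 1, 0], [2, 8, 0], [1, 1, 8]])
r = cm_correlation(model, human)
```

A high r means the model confuses the same class pairs that humans do, which is the behavioral similarity the tests are after, independently of raw accuracy.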
Human-model agreement on the 6-CAT dataset. See our paper and its supplement for confusion matrices of models.
Geometric map: ground, porous, sky, and vertical regions.
Edge maps for a sample image.

Edge Maps
Scene classification results using edge-detected images over the 6-CAT dataset. The Canny edge detector leads to the best accuracies, followed by the log and gPb methods.
- A majority of models perform > 70% on line drawings, which is higher than human performance (a similar pattern holds on images, with humans = 77.3% and models > 80%).
- SVM trained on images and tested on line drawings: Some models (e.g., line hists, GIST, geo map, sparseSIFT) generalize better to drawings. SVM trained on line drawings and tested on edge maps: Surprisingly, averaged over all models, Sobel and Canny perform better than gPb. GIST, line hists, and HMAX were the most successful models across all edge detection methods; sparseSIFT, LBP, geo_color, and geo_texton were the most affected.
- Models using the Canny technique achieved the best scene classification accuracy.
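The edge-map comparisons above depend on the detector used. As a minimal numpy-only stand-in, a Sobel gradient-magnitude map can be computed as below; the actual experiments used standard detector implementations (Canny, Sobel, log, gPb), not this code.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel filters (valid region only).

    A numpy-only illustration of the simplest edge map discussed above;
    response is high at intensity steps and zero in flat regions.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):          # accumulate the 3x3 correlation by shifts
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

# A vertical step edge: response peaks at the step, zero elsewhere.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)
```

Thresholding such a magnitude map (plus non-maximum suppression and hysteresis) is essentially what Canny adds on top of this raw gradient.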
Top: training an SVM on color photographs and testing on line drawings, gPb edge maps, and inverted (FL) images. Bottom: SVM trained on line drawings and applied to edge maps.

Test 3: Invariance Analysis
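The train-on-one-domain, test-on-another protocol above can be sketched with synthetic data. For self-containment we substitute a nearest-centroid classifier for the paper's linear SVM (a deliberate simplification, clearly not the method used in the experiments); the "photo" and "drawing" features here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit(X, y):
    """Per-class mean feature vectors (simple stand-in for the linear SVM)."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(classes, centroids, X):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Synthetic "photo" features: two classes around different means.
X_photo = np.concatenate([rng.normal(0.0, 0.3, (50, 8)),
                          rng.normal(1.0, 0.3, (50, 8))])
y_photo = np.repeat([0, 1], 50)
# Synthetic "line-drawing" features: same class structure, shifted domain.
X_draw = np.concatenate([rng.normal(0.2, 0.3, (50, 8)),
                         rng.normal(1.2, 0.3, (50, 8))])
y_draw = np.repeat([0, 1], 50)

classes, cents = nearest_centroid_fit(X_photo, y_photo)
acc = (nearest_centroid_predict(classes, cents, X_draw) == y_draw).mean()
```

When the class structure survives the domain shift (as in this toy case), cross-domain accuracy stays high; models whose features change drastically between photos and drawings fail this transfer, which is what the tables measure.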
d′ values over original, 90°, and 180° rotated animal images.
- A majority of models are invariant to scaling, while a few are drastically affected by a large amount of scaling (e.g., siagianItti07, SSIM, line hists, and sparseSIFT).
- Interestingly, LBP here shows a similar pattern to humans across the four stimulus categories (i.e., max for head, min for close body).
- Some models show higher similarity to human disruption over the four categories of the animal dataset: sparseSIFT, SSIM, and HOG.

Test 4: Local vs. Global Information
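The d′ values above come from standard equal-variance signal detection theory: d′ = z(hit rate) − z(false-alarm rate), with hits on animal targets and false alarms on distractors. A stdlib-only sketch (the clipping convention for rates of exactly 0 or 1 is left out here and would be needed in practice):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Standard equal-variance Gaussian formula; rates of exactly 0 or 1
    must be clipped before calling (not handled in this sketch).
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# E.g. 80% hits on animal targets with 20% false alarms on distractors:
d = d_prime(0.8, 0.2)   # about 1.68
```

Rotating the stimuli (90°, 180°) lowers hit rates and/or raises false alarms, so the drop in d′ relative to the original images quantifies how invariant an observer or model is.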
Correlation and classification accuracy over jumbled images. As expected, models based on histograms are less influenced (e.g., geo color, line hist, HOG, texton, and LBP).
- Models correlate more highly with humans over scenes (OSR and ISR) than objects, and better on outdoor scenes than indoor ones.
- Some models, which use global feature statistics, show high correlation only on scenes but very low correlation on objects (e.g., GIST, texton, geo map, and LBP), since they do not capture object shape or structure.

Test 5: Object Recognition
- On Caltech-256, HOG achieves the highest accuracy, about 33.28%, followed by SSIM, texton, and denseSIFT. GIST, which is specifically designed for scene categorization, achieves 27.4% accuracy, better than some models specialized for object recognition (e.g., HMAX).
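The jumbled-image manipulation from Test 4 can be sketched in a few lines of numpy (grid size and seeding here are illustrative choices, not the stimulus parameters used in the experiments): cut the image into blocks and shuffle their positions. Global layout is destroyed while the pixel (and local-feature) histogram is preserved, which is exactly why histogram-based models are less affected.

```python
import numpy as np

def jumble(img, grid=4, seed=0):
    """Cut a grayscale image into grid x grid blocks and shuffle them.

    Destroys global spatial structure while preserving the multiset of
    pixels, so any purely histogram-based descriptor is unchanged.
    """
    h, w = img.shape
    bh, bw = h // grid, w // grid
    blocks = [img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
              for i in range(grid) for j in range(grid)]
    order = np.random.default_rng(seed).permutation(len(blocks))
    rows = [np.hstack([blocks[k] for k in order[g * grid:(g + 1) * grid]])
            for g in range(grid)]
    return np.vstack(rows)

img = np.arange(64, dtype=float).reshape(8, 8)
jumbled = jumble(img, grid=4)
# Same pixels, different layout: histograms match, structure does not.
```

Comparing a model's accuracy on `img` vs. `jumbled` stimuli separates its reliance on local statistics from its use of global scene layout.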
Left: Object recognition performance on the Caltech-256 dataset. Right: Recognition rate and correlations on the Sketch dataset.
- On sketch images, the shogSmooth model, specially designed for recognizing sketch images, outperforms the others (accuracy = 57.2%). Texton histogram and SSIM ranked second and fourth, respectively. HMAX did very well (in contrast to Caltech-256), perhaps due to its success in capturing edges, corners, etc.
- Overall, models did much better on sketches than on natural objects (results are almost 2 times higher than on Caltech-256). Here, similar to Caltech-256, features relying on geometry (e.g., geo_map) did not perform well.

Summary
Classification results corresponding to 50 training images per class, with the test set being 50 images per class over SUN and the remaining images over Caltech-256 and Sketch. Animal vs. non-animal corresponds to classification of 600 target vs. 600 distractor images. The top three models on each dataset are highlighted in red.