SUPERGLASSES: Benchmarking Vision Language Models as Intelligent Agents for AI Smart Glasses
Paper: 2602.22683
Dataset viewer preview, `images` configuration (schema as shown in the viewer):

| Column | Type | Range shown |
|---|---|---|
| `image_id` | int64 | 672 – 2.18k |
| `image_name` | string | 30 – 55 characters (e.g. `88d665e1-photo-506_singular_display_fullPicture.jpeg`, `04d3f0d2-IMG_20250821_160344_HC.jpeg`) |
| `image` | image | width 2.54k – 4.1k px |
This repository contains the dataset for the paper SUPERGLASSES: Benchmarking Vision Language Models as Intelligent Agents for AI Smart Glasses.
SUPERGLASSES is the first comprehensive Visual Question Answering (VQA) benchmark built entirely on real-world data collected by smart glasses. It comprises 2,422 egocentric image–question pairs spanning 14 image domains and 8 query categories, enriched with full search trajectories and reasoning annotations.
The benchmark is specifically designed to evaluate Vision Language Models (VLMs) in realistic smart glasses usage scenarios, where identifying an object of interest is a critical prerequisite for external knowledge retrieval.
The dataset is provided in two main configurations:
- `images`: contains the egocentric images captured by smart glasses.
- `queries`: contains questions, answers, and detailed annotations including difficulty, location, and reasoning sub-questions.
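Both configurations can be pulled with the Hugging Face `datasets` library. A minimal sketch; the default `repo_id` below is a hypothetical placeholder, substitute this dataset's actual Hub path:

```python
VALID_CONFIGS = ("images", "queries")  # the two configurations described above

def load_superglasses(config: str, repo_id: str = "<org>/SUPERGLASSES"):
    """Load one configuration of the dataset from the Hugging Face Hub.

    `repo_id` is a placeholder, not the real Hub path.
    Requires `pip install datasets`.
    """
    if config not in VALID_CONFIGS:
        raise ValueError(f"unknown configuration {config!r}; expected one of {VALID_CONFIGS}")
    # Imported inside the function so that argument validation needs no network.
    from datasets import load_dataset
    return load_dataset(repo_id, config)
```

For example, `load_superglasses("queries")` would return the question/answer split, while an unknown configuration name raises `ValueError` before any download is attempted.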