---
license: unknown
---
# Wearables Benchmarks
This page consolidates the GitHub and HuggingFace links to wearables AI benchmarks from Meta.
## Datasets highly relevant to wearable use cases
| Benchmark | Paper | GitHub | HuggingFace | TLDR |
|---|---|---|---|---|
| WearVox | arXiv | facebookresearch/wearvox | WearVox | A benchmark for evaluating voice assistants in realistic wearable scenarios using 3,842 egocentric audio recordings collected via AI glasses across multiple task types like QA, tool calling, and speech translation. |
| WearVQA | arXiv | | WearVQA | A benchmark for evaluating visual question answering on wearable devices using 2,520 image-question pairs that reflect egocentric challenges like occlusion, poor lighting, and blur. |
| CRAG | arXiv | facebookresearch/CRAG | | A comprehensive RAG benchmark for factual question answering spanning five domains, eight question categories, and varied entity popularity and temporal dynamics. |
| CRAG-MM | arXiv | facebookresearch/CRAG-MM | Single-Turn & Multi-Turn | A multimodal conversational benchmark featuring image-based QA across 13 domains with single- and multi-turn conversations captured via smart glasses and public sources. |
| MemoryQA | arXiv | facebookresearch/MemoryQA | | A benchmark for answering recall questions about visual content retrieved from previously stored multimodal memories. |
| PLM-VideoBench | arXiv | facebookresearch/perception_models | | A comprehensive video understanding benchmark suite covering fine-grained QA, egocentric smart glasses QA, region captioning, temporal localization, and dense video captioning. |
## Datasets moderately relevant to wearable use cases
| Benchmark | Paper | GitHub | TLDR |
|---|---|---|---|
| Head-to-tail | arXiv | facebookresearch/head-to-tail | A benchmark specifically designed to assess how well LLMs incorporate factual knowledge across head, torso, and tail popularity distributions. |
| VisualLens | arXiv | facebookresearch/visuallens | A recommendation benchmark evaluating systems under task-agnostic visual user histories using data from Google Local and Yelp. |
| MeetingQA | arXiv | facebookresearch/AssoMem | A benchmark simulating real-world meeting scenarios where multi-turn dialogues form the memory base, paired with diverse QA examples. |
| SemiBench | arXiv | facebookresearch/SemiBench | A benchmark for evaluating knowledge extraction quality and its impact on question answering from semi-structured webpages. |
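The dataset pages above can also be reached programmatically. The sketch below builds HuggingFace dataset page URLs from the org/name ids listed in the tables; note the ids are copied from the GitHub column as an assumption, and the actual HuggingFace dataset ids may differ, so verify each one on its dataset page.

```python
# Minimal sketch: build HuggingFace dataset page URLs for the benchmarks above.
# The repo ids mirror the GitHub column on this page (assumption); the exact
# HuggingFace ids may differ per dataset.

HF_DATASETS_BASE = "https://huggingface.co/datasets"

BENCHMARK_REPOS = [
    "facebookresearch/wearvox",
    "facebookresearch/CRAG",
    "facebookresearch/CRAG-MM",
    "facebookresearch/MemoryQA",
    "facebookresearch/perception_models",
]

def dataset_page_url(repo_id: str) -> str:
    """Return the HuggingFace dataset page URL for an org/name repo id."""
    return f"{HF_DATASETS_BASE}/{repo_id}"

for repo in BENCHMARK_REPOS:
    print(dataset_page_url(repo))
```

Once a dataset's HuggingFace id is confirmed on its page, it can typically be loaded with `datasets.load_dataset(repo_id)` from the `datasets` library; available configs and splits vary per benchmark.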
License: Please refer to each dataset's license terms and conditions before use; license information is available in the respective GitHub repositories and HuggingFace dataset pages.