WearVox: An Egocentric Multichannel Voice Assistant Benchmark for Wearables
Paper • arXiv 2601.02391 • Published
This page consolidates the GitHub and HuggingFace links for Meta's wearables AI benchmarks.
| Benchmark | Paper | GitHub/HuggingFace Links |
|---|---|---|
| WearVox | arXiv | facebookresearch/wearvox |
| WearVQA | arXiv | WearVQA |
| CRAG | arXiv | facebookresearch/CRAG |
| CRAG-MM | arXiv | Single-Turn & Multi-Turn |
| MemoryQA | arXiv | facebookresearch/MemoryQA/ |
| PLMVideoBench | arXiv | facebookresearch/perception_models |
| Head-to-tail | arXiv | facebookresearch/head-to-tail |
| VisualLens | arXiv | facebookresearch/visuallens |
| MeetingQA | arXiv | facebookresearch/AssoMem |
| SemiBench | arXiv | facebookresearch/SemiBench |
License: Please review each dataset's license terms and conditions before use. License information is available in the respective GitHub repositories and HuggingFace dataset pages.