Add dataset description, task category and paper link #2
by nielsr (HF Staff), opened

README.md CHANGED
````diff
@@ -2,7 +2,8 @@
 language:
 - en
 license: cc-by-sa-4.0
-
+task_categories:
+- video-text-to-text
 configs:
 - config_name: benchmark
   data_files:
@@ -18,4 +19,31 @@ configs:
     path: data/revplan.parquet
   - split: shape
     path: data/shape.parquet
----
+---
+
+# SAW-Bench: Situated Awareness in the Real World
+
+[Project Page](https://sawbench.github.io) | [Paper](https://huggingface.co/papers/2602.16682)
+
+SAW-Bench (Situated Awareness in the Real World) is a benchmark designed to evaluate the egocentric situated awareness of multimodal foundation models (MFMs). Unlike environment-centric benchmarks, SAW-Bench focuses on observer-centric relationships, requiring models to reason relative to an agent's viewpoint, pose, and motion.
+
+## Dataset Summary
+The dataset comprises 786 self-recorded videos captured with Ray-Ban Meta (Gen 2) smart glasses across diverse indoor and outdoor environments. It includes 2,071 human-annotated question-answer pairs spanning six awareness tasks:
+
+- **Affordance**: Reasoning about possible actions in context.
+- **Direction**: Observer-centric spatial orientation.
+- **Localization**: Determining position relative to surroundings.
+- **Memory**: Reasoning over temporal events in the video.
+- **Revplan** (Reverse Planning): Inferring previous actions or intentions.
+- **Shape**: Understanding the geometric structure of the environment from the observer's perspective.
+
+## Citation
+If you use this dataset in your research, please cite the following paper:
+```bibtex
+@article{li2026learning,
+  title={Learning Situated Awareness in the Real World},
+  author={Li, Chuhan and Han, Ruilin and Hsu, Joy and Liang, Yongyuan and Dhawan, Rajiv and Wu, Jiajun and Yang, Ming-Hsuan and Wang, Xin Eric},
+  journal={arXiv preprint arXiv:2602.16682},
+  year={2026}
+}
+```
````