## 📘 Dataset Description

**StaticEmbodiedBench** is a dataset for evaluating vision-language models on embodied intelligence tasks, as featured in the [OpenCompass leaderboard](https://staging.opencompass.org.cn/embodied-intelligence/rank/brain).

It covers three key capabilities:

- **Macro Planning**: Decomposing a complex task into a sequence of simpler subtasks.
- **Micro Perception**: Performing concrete simple tasks such as spatial understanding and fine-grained perception.
- **Stage-wise Reasoning**: Deciding the next action based on the agent's current state and perceptual inputs.

Each sample is also labeled with a visual perspective:

- **First-Person View**: The visual sensor is integrated with the agent, e.g., mounted on the end-effector.
- **Third-Person View**: The visual sensor is separate from the agent, e.g., a top-down or observer view.

This release includes **200 open-source samples** from the full dataset, provided for public research and benchmarking purposes.

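As a sketch of how the capability and perspective labels above might be used to slice the benchmark, the snippet below filters a few mock records by those two fields. The field names (`capability`, `view`) and the mock records are assumptions for illustration only; the dataset's actual schema may differ.

```python
# Hypothetical sample records mirroring the labels described above;
# the field names "capability" and "view" are assumptions, not the
# dataset's actual schema.
samples = [
    {"id": 0, "capability": "Macro Planning", "view": "First-Person"},
    {"id": 1, "capability": "Micro Perception", "view": "Third-Person"},
    {"id": 2, "capability": "Stage-wise Reasoning", "view": "First-Person"},
    {"id": 3, "capability": "Micro Perception", "view": "First-Person"},
]

def filter_samples(records, capability=None, view=None):
    """Keep records matching the given capability and/or view label."""
    return [
        r for r in records
        if (capability is None or r["capability"] == capability)
        and (view is None or r["view"] == view)
    ]

first_person = filter_samples(samples, view="First-Person")
print([r["id"] for r in first_person])  # → [0, 2, 3]
```

The same pattern extends to per-capability or per-view score breakdowns when reporting benchmark results.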
---

## 📚 Citation

If you use this dataset in your research, please cite it as follows:

```bibtex
@misc{staticembodiedbench,
  title  = {StaticEmbodiedBench},
  author = {Jiahao Xiao and Shengyu Guo and Chunyi Li and Bowen Yan and Jianbo Zhang},
  year   = {2025},
  url    = {https://huggingface.co/datasets/xiaojiahao/StaticEmbodiedBench}
}
```