Update README.md

README.md CHANGED
````diff
@@ -2,8 +2,8 @@
 license: cc-by-nc-4.0
 task_categories:
 - visual-question-answering
--
--
+- robotics
+- other
 language:
 - en
 - zh
@@ -14,20 +14,23 @@ tags:
 - embodied-ai
 - multimodal
 - benchmark
-
+- tci
+pretty_name: SWITCH-Basic Benchmark v1 (30% public subset)
 ---
 
 [](https://creativecommons.org/licenses/by-nc/4.0/)
 
 # SWITCH: Benchmarking Interaction and Verification on Real-World Interfaces in Lifelong Embodied Agents
 
-> **⚠️ Dataset Note:**
+> **⚠️ Dataset Note:**
+> This repository hosts the **30% public subset** of the full SWITCH-Basic v1 benchmark. It is intended for public exploration, preliminary evaluation, and community feedback.
 
 ## Overview
 
-SWITCH covers the collection and annotation of real-world Tangible Computer Interfaces (TCI) interaction data, which we systematically structure into five distinct tasks. These tasks are designed to evaluate models across three crucial capability dimensions: **Perception/Spatial Reasoning, Causal Reasoning/Planning, and Verification**.
+SWITCH-Basic covers the collection and annotation of real-world Tangible Computer Interfaces (TCI) interaction data, which we systematically structure into five distinct tasks. These tasks are designed to evaluate models across three crucial capability dimensions: **Perception/Spatial Reasoning, Causal Reasoning/Planning, and Verification**.
 
-We conduct a comprehensive evaluation of state-of-the-art large multi-modality models (LMMMs) on this benchmark
+We conduct a comprehensive evaluation of state-of-the-art large multimodal models (LMMs) on this benchmark, providing a detailed analysis of their strengths and limitations, thereby offering insights to guide future model development in real-world interactive tasks.
+Furthermore, we leverage the benchmark to evaluate advanced generative models, like Veo3. By comparing generated videos against ground truth, we illustrate how current generative models still exhibit significant room for improvement in logical consistency and fine-grained interaction for real-world use, thus underscoring the importance of SWITCH's target scenarios.
 
 ## Key Tasks Supported
 
@@ -59,7 +62,7 @@ To help researchers better understand the dataset structure and visually inspect
 
 ## Dataset Structure
 
-The repository is organized as follows:
+The sub-dataset repository is organized as follows:
 
 ```text
 switchBasic_release0212/
@@ -83,12 +86,13 @@ switchBasic_release0212/
 ├── viewer_eng.html  # Visualization (Local HTML Viewer) in English
 └── README.md
 ```
+
 Each task folder contains subfolders defining the input modalities (e.g., img2txt for image in question, text choices as answers; img2video for image in question, video choices as answers). Inside these format folders, you will find the corresponding media assets (imgs/, videos/) and the vqa.json file containing the detailed annotations.
 
 
 ## Leaderboard & Full Set Evaluation
 
-Please visit [
+Please visit the [SWITCH-Basic v1 Leaderboard Space](https://huggingface.co/spaces/BAAI-Agents/SwitchBasic-Leaderboard) for more details on the latest model results.
 
 ## License
 
@@ -96,11 +100,13 @@ This dataset is released under the **Creative Commons Attribution-NonCommercial
 
 ## 📖 Citation
 
-If you
+If you utilize SWITCH scenarios or data in your research, please cite:
 
 ```bibtex
-@
-title={SWITCH:
+@article{switch2025,
+title={{SWITCH}: {B}enchmarking Modeling and Handling of Tangible Interfaces in Long-horizon Embodied Scenarios},
+author={Jieru Lin and Zhiwei Yu and Börje F. Karlsson},
+journal={arXiv preprint arXiv:2511.17649},
 year={2025}
 }
 ```
@@ -108,4 +114,5 @@ If you use SWITCH in your research, please cite:
 ## Contributing & Contact
 
 We welcome contributions and feedback! Please feel free to submit issues or pull requests. For questions or inquiries, please reach out to the **BAAI-Agents** team.
+
 ```
````