tellarin committed · commit 48d40ea (verified) · 1 parent: 0c42fde

Update README.md

Files changed (1): README.md (+18 −11)
README.md CHANGED
@@ -2,8 +2,8 @@
  license: cc-by-nc-4.0
  task_categories:
  - visual-question-answering
- - image-to-text
- - text-to-image
+ - robotics
+ - other
  language:
  - en
  - zh
@@ -14,20 +14,23 @@ tags:
  - embodied-ai
  - multimodal
  - benchmark
- pretty_name: SWITCH Benchmark (SwitchBasic 30% Subset)
+ - tci
+ pretty_name: SWITCH-Basic Benchmark v1 (30% public subset)
  ---

  [![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc/4.0/)

  # SWITCH: Benchmarking Interaction and Verification on Real-World Interfaces in Lifelong Embodied Agents

- > **⚠️ Dataset Note:** This repository hosts the **SwitchBasic 30% public subset** of the full SWITCH benchmark. It is intended for public exploration, preliminary evaluation, and community feedback.
+ > **⚠️ Dataset Note:**
+ > This repository hosts the **30% public subset** of the full SWITCH-Basic v1 benchmark. It is intended for public exploration, preliminary evaluation, and community feedback.

  ## Overview

- SWITCH covers the collection and annotation of real-world Tangible Computer Interfaces (TCI) interaction data, which we systematically structure into five distinct tasks. These tasks are designed to evaluate models across three crucial capability dimensions: **Perception/Spatial Reasoning, Causal Reasoning/Planning, and Verification**.
+ SWITCH-Basic covers the collection and annotation of real-world Tangible Computer Interface (TCI) interaction data, which we systematically structure into five distinct tasks. These tasks are designed to evaluate models across three crucial capability dimensions: **Perception/Spatial Reasoning, Causal Reasoning/Planning, and Verification**.

- We conduct a comprehensive evaluation of state-of-the-art large multi-modality models (LMMMs) on this benchmark - SWITCH-Basic, providing detailed analysis of their strengths and limitations, thereby offering insights to guide future model development in real-world interactive tasks. Furthermore, we leverage the benchmark to evaluate advanced generative models, like Veo3. By comparing generated videos against ground truth, we illustrate how current models still exhibit significant room for improvement in logical consistency and fine-grained interaction for real-word use, thus underscoring the importance of SWITCH's target scenarios.
+ We conduct a comprehensive evaluation of state-of-the-art large multimodal models (LMMs) on this benchmark, providing a detailed analysis of their strengths and limitations and thereby offering insights to guide future model development in real-world interactive tasks.
+ Furthermore, we leverage the benchmark to evaluate advanced generative models, such as Veo3. By comparing generated videos against ground truth, we illustrate how current generative models still exhibit significant room for improvement in logical consistency and fine-grained interaction for real-world use, underscoring the importance of SWITCH's target scenarios.

  ## Key Tasks Supported

@@ -59,7 +62,7 @@ To help researchers better understand the dataset structure and visually inspect

  ## Dataset Structure

- The repository is organized as follows:
+ The sub-dataset repository is organized as follows:

  ```text
  switchBasic_release0212/
@@ -83,12 +86,13 @@ switchBasic_release0212/
  ├── viewer_eng.html # Visualization (Local HTML Viewer) in English
  └── README.md
  ```
+
  Each task folder contains subfolders defining the input modalities (e.g., img2txt for an image in the question with text choices as answers; img2video for an image in the question with video choices as answers). Inside these format folders you will find the corresponding media assets (imgs/, videos/) and the vqa.json file containing the detailed annotations.


  ## Leaderboard & Full Set Evaluation

- Please visit [our Leaderboard Space](https://huggingface.co/spaces/BAAI-Agents/SwitchBasic-Leaderboard) for more details.
+ Please visit the [SWITCH-Basic v1 Leaderboard Space](https://huggingface.co/spaces/BAAI-Agents/SwitchBasic-Leaderboard) for more details on the latest model results.

  ## License

@@ -96,11 +100,13 @@ This dataset is released under the **Creative Commons Attribution-NonCommercial

  ## 📖 Citation

- If you use SWITCH in your research, please cite:
+ If you utilize SWITCH scenarios or data in your research, please cite:

  ```bibtex
- @benchmark{switch2025,
- title={SWITCH: Benchmarking Modeling and Handling of Tangible Interfaces in Long-horizon Embodied Scenarios},
+ @article{switch2025,
+   title={{SWITCH}: {B}enchmarking Modeling and Handling of Tangible Interfaces in Long-horizon Embodied Scenarios},
+   author={Jieru Lin and Zhiwei Yu and Börje F. Karlsson},
+   journal={arXiv preprint arXiv:2511.17649},
  year={2025}
  }
  ```
@@ -108,4 +114,5 @@ If you use SWITCH in your research, please cite:
  ## Contributing & Contact

  We welcome contributions and feedback! Please feel free to submit issues or pull requests. For questions or inquiries, please reach out to the **BAAI-Agents** team.
+
  ```
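
The layout the updated README describes (task folders → modality subfolders such as `img2txt`/`img2video`, each holding `imgs/`/`videos/` and a `vqa.json`) can be exercised with a short loader sketch. This is a hedged illustration only: the task-folder name and the annotation fields (`question`, `answer`) below are hypothetical stand-ins, not taken from this commit, and the demo runs against a mock directory rather than the real release.

```python
import json
import tempfile
from pathlib import Path

def load_switch_annotations(root: str) -> dict:
    """Collect vqa.json annotations keyed by '<task>/<modality>'.

    Mirrors the layout the README describes:
    <root>/<task>/<modality>/vqa.json, with media under imgs/ or videos/.
    """
    annotations = {}
    for vqa_file in Path(root).glob("*/*/vqa.json"):
        modality_dir = vqa_file.parent  # e.g. <root>/<task>/img2txt
        key = f"{modality_dir.parent.name}/{modality_dir.name}"
        annotations[key] = json.loads(vqa_file.read_text(encoding="utf-8"))
    return annotations

# Demo against a mock of the documented layout; the folder and field
# names here are hypothetical, for illustration only.
with tempfile.TemporaryDirectory() as tmp:
    sample = [{"question": "Which switch is on?", "answer": "A"}]
    task_dir = Path(tmp, "task_example", "img2txt")
    task_dir.mkdir(parents=True)
    (task_dir / "vqa.json").write_text(json.dumps(sample), encoding="utf-8")
    loaded = load_switch_annotations(tmp)
    print(sorted(loaded))  # ['task_example/img2txt']
```

Pointing the same function at an extracted `switchBasic_release0212/` checkout should, under these assumptions, yield one entry per task/modality pair.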