---
license: cc-by-nc-4.0
task_categories:
- visual-question-answering
- image-to-text
- text-to-image
language:
- en
- zh
tags:
- ui
- gui-agent
- tangible-interfaces
- embodied-ai
- multimodal
- benchmark
pretty_name: SWITCH Benchmark (SwitchBasic 30% Subset)
size_categories:
- 10K<n<100K
---

# SWITCH: Benchmarking Interaction and Verification on Real-World Interfaces in Lifelong Embodied Agents

> **⚠️ Dataset Note:** This repository hosts the **SwitchBasic 30% public subset** of the full SWITCH benchmark. It is intended for public exploration, preliminary evaluation, and community feedback.

## Overview

SWITCH collects and annotates real-world Tangible Computer Interface (TCI) interaction data, which we systematically structure into five distinct tasks. These tasks are designed to evaluate models across three crucial capability dimensions: **Perception/Spatial Reasoning, Causal Reasoning/Planning, and Verification**.

Furthermore, we leverage the benchmark to evaluate advanced generative models such as Veo 3. By comparing generated videos against ground truth, we show that current models still have significant room for improvement in logical consistency and fine-grained interaction for real-world use, underscoring the importance of SWITCH's target scenarios.

## Key Tasks Supported

This dataset provides annotations to support the following core tasks:

- **Task-Aware Visual Question Answering (VQA)**: Composed of two complementary sub-tasks:
  - *(a) UI State Recognition*: assesses whether the model can recognize and describe the current state of TCI elements within the scene.
  - *(b) Goal-Oriented Reasoning*: tests whether the model can interpret the purpose and outcome of actions, reasoning about whether these interactions successfully achieve the intended task goals.
- **Semantic UI Comprehension**: Tests whether a model can accurately localize and interpret actionable UI elements in cluttered or dynamic settings, reasoning about their spatial and functional relationships while inferring human intent.
- **Action Generation**: Evaluates a model's ability to infer intent and plan executable, context-aware action sequences.
  - *(a) UI Action Identification*: detect the relevant interaction region, recognize its affordances, and predict the appropriate mode of interaction.
  - *(b) Action Execution Planning*: generate the physical actions necessary to perform the intended TCI operation.
- **State Transition Prediction**: Evaluates causal reasoning and short-term prediction.
  - *(a) UI-State Transition*: predict changes in the visual or functional state of TCI elements after an action.
  - *(b) Environment-State Transition*: predict corresponding updates in the surrounding physical or visual environment.
  - *(c) Coupled Transition*: reason about interdependent updates where TCI and environment states change jointly.
- **Result Verification**: Evaluates whether a model can confirm that an interaction succeeded.
  - *(a) Verification Planning*: tests whether the model can infer what actions or checks are required to verify the outcome of a previous operation.
  - *(b) Expected State Prediction*: assesses whether the model can predict what the expected state should look like after a successful interaction.

## Data Visualization (Local HTML Viewer)

To help researchers understand the dataset structure and visually inspect samples without writing parsing scripts, we provide a local HTML viewer out of the box.

**How to use:**
1. Clone or download this dataset repository to your local machine.
2. Open `viewer.html` (Chinese) or `viewer_eng.html` (English) directly in your web browser.
3. Browse the UI images, corresponding VQA pairs, and action annotations.
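
Some browsers restrict `file://` pages from loading local JSON and image assets, so if the viewer opens but shows no data, serving the repository over a local HTTP server usually helps. A minimal sketch using Python's standard library (the directory path is a placeholder for your local clone):

```python
# Serve the dataset folder locally so viewer.html / viewer_eng.html
# can fetch their JSON and image assets over HTTP instead of file://.
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

directory = "."  # placeholder: path to your local clone of this repository

handler = partial(SimpleHTTPRequestHandler, directory=directory)
server = HTTPServer(("127.0.0.1", 0), handler)  # port 0 -> OS picks a free port
port = server.server_address[1]
print(f"Open http://127.0.0.1:{port}/viewer_eng.html in your browser")

# Serve in a background thread so the interpreter stays usable.
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()
# When you are done browsing: server.shutdown()
```

Equivalently, running `python -m http.server` from the repository root achieves the same thing.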

## Dataset Structure

The repository is organized as follows:

```text
switchBasic_release0212/
├── action/                 # Action Generation task
│   ├── img2txt/            # Modality input format
│   │   ├── imgs/           # Image assets for this specific task & format
│   │   └── vqa.json        # Annotation file containing QA pairs and metadata
│   ├── img2video/
│   │   ├── imgs/           # Input image assets
│   │   ├── videos/         # Output video assets
│   │   └── vqa.json        # Annotation file
│   ├── video2txt/          # Other modality formats...
│   └── video2video/
├── final_state/            # State Transition Prediction task
├── ui_grounding/           # Semantic UI Comprehension task
├── verification_action/    # Result Verification (Action) task
├── verification_state/     # Result Verification (State) task
├── vqa_state/              # Task-Aware VQA (State Recognition)
├── vqa_task/               # Task-Aware VQA (Goal-Oriented Reasoning)
└── assets/                 # General assets (e.g., images for this README)
```

Each task folder contains subfolders defining the input modalities (e.g., `img2txt`: image as question, text choices as answers; `img2video`: image as question, video choices as answers). Inside these format folders you will find the corresponding media assets (`imgs/`, `videos/`) and the `vqa.json` file containing the detailed annotations.
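
The layout above can be loaded with a few lines of Python. Note that the field names used below (`question`, `choices`, `answer`, `image`) are illustrative assumptions, not the confirmed schema; inspect a real `vqa.json` from the repository to see the actual keys. The demo writes a made-up sample file so the sketch is self-contained:

```python
# Sketch: load annotations from one task/format folder, e.g.
# switchBasic_release0212/action/img2txt/. Field names are assumed,
# not taken from the actual dataset -- check a real vqa.json first.
import json
from pathlib import Path

def load_vqa(task_dir: str) -> list:
    """Read the vqa.json annotation file from a task/format folder."""
    with open(Path(task_dir) / "vqa.json", encoding="utf-8") as f:
        return json.load(f)

# Self-contained demo with a hypothetical entry matching the assumed schema:
sample = [{"question": "Which knob controls the fan speed?",
           "choices": ["A", "B", "C", "D"],
           "answer": "B",
           "image": "imgs/000001.jpg"}]
demo_dir = Path("demo_img2txt")
demo_dir.mkdir(exist_ok=True)
(demo_dir / "vqa.json").write_text(json.dumps(sample), encoding="utf-8")

entries = load_vqa(str(demo_dir))
print(len(entries), entries[0]["answer"])  # -> 1 B
```

Media paths inside each entry would then be resolved relative to the same folder (e.g. `demo_dir / entry["image"]`).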

## Leaderboard & Full Set Evaluation

This repository contains a **30% public subset** of the SWITCH benchmark, designed for local debugging, exploratory data analysis, and preliminary testing.

To preserve the integrity of the benchmark, the remaining 70% of the dataset is kept private. **If you wish to evaluate your model on the full hidden test set and have your results featured on our official leaderboard, please contact us!**

**How to participate:**
1. Ensure your model can process the input formats specified in the dataset structure.
2. Reach out to the BAAI-Agents team at `borje@baai.ac.cn`.
3. We will provide instructions on how to submit your model's predictions for full-set evaluation.

## License

This dataset is released under the **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license. It is restricted to **academic research and non-commercial purposes only**.

## 📖 Citation

If you use SWITCH in your research, please cite:

```bibtex
@misc{switch2025,
  title={SWITCH: Benchmarking Modeling and Handling of Tangible Interfaces in Long-horizon Embodied Scenarios},
  year={2025}
}
```

## Contributing & Contact

We welcome contributions and feedback! Please feel free to submit issues or pull requests. For questions or inquiries, please reach out to the **BAAI-Agents** team.