Add README.md
README.md
  - split: test_unseen_os
    path: data/test_unseen_os-*
---

[Paper](https://arxiv.org/abs/2505.12632) | [Code](https://github.com/runamu/monday) | [Dataset](https://huggingface.co/datasets/runamu/MONDAY) | [Project](https://monday-dataset.github.io)

# Dataset Card for MONDAY

_MONDAY_ (Mobile OS Navigation Task Dataset for Agents from YouTube) is a cross-platform mobile navigation dataset for training vision-language models. This dataset contains:

- **20K** curated videos of mobile navigation tasks from YouTube, covering both Android and iOS devices.
- **333K** detected scenes, each representing a temporally segmented step within a mobile navigation task.
- **313K** identified actions, including touch, scroll, hardware, typing, long press, multi touch, and zoom.

Please visit our [project page](https://monday-dataset.github.io/) for more details.
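The dataset can be loaded through the Hugging Face `datasets` library. A minimal sketch follows; the repo id comes from the dataset link above, and `test_unseen_os` is the split named in the YAML header (check the dataset viewer for the full split list before relying on other names):

```python
# Minimal loading sketch for the MONDAY dataset via the `datasets` library.
# The repo id comes from the dataset link above; `test_unseen_os` is the
# split named in this card's YAML header.
REPO_ID = "runamu/MONDAY"


def load_monday(split: str = "test_unseen_os"):
    """Load one split of MONDAY from the Hugging Face Hub (requires network)."""
    from datasets import load_dataset  # pip install datasets

    return load_dataset(REPO_ID, split=split)


# usage (requires network access):
# ds = load_monday("test_unseen_os")
# print(ds[0]["video_id"], ds[0]["num_scenes"])
```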

## Data Fields

- **video_id (str)**: Unique identifier for the video.
- **title (str)**: Title of the video.
- **os (str)**: Operating system of the mobile device used in the video.
- **num_scenes (int)**: Number of detected scenes in the video.
- **scene_timestamps_in_sec (list)**: A list of timestamps of the detected scenes in seconds. The list has a length of `num_scenes`.
- **screen_bboxes (list)**: A list of bounding boxes for the detected phone screen in each scene, given as (left, top, right, bottom) pixel coordinates. The list has a length of `num_scenes`.
- **ui_element_bboxes (list)**: A list of bounding boxes for the detected user interface (UI) elements in each scene, given as (left, top, right, bottom) coordinates normalized to the [0, 1] range. The list has a length of `num_scenes - 1`.

```python
# example
ui_element_bboxes = [
    [ui_bbox1_scene1, ui_bbox2_scene1, ...],  # UI elements in scene 1
    [ui_bbox1_scene2, ui_bbox2_scene2, ...],  # UI elements in scene 2
    ...
]
```
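The scene-level lists above are aligned with `num_scenes`. As a quick sanity check, the documented length invariants can be verified on a record; the values below are illustrative placeholders, not real dataset entries:

```python
# Check the documented length invariants on a (toy) MONDAY record.
# The field values are illustrative placeholders, not real data.
record = {
    "num_scenes": 3,
    "scene_timestamps_in_sec": [0.0, 4.2, 9.8],   # one per scene
    "screen_bboxes": [(10, 5, 310, 645)] * 3,     # pixel coords, one per scene
    "ui_element_bboxes": [                        # normalized, one per scene transition
        [(0.1, 0.2, 0.3, 0.4)],
        [(0.5, 0.1, 0.9, 0.2), (0.2, 0.6, 0.8, 0.7)],
    ],
}

n = record["num_scenes"]
assert len(record["scene_timestamps_in_sec"]) == n
assert len(record["screen_bboxes"]) == n
assert len(record["ui_element_bboxes"]) == n - 1  # annotations describe transitions
```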
- **raw_actions (list)**: A list of raw actions identified from the video for each scene. The list has a length of `num_scenes - 1`. Multiple actions can be annotated within a single scene, and all are considered valid. Each element is a list of actions annotated in that scene, with each action represented as a dictionary containing the following keys:
  - **box_id (int)**: The index of the UI element's bounding box (from `ui_element_bboxes[scene_id]`) associated with the action. If the action does not correspond to any UI element, the value is -1.
  - **details (str)**: A detailed description of the action, either automatically generated or manually annotated during the identification process.
  - **type (str)**: A text label describing the action type. Possible values include `"touch"`, `"scroll"`, `"hardware"`, `"typing"`, `"long press"`, `"multi touch"`, and `"zoom"`, listed in order of frequency in the dataset.

```python
# example
raw_actions = [
    [
        {"box_id": 0, "details": "...", "type": "touch"},    # First action in scene 1
        {"box_id": 1, "details": "...", "type": "touch"},    # Second action in scene 1
    ],
    [
        {"box_id": -1, "details": "...", "type": "typing"},  # First action in scene 2
    ],
    ...
]
```

Note: The `box_id` is -1 for actions that do not correspond to any UI element.

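As described above, `box_id` indexes into `ui_element_bboxes[scene_id]`. A small helper for resolving a raw action to its UI element bounding box might look like the following (toy data with illustrative values):

```python
# Resolve a raw action's box_id to its UI element bounding box.
# Toy data with illustrative values; real records follow the same shapes.
ui_element_bboxes = [
    [(0.10, 0.20, 0.30, 0.25), (0.40, 0.50, 0.60, 0.55)],  # scene 1
]
raw_actions = [
    [
        {"box_id": 1, "details": "...", "type": "touch"},
        {"box_id": -1, "details": "...", "type": "typing"},  # no UI element
    ],
]


def action_bbox(scene_id, action):
    """Return the (left, top, right, bottom) bbox for the action, or None."""
    if action["box_id"] == -1:
        return None
    return ui_element_bboxes[scene_id][action["box_id"]]


print(action_bbox(0, raw_actions[0][0]))  # (0.4, 0.5, 0.6, 0.55)
print(action_bbox(0, raw_actions[0][1]))  # None
```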
- **actions (list)**: A list of actions in each scene, processed for mobile navigation agent training and evaluation. The list has a length of `num_scenes - 1`. Multiple actions can be annotated within a single scene, and all are considered valid. Each element is a list of actions annotated in that scene, with each action represented as a dictionary containing the following keys:
  - **action_type_id (int)**: An integer identifier for the action type, based on the action type sets used in [SeeClick](https://github.com/njucckevin/SeeClick/blob/main/agent_tasks/action_type.py) and [AitW](https://github.com/google-research/google-research/blob/master/android_in_the_wild/action_type.py).
  - **action_type_text (str)**: A text label describing the action type. Possible values include `"click"`, `"scroll down"`, `"press home"`, `"type"`, `"scroll up"`, `"other hardware"`, `"scroll left"`, `"zoom or multi-touch"`, `"press power"`, `"scroll right"`, and `"press back"`, listed in order of frequency in the dataset.
  - **annot_position (array)**: A flattened array of bounding box coordinates for detected UI elements, formatted as (top, left, height, width), normalized to the [0, 1] range, and rounded to three decimal places. If applicable, the length of this array is `4 * num_ui_elements` per scene; otherwise, it is an empty list.
  - **lift (array)**: Lift coordinates in (x, y) format, normalized to the [0, 1] range and rounded to three decimal places. If not applicable, the value is (-1, -1).
  - **touch (array)**: Touch coordinates in (x, y) format, normalized to the [0, 1] range and rounded to three decimal places. If not applicable, the value is (-1, -1).
  - **type_text (str)**: The entered text, if the action type is `"type"`; otherwise, an empty string.

```python
# example
actions = [
    [
        {"action_type_id": 4, "action_type_text": "click", "annot_position": annot_position, "lift": lift_point_action1, "touch": touch_point_action1, "type_text": ""},  # First action in scene 1
        {"action_type_id": 4, "action_type_text": "click", "annot_position": annot_position, "lift": lift_point_action2, "touch": touch_point_action2, "type_text": ""},  # Second action in scene 1
    ],
    [
        {"action_type_id": 3, "action_type_text": "type", "annot_position": [], "lift": [-1, -1], "touch": [-1, -1], "type_text": "..."},  # First action in scene 2
    ],
    ...
]
```

Note: The data format of `actions` is derived from [SeeClick](https://github.com/njucckevin/SeeClick/blob/main/agent_tasks/readme_agent.md) and [AitW](https://github.com/google-research/google-research/tree/master/android_in_the_wild#dataset-format).

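Because `annot_position` stores four values per UI element, it can be unflattened back into per-element (top, left, height, width) boxes. A short sketch, with illustrative values:

```python
# Unflatten annot_position into one (top, left, height, width) box per UI
# element. Values are illustrative; real entries are normalized to [0, 1]
# and rounded to three decimal places.
annot_position = [0.1, 0.2, 0.05, 0.3,   # first UI element
                  0.4, 0.2, 0.05, 0.3]   # second UI element

boxes = [tuple(annot_position[i:i + 4]) for i in range(0, len(annot_position), 4)]
assert len(boxes) == len(annot_position) // 4
print(boxes[0])  # (0.1, 0.2, 0.05, 0.3)
```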
- **video_fps (float)**: Frames per second of the video. _This value must be preserved when downloading the video to ensure consistency with `scene_timestamps_in_sec`._
- **video_width (int)**: Width of the video in pixels. _This value must be preserved when downloading the video to ensure consistency with `screen_bboxes`._
- **video_height (int)**: Height of the video in pixels. _This value must be preserved when downloading the video to ensure consistency with `screen_bboxes`._
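To recover the exact frames the annotations refer to, the scene timestamps can be mapped to frame indices via `video_fps`, and each frame cropped with its pixel-coordinate screen bbox. A sketch with illustrative values (whether to round or floor the frame index is an assumption to verify against your decoder):

```python
# Map scene timestamps to frame indices using video_fps, then crop the
# detected screen region using the pixel-coordinate screen bbox.
# Values are illustrative; rounding (vs. flooring) the index is an
# assumption to verify against your video decoder.
video_fps = 30.0
scene_timestamps_in_sec = [0.0, 4.2, 9.8]
screen_bboxes = [(100, 40, 460, 760)] * 3  # (left, top, right, bottom) pixels

frame_indices = [round(t * video_fps) for t in scene_timestamps_in_sec]
print(frame_indices)  # [0, 126, 294]

# with a decoded frame as a NumPy array (e.g. via cv2.VideoCapture):
# left, top, right, bottom = screen_bboxes[scene_id]
# screen_crop = frame[top:bottom, left:right]
```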
## Citation

```bibtex
@inproceedings{jang2025_monday,
  title={{Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents}},
  author={Jang, Yunseok and Song, Yeda and Sohn, Sungryull and Logeswaran, Lajanugen and Luo, Tiange and Kim, Dong-Ki and Bae, Kyunghoon and Lee, Honglak},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}
```