---
dataset_info:
  features:
  - name: video_id
    dtype: string
  - name: title
    dtype: string
  - name: os
    dtype: string
  - name: num_scenes
    dtype: int64
  - name: scene_timestamps_in_sec
    sequence: float64
  - name: screen_bboxes
    sequence:
      sequence: int64
  - name: ui_element_bboxes
    sequence:
      sequence:
        sequence: float64
  - name: raw_actions
    list:
      list:
      - name: box_id
        dtype: int64
      - name: details
        dtype: string
      - name: type
        dtype: string
  - name: actions
    list:
      list:
      - name: action_type_id
        dtype: int64
      - name: action_type_text
        dtype: string
      - name: annot_position
        sequence: float64
      - name: lift
        sequence: float64
      - name: touch
        sequence: float64
      - name: type_text
        dtype: string
  - name: video_fps
    dtype: float64
  - name: video_width
    dtype: int64
  - name: video_height
    dtype: int64
  splits:
  - name: train
    num_bytes: 69622260
    num_examples: 19725
  - name: validation
    num_bytes: 1641036
    num_examples: 495
  - name: test
    num_bytes: 565401
    num_examples: 100
  - name: test_unseen_os
    num_bytes: 169823
    num_examples: 50
  download_size: 18085770
  dataset_size: 71998520
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
  - split: test_unseen_os
    path: data/test_unseen_os-*
---
[Paper](https://arxiv.org/abs/2505.12632) |
[Code](https://github.com/runamu/monday) |
[Dataset](https://huggingface.co/datasets/runamu/MONDAY) |
[Project](https://monday-dataset.github.io)
# Dataset Card for MONDAY
_MONDAY_ (Mobile OS Navigation Task Dataset for Agents from YouTube) is a cross-platform mobile navigation dataset for training vision-language models. This dataset contains:
- **20K** curated videos of mobile navigation tasks from YouTube, covering both Android and iOS devices.
- **333K** detected scenes, each representing a temporally segmented step within a mobile navigation task.
- **313K** identified actions, including touch, scroll, hardware, typing, long press, multi touch, and zoom.
Please visit our [project page](https://monday-dataset.github.io/) for more details.
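For reference, the dataset can be loaded directly with the Hugging Face `datasets` library. A minimal sketch (split names follow the configuration above):
```python
from datasets import load_dataset

# Load all four splits defined in the default config.
dataset = load_dataset("runamu/MONDAY")
print(dataset)  # train / validation / test / test_unseen_os

# Inspect a single example; field names follow the schema described below.
example = dataset["train"][0]
print(example["video_id"], example["os"], example["num_scenes"])
```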
## Data Fields
- **video_id (str)**: Unique identifier for the video.
- **title (str)**: Title of the video.
- **os (str)**: Operating system of the mobile device used in the video.
- **num_scenes (int)**: Number of detected scenes in the video.
- **scene_timestamps_in_sec (list)**: A list of timestamps of the detected scenes in seconds. The list has a length of `num_scenes`.
- **screen_bboxes (list)**: A list of bounding boxes for the detected phone screen in each scene, given as (left, top, right, bottom) pixel coordinates. The list has a length of `num_scenes`.
- **ui_element_bboxes (list)**: A list of bounding boxes for the detected user interface (UI) elements in each scene, given as (left, top, right, bottom) coordinates normalized to the [0, 1] range. The list has a length of `num_scenes - 1`.
```python
# example
ui_element_bboxes = [
[ui_bbox1_scene1, ui_bbox2_scene1, ...], # UI elements in scene 1
[ui_bbox1_scene2, ui_bbox2_scene2, ...], # UI elements in scene 2
...
]
```
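The per-video list lengths described above can be sanity-checked on a loaded example. A minimal sketch, assuming `example` is a single row loaded as shown earlier:
```python
# scene_timestamps_in_sec and screen_bboxes have one entry per scene;
# ui_element_bboxes has one entry per scene transition (num_scenes - 1).
assert len(example["scene_timestamps_in_sec"]) == example["num_scenes"]
assert len(example["screen_bboxes"]) == example["num_scenes"]
assert len(example["ui_element_bboxes"]) == example["num_scenes"] - 1
```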
- **raw_actions (list)**: A list of raw actions identified from the video for each scene. The list has a length of `num_scenes - 1`. Multiple actions can be annotated within a single scene, and all are considered valid. Each element is a list of actions annotated in that scene, with each action represented as a dictionary containing the following keys:
- **box_id (int)**: The index of the UI element's bounding box (from `ui_element_bboxes[scene_id]`) associated with the action. If the action does not correspond to any UI element, the value is -1.
- **details (str)**: A detailed description of the action, either automatically generated or manually annotated during the identification process.
  - **type (str)**: A text label describing the action type. Possible values include `"touch"`, `"scroll"`, `"hardware"`, `"typing"`, `"long press"`, `"multi touch"`, and `"zoom"`, listed in order of frequency in the dataset.
```python
# example
raw_actions = [
[
{"box_id": 0, "details": "...", "type": "touch"}, # First action in scene 1
{"box_id": 1, "details": "...", "type": "touch"}, # Second action in scene 1
],
[
{"box_id": -1, "details": "...", "type": "typing"}, # First action in scene 2
],
...
]
```
Note: The `box_id` is -1 for actions that do not correspond to any UI element.
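As described above, `box_id` indexes into the UI elements detected for the same scene. A small lookup sketch (the scene index is illustrative; `example` is a loaded row):
```python
scene_idx = 0  # illustrative scene index
for action in example["raw_actions"][scene_idx]:
    if action["box_id"] >= 0:
        # Normalized (left, top, right, bottom) box of the targeted UI element.
        bbox = example["ui_element_bboxes"][scene_idx][action["box_id"]]
        print(action["type"], action["details"], bbox)
    else:
        print(action["type"], action["details"], "(no associated UI element)")
```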
- **actions (list)**: A list of actions in each scene, processed for mobile navigation agent training and evaluation. The list has a length of `num_scenes - 1`. Multiple actions can be annotated within a single scene, and all are considered valid. Each element is a list of actions annotated in that scene, with each action represented as a dictionary containing the following keys:
- **action_type_id (int)**: An integer identifier for the action type, based on the action type sets used in [SeeClick](https://github.com/njucckevin/SeeClick/blob/main/agent_tasks/action_type.py) and [AitW](https://github.com/google-research/google-research/blob/master/android_in_the_wild/action_type.py).
- **action_type_text (str)**: A text label describing the action type. Possible values include `"click"`, `"scroll down"`, `"press home"`, `"type"`, `"scroll up"`, `"other hardware"`, `"scroll left"`, `"zoom or multi-touch"`, `"press power"`, `"scroll right"`, and `"press back"`, listed in order of frequency in the dataset.
- **annot_position (array)**: A flattened array of bounding box coordinates for detected UI elements, formatted as (top, left, height, width), normalized to the [0, 1] range, and rounded to three decimal places. If applicable, the length of this array is `4 * num_ui_elements` per scene; otherwise, it is an empty list.
- **lift (array)**: Lift coordinates in (x, y) format, normalized to the [0, 1] range and rounded to three decimal places. If not applicable, the value is (-1, -1).
- **touch (array)**: Touch coordinates in (x, y) format, normalized to the [0, 1] range and rounded to three decimal places. If not applicable, the value is (-1, -1).
- **type_text (str)**: The entered text, if the action type is `"type"`; otherwise, this is an empty string.
```python
# example
actions = [
[
{"action_type_id": 4, "action_type_text": "click", "annot_position": annot_position, "lift": lift_point_action1, "touch": touch_point_action1, "type_text": ""}, # First action in scene 1
{"action_type_id": 4, "action_type_text": "click", "annot_position": annot_position, "lift": lift_point_action2, "touch": touch_point_action2, "type_text": ""}, # Second action in scene 1
],
[
{"action_type_id": 3, "action_type_text": "type", "annot_position": [], "lift": [-1, -1], "touch": [-1, -1], "type_text": "..."}, # First action in scene 2
],
...
]
```
Note: The data format of `actions` is derived from [SeeClick](https://github.com/njucckevin/SeeClick/blob/main/agent_tasks/readme_agent.md) and [AitW](https://github.com/google-research/google-research/tree/master/android_in_the_wild#dataset-format).
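The flattened `annot_position` array can be unpacked into per-element (top, left, height, width) boxes as described above. A small sketch (indices are illustrative):
```python
scene_idx = 0  # illustrative scene index
for action in example["actions"][scene_idx]:
    flat = action["annot_position"]
    # Regroup the flat array into one 4-tuple per detected UI element.
    boxes = [flat[i:i + 4] for i in range(0, len(flat), 4)]
    print(action["action_type_text"], action["touch"], action["lift"], len(boxes))
```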
- **video_fps (float)**: Frames per second of the video. _This value must be preserved when downloading the video to ensure consistency with `scene_timestamps_in_sec`._
- **video_width (int)**: Width of the video in pixels. _This value must be preserved when downloading the video to ensure consistency with `screen_bboxes`._
- **video_height (int)**: Height of the video in pixels. _This value must be preserved when downloading the video to ensure consistency with `screen_bboxes`._
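The videos themselves must be obtained separately (e.g., downloaded from YouTube) while preserving the original fps and resolution, as noted above. A hedged sketch of extracting and cropping the phone-screen region at each scene timestamp with OpenCV, assuming a local video file for the example's `video_id` exists:
```python
import cv2  # pip install opencv-python

def extract_scene_crops(video_path, example):
    """Crop the detected phone screen at each scene timestamp (illustrative helper)."""
    cap = cv2.VideoCapture(video_path)
    crops = []
    for ts, (left, top, right, bottom) in zip(
        example["scene_timestamps_in_sec"], example["screen_bboxes"]
    ):
        cap.set(cv2.CAP_PROP_POS_MSEC, ts * 1000)  # seek to the scene timestamp
        ok, frame = cap.read()
        if not ok:
            break
        # screen_bboxes are pixel coordinates in the original video resolution.
        crops.append(frame[top:bottom, left:right])
    cap.release()
    return crops
```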
## Citation
```bibtex
@inproceedings{jang2025_monday,
title={{Scalable Video-to-Dataset Generation for Cross-Platform Mobile Agents}},
author={Jang, Yunseok and Song, Yeda and Sohn, Sungryull and Logeswaran, Lajanugen and Luo, Tiange and Kim, Dong-Ki and Bae, Kyunghoon and Lee, Honglak},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025}
}
```