---
base_model: OpenGVLab/InternVL2-4B
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
---

# OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis

<div align="center">

[\[🏠Homepage\]](https://qiushisun.github.io/OS-Genesis-Home/) [\[💻Code\]](https://github.com/OS-Copilot/OS-Genesis) [\[📝Paper\]](https://arxiv.org/abs/2412.19723) [\[🤗Models\]](https://huggingface.co/collections/OS-Copilot/os-genesis-6768d4b6fffc431dbf624c2d) [\[🤗Data\]](https://huggingface.co/collections/OS-Copilot/os-genesis-6768d4b6fffc431dbf624c2d)

</div>
## Overview



We introduce OS-Genesis, an interaction-driven pipeline that synthesizes high-quality and diverse GUI agent trajectory data without human supervision. By leveraging reverse task synthesis, OS-Genesis enables effective training of GUI agents to achieve superior performance on dynamic benchmarks such as AndroidWorld and WebArena.
## Quick Start

OS-Genesis-4B-AC is a mobile action model fine-tuned from [InternVL2-4B](https://huggingface.co/OpenGVLab/InternVL2-4B).

### OS-Genesis AC Family Models

The table below gives an overview of the OS-Genesis AC family models evaluated on the AndroidControl benchmark; a short loading sketch follows the table.

| Model Name | Base Model | Training Data | HF Link |
| :--------: | :--------: | :-----------: | :-----: |
| OS-Genesis-4B-AC | [InternVL2-4B](https://huggingface.co/OpenGVLab/InternVL2-4B) | [OS-Genesis-ac-training-data](https://huggingface.co/datasets/OS-Copilot/OS-Genesis-mobile-data/blob/main/os_genesis_ac_training_data.jsonl) | [🤗 link](https://huggingface.co/OS-Copilot/OS-Genesis-4B-AC) |
| OS-Genesis-7B-AC | [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | [OS-Genesis-ac-training-data](https://huggingface.co/datasets/OS-Copilot/OS-Genesis-mobile-data/blob/main/os_genesis_ac_training_data.jsonl) | [🤗 link](https://huggingface.co/OS-Copilot/OS-Genesis-7B-AC) |
| OS-Genesis-8B-AC | [InternVL2-8B](https://huggingface.co/OpenGVLab/InternVL2-8B) | [OS-Genesis-ac-training-data](https://huggingface.co/datasets/OS-Copilot/OS-Genesis-mobile-data/blob/main/os_genesis_ac_training_data.jsonl) | [🤗 link](https://huggingface.co/OS-Copilot/OS-Genesis-8B-AC) |
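
The checkpoints above share the same training data and differ only in the base model. As a rough sketch (assuming the InternVL2-based 4B and 8B checkpoints load exactly like the inference example below), the table's repository IDs can be swapped into the same loading call:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hugging Face repository IDs taken from the table above.
AC_FAMILY = {
    "OS-Genesis-4B-AC": "OS-Copilot/OS-Genesis-4B-AC",  # base: InternVL2-4B
    "OS-Genesis-7B-AC": "OS-Copilot/OS-Genesis-7B-AC",  # base: Qwen2-VL-7B-Instruct
    "OS-Genesis-8B-AC": "OS-Copilot/OS-Genesis-8B-AC",  # base: InternVL2-8B
}

# Assumption: the InternVL2-based checkpoints (4B, 8B) follow the same loading path as the
# example in this card; the Qwen2-VL-based 7B checkpoint follows Qwen2-VL usage instead.
path = AC_FAMILY["OS-Genesis-8B-AC"]
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```

The Qwen2-VL-based OS-Genesis-7B-AC checkpoint is not covered by the InternVL2-style example in this card; refer to its own model page (linked in the table) for loading instructions.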
### Inference Example

First, install the `transformers` library:

```
pip install transformers
```

For additional dependencies, please refer to the [InternVL2 documentation](https://internvl.readthedocs.io/en/latest/get_started/installation.html).
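
Before running the example below, it can help to confirm that the core packages are importable. This is only a sanity-check sketch; `timm` and `einops` are listed as assumptions about what the InternVL2 remote code typically needs, so treat the InternVL2 documentation above as the authoritative dependency list:

```python
import importlib

# torch/torchvision/transformers are required by the example below; timm and einops are
# assumptions based on typical InternVL2 requirements (see the InternVL2 docs).
for pkg in ["torch", "torchvision", "transformers", "timm", "einops"]:
    try:
        mod = importlib.import_module(pkg)
        print(f"{pkg}: {getattr(mod, '__version__', 'unknown version')}")
    except ImportError:
        print(f"{pkg}: not installed")
```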
For evaluation on the AndroidControl benchmark, please refer to the [**evaluation code**](https://github.com/OS-Copilot/OS-Genesis/tree/main/evaluation/android_control).

Inference code example:
```python
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)


def build_transform(input_size):
    MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
    transform = T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=MEAN, std=STD)
    ])
    return transform


def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio


def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    # calculate the existing image aspect ratio
    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
        i * j <= max_num and i * j >= min_num)
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    # find the closest aspect ratio to the target
    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size)

    # calculate the target width and height
    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    # resize the image
    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        # split the image
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    assert len(processed_images) == blocks
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images


def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(image) for image in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values


# If you want to load the model using multiple GPUs, please refer to the `Multiple GPUs`
# section of the InternVL2 documentation.
path = 'OS-Copilot/OS-Genesis-4B-AC'
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# set the max number of tiles in `max_num`
pixel_values = load_image('./web_dfacd48d-d2c2-492f-b94c-41e6a34ea99f.png', max_num=6).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=True)

# The curly-brace fields below are placeholders; fill them with your own task data.
question = """<image>
You are a GUI task expert, I will provide you with a high-level instruction, an action history, a screenshot with its corresponding accessibility tree.
High-level instruction: {high_level_instruction}
Action history: {action_history}
Accessibility tree: {a11y_tree}
Please generate the low-level thought and action for the next step."""
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
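
The `{high_level_instruction}`, `{action_history}`, and `{a11y_tree}` fields in the prompt above are placeholders. The snippet below is a hedged sketch of filling them before calling `model.chat`, continuing from the example above; the example strings are invented for illustration, and the exact serialization expected at evaluation time is defined by the AndroidControl evaluation code linked above:

```python
# Hypothetical task data -- replace with your own instruction, history, and a11y tree.
high_level_instruction = "Open the Settings app and turn on Wi-Fi."
action_history = "Step 1: CLICK the 'Settings' icon on the home screen."
a11y_tree = "[0] Switch 'Wi-Fi' (currently off) ..."  # serialized accessibility tree of the current screen

filled_question = question.format(
    high_level_instruction=high_level_instruction,
    action_history=action_history,
    a11y_tree=a11y_tree,
)
response, history = model.chat(tokenizer, pixel_values, filled_question, generation_config,
                               history=None, return_history=True)
print(response)
```

The screenshot loaded into `pixel_values` should come from the same step as the accessibility tree, matching how the prompt pairs them.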
## Citation

If you find this repository helpful, feel free to cite our paper:

```bibtex
@article{sun2024genesis,
  title={OS-Genesis: Automating GUI Agent Trajectory Construction via Reverse Task Synthesis},
  author={Sun, Qiushi and Cheng, Kanzhi and Ding, Zichen and Jin, Chuanyang and Wang, Yian and Xu, Fangzhi and Wu, Zhenyu and Jia, Chengyou and Chen, Liheng and Liu, Zhoumianze and others},
  journal={arXiv preprint arXiv:2412.19723},
  year={2024}
}
```