---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: image-text-to-text
---
# UIPro: Unleashing Superior Interaction Capability For GUI Agents
<div align="center">
[\[💻Code\]](https://github.com/ZJULiHongxin/UIPro) [\[🚀Quick Start\]](#uses) [\[📝Paper\]](https://arxiv.org/abs/2509.17328)
</div>

## Model Details

### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Brave Group, CASIA
- **Model type:** Vision-Language Model
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** Qwen2-VL-7B-Instruct
### Model Sources
HongxinLi/UIPro-7B_Stage2_Web is a GUI agentic model fine-tuned from Qwen2-VL-7B-Instruct. It is the web-oriented embodiment of UIPro and is capable of solving GUI agent tasks in web scenarios.
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/ZJULiHongxin/UIPro](https://github.com/ZJULiHongxin/UIPro)
- **Paper:** [https://arxiv.org/abs/2509.17328](https://arxiv.org/abs/2509.17328)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
First, ensure that the necessary dependencies are installed:
```shell
pip install transformers
pip install qwen-vl-utils
```
Inference code example (note that the checkpoint loaded is the web variant, `HongxinLi/UIPro-7B_Stage2_Web`, matching this model card):
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "HongxinLi/UIPro-7B_Stage2_Web", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("HongxinLi/UIPro-7B_Stage2_Web")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "./web_6f93090a-81f6-489e-bb35-1a2838b18c01.png",
            },
            {
                "type": "text",
                "text": """Given the Web UI screenshot and previous actions, please generate the next move necessary to advance towards task completion. The user's task is: {task}
Action history: {action_history}
Now, first describe the action intent and then directly plan the next action.""",
            },
        ],
    }
]

# Prepare the model inputs and run inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens so only the newly generated action remains
generated_ids_trimmed = [
    out_ids[len(in_ids):]
    for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True
)[0]
print(output_text)
```
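The prompt above contains `{task}` and `{action_history}` placeholders that must be filled in before the message is sent to the model. A minimal sketch of this step, using a hypothetical `build_prompt` helper (plain Python string formatting; not part of the UIPro codebase):

```python
# Prompt template copied from the inference example above; {task} and
# {action_history} are the slots to fill per step.
PROMPT_TEMPLATE = """Given the Web UI screenshot and previous actions, please generate the next move necessary to advance towards task completion. The user's task is: {task}
Action history: {action_history}
Now, first describe the action intent and then directly plan the next action."""

def build_prompt(task: str, action_history: list[str]) -> str:
    """Fill the template with the user task and prior actions.

    Hypothetical helper: joins the action history with semicolons and
    falls back to "None" on the first step, when no actions exist yet.
    """
    history = "; ".join(action_history) if action_history else "None"
    return PROMPT_TEMPLATE.format(task=task, action_history=history)

# Example: first step of a task (empty history)
print(build_prompt("Search the site for wireless keyboards", []))
```

The returned string would replace the hard-coded `"text"` value in the `messages` list, and the agent loop would append each executed action to `action_history` before the next step.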
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@InProceedings{Li_2025_ICCV,
author = {Li, Hongxin and Su, Jingran and Chen, Jingfan and Ju, Zheng and Chen, Yuntao and Li, Qing and Zhang, Zhaoxiang},
title = {UIPro: Unleashing Superior Interaction Capability For GUI Agents},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2025},
pages = {1613-1623}
}
```
### Framework versions
- PEFT 0.11.1