Improve model card: add metadata, paper link, and library name
#1
opened by nielsr (HF Staff)

README.md CHANGED

---
license: apache-2.0
pipeline_tag: image-text-to-text
library_name: transformers
base_model: xlangai/Jedi-3B-1080p
tags:
- gui-grounding
- text-dragging
- computer-use
---
|
| 12 |
+
# Beyond Clicking: A Step Towards Generalist GUI Grounding via Text Dragging
|
| 13 |
|
| 14 |
+
This repository contains the weights for **GUI-Drag-3B**, a GUI grounding model specialized in text dragging and click interactions.
|
| 15 |
|
| 16 |
+
- [**Paper**](https://huggingface.co/papers/2601.06031)
|
| 17 |
+
- [**Project Page**](https://osu-nlp-group.github.io/GUI-Drag)
|
| 18 |
+
- [**GitHub Repository**](https://github.com/OSU-NLP-Group/GUI-Drag)
|
| 19 |
+
- [**Dataset**](https://huggingface.co/datasets/osunlp/GUI-Drag-dataset)
|
| 20 |
|

## Description

GUI-Drag-3B is trained from [Jedi-3B-1080p](https://huggingface.co/xlangai/Jedi-3B-1080p) via an efficient continual training strategy. It narrows a gap in GUI grounding by introducing text dragging, the process of selecting and manipulating textual content via mouse dragging, alongside traditional clicking.

The model was trained on the **GUI-Drag** dataset (161K text dragging examples) and evaluated on the **ScreenDrag** benchmark, achieving substantial improvements in dragging capability while preserving the original click-based performance.

## Usage

For details on how to use the model, please refer to the examples in the [repo](https://github.com/OSU-NLP-Group/GUI-Drag/blob/main/evaluation/cli_run_drag.sh).

Below is a quick demo (`demo.png` can be found [here](https://github.com/OSU-NLP-Group/GUI-Drag/blob/main/demo.png)):

```python
# pip install openai pillow transformers
#
# start the vllm server:
'''
vllm serve osunlp/GUI-Drag-3B \
    --host 0.0.0.0 \
    --port 8000 \
    --max-model-len 16384
'''

import base64
import io
import json
import re
from pathlib import Path

from openai import OpenAI
from PIL import Image, ImageDraw
from transformers.models.qwen2_vl.image_processing_qwen2_vl_fast import smart_resize as qwen_smart_resize

MODEL_ID = "osunlp/GUI-Drag-3B"
BASE_URL = "http://localhost:8000/v1"

FN_CALL_TEMPLATE = """You are a helpful assistant.
# Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{"type": "function", "function": {"name": "computer_use", "description": "Use a mouse and keyboard to interact with a computer, and take screenshots.
* This is an interface to a desktop GUI. You do not have access to a terminal or applications menu. You must click on desktop icons to start applications.
* Some applications may take time to start or process actions, so you may need to wait and take successive screenshots to see the results of your actions. E.g. if you click on Firefox and a window doesn't open, try wait and taking another screenshot.
* The screen's resolution is {width}x{height}.
* Whenever you intend to move the cursor to click on an element like an icon, you should consult a screenshot to determine the coordinates of the element before moving the cursor.
* If you tried clicking on a program or link but it failed to load, even after waiting, try adjusting your cursor position so that the tip of the cursor visually falls on the element that you want to click.
* Make sure to click any buttons, links, icons, etc with the cursor tip in the center of the element. Don't click boxes on their edges unless asked.", "parameters": {"properties": {"action": {"description": "The action to perform. The available actions are:
* `key`: Performs key down presses on the arguments passed in order, then performs key releases in reverse order.
* `type`: Type a string of text on the keyboard.
* `mouse_move`: Move the cursor to a specified (x, y) pixel coordinate on the screen.
* `left_click`: Click the left mouse button.
* `left_click_drag`: Click and drag the cursor to a specified (x, y) pixel coordinate on the screen.
* `right_click`: Click the right mouse button.
* `middle_click`: Click the middle mouse button.
* `double_click`: Double-click the left mouse button.
* `scroll`: Performs a scroll of the mouse scroll wheel.
* `wait`: Wait specified seconds for the change to happen.
* `terminate`: Terminate the current task and report its completion status.", "enum": ["key", "type", "mouse_move", "left_click", "left_click_drag", "right_click", "middle_click", "double_click", "scroll", "wait", "terminate"], "type": "string"}, "keys": {"description": "Required only by `action=key`.", "type": "array"}, "text": {"description": "Required only by `action=type`.", "type": "string"}, "coordinate": {"description": "(x, y): The x (pixels from the left edge) and y (pixels from the top edge) coordinates to move the mouse to. Required only by `action=mouse_move`, `action=left_click_drag`, `action=left_click`, `action=right_click`, `action=double_click`.", "type": "array"}, "pixels": {"description": "The amount of scrolling to perform. Positive values scroll up, negative values scroll down. Required only by `action=scroll`.", "type": "number"}, "time": {"description": "The seconds to wait. Required only by `action=wait`.", "type": "number"}, "status": {"description": "The status of the task. Required only by `action=terminate`.", "type": "string", "enum": ["success", "failure"]}}, "required": ["action"], "type": "object"}}}
</tools>
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
"""

IMAGE_PATH = Path("demo.png")
INSTRUCTION = "Drag to select the last sentence."


def encode_image(image: Image) -> str:
    """Encode PIL image to base64 string"""
    output_buffer = io.BytesIO()
    image.save(output_buffer, format="PNG")
    byte_data = output_buffer.getvalue()
    base64_str = base64.b64encode(byte_data).decode("utf-8")
    return base64_str


def resize_coordinates(coord, size_pred, size_to_be_mapped):
    return (
        round(coord[0] * size_to_be_mapped[0] / size_pred[0]),
        round(coord[1] * size_to_be_mapped[1] / size_pred[1]),
    )


# ... (the rest of the script, including main(), is elided in this diff view)


if __name__ == "__main__":
    main()
```
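Per `FN_CALL_TEMPLATE`, the model wraps each predicted action in `<tool_call>` tags as a JSON object. The sketch below shows one way to extract that payload; the helper and the sample string are illustrative, not part of the original demo script:

```python
import json
import re


def parse_tool_call(text: str) -> dict:
    """Extract the JSON payload from the first <tool_call>...</tool_call> span."""
    match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", text, re.DOTALL)
    if match is None:
        raise ValueError("no <tool_call> block found in model output")
    return json.loads(match.group(1))


# Illustrative output for a dragging instruction (not a real model completion).
sample = (
    "<tool_call>\n"
    '{"name": "computer_use", "arguments": '
    '{"action": "left_click_drag", "coordinate": [412, 305]}}\n'
    "</tool_call>"
)

call = parse_tool_call(sample)
print(call["arguments"]["action"])      # left_click_drag
print(call["arguments"]["coordinate"])  # [412, 305]
```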
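The demo maps model-predicted coordinates back to the original screenshot via `qwen_smart_resize` and `resize_coordinates`. The following self-contained sketch illustrates that round trip; it locally reimplements Qwen2-VL's `smart_resize` rounding (with default pixel budgets, which the served model's processor may override) as a stand-in for the `transformers` import in the demo:

```python
import math


def smart_resize(height, width, factor=28, min_pixels=56 * 56, max_pixels=14 * 14 * 4 * 1280):
    """Round dimensions to multiples of `factor` while keeping the pixel count
    within budget (local reimplementation; default budgets are assumptions)."""
    h_bar = max(factor, round(height / factor) * factor)
    w_bar = max(factor, round(width / factor) * factor)
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = math.floor(height / beta / factor) * factor
        w_bar = math.floor(width / beta / factor) * factor
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = math.ceil(height * beta / factor) * factor
        w_bar = math.ceil(width * beta / factor) * factor
    return h_bar, w_bar


def resize_coordinates(coord, size_pred, size_to_be_mapped):
    """Rescale an (x, y) point from the model's resized space to the target size."""
    return (
        round(coord[0] * size_to_be_mapped[0] / size_pred[0]),
        round(coord[1] * size_to_be_mapped[1] / size_pred[1]),
    )


# A 1920x1080 screenshot is shrunk to fit the pixel budget before inference...
resized_h, resized_w = smart_resize(1080, 1920)  # (728, 1316)
# ...so a point the model predicts at (658, 364) in the resized image
# maps back to the center of the original screen.
x, y = resize_coordinates((658, 364), (resized_w, resized_h), (1920, 1080))
print((x, y))  # (960, 540)
```

The same mapping is what lets the demo draw the predicted drag endpoints on the full-resolution `demo.png` rather than on the model's internal resized view.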

## Citation

```bibtex
@article{gui-drag2025,
  title={Beyond Clicking: A Step Towards Generalist GUI Grounding via Text Dragging},
  author={Anonymous Authors},
  journal={arXiv preprint arXiv:2601.06031},
  year={2025}
}
```