---
license: apache-2.0
datasets:
- OS-Copilot/OS-Atlas-data
language:
- en
base_model:
- bytedance-research/UI-TARS-7B-DPO
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- multimodal
- gui
---

## Model Description


Fintor-GUI-S2 is a GUI grounding model fine-tuned from [**UI-TARS-7B-DPO**](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO).


## Evaluation Results

We evaluated our model using [ScreenSpot](https://github.com/likaixin2000/ScreenSpot-Pro-GUI-Grounding) on two benchmarks: ScreenSpot-Pro and ScreenSpot-v2.

We also include the evaluation scripts used for these benchmarks. The table below compares our model's performance against the base model.

| Model | Size | ScreenSpot-Pro | ScreenSpot-v2 |
|-------|:----:|:--------------:|:-------------:|
| [UI-TARS-7B-DPO](https://huggingface.co/bytedance-research/UI-TARS-7B-DPO) | 7B | 27.0 | 83.0 |
| **Ours** | | | |
| **Ui-Tars-7B-Instruct-Finetuned-Os-Atlas** | 7B | **33.0** | **91.8** |

**Note:** The base model scores slightly lower here than in the original paper because the prompts used for that evaluation are not publicly available. We used the default prompts when evaluating both the base and fine-tuned models.

## Training procedure

This model was fine-tuned on the [OS-Atlas dataset](https://huggingface.co/datasets/OS-Copilot/OS-Atlas-data/tree/main) from OS-Copilot.

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/am_fintor-neuralleap/huggingface/runs/hl90xquy?nw=nwuseram_fintor) 

This model was trained with SFT and LoRA.
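As a reminder of what LoRA does during fine-tuning: instead of updating a full weight matrix `W`, it trains two small matrices `A` and `B` whose product forms a low-rank update, `W' = W + (alpha / r) * B @ A`. The sketch below is purely illustrative (the rank, alpha, and matrix values are made up and are not the actual training configuration):

```python
# Illustrative sketch of the LoRA weight merge, not the actual training code.
# W' = W + (alpha / r) * B @ A, where B is (d x r) and A is (r x k).

def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def lora_merge(W, A, B, alpha, r):
    """Merge a low-rank LoRA update into the frozen base weight W."""
    scale = alpha / r
    delta = matmul(B, A)  # (d x k) low-rank update
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Toy example: d = 2, k = 2, rank r = 1 (values are made up).
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # d x r
A = [[0.5, 0.5]]     # r x k
merged = lora_merge(W, A, B, alpha=2, r=1)
print(merged)  # [[2.0, 1.0], [2.0, 3.0]]
```

Because the update has rank `r`, only `r * (d + k)` parameters are trained per adapted matrix instead of `d * k`, which is what makes LoRA fine-tuning of a 7B model tractable.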

### Evaluation Scripts:

The evaluation scripts are available here: [Screenspot_Ui-Tars](https://github.com/ma-neuralleap/ScreenSpot-Pro-GUI-Grounding/blob/main/models/uitaris.py)

### Quick Start
```python
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas", 
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
# Default processor
processor = AutoProcessor.from_pretrained("Fintor/Ui-Tars-7B-Instruct-Finetuned-Os-Atlas")
# Example input
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "path/to/image.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
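For grounding tasks, the model's answer is text that contains a click location. Assuming the response embeds a coordinate pair like `(x, y)` normalized to a fixed range (a common convention; the exact format depends on your prompt and may differ for this model), a small helper can extract it and scale it to the screenshot size. The function name and the sample response below are hypothetical:

```python
import re

def parse_click_point(response: str, width: int, height: int,
                      normalized_to: int = 1000):
    """Extract the first '(x, y)' pair from a model response.

    Assumes coordinates are normalized to 0..normalized_to, which is a
    common convention for GUI grounding models; adjust if your prompt
    specifies a different range or output format.
    """
    match = re.search(r"\((\d+)\s*,\s*(\d+)\)", response)
    if match is None:
        return None
    x, y = int(match.group(1)), int(match.group(2))
    # Scale normalized coordinates to pixel coordinates.
    return (x * width // normalized_to, y * height // normalized_to)

# Hypothetical response for a 1920x1080 screenshot.
print(parse_click_point("click(start_box='(500, 250)')", 1920, 1080))
# -> (960, 270)
```

Returning `None` on a parse failure lets the caller fall back to re-prompting instead of clicking a garbage location.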


## Citation