---
language:
- en
license: apache-2.0
tags:
- merge
- mergekit
- slerp
- agent
- gui-automation
- vision
- multimodal
- far-7b
- ui-tars
base_model:
- microsoft/Fara-7B
- ByteDance-Seed/UI-TARS-1.5-7B
library_name: transformers
pipeline_tag: image-text-to-text
---
# Fara-TARS-7B: The Hybrid Reasoning & GUI Agent
**Fara-TARS-7B** is a state-of-the-art merged model that combines the high-level reasoning and planning capabilities of **Microsoft Fara-7B** with the precise GUI grounding and agentic capabilities of **ByteDance UI-TARS-7B**.
The merge yields a **Hybrid Mode**: the model switches seamlessly between writing complex text plans (Reasoning) and executing precise coordinate actions (Agentic Tool Calls) based on the user prompt.
## Key Capabilities
| Capability | Performance | Description |
| :--- | :--- | :--- |
| **GUI Grounding** | 🟢 **SOTA** | Accurately maps text instructions to `[x, y]` coordinates (e.g., "Click Submit" -> `[1200, 800]`). |
| **Reasoning** | 🟢 **Excellent** | Can generate long-form plans (e.g., "Weekly Python Learning Plan") without hallucinating clicks. |
| **Language** | 🟢 **English-Only** | Tuned to strictly follow English instructions, eliminating the language bleeding common in TARS merges. |
| **Agentic Output** | 🟢 **Structured** | Outputs actions in strict JSON format: `<tool_call>{"name": "left_click", ...}</tool_call>`. |
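For illustration, here is the hypothetical shape of each mode (exact wording and coordinates will vary with the screenshot and prompt):
```text
# Agent Mode (prompt: "Click the Submit button")
I need to click the Submit button to send the form.
<tool_call>{"name": "left_click", "point": [1200, 800]}</tool_call>

# Text Mode (prompt: "Design a weekly Python learning plan")
## Weekly Python Learning Plan
**Day 1: Fundamentals.** Variables, types, and control flow...
(continues as plain Markdown, with no tool calls)
```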
## How to Use (Inference Code)
To unlock the full potential of this model (Agent Mode vs Text Mode), **you must use the specific generation configuration below**. This handles the tool schema injection and prevents repetition loops.
### Installation
```bash
pip install torch transformers pillow
```
### Python Inference Class
Use this class to interact with the model. It handles the system prompt injection and JSON parsing automatically. Note that the `stop_strings` argument to `generate()` requires a recent `transformers` release; upgrade if generation rejects the argument.
```python
import torch
from transformers import AutoModelForVision2Seq, AutoTokenizer, GenerationConfig
from PIL import Image
import json
import re


class FaraAgent:
    def __init__(self, model_path, device="auto"):
        print(f"Loading Fara-TARS from {model_path}...")
        self.model = AutoModelForVision2Seq.from_pretrained(
            model_path,
            torch_dtype=torch.bfloat16,
            device_map=device,
            trust_remote_code=True,
            low_cpu_mem_usage=True
        )
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

        # Safety fix for padding
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
            self.tokenizer.pad_token_id = self.tokenizer.eos_token_id

        # Define agent tools
        self.tools_schema = [
            {"name": "left_click", "description": "Click coordinate [x, y]", "parameters": {"type": "object", "properties": {"point": {"type": "array"}}}},
            {"name": "type_text", "description": "Type text", "parameters": {"type": "object", "properties": {"text": {"type": "string"}}}},
            {"name": "scroll", "description": "Scroll screen", "parameters": {"type": "object", "properties": {"pixels": {"type": "integer"}}}},
            {"name": "terminate", "description": "Task done", "parameters": {"type": "object", "properties": {"status": {"type": "string"}}}}
        ]

    def _format_prompt(self, user_prompt):
        # Injects the tool schema and the strict English/format instructions
        tools_json = json.dumps(self.tools_schema, indent=2)
        system = (
            "You are Fara-TARS, a GUI automation agent.\n"
            f"AVAILABLE TOOLS:\n{tools_json}\n\n"
            "INSTRUCTIONS:\n"
            "1. Reason first, then act.\n"
            "2. Output valid JSON inside <tool_call> tags.\n"
            '3. Format: <tool_call>{"name": "left_click", ...}</tool_call>'
        )
        return (
            f"<|im_start|>system\n{system}<|im_end|>\n"
            f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
            "<|im_start|>assistant\n"
        )

    def _repair_json(self, json_str):
        # Auto-fixes common LLM JSON errors (smart quotes, missing key quotes)
        json_str = json_str.replace("\u201c", '"').replace("\u201d", '"').replace("'", '"')
        # Restore the missing opening quote on keys like `name":` while
        # leaving keys that are already fully quoted untouched
        json_str = re.sub(r'(?<![\w"])(\w+)"\s*:', r'"\1":', json_str)
        return json_str

    def run(self, prompt, image_path=None):
        formatted_prompt = self._format_prompt(prompt)

        # Handle image input (optional); text-only prompts skip the vision path
        if image_path:
            image = Image.open(image_path).convert("RGB")
            inputs = self.model.build_conversation_input_ids(
                tokenizer=self.tokenizer, query=formatted_prompt, image=image
            )
        else:
            inputs = self.tokenizer(formatted_prompt, return_tensors="pt").to(self.model.device)

        # Critical: stop generation at the tool-call close tag to prevent loops
        stop_strings = ["</tool_call>", "<|im_end|>"]

        # Optimized config
        config = GenerationConfig(
            max_new_tokens=2048,
            do_sample=True,
            temperature=0.4,
            top_p=0.95,
            repetition_penalty=1.15,  # Prevents "United.com" loops
            no_repeat_ngram_size=0,   # Must be 0 to allow repeated JSON keys
            pad_token_id=self.tokenizer.pad_token_id,
            eos_token_id=self.tokenizer.eos_token_id
        )

        with torch.no_grad():
            output = self.model.generate(
                **inputs,
                generation_config=config,
                tokenizer=self.tokenizer,  # Required for stop_strings
                stop_strings=stop_strings
            )

        input_len = inputs["input_ids"].shape[1]
        raw_response = self.tokenizer.decode(output[0][input_len:], skip_special_tokens=True)

        # Parse output: text before <tool_call> is the thought, the JSON
        # inside the tags is the action
        tool_action = None
        text_content = raw_response
        if "<tool_call>" in raw_response:
            parts = raw_response.split("<tool_call>")
            text_content = parts[0].strip()
            tool_str = parts[1].split("</tool_call>")[0].strip()
            try:
                tool_action = json.loads(self._repair_json(tool_str))
            except json.JSONDecodeError:
                tool_action = {"error": "malformed_json", "raw": tool_str}

        return {"thought": text_content, "action": tool_action}


# Usage
agent = FaraAgent("your-username/Fara-TARS-7B")
result = agent.run("Click the Submit button at (1200, 800)")
print(result)
```
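The `run()` method returns a dict that separates the model's reasoning from the parsed action. A hypothetical result for the usage call above (actual wording and coordinates depend on the generation):
```python
{
    "thought": "I need to click the Submit button.",
    "action": {"name": "left_click", "point": [1200, 800]}
}
```
If the model emits no `<tool_call>` block (Text Mode), `action` is `None` and `thought` holds the full text response.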
## Benchmark Performance
The model was evaluated on a comprehensive suite covering Web Automation, GUI Grounding, and Complex Reasoning.
| Category | Task | Result Type | Performance |
| :--- | :--- | :--- | :--- |
| **GUI Grounding** | "Click Submit at (1200, 800)" | **Tool Call** | ✅ Correct JSON: `{"point": [1200, 800]}` |
| **Web Automation** | "Type 'Hello World' in search" | **Tool Call** | ✅ Correct JSON: `{"name": "type_text", "text": "Hello World"}` |
| **Reasoning** | "Design a Weekly Python Plan" | **Text** | ✅ Generates full Markdown plan (900+ tokens) |
| **Hybrid** | "Compare Selenium vs Playwright" | **Agentic Text** | ✅ Uses `type_text` tool to output a Markdown table |
| **Safety** | "Stop at critical payment point" | **Tool Call** | ✅ Uses `terminate` tool with status `stop_confirm` |
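For example, the Safety row corresponds to a stop of roughly this shape (an illustrative reconstruction based on the tool schema above, not a verbatim log):
```text
I have reached the payment confirmation step, so I will stop for user approval.
<tool_call>{"name": "terminate", "status": "stop_confirm"}</tool_call>
```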
## Merge Details
This model was merged using **Mergekit**.
### Configuration
```yaml
models:
- model: microsoft/Fara-7B
- model: ByteDance-Seed/UI-TARS-1.5-7B
merge_method: slerp
base_model: microsoft/Fara-7B
dtype: bfloat16
parameters:
t:
# 5-point gradient:
# 0.1 (Start): Mostly Fara -> Ensures input understanding and English grammar.
# 0.3 -> 0.5 (Middle): Blends TARS capability for reasoning and logic.
# 0.1 (End): Mostly Fara -> Ensures the output stops correctly and doesn't loop.
- value: [0.1, 0.3, 0.5, 0.3, 0.1]
```
*(Note: while `slerp` was used for the merge, the inference parameters documented in the Usage section (`temperature=0.4`, `repetition_penalty=1.15`) are required to stabilize the output.)*
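For reference, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. A minimal sketch of the operation (simplified relative to Mergekit's actual implementation, which handles additional edge cases):
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Angle between the two weight tensors, treated as flat vectors
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.acos((a_n * b_n).sum().clamp(-1 + eps, 1 - eps))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * a + t * b
    # t=0 returns a (Fara-7B), t=1 returns b (UI-TARS); the gradient above
    # keeps early/late layers near t=0.1 and mid layers near t=0.5
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```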
## Limitations
1. **Strict Prompting:** The model expects the specific System Prompt defined in the usage class. Without it, it may hallucinate tool names.
2. **Repetition:** In extremely long lists (100+ items), the model may repeat itself. The recommended `repetition_penalty=1.15` mitigates this in nearly all cases.
## License
Apache 2.0