---
license: mit
---

# <span style="color: #7FFF7F;">Fara-7B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`3d07caa99`](https://github.com/ggerganov/llama.cpp/commit/3d07caa99bff9213411202b4063aa2f44e919654).

---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.
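As a sketch of the idea (the linked `tensor_list_builder.py` is the actual implementation; the name patterns, quant-type names, and function below are illustrative assumptions), layer bumping amounts to mapping tensor-name patterns to higher-precision types on top of a base quantization level:

```python
import re

# Hypothetical pattern -> quant-type table: tensors matching an earlier
# pattern are "bumped" above the base quantization level.
BUMP_RULES = [
    (re.compile(r"\battn_(k|v)\b"), "q8_0"),   # attention K/V projections
    (re.compile(r"\bffn_down\b"), "q6_k"),     # FFN down-projections
]

def select_tensor_overrides(tensor_names, base_type="q4_k"):
    """Return {tensor_name: quant_type}, bumping layers matched by BUMP_RULES."""
    overrides = {}
    for name in tensor_names:
        for pattern, qtype in BUMP_RULES:
            if pattern.search(name):
                overrides[name] = qtype
                break
        else:
            overrides[name] = base_type
    return overrides
```

A table like this would then be translated into per-tensor `--tensor-type` arguments when invoking the `llama.cpp` quantization tool.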
### **I'd love your feedback: have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>

---

<!--Begin Original Model Card-->
# Fara-7B: An Efficient Agentic Model for Computer Use

[![Microsoft](https://img.shields.io/badge/Microsoft-Project-0078D4?logo=microsoft)](https://aka.ms/msaif/fara)
[![Hugging Face Dataset](https://img.shields.io/badge/🤗-Dataset-yellow)](https://huggingface.co/datasets/microsoft/WebTailBench)
[![Foundry](https://img.shields.io/badge/Azure-Foundry-0089D6)](https://aka.ms/foundry-fara-7b)
[![Github](https://img.shields.io/badge/Github-181717?logo=github&logoColor=white)](https://github.com/microsoft/fara)

[Official Microsoft Blog](https://www.microsoft.com/en-us/research/?p=1155843&preview=1&_ppp=0a22f3e916)<br>
[Technical Report](https://aka.ms/fara-techreport)<br>
[Github](https://github.com/microsoft/fara)<br>
[Microsoft Foundry](https://ai.azure.com/explore/models/Fara-7B/version/1/registry/azureml-msr?tid=72f988bf-86f1-41af-91ab-2d7cd011db47)<br>

## Model Summary

**Developer:** Microsoft Research

**Description:**
Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use. With only 7 billion parameters, Fara-7B is an ultra-compact Computer Use Agent (CUA) that achieves state-of-the-art performance within its size class and is competitive with larger, more resource-intensive agentic systems.

**Model Architecture:**
Multimodal decoder-only language model that takes an image (screenshot) plus text context. It directly predicts thoughts and actions with grounded arguments. Current production baselines leverage Qwen 2.5-VL (7B).

**Parameters:** 7 billion

**Inputs:** User goal (text), current screenshot(s), history of previous outputs (thoughts + actions text) from the agent.

**Context Length:** 128k

**Outputs:** Generated text in response to the input, with a chain-of-thought block followed by a tool call block to indicate the action.

**GPUs:** 64 H100s

**Training Time:** 2.5 days

**Public Data Summary:** N/A

**Dates:** Trained from 26 October 2025 to 29 October 2025

**Status:** Static model trained on public and private data

**Release Date:** November 24th, 2025

**License:** MIT

**Model Dependencies:** Qwen 2.5 VL

**Additional Assets:** N/A

**Acceptable Use Policy:** N/A

---
## 1. Model Overview

Fara is a 7B Computer Use Agent (CUA) model specialized for taking actions on the web to accomplish high-level user tasks. Beyond understanding webpage layout and basic action mechanics, it plans and executes high-level goals like booking restaurants, applying for jobs, planning trips, and buying shopping lists. Its training relies on a large-scale, fully synthetic dataset of action trajectories generated and verified by a multi-agent pipeline.

Fara perceives browser inputs via screenshots, while internal reasoning and state history are recorded textually. Based on recent screenshots and a full history of actions, it predicts the next action with necessary arguments (e.g., coordinates for clicks).

### 1.1 Alignment Approach

Fara-7B uses a robust post-training safety approach leveraging open-source and in-house synthetic datasets. It incorporates critical point recognition (situations requiring user permission or sensitive information) to safely halt actions. The model is trained to refuse harmful tasks and undergoes automated red teaming to assess risks, including grounding, jailbreaks, harmful content, and copyright violations.

### 1.2 Safeguards

Fara-7B is trained to refuse tasks in categories that violate usage policy:

| Type | Description | Examples |
|------|-------------|----------|
| Illegal Activities | Tasks requiring unlawful actions | Terrorism-related searches, piracy, unauthorized access, weapons creation |
| Deceptive Tasks | Tasks misleading or impersonating | Fake forms, fraudulent listings, phishing |
| High-Risk/Regulated Domains | Tasks requiring professional oversight | Medical, legal, financial advice or approvals |
| Harassment, Exploitation, Hate | Tasks harming or discriminating | Harassment content, stalking, sexualizing minors |
| Unsafe Technical Use | Misuse of automation | Large-scale scraping, spam, system disruption |
| Misinformation | Spreading false claims | Publishing unverified claims |
| Sexual | Erotic or pornographic tasks | Erotic roleplay, porn searches |

Critical points where the agent stops include entering personal info, completing purchases, making calls, sending emails, submitting applications, and signing into accounts.
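For host applications that want a belt-and-braces check on top of the model's own stopping behavior, a rough guard can flag steps that look like critical points before they execute. The keyword list, function name, and heuristic below are illustrative assumptions, not part of the model or an official harness:

```python
# Illustrative host-side guard (assumed keywords, not the model's internal
# critical-point logic): flag steps whose reasoning mentions a risky action
# so the host can pause for user confirmation.
CRITICAL_KEYWORDS = {
    "checkout", "purchase", "place order", "book", "reserve",
    "sign in", "log in", "send email", "submit application",
}

def is_critical_point(thought: str) -> bool:
    """Heuristically flag a step whose thought text mentions a critical action."""
    text = thought.lower()
    return any(kw in text for kw in CRITICAL_KEYWORDS)
```

In practice a production harness would combine a check like this with UI-level signals (e.g. detecting payment forms), rather than relying on thought text alone.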
---

## 2. Usage

### 2.1 Primary Use Cases

- Automating web tasks such as shopping, booking travel, restaurant reservations, info-seeking, or account workflows.
- Performs actions step-by-step using multimodal understanding from browser screenshots.
- On-device execution provides privacy guarantees and lower latency.

### 2.2 Out-of-Scope Use Cases

- Model not evaluated for all downstream purposes; consider limitations of LLMs for accuracy, safety, and fairness.
- Must adhere to applicable laws and regulations.
- English-only support.

### 2.3 Distribution Channels

- Hugging Face
- Azure AI Foundry

### 2.4 Input Formats

Given the nature of the training data, always use the ChatML template with the following system prompt for inference:

---

**System Prompt:**

You are a web automation agent that performs actions on websites to fulfill user requests by calling various tools.

You should stop execution at **Critical Points**. A Critical Point occurs in tasks like:

- Checkout
- Book
- Purchase
- Call
- Email
- Order

A Critical Point requires the user's permission or personal/sensitive information (name, email, credit card, address, payment information, resume, etc.) to complete a transaction (purchase, reservation, sign-up, etc.), or to communicate as a human would (call, email, apply to a job, etc.).

**Guideline:** Solve the task as far as possible **up until a Critical Point**.

**Examples:**

- If the task is to "call a restaurant to make a reservation," do **not** actually make the call. Instead, navigate to the restaurant's page and find the phone number.
- If the task is to "order new size 12 running shoes," do **not** place the order. Instead, search for the right shoes that meet the criteria and add them to the cart.

Some tasks, like answering questions, may not encounter a Critical Point at all.

---

**Function Signatures:**

You are provided with function signatures within XML tags:
```json
{
  "type": "function",
  "function": {
    "name": "computer_use",
    "description": "Use a mouse and keyboard to interact with a computer, and take screenshots.\n* This is an interface to a desktop GUI. You do not have access to a terminal or applications menu. You must click on desktop icons to start applications.\n* Some applications may take time to start or process actions, so you may need to wait and take successive screenshots to see the results of your actions. E.g. if you click on Firefox and a window doesn't open, try wait and taking another screenshot.\n* The screen's resolution is 1428x896.\n* Whenever you intend to move the cursor to click on an element like an icon, you should consult a screenshot to determine the coordinates of the element before moving the cursor.\n* If you tried clicking on a program or link but it failed to load, even after waiting, try adjusting your cursor position so that the tip of the cursor visually falls on the element that you want to click.\n* Make sure to click any buttons, links, icons, etc with the cursor tip in the center of the element. Don't click boxes on their edges unless asked.\n* When a separate scrollable container prominently overlays the webpage, if you want to scroll within it, you typically need to mouse_move() over it first and then scroll().\n* If a popup window appears that you want to close, if left_click() on the 'X' or close button doesn't work, try key(keys=['Escape']) to close it.\n* On some search bars, when you type(), you may need to press_enter=False and instead separately call left_click() on the search button to submit the search query. This is especially true of search bars that have auto-suggest popups for e.g. locations\n* For calendar widgets, you usually need to left_click() on arrows to move between months and left_click() on dates to select them; type() is not typically used to input dates there.",
    "parameters": {
      "properties": {
        "action": {
          "description": "The action to perform. The available actions are:\n* key: Performs key down presses on the arguments passed in order, then performs key releases in reverse order. Includes 'Enter', 'Alt', 'Shift', 'Tab', 'Control', 'Backspace', 'Delete', 'Escape', 'ArrowUp', 'ArrowDown', 'ArrowLeft', 'ArrowRight', 'PageDown', 'PageUp', 'Shift', etc.\n* type: Type a string of text on the keyboard.\n* mouse_move: Move the cursor to a specified (x, y) pixel coordinate on the screen.\n* left_click: Click the left mouse button.\n* scroll: Performs a scroll of the mouse scroll wheel.\n* visit_url: Visit a specified URL.\n* web_search: Perform a web search with a specified query.\n* history_back: Go back to the previous page in the browser history.\n* pause_and_memorize_fact: Pause and memorize a fact for future reference.\n* wait: Wait specified seconds for the change to happen.\n* terminate: Terminate the current task and report its completion status.",
          "enum": ["key", "type", "mouse_move", "left_click", "scroll", "visit_url", "web_search", "history_back", "pause_and_memorize_fact", "wait", "terminate"],
          "type": "string"
        },
        "keys": {"description": "Required only by action=key.", "type": "array"},
        "text": {"description": "Required only by action=type.", "type": "string"},
        "coordinate": {"description": "(x, y) coordinates for mouse actions. Required only by action=left_click, action=mouse_move, and action=type.", "type": "array"},
        "pixels": {"description": "Amount of scrolling. Positive = up, Negative = down. Required only by action=scroll.", "type": "number"},
        "url": {"description": "The URL to visit. Required only by action=visit_url.", "type": "string"},
        "query": {"description": "The query to search for. Required only by action=web_search.", "type": "string"},
        "fact": {"description": "The fact to remember for the future. Required only by action=pause_and_memorize_fact.", "type": "string"},
        "time": {"description": "Seconds to wait. Required only by action=wait.", "type": "number"},
        "status": {"description": "Status of the task. Required only by action=terminate.", "type": "string", "enum": ["success", "failure"]}
      },
      "required": ["action"],
      "type": "object"
    }
  }
}
```

For each function call, return a JSON object with the function name and arguments within XML tags:
```json
{
  "name": "<function-name>",
  "arguments": <args-json-object>
}
```

- Function signatures provided for all actions (`key`, `type`, `mouse_move`, `left_click`, `scroll`, `visit_url`, `web_search`, `history_back`, `pause_and_memorize_fact`, `wait`, `terminate`).
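On the consumer side, the tool-call block can be pulled out of the model's response text and decoded with standard JSON parsing. The `<tool_call>` wrapper tag below is a placeholder assumption (the card's actual tag names were stripped in rendering), as is the helper name:

```python
import json
import re

# NOTE: the wrapper tag name is a placeholder assumption; substitute the
# actual tags your serving stack emits.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_call(response_text: str) -> dict:
    """Extract and decode the first JSON tool call in a model response."""
    match = TOOL_CALL_RE.search(response_text)
    if match is None:
        raise ValueError("no tool call found in response")
    return json.loads(match.group(1))
```

The chain-of-thought text before the wrapper can be kept for the action history that is fed back to the model on the next step.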
### 2.5 Technical Requirements & Integration

- Required packages: `torch >=2.7.1`, `transformers >=4.53.3`, `vllm >=0.10.0`
- Tested on NVIDIA A6000, A100, and H100 GPUs (Ubuntu 24.04.3 LTS)
- Recommended: serve with vLLM at bf16 precision
- Reference implementation provided via Magentic-UI in a Docker sandbox for safe web execution
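With the model behind a vLLM OpenAI-compatible server, a minimal request might be assembled as follows; the model id, the data-URL screenshot encoding, and the helper name are assumptions for the sketch, and the system prompt is the one given in section 2.4:

```python
import base64

def build_chat_request(system_prompt: str, user_goal: str, screenshot_png: bytes) -> dict:
    """Assemble an OpenAI-style chat payload with an inline screenshot."""
    image_b64 = base64.b64encode(screenshot_png).decode("ascii")
    return {
        "model": "microsoft/Fara-7B",   # assumed model id on the server
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": [
                {"type": "text", "text": user_goal},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ]},
        ],
        "temperature": 0.0,
    }
```

The payload would then be POSTed to the server's `/v1/chat/completions` endpoint, with previous thoughts and actions appended to the message history on subsequent steps.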
### 2.6 Responsible AI Considerations

- English-only; other languages may have degraded performance
- Potential stereotype reinforcement or inappropriate content
- Verify outputs, especially in high-stakes or regulated domains
- Misuse includes fraud, spam, malware generation
- Use safety services like Azure AI Content Safety where possible
- Recommended: human-in-the-loop, sandboxing, access control, output verification

---
## 3. Data Overview

### 3.1 Training, Testing, Validation Datasets

- Multi-agent data generation pipeline produces synthetic trajectories from seed URLs and open-source tasks
- Records screenshots, thoughts, action traces, and verification via verifier agents
- Includes high-quality public datasets: image and text modalities
- Specialized data: grounding, UI understanding (VQA, captioning, OCR), safety/refusal datasets

---
## 4. Quality and Performance Evaluation

### 4.1 Online Agent Evaluation Results

| Model | Params | WebVoyager | Online-M2W | DeepShop | WebTailBench |
|------------------------------|--------|------------|------------|----------|--------------|
| **SoM Agents** | | | | | |
| SoM Agent (GPT-5) | - | 90.6 | 57.7 | 49.1 | 60.4 |
| SoM Agent (o3) | - | 79.3 | 55.4 | 49.7 | 52.7 |
| SoM Agent (GPT-4o) | - | 65.1 | 34.6 | 16.0 | 30.8 |
| GLM-4.1V-9B-Thinking | 9B | 66.8 | 33.9 | 32.0 | 22.4 |
| **Computer Use Models** | | | | | |
| OpenAI computer-use-preview | - | 70.9 | 42.9 | 24.7 | 25.7 |
| UI-TARS-1.5-7B | 7B | 66.4 | 31.3 | 11.6 | 19.5 |
| Fara-7B | 7B | 73.5 | 34.1 | 26.2 | 38.4 |

The table reports task completion success rates on WebVoyager, Online-Mind2Web, DeepShop, and WebTailBench for both SoM agents and native computer-use agents. Scores are averaged over 3 runs.
### 4.2 Safety Evaluation & Red-Teaming

- Post-training safety with critical point design
- Red-teaming on Azure: grounding, jailbreaks, harmful content, copyright

### 4.3 Guidelines for Safe Use

- Human-in-the-loop monitoring recommended
- Do not share sensitive data
- Run in sandboxed environments
- Limit internet access via allow-lists/block-lists
- Avoid use in commercial, high-stakes, or regulated domains

**Security Considerations:**
- Automates interactions across websites, apps, and OS; requires strict access control, sandboxing, and monitoring

**Contact for More Information:** MSFTAIActRequest@microsoft.com

---
## Appendix: Benchmarks

| Benchmark | Link |
|-----------|------|
| WebVoyager | [MinorJerry/WebVoyager](https://huggingface.co/datasets/MinorJerry/WebVoyager) |
| Online-Mind2Web | [osunlp/Online-Mind2Web](https://huggingface.co/datasets/osunlp/Online-Mind2Web) |
| DeepShop | [DeepShop/DeepShop](https://huggingface.co/datasets/DeepShop/DeepShop) |
| WebTailBench | [microsoft/WebTailBench](https://huggingface.co/datasets/microsoft/WebTailBench) |
| ScreenSpot v1 | [rootsautomation/ScreenSpot](https://huggingface.co/datasets/rootsautomation/ScreenSpot) |
| ScreenSpot v2 | [Voxel51/ScreenSpot-v2](https://huggingface.co/datasets/Voxel51/ScreenSpot-v2) |
| AgentHarm | [ai-safety-institute/AgentHarm](https://huggingface.co/datasets/ai-safety-institute/AgentHarm) |

<!--End Original Model Card-->

---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

The full open-source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder) if you want to do it yourself.

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (experimental CPU-only)

### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference, but **no API costs**). No token limit, since the cost is low.
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊