---
license: apache-2.0
language:
- en
base_model:
- Raziel1234/Duchifat-2
pipeline_tag: text-generation
tags:
- computer-use
- code
- agent
---

# Duchifat-2-Computer-v1
## Overview
**Duchifat-2-Computer-v1** is a high-precision, specialized Small Language Model (SLM) with **136M parameters**. This model is a fine-tuned version of the base `Duchifat-2`, specifically engineered for **Task-Oriented Control** and **CLI Automation**.
Through aggressive Supervised Fine-Tuning (SFT) and "Hard Alignment," we have eliminated general-purpose hallucinations (such as irrelevant PDF/Video references) to create a reliable bridge between natural language instructions and executable computer actions.
## The Core Engine of CLI-Assistant
This model is designed to function as the primary reasoning engine for the **CLI-Assistant** project. It transforms human intent into structured tool-calls with near-zero latency.
**To see the full implementation and integrate this model into your system, visit** [CLI-Agent on GitHub](https://github.com/nevo398/CLI-Agent).
## Key Features
- **Deterministic Alignment:** Optimized for precise tool-calling formats (e.g., `[SAY_TEXT]`, `[CREATE_NOTE]`).
- **Ultra-Lightweight:** 136M parameters allow for lightning-fast inference on CPU/Edge devices or low-cost API endpoints.
- **Context-Aware:** Understands complex instructions involving times, dates, and nested technical content.
- **Zero-Hallucination:** Drastically reduced pre-training bias to ensure the model stays within the "Computer Action" domain.
## Usage & Prompt Template
To achieve the best results, the model must be prompted using the following format:
```text
<instruction> {Your Command Here} </instruction>
<assistant>
```
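The template can be applied with a small helper function; the name `build_prompt` below is illustrative, not part of the released code:

```python
def build_prompt(command: str) -> str:
    """Wrap a natural-language command in the model's expected prompt template."""
    return f"<instruction> {command} </instruction>\n<assistant> "

# Example: build_prompt("Open a terminal")
# → "<instruction> Open a terminal </instruction>\n<assistant> "
```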
## Example
**User input:**
```text
Say 'The backup is complete'
```
**Model output:**
```text
[SAY_TEXT]("The backup is complete")
```
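Downstream code needs to turn this output back into an action. A minimal parsing sketch, assuming tool calls always follow the `[ACTION_NAME]("argument")` shape shown above (the exact grammar may differ in the CLI-Agent repository):

```python
import re

# Matches outputs of the form [ACTION_NAME]("argument").
TOOL_CALL_RE = re.compile(r'\[([A-Z_]+)\]\("(.*?)"\)')

def parse_tool_call(text: str):
    """Return (action, argument) for the first tool call found, or None."""
    match = TOOL_CALL_RE.search(text)
    if match is None:
        return None
    return match.group(1), match.group(2)
```

For instance, `parse_tool_call('[SAY_TEXT]("The backup is complete")')` yields `("SAY_TEXT", "The backup is complete")`, which a dispatcher can map to the corresponding system action.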
## Quick Start (Inference)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "razielAI/Duchifat-2-Computer"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16 if device == "cuda" else torch.float32,
).to(device)

# Wrap the command in the required prompt template.
prompt = "<instruction> Say 'The backup is complete' </instruction>\n<assistant> "
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Greedy decoding keeps the tool-call output deterministic.
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
- **Base Model**: Duchifat-2 (pre-trained on 3.27B tokens)
- **SFT Technique**: High-LR Hard Alignment (1e-4)
- **Epochs:** 80 (Aggressive Alignment)
- **Hardware**: Trained on a single NVIDIA T4 via Google Colab.
## LICENSE
This model is released under the Apache 2.0 License. Please refer to the [CLI-Agent on GitHub](https://github.com/nevo398/CLI-Agent) repository for additional integration guidelines.