---
license: apache-2.0
language:
- en
base_model:
- Raziel1234/Duchifat-2
pipeline_tag: text-generation
tags:
- computer-use
- code
- agent
---

![image](https://cdn-uploads.huggingface.co/production/uploads/69b27531cf7d058ba0ddd0c2/pkmTTDwMqinJKZ2NjhcAt.png)

# Duchifat-2-Computer-v1 🕊️💻

## Overview

**Duchifat-2-Computer-v1** is a high-precision, specialized Small Language Model (SLM) with **136M parameters**. It is a fine-tuned version of the base `Duchifat-2`, engineered specifically for **Task-Oriented Control** and **CLI Automation**. Through aggressive Supervised Fine-Tuning (SFT) and "Hard Alignment," general-purpose hallucinations (such as irrelevant PDF/video references) have been eliminated, creating a reliable bridge between natural-language instructions and executable computer actions.

## 🤖 The Core Engine of CLI-Assistant

This model is designed to serve as the primary reasoning engine for the **CLI-Assistant** project. It transforms human intent into structured tool calls with minimal latency.

🔗 **To see the full implementation and integrate this model into your system, visit:**
👉 [CLI-Agent on GitHub](https://github.com/nevo398/CLI-Agent)

## Key Features

- **Deterministic Alignment:** Optimized for precise tool-calling formats (e.g., `[SAY_TEXT]`, `[CREATE_NOTE]`).
- **Ultra-Lightweight:** 136M parameters allow fast inference on CPU/edge devices or low-cost API endpoints.
- **Context-Aware:** Understands complex instructions involving times, dates, and nested technical content.
- **Reduced Hallucination:** Drastically reduced pre-training bias keeps the model within the "Computer Action" domain.
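On the host side, the bracketed tool-call format can be recovered with a small regular expression. The sketch below is an assumption based only on the examples in this card (one action name in brackets followed by a single double-quoted string argument); the actual CLI-Agent integration may parse differently.

```python
import re

# Hypothetical parser for tool calls of the form shown in this card,
# e.g. [SAY_TEXT]("The backup is complete"). The grammar (uppercase
# action name, one double-quoted argument) is an assumption.
TOOL_CALL_RE = re.compile(r'\[([A-Z_]+)\]\("(.*)"\)')

def parse_tool_call(output: str):
    """Return (action, argument), or None if no tool call is found."""
    match = TOOL_CALL_RE.search(output)
    if match is None:
        return None
    return match.group(1), match.group(2)

print(parse_tool_call('[SAY_TEXT]("The backup is complete")'))
# ('SAY_TEXT', 'The backup is complete')
```

Returning `None` for non-matching output lets the caller fall back to treating the text as plain conversation rather than an executable action.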
## 🛠️ Usage & Prompt Template

To achieve the best results, prompt the model using the following format:

```text
{Your Command Here}
```

### Example

**User input:**

```text
Say 'The backup is complete'
```

**Model output:**

```text
[SAY_TEXT]("The backup is complete")
```

## Quick Start (Inference)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "razielAI/Duchifat-2-Computer"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16 if device == "cuda" else torch.float32,
).to(device)

prompt = "Say 'The backup is complete'\n"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Greedy decoding (do_sample=False) keeps tool-call output deterministic.
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

- **Base Model:** Duchifat-2 (pre-trained on 3.27B tokens)
- **SFT Technique:** High-LR Hard Alignment (learning rate 1e-4)
- **Epochs:** 80 (aggressive alignment)
- **Hardware:** Trained on an NVIDIA T4 via Google Colab

## License

This model is released under the Apache 2.0 License. Please refer to the [CLI-Agent on GitHub](https://github.com/nevo398/CLI-Agent) repository for additional integration guidelines.
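As an illustration of host-side integration, parsed actions such as `SAY_TEXT` and `CREATE_NOTE` can be routed through a simple handler registry. Everything below is a hypothetical placeholder sketch, not CLI-Agent's actual API; see the repository linked above for the real implementation.

```python
# Hypothetical dispatcher mapping action names from model output to
# host-side handlers. Handler bodies are placeholders for illustration.
def say_text(text: str) -> str:
    return f"spoken: {text}"

def create_note(text: str) -> str:
    return f"note created: {text}"

HANDLERS = {
    "SAY_TEXT": say_text,
    "CREATE_NOTE": create_note,
}

def dispatch(action: str, argument: str) -> str:
    handler = HANDLERS.get(action)
    if handler is None:
        raise ValueError(f"Unknown action: {action}")
    return handler(argument)

print(dispatch("SAY_TEXT", "The backup is complete"))
# spoken: The backup is complete
```

Raising on unknown actions surfaces any out-of-domain model output immediately instead of silently executing nothing.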