# Qwen-0.5B-Coder-El-Terminalo

A 0.5B-parameter shell command generator fine-tuned from Qwen2.5-Coder-0.5B-Instruct using LoRA. It converts natural language queries into accurate shell commands for Linux and macOS.

Built for El Terminalo, a GPU-accelerated terminal emulator for macOS.
## What It Does

You type English → it outputs a shell command. Nothing else. No explanations, no alternatives, no markdown.

```
Input:  "list files ordered by size"
Output: ls -lhS

Input:  "find all .log files modified in last 24 hours"
Output: find . -name '*.log' -mtime -1

Input:  "kill process on port 8080"
Output: fuser -k 8080/tcp

Input:  "get cluster events ordered by timestamp"
Output: kubectl get events --sort-by='.metadata.creationTimestamp'
```
## Model Details

| Parameter | Value |
|---|---|
| Base Model | Qwen2.5-Coder-0.5B-Instruct |
| Method | LoRA (rank 32, alpha 64) |
| Training Data | 5,536 curated NL→command pairs |
| Target Modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Epochs | 4 |
| Learning Rate | 2e-4 (cosine scheduler) |
| Quantization | Q8_0 GGUF |
| File Size | ~500MB |
| RAM Usage | ~800MB–1GB during inference |
| License | Apache 2.0 |
## Command Coverage
Trained on 209 unique base commands across these categories:
| Category | Examples | Count |
|---|---|---|
| Git | git log, git stash, git rebase | 620 |
| Docker | docker ps, docker-compose up | 500 |
| Kubernetes | kubectl get, kubectl port-forward | 375 |
| File Operations | find, ls, tar, chmod | 551 |
| System Admin | systemctl, journalctl, ps, kill | 453 |
| Networking | curl, ssh, rsync, scp | 304 |
| Databases | psql, mysql, redis-cli | 256 |
| Package Management | apt, brew, npm, pip | 193 |
Supports both Linux (bash) and macOS (zsh) via context-aware system prompts.
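The context-aware prompts differ only in the OS, shell, and working-directory fields of the system message. A minimal helper sketch (the function name and defaults here are illustrative, not part of the model's API; the field layout mirrors the prompts shown in the Quick Start sections below):

```python
def build_system_prompt(os_name: str, shell: str, cwd: str) -> str:
    """Assemble the system prompt in the OS/Shell/CWD layout the model expects."""
    return (
        f"You are a shell command generator. "
        f"OS: {os_name}, Shell: {shell}, CWD: {cwd}. "
        f"Output ONLY the command."
    )

# One prompt per target platform:
linux_prompt = build_system_prompt("linux", "bash", "/home/user")
macos_prompt = build_system_prompt("macos", "zsh", "~/")
```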
## Quick Start: Ollama

1. Download the GGUF file from this repo.
2. Create a Modelfile:

```
FROM ./shell-cmd-qwen-0.5b-q8.gguf

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
PARAMETER temperature 0.1
PARAMETER top_p 0.9

SYSTEM """You are a shell command generator. OS: linux, Shell: bash, CWD: /home/user. Output ONLY the command."""
```
macOS users: change the SYSTEM line to:

```
SYSTEM """You are a shell command generator. OS: macos, Shell: zsh, CWD: ~/. Output ONLY the command."""
```
3. Create and run:

```shell
ollama create shell-cmd -f Modelfile
ollama run shell-cmd "compress the logs directory into a tar.gz"
# → tar -czf logs.tar.gz logs/
```
## Quick Start: Python

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "albinab/Qwen-0.5B-Coder-El-Terminalo",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("albinab/Qwen-0.5B-Coder-El-Terminalo")

messages = [
    {"role": "system", "content": "You are a shell command generator. OS: linux, Shell: bash, CWD: /home/user. Output ONLY the command."},
    {"role": "user", "content": "show running docker containers"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, temperature=0.1, do_sample=True)

print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
# → docker ps
```
## Training
Fine-tuned using LLaMA Factory with LoRA on a Google Colab T4 GPU (~20 min training time).
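For reference, the hyperparameters above map onto a LLaMA Factory SFT config roughly like this. This is a sketch, not the exact file used for this model: the key names follow LLaMA Factory's YAML training interface, and the dataset name is hypothetical.

```yaml
model_name_or_path: Qwen/Qwen2.5-Coder-0.5B-Instruct
stage: sft
finetuning_type: lora
lora_rank: 32
lora_alpha: 64
lora_target: q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj
dataset: shell_commands        # hypothetical dataset name
template: qwen
num_train_epochs: 4
learning_rate: 2.0e-4
lr_scheduler_type: cosine
```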
### Data Preparation
The training dataset went through extensive cleaning:
- 171 conflicting answers resolved (same question, different answers)
- 57 hardcoded values replaced with placeholders (leaked IPs, usernames)
- Typos corrected in user queries
- 8 missing patterns added (e.g., `ls -lhS` for sort-by-size)
- 63 exact duplicates removed
- Final: 5,536 clean examples with a 90/10 train/eval split
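The last two cleaning steps are simple to make concrete. A minimal sketch of exact-duplicate removal plus a 90/10 split, using made-up example pairs (the function and its defaults are illustrative, not the actual pipeline code):

```python
import random

def dedup_and_split(pairs, eval_frac=0.10, seed=42):
    """Drop exact duplicate (query, command) pairs, then split train/eval."""
    seen, unique = set(), []
    for query, command in pairs:
        key = (query, command)
        if key not in seen:        # keep only the first occurrence
            seen.add(key)
            unique.append((query, command))
    rng = random.Random(seed)
    rng.shuffle(unique)
    n_eval = int(len(unique) * eval_frac)
    return unique[n_eval:], unique[:n_eval]  # (train, eval)

pairs = [
    ("list files ordered by size", "ls -lhS"),
    ("list files ordered by size", "ls -lhS"),  # exact duplicate
    ("show running docker containers", "docker ps"),
]
train, eval_set = dedup_and_split(pairs)
```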
### Why LoRA over Full Fine-Tuning

Full-parameter fine-tuning of a 0.5B model causes catastrophic forgetting: the model loses its pretrained understanding of what CLI flags mean and memorizes surface patterns instead. LoRA keeps the base model frozen and trains small low-rank adapter matrices on top, preserving the original knowledge while adding the new skill.
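The adapter idea can be written out directly: the frozen weight W is never modified, and the trained update is a low-rank product A·B scaled by alpha/rank. A dependency-free toy sketch (rank 1 on a 2×2 matrix; the real model uses rank 32 with alpha 64 on the seven projection matrices listed above):

```python
def lora_forward(x, W, A, B, alpha, rank):
    """Compute y = x @ (W + (alpha/rank) * A @ B); W stays frozen in training."""
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]
    scale = alpha / rank
    delta = matmul(A, B)  # low-rank update with the same shape as W
    W_eff = [[W[i][j] + scale * delta[i][j]
              for j in range(len(W[0]))] for i in range(len(W))]
    return matmul(x, W_eff)

# Toy example: identity W plus a rank-1 update.
y = lora_forward([[1, 0]], [[1, 0], [0, 1]], [[1], [0]], [[0, 1]], alpha=2, rank=1)
```

Only A and B receive gradients, so the number of trainable parameters is a small fraction of the base model's.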
## Limitations
- Outputs a single command per query. Multi-step pipelines or scripts are out of scope.
- May hallucinate flags for obscure or rarely-seen commands.
- Trained primarily on common DevOps/SRE commands; niche tools may not be covered.
- The REFUSE mechanism (for dangerous commands like `rm -rf /`) was trained on limited examples and should not be relied on as a safety layer.
## Use with El Terminalo

This model powers the AI translation feature in El Terminalo, a modern GPU-accelerated terminal emulator for macOS built with Go + Wails + xterm.js. The model runs locally via Ollama: no API keys, no cloud, no data leaving your machine.
## Citation

```bibtex
@misc{qwen-0.5b-coder-el-terminalo,
  author = {Albin},
  title  = {Qwen-0.5B-Coder-El-Terminalo: A Fine-Tuned Shell Command Generator},
  year   = {2026},
  url    = {https://huggingface.co/albinab/Qwen-0.5B-Coder-El-Terminalo},
  note   = {Fine-tuned from Qwen2.5-Coder-0.5B-Instruct using LoRA}
}
```
## Acknowledgments
- Qwen Team for the base model
- LLaMA Factory for the training framework
- Ollama for local inference