---
license: mit
language:
- en
tags:
- edge-computing
- eve-os
- linux
- command-assistant
- tool-use
- structured-output
- gguf
- quantized
base_model: Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
---

# Edge Command Model — EVE-OS & Linux Terminal Assistant

A fine-tuned [Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) model trained to act as a lightweight command assistant for EVE-OS edge devices and Linux systems. It accepts natural language requests and responds exclusively with structured JSON tool calls.

## Intended Use

This model runs on edge hardware (ARM or x86 CPU, no GPU required) and serves as an on-device command assistant for operators managing EVE-OS edge nodes. It is designed for offline, air-gapped, or bandwidth-constrained environments where cloud-based LLMs are not available.

**Example interaction:**

```
User: Show memory usage
Model: {"tool": "terminal", "command": "free -h"}

User: What is zedagent?
Model: {"tool": "explain", "text": "zedagent is the main EVE-OS orchestration agent. It processes configurations from ZedCloud, manages application deployment, handles device attestation, and coordinates all other EVE services."}
```

## Output Format

The model always responds with a single JSON object in one of two formats:

**Terminal commands** (for actions to execute):
```json
{"tool": "terminal", "command": "<shell command>"}
```

**Explanations** (for informational queries):
```json
{"tool": "explain", "text": "<explanation>"}
```

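Downstream code can validate a response against these two shapes before acting on it. A minimal validator sketch (the `parse_tool_call` helper name is illustrative, not part of the model's tooling):

```python
import json

def parse_tool_call(raw: str) -> dict:
    """Validate a model response against the two expected schemas."""
    obj = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    if obj.get("tool") == "terminal" and isinstance(obj.get("command"), str):
        return obj
    if obj.get("tool") == "explain" and isinstance(obj.get("text"), str):
        return obj
    raise ValueError(f"unexpected tool call shape: {obj!r}")

call = parse_tool_call('{"tool": "terminal", "command": "free -h"}')
```

Rejecting anything outside these two shapes keeps the occasional malformed generation from ever reaching an executor.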
## Training Details

| Parameter | Value |
|---|---|
| Base model | Qwen/Qwen3-0.6B |
| Method | QLoRA (4-bit quantization during training) |
| LoRA rank (r) | 32 |
| LoRA alpha | 64 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Learning rate | 1e-4 |
| Scheduler | Cosine |
| Epochs | 15 |
| Max sequence length | 512 |
| Training examples | 1,715 |
| Training hardware | Single consumer GPU |

## Performance

Evaluated on 100 prompts randomly sampled from the training set. Because the evaluation prompts overlap the training data, these figures should be read as optimistic upper bounds rather than held-out accuracy.

| Metric | Value |
|---|---|
| JSON validity rate | 99.3% |
| Tool routing accuracy | 98.6% |
| Exact match accuracy | 20.0% |
| Fuzzy match accuracy | 27.6% |
| Average inference time | 0.692 s per query |
| Peak memory usage | 736.0 MB |

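The validity and routing metrics can be reproduced with a simple scorer. The following is a sketch of the likely methodology, not the actual evaluation script:

```python
import json

def score(samples):
    """samples: list of (model_output, expected_tool) pairs."""
    valid = routed = 0
    for raw, expected_tool in samples:
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output counts against both metrics
        valid += 1
        if obj.get("tool") == expected_tool:
            routed += 1
    n = len(samples)
    return {"json_validity": valid / n, "tool_routing": routed / n}
```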
## Training Data

The model was trained on 1,715 instruction-output pairs covering:

- **~350 unique commands** with 4-5 phrasing variations each
- **Linux commands**: file operations, text processing, networking, process management, disk/storage, kernel modules, containers (containerd/runc), ZFS, LVM, security, namespaces, cgroups
- **EVE-OS commands and concepts**: all pillar microservices (zedagent, nim, domainmgr, zedrouter, volumemgr, etc.), device filesystem paths (/persist, /config, /run), ZedCloud connectivity, EdgeView, TPM management, containerd operations
- **Explanations**: EVE-OS architecture, Linux subsystems, file paths, configuration files

All training data was human-curated and reviewed for accuracy.

## Quantization

The model is provided in GGUF format quantized to **Q4_K_M** for efficient CPU-only inference.

| Format | File Size | RAM Required | Use Case |
|---|---|---|---|
| Q4_K_M (recommended) | ~450 MB | 2-4 GB | Edge deployment, CPU inference |
| Q8_0 | ~700 MB | 4-6 GB | Higher accuracy, more RAM available |
| F16 | ~1.2 GB | 6-8 GB | Maximum accuracy, development/testing |

## Hardware Requirements

**Minimum:**
- CPU: Any modern ARM or x86 processor
- RAM: 2 GB
- Storage: 500 MB
- GPU: Not required

**Recommended:**
- CPU: ARM Cortex-A72 or better / x86-64
- RAM: 4 GB
- Storage: 1 GB

Tested on Raspberry Pi 4 (4GB) and x86 edge gateways.

## How to Use

### With llama.cpp

```bash
./llama-cli -m edge-command-model-Q4_K_M.gguf \
  --temp 0.1 \
  --top-p 0.9 \
  -p "<|im_start|>system
You are an edge device command assistant. You respond ONLY with valid JSON tool calls. Never respond with plain text. Available tools: terminal, explain.
<|im_end|>
<|im_start|>user
Show disk space
<|im_end|>
<|im_start|>assistant
"
```

### With Ollama

Create a Modelfile:
```
FROM ./edge-command-model-Q4_K_M.gguf
SYSTEM "You are an edge device command assistant. You respond ONLY with valid JSON tool calls. Never respond with plain text. Available tools: terminal, explain."
PARAMETER temperature 0.1
PARAMETER num_ctx 512
```

Then:
```bash
ollama create edge-cmd -f Modelfile
ollama run edge-cmd "Show memory usage"
```

### With llama-cpp-python

```python
from llama_cpp import Llama

model = Llama(model_path="edge-command-model-Q4_K_M.gguf", n_ctx=512, n_threads=4)

prompt = """<|im_start|>system
You are an edge device command assistant. You respond ONLY with valid JSON tool calls. Never respond with plain text. Available tools: terminal, explain.
<|im_end|>
<|im_start|>user
Show memory usage
<|im_end|>
<|im_start|>assistant
"""

output = model(prompt, max_tokens=128, temperature=0.1, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```

## Coverage

The model covers commands and concepts across these categories:

**Linux:**
File operations, text processing (grep, sed, awk), networking (ip, ss, tcpdump, iptables), process management, disk/storage (lsblk, fdisk, ZFS, LVM), kernel modules, containers (containerd, runc), security (namespaces, cgroups, capabilities), compression, certificates (openssl), WireGuard

**EVE-OS:**
All pillar microservices (zedagent, nim, domainmgr, zedrouter, volumemgr, baseosmgr, tpmmgr, vaultmgr, loguploader, ledmanager, nodeagent, and more), device filesystem layout (/persist, /config, /run), ZedCloud communication, EdgeView remote diagnostics, containerd operations on EVE, ZFS pool management, device identity and certificates

## Limitations

- The model is trained on a fixed set of ~350 commands. It may hallucinate plausible but incorrect commands for requests outside its training distribution.
- Explain responses are generated, not memorized. Factual accuracy of explanations should be verified for critical operations.
- The model does not support multi-turn conversation. Each request is independent.
- Complex compound commands (multi-pipe chains) may be less accurate than single commands.
- The model was trained for EVE-OS specifically and may not generalize well to other edge operating systems.

## Safety

This model is intended to be used behind an agent harness that:
- Requires user confirmation (y/n) before executing any terminal command
- Blocks dangerous commands (`rm -rf /`, `mkfs` on mounted volumes, fork bombs)
- Enforces timeouts on command execution
- Limits output capture size

**Never execute model outputs directly without human review.**

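A skeletal harness illustrating these safeguards. This is a sketch only: the blocklist is illustrative and far from exhaustive, and `run_tool_call` is a hypothetical helper, not something shipped with the model.

```python
import json
import subprocess

# Illustrative blocklist only -- a real deployment needs a proper policy engine.
BLOCKLIST = ("rm -rf /", "mkfs", ":(){")

def run_tool_call(raw: str, timeout_s: int = 10, max_output: int = 4096) -> str:
    call = json.loads(raw)
    if call["tool"] == "explain":
        return call["text"]  # explanations are safe to surface directly
    cmd = call["command"]
    if any(bad in cmd for bad in BLOCKLIST):
        return f"BLOCKED: {cmd}"
    if input(f"Run `{cmd}`? [y/N] ").strip().lower() != "y":  # human confirmation
        return "skipped"
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, timeout=timeout_s)  # enforce timeout
    return result.stdout[:max_output]  # cap captured output size
```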
## License

MIT