# Eightly Agent
Tiered model family powering Nova, the built-in AI agent for Eight.ly OS — a self-hosted homelab operating system that combines virtualization, containers, storage, networking, and an AI control plane in a single Go binary.
This repo ships four GGUFs designed to be deployed together. The host OS routes each turn to the right model based on intent and available hardware.
## Models
| File | Size | Role | Base | Use |
|---|---|---|---|---|
| `eightly-agent-fg.gguf` | 359 MB | Tool router | FunctionGemma 270M | Sub-1s function calls on CPU. Fine-tuned on 4,684 Eight.ly OS tool-calling examples. |
| `eightly-agent-e2b-Q4_K_M.gguf` | 3.2 GB | Conversational | Gemma 3n E2B | Natural chat and reasoning on modest hardware |
| `eightly-agent-q4b-Q4_K_M.gguf` | 2.4 GB | Conversational | Qwen3 4B | Mid-tier homelabs |
| `eightly-agent-q8b-Q4_K_M.gguf` | 4.7 GB | Conversational | Qwen3 8B | Beefy boxes; best reasoning quality |
## Architecture
Nova uses a dual-mode router: tool-bound turns dispatch to fg for near-instant function calls, while open-ended or reasoning turns go to whichever conversational model fits the host's hardware tier. The OS handles selection automatically — no configuration required.
This means a Pi-class device gets working AI control without GPU, while beefier boxes get richer conversation without sacrificing tool-call latency.
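For illustration, the tier selection described above can be sketched in Python. This is a hypothetical sketch, not the actual Eight.ly OS routing code: the `pick_model` helper, the `intent` flag, and the RAM cutoffs are all assumptions; only the model filenames come from the table above.

```python
# Hypothetical sketch of Nova's dual-mode routing. Filenames match the
# shipped GGUFs; the intent flag and RAM thresholds are assumed.
def pick_model(intent: str, ram_gb: int) -> str:
    if intent == "tool":
        # Tool-bound turns always dispatch to the tiny router model.
        return "eightly-agent-fg.gguf"
    # Open-ended turns pick the conversational model for the hardware tier.
    if ram_gb >= 16:
        return "eightly-agent-q8b-Q4_K_M.gguf"   # beefy boxes
    if ram_gb >= 8:
        return "eightly-agent-q4b-Q4_K_M.gguf"   # mid-tier homelabs
    return "eightly-agent-e2b-Q4_K_M.gguf"       # Pi-class / modest hosts
```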
## Running
Standard GGUFs — anything that speaks llama.cpp will load them.
With Ollama:

```bash
# Pull the function-caller
ollama create eightly-agent-fg -f Modelfile.fg

# Use it
curl http://localhost:11434/api/generate -d '{
  "model": "eightly-agent-fg",
  "prompt": "<bos><start_of_turn>developer\nYou are a model that can do function calling...<start_of_turn>user\nHow much disk space is free?<end_of_turn>\n<start_of_turn>model\n",
  "raw": true,
  "stream": false
}'
```
With llama.cpp directly:

```bash
llama-server -m eightly-agent-fg.gguf --port 8081
```
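`llama-server` also exposes an OpenAI-compatible HTTP API, which you can call from any language. A minimal Python sketch; the `ask` helper and the example prompt are illustrative, and port 8081 matches the command above:

```python
import json
import urllib.request

SERVER = "http://localhost:8081"  # port from the llama-server command above

def ask(prompt: str) -> str:
    """Send one user turn to llama-server's /v1/chat/completions endpoint."""
    payload = {"messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        SERVER + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Example (requires the server to be running):
# print(ask("How much disk space is free?"))
```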
Or just run Eight.ly OS and it wires everything up for you.
## Tool Calling Format
fg is fine-tuned on FunctionGemma's native format with <start_function_call>call:NAME{args}<end_function_call> tokens. Tool declarations are baked into the prompt as <start_function_declaration>declaration:NAME{...}<end_function_declaration> blocks. The model was trained on 4,684 examples across 41 NAS management tools.
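To make the call format concrete, here is an illustrative Python parser for the `<start_function_call>` tokens. The regex and the `parse_calls` helper are a sketch, not the parser Eight.ly OS ships, and it assumes `args` is a JSON object between the braces:

```python
import json
import re

# Matches <start_function_call>call:NAME{args}<end_function_call> tokens
# as described above. Illustrative only.
CALL_RE = re.compile(
    r"<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>",
    re.DOTALL,
)

def parse_calls(text: str) -> list[tuple[str, dict]]:
    calls = []
    for name, body in CALL_RE.findall(text):
        args = json.loads("{" + body + "}") if body.strip() else {}
        calls.append((name, args))
    return calls

out = '<start_function_call>call:disk_free{"mount": "/"}<end_function_call>'
print(parse_calls(out))  # [('disk_free', {'mount': '/'})]
```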
## 41 Tools
Storage, Docker containers, VMs (KVM/QEMU), LXC, SMB/NFS file sharing, network interfaces, firewall, SnapRAID parity, ZFS pools, SMART health, system info, app store, and more.
## Status
Active development. Eight.ly OS is in alpha; this model family is being trained against an evolving 41-tool catalog spanning storage, virtualization, containers, networking, and system control.
## Links
- eight.ly
- GitHub: smashingtags/eightly-os (private during alpha)
With llama-cpp-python (works with any of the 4-bit GGUFs above):

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="smashingtags/eightly-agent",
    filename="",  # pick one of the GGUF filenames listed above
)
```