How to use with llama.cpp
Install with Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf smashingtags/eightly-agent:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf smashingtags/eightly-agent:Q4_K_M
Install with WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf smashingtags/eightly-agent:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf smashingtags/eightly-agent:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf smashingtags/eightly-agent:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf smashingtags/eightly-agent:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf smashingtags/eightly-agent:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf smashingtags/eightly-agent:Q4_K_M
Use Docker
docker model run hf.co/smashingtags/eightly-agent:Q4_K_M
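
Once the server is running (via any of the options above), any OpenAI-compatible client can talk to it. A minimal Go sketch, assuming llama-server's default bind address of http://localhost:8080 (adjust if you pass --port):

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// llama-server exposes an OpenAI-compatible API; localhost:8080
	// is the default bind address (assumption: no --port override).
	body := []byte(`{
		"messages": [{"role": "user", "content": "How much disk space is free?"}],
		"temperature": 0.2
	}`)
	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // raw JSON chat completion
}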

Eight.ly Agent

Fine-tuned on 4,684 Eight.ly OS tool-calling examples across 41 NAS management tools (Docker, storage, VMs, LXC, file sharing, system administration).

Architecture

user query
    |
    v
+---------------+
| Nova Router   |
| (Go intent    |
|  classifier)  |
+-------+-------+
        |
   +----+----+
   v         v
+------+  +----------------+
| fg   |  | conversational |
|359MB |  | tier (gemma2,  |
|tool  |  | q4b, q8b, e2b) |
|calls |  |                |
+------+  +----------------+

The Nova Router is a zero-latency Go pattern matcher (57 test cases) that classifies queries into no_tool, tool, or maybe. Tool queries route to FunctionGemma for structured tool-call extraction via GBNF grammar. Conversational queries skip directly to the response model.
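
As a picture of what first-match intent classification looks like, here is a minimal sketch in Go; the patterns and helper names are illustrative, not the actual Nova Router source:

package main

import (
	"fmt"
	"regexp"
)

// Intent mirrors the three routing outcomes: no_tool, tool, maybe.
type Intent string

const (
	NoTool Intent = "no_tool"
	Tool   Intent = "tool"
	Maybe  Intent = "maybe"
)

// rules is an illustrative first-match pattern table; the real
// router's 57-case test suite implies a far richer set.
var rules = []struct {
	re     *regexp.Regexp
	intent Intent
}{
	{regexp.MustCompile(`(?i)\b(disk|storage|container|vm|share)\b`), Tool},
	{regexp.MustCompile(`(?i)^(hi|hello|thanks)\b`), NoTool},
}

// Classify returns Maybe when nothing matches, deferring the
// decision to FunctionGemma downstream.
func Classify(query string) Intent {
	for _, r := range rules {
		if r.re.MatchString(query) {
			return r.intent
		}
	}
	return Maybe
}

func main() {
	fmt.Println(Classify("How much disk space is free?")) // tool
	fmt.Println(Classify("hello there"))                  // no_tool
}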

Evaluation

| Metric | Value |
|---|---|
| Tool-call accuracy (fg) | 87.5% (13/15 standard audit suite) |
| FunctionGemma latency | Sub-1s on CPU |
| Median tool-query response | 6.9s end-to-end |
| No-tool response time | 1-3s |
| System queries | 10-18s |
| Container/health queries | 6-14s |

Evaluated by an Opus 4.7 auditor over 8 rounds. Remaining sharp edges: Docker stop/kill semantics, acknowledgment over-eagerness, and storage pool labeling.

Models

| Model | Base | GGUF Size | Role |
|---|---|---|---|
| eightly-agent-fg | FunctionGemma 270M | 359 MB | Tool router (dual-model worker) |
| eightly-agent-q4b | Qwen3 4B | 2.4 GB | Single-model fallback |
| eightly-agent-q8b | Qwen3 8B | 4.7 GB | Best single-model quality |
| eightly-agent-e2b | Gemma 4 E2B | 3.2 GB | Experimental (not yet deployed) |

The conversational tier uses stock gemma2:2b (1.6 GB) as the response synthesizer.

Training Data

  • 4,684 tool-calling examples (FunctionGemma fine-tune)
  • 41 tools across 6 domains: Docker, Storage, VMs, LXC, File Sharing, System
  • Dataset: smashingtags/eightly-agent-dataset
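
For orientation, each record pairs a natural-language request with the expected structured call. The shape below is a hypothetical illustration; the actual schema of smashingtags/eightly-agent-dataset may differ:

package main

import (
	"encoding/json"
	"fmt"
)

// Example is a hypothetical record shape for a tool-calling
// fine-tune; field names are illustrative only.
type Example struct {
	Tools    []string `json:"tools"`    // scoped tool catalog for this turn
	UserTurn string   `json:"user"`     // natural-language request
	Expected string   `json:"expected"` // target structured call
}

func main() {
	ex := Example{
		Tools:    []string{"get_storage_status", "get_storage_capacity"},
		UserTurn: "How much disk space is free?",
		Expected: "call:get_storage_status{}",
	}
	b, _ := json.MarshalIndent(ex, "", "  ")
	fmt.Println(string(b))
}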

Usage

# Pull the FunctionGemma tool router
ollama pull smashingtags/eightly-agent-fg

# Pull the single-model fallback (Qwen3 4B)
ollama pull smashingtags/eightly-agent-q4b

# Pull the high-quality single-model (Qwen3 8B)
ollama pull smashingtags/eightly-agent-q8b

# Run locally
ollama run smashingtags/eightly-agent-q4b

These models are designed to work within the Eight.ly OS Nova assistant pipeline. FunctionGemma expects a scoped tool catalog with GBNF grammar constraints. The q4b/q8b models support native Ollama tool calling.
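
For the q4b/q8b models, that native tool calling goes through Ollama's standard chat API. A minimal Go sketch, assuming a local Ollama daemon on its default port 11434; the tool definition here is a hand-written stand-in for a real catalog entry:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ollama's /api/chat accepts OpenAI-style tool definitions;
	// localhost:11434 is the daemon's default address.
	body := []byte(`{
		"model": "smashingtags/eightly-agent-q4b",
		"stream": false,
		"messages": [{"role": "user", "content": "How much disk space is free?"}],
		"tools": [{
			"type": "function",
			"function": {
				"name": "get_storage_status",
				"description": "Report storage pool usage",
				"parameters": {"type": "object", "properties": {}}
			}
		}]
	}`)
	resp, err := http.Post("http://localhost:11434/api/chat",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out)) // on success, includes message.tool_calls
}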

Example: Tool-Calling Flow

User: "How much disk space is free?"

Step 1 — FunctionGemma routes to tool:

<start_function_call>call:get_storage_status{}<end_function_call>

Step 2 — Tool executes, returns real data:

{"mountpoint": "/mnt/storage", "total": "69.8 GB", "used": "22 MB", "free": "69.8 GB", "percent": "0.03%"}

Step 3 — Conversational model responds:

"Your storage pool at /mnt/storage has 69.8 GB free out of 69.8 GB total — essentially empty at 0.03% used."

End-to-end: ~6.9 seconds median. FunctionGemma decision: sub-1 second.
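
Between steps 1 and 2, the worker extracts the call name and arguments from the tagged span. A sketch of that extraction in Go, keyed to the exact format shown above (it assumes a flat, brace-free argument body):

package main

import (
	"fmt"
	"regexp"
)

// callRe matches the tagged span emitted in step 1:
// <start_function_call>call:NAME{ARGS}<end_function_call>
var callRe = regexp.MustCompile(
	`<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>`)

// ParseCall returns the tool name and raw argument body, or
// ok=false if the model produced no tool call.
func ParseCall(out string) (name, args string, ok bool) {
	m := callRe.FindStringSubmatch(out)
	if m == nil {
		return "", "", false
	}
	return m[1], m[2], true
}

func main() {
	raw := "<start_function_call>call:get_storage_status{}<end_function_call>"
	if name, args, ok := ParseCall(raw); ok {
		fmt.Printf("tool=%s args=%q\n", name, args) // tool=get_storage_status args=""
	}
}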

41 Tools

Docker: list_containers, get_container_logs, get_container_stats, list_docker_stacks, list_docker_images, list_docker_networks, list_docker_volumes, container_action, pull_docker_image, install_app

Storage: get_storage_status, get_storage_capacity, get_cache_status, get_snapraid_status, get_disk_health, get_zfs_pools, run_snapraid_sync, run_smart_test, spin_down_disks, create_backup

VMs: list_vms, get_vm_stats, list_vm_snapshots, vm_action, create_vm_snapshot

LXC: list_lxc_containers, lxc_action

File Sharing: get_smb_shares, get_nfs_exports, create_smb_share, create_nfs_export

System: get_system_info, get_system_version, get_system_logs, get_node_time, get_network_interfaces, get_firewall_rules, get_health_overview, get_nova_models, search_apps, set_timezone, reboot_system
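
Since FunctionGemma expects a scoped catalog rather than the whole tool set at once (see Usage above), scoping can be as simple as a domain-to-tools map. An illustrative Go sketch; the selection logic is hypothetical and the lists are abbreviated from the full catalog above:

package main

import "fmt"

// catalog maps each domain to its tools (abbreviated here).
var catalog = map[string][]string{
	"docker":  {"list_containers", "get_container_logs", "container_action"},
	"storage": {"get_storage_status", "get_disk_health", "run_snapraid_sync"},
	"system":  {"get_system_info", "get_health_overview", "reboot_system"},
}

// ScopeTools returns only the tools for the detected domain,
// keeping the prompt small for the 270M router model.
func ScopeTools(domain string) []string {
	return catalog[domain]
}

func main() {
	fmt.Println(ScopeTools("storage"))
}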

Roadmap

  • Tool catalog growing from 41 tools. Next: scheduling tools, backup management, network diagnostics.
  • Multi-turn context for follow-up questions.
  • Additional domain scoping refinements based on real user feedback.
  • Gemma 4 E2B deployment and evaluation.

License

Apache 2.0
