# Cog: The Cognitive Orchestration Guardian

> "You know the rhythm of your systems. Cog helps you hear them more clearly."

Meet Cog, a steampunk-themed DevOps assistant who speaks in warm metaphors of gears, steam, and clockwork. Cog doesn't just monitor your infrastructure; Cog understands it, anticipates it, and helps you navigate it with clarity.
## What Cog Understands
Cog specializes in the moments that matter in infrastructure:
- **Deployment orchestration**: when you need to ship with confidence
- **Incident response**: when something's not right and you need calm, clear guidance
- **System diagnostics**: when you're trying to understand what the signals mean
- **Automation workflows**: when repetitive tasks deserve a thoughtful partner
This isn't a generic assistant. Cog was trained on technical manuals, infrastructure patterns, and real admin workflows, then fine-tuned to speak with warmth and personality.
## Quick Start
With llama.cpp:

```bash
./llama-cli -m cog-360m-instruct-q4_k_m.gguf \
  -p "The staging environment is showing elevated response times" \
  -n 256
```
With llama-cpp-python:

```python
from llama_cpp import Llama

llm = Llama(model_path="cog-360m-instruct-q4_k_m.gguf")
output = llm("Deploy the latest changes to staging", max_tokens=256)
```
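For chat-style use, instruct models generally respond best when the prompt follows their chat template. The base SmolLM2-Instruct family uses a ChatML-style template; assuming this fine-tune kept it (worth verifying against the template embedded in the GGUF), a minimal sketch of building such a prompt by hand:

```python
# Sketch: hand-build a ChatML-style prompt. The system message and the
# assumption that the fine-tune kept SmolLM2-Instruct's template are
# illustrative, not confirmed by the model card.
def build_prompt(user_message: str,
                 system: str = "You are Cog, a steampunk-themed DevOps assistant.") -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_prompt("The staging environment is showing elevated response times")
# With llama-cpp-python this would be passed as, e.g.:
# llm(prompt, max_tokens=256, stop=["<|im_end|>"])
```

Stopping on `<|im_end|>` keeps the model from rambling past its own turn.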
## Available Versions
| Format | Size | When to Use |
|---|---|---|
| Q4_K_M | 258MB | Best balance: fast and capable |
| Q8_0 | 369MB | When you want a bit more depth |
| F16 | 692MB | Maximum quality, no quantization |
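A quick back-of-envelope shows what those file sizes mean per weight. The ~362M parameter count below is SmolLM2-360M's published figure (an assumption here), and the listed sizes may be MiB rather than MB, so treat the results as approximate:

```python
# Rough effective bits per weight for each GGUF file, from the table's
# file sizes and an assumed ~362M parameter count.
PARAMS = 362_000_000  # SmolLM2-360M parameter count (assumption)

def bits_per_weight(size_mb: float) -> float:
    return size_mb * 1e6 * 8 / PARAMS

for name, size_mb in [("Q4_K_M", 258), ("Q8_0", 369), ("F16", 692)]:
    print(f"{name}: ~{bits_per_weight(size_mb):.1f} bits/weight")
```

Q8_0 comes out near 8 bits/weight and F16 near 16, as expected; Q4_K_M lands above 4 because GGUF K-quants keep some tensors (like embeddings) at higher precision.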
## The Voice of Cog
Cog speaks like a thoughtful engineer friend: clear, helpful, with just enough personality:

> "*adjusts brass goggles* The deployment gears are spinning. Let me check the steam pressure in our pipelines..."

> "I see turbulence in the machinery. Let's trace this together: what symptoms are the cogs exhibiting?"
No jargon barriers. No cold robotic responses. Just helpful guidance when you need it.
## Training Details
- **Base**: SmolLM2-360M-Instruct (HuggingFace)
- **Architecture**: 960 hidden dim, 32 layers, 15 attention heads
- **Fine-tuning**: LoRA on technical documentation, admin commands, and infrastructure patterns
- **Quantization**: GGUF Q4_K_M and Q8_0 for efficient inference
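For readers unfamiliar with LoRA: it freezes the base weights and trains only a low-rank update. A minimal numpy sketch of the idea (the rank, scaling, and shapes below are illustrative, not this model's actual training config):

```python
import numpy as np

# LoRA: effective weight W' = W + (alpha / r) * B @ A, where only the
# small matrices A and B are trained. Shapes here are illustrative.
d_out, d_in, r, alpha = 960, 960, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero init)

def lora_forward(x: np.ndarray) -> np.ndarray:
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)
```

The adapter adds only `r * (d_in + d_out)` trainable parameters per layer, which is why LoRA fine-tunes of small models like this one are cheap to train and to ship.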
## Part of AFFECTIVELY
Cog is part of AFFECTIVELY, a platform that helps you understand yourself better and express what you find. We believe AI should be warm, accessible, and genuinely helpful.

> "The goal isn't to be technically impressive. It's to be genuinely useful when you need it most."
## License
Apache 2.0: use freely, contribute warmly.
Built with care by the AFFECTIVELY team. 2026.