Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf icedmoca/kcode-oss-20b-mxfp4

# Run inference directly in the terminal:
llama-cli -hf icedmoca/kcode-oss-20b-mxfp4
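Once the server is up (llama-server listens on http://localhost:8080 by default), any OpenAI-compatible client can talk to it. A minimal curl sketch; the model field is illustrative, since llama-server serves whatever model it was started with:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kcode-oss-20b-mxfp4",
    "messages": [
      {"role": "system", "content": "You are Kcode, a terminal-native coding agent."},
      {"role": "user", "content": "Write a shell one-liner that counts TODO comments in src/."}
    ]
  }'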
Use pre-built binary

# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf icedmoca/kcode-oss-20b-mxfp4

# Run inference directly in the terminal:
./llama-cli -hf icedmoca/kcode-oss-20b-mxfp4
Build from source code

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf icedmoca/kcode-oss-20b-mxfp4

# Run inference directly in the terminal:
./build/bin/llama-cli -hf icedmoca/kcode-oss-20b-mxfp4
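For agent workloads, the server's context and chat-template flags are worth setting explicitly. A sketch with illustrative values, not tuned recommendations:

# --ctx-size raises the context window (the model supports up to 131k tokens),
# --jinja enables the chat template needed for OpenAI-style tool calls,
# -ngl offloads layers to the GPU if one is available
./build/bin/llama-server -hf icedmoca/kcode-oss-20b-mxfp4 \
  --ctx-size 32768 --jinja -ngl 99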
Use Docker

docker model run hf.co/icedmoca/kcode-oss-20b-mxfp4
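docker model run also accepts a one-shot prompt as a trailing argument; the prompt here is illustrative:

docker model run hf.co/icedmoca/kcode-oss-20b-mxfp4 "Explain what this repository's Makefile does."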
kcode-oss-20b-mxfp4

kcode-oss-20b-mxfp4 is a GGUF MXFP4 coding-agent model built on top of GPT-OSS 20B and optimized for terminal-native software engineering workflows, structured tool use, retrieval-grounded reasoning, and long-session coding tasks.
The model is designed primarily for:
- repository navigation
- code editing and patch generation
- shell-oriented workflows
- structured tool calling
- retrieval-backed context restoration
- long-running agent sessions
Architecture

Base architecture:
- GPT-OSS 20B
- Mixture-of-Experts (MoE)
- MXFP4 quantization
- 131k context length
- GGUF runtime format
Model metadata:
- 24 transformer blocks
- 32 experts
- 4 active experts per token
- GPT-4o tokenizer format
- YaRN RoPE scaling
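These values can be checked against a downloaded GGUF file with the dump tool from the gguf Python package; the file path below is illustrative:

pip install gguf
# print the GGUF key/value metadata and filter for the relevant fields
gguf-dump ~/models/kcode-oss-20b-mxfp4.gguf | grep -iE "block_count|expert|context_length"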
Intended Usage

This model is intended to be paired with the Kcode runtime and orchestration layer:
- exact-context replay
- context vault references
- dynamic tool schema expansion
- persistent memory systems
- multi-tool agent execution
It performs best in iterative edit → test → repair coding workflows.
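A minimal shell sketch of such a loop, assuming a pytest suite and a model that emits a unified diff; every name here is illustrative, and a real harness such as the Kcode runtime would drive this through structured tool calls and validate the model output first:

# hypothetical edit -> test -> repair loop
while ! pytest -q >test.log 2>&1; do
  ./llama-cli -hf icedmoca/kcode-oss-20b-mxfp4 \
    -p "Produce a unified diff that fixes these test failures:
$(cat test.log)" >patch.diff
  git apply patch.diff || break   # stop if the patch does not apply cleanly
done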
Prompting
Example system prompt:
You are Kcode, a terminal-native coding agent.
Repository state:
Task:
Fix the websocket reconnect logic without breaking auth refresh behavior.
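Since the card emphasizes structured tool calling, here is a sketch of sending that task with an OpenAI-style tool schema to a llama-server instance started with --jinja; the tool name and parameters are illustrative:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are Kcode, a terminal-native coding agent."},
      {"role": "user", "content": "Fix the websocket reconnect logic without breaking auth refresh behavior."}
    ],
    "tools": [{
      "type": "function",
      "function": {
        "name": "read_file",
        "description": "Read a file from the repository",
        "parameters": {
          "type": "object",
          "properties": {"path": {"type": "string"}},
          "required": ["path"]
        }
      }
    }]
  }'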
Runtime Compatibility

Optimized for:
- llama.cpp
- Ollama (example below)
- OpenAI-compatible local servers
- terminal coding agents
- structured tool runtimes
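Ollama can pull GGUF models directly from Hugging Face, so no Modelfile is needed:

ollama run hf.co/icedmoca/kcode-oss-20b-mxfp4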
Notes

kcode-oss-20b-mxfp4 is optimized more heavily for:
- coding workflows
- orchestration stability
- structured reasoning
- retrieval-grounded operation
- long-session memory behavior
than for:
- roleplay
- creative writing
- unrestricted conversational chat

Runtime
GitHub:
Base model: openai/gpt-oss-20b
Install from brew

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf icedmoca/kcode-oss-20b-mxfp4

# Run inference directly in the terminal:
llama-cli -hf icedmoca/kcode-oss-20b-mxfp4