## How to use with llama.cpp
### Install with Homebrew

```shell
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf zenlm/zen-agent-4b:Q2_K

# Run inference directly in the terminal:
llama-cli -hf zenlm/zen-agent-4b:Q2_K
```
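Once `llama-server` is running, any OpenAI-compatible client can talk to it. A minimal stdlib-only sketch of building such a request; the port (`8080` is the llama-server default) and the `model` field value are assumptions, not part of this card:

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for a local llama-server."""
    payload = {
        "model": "zen-agent-4b",  # local llama-server serves whatever model it loaded
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Hello!")
# To actually send it (requires a running server):
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```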
### Install with WinGet (Windows)

```shell
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf zenlm/zen-agent-4b:Q2_K

# Run inference directly in the terminal:
llama-cli -hf zenlm/zen-agent-4b:Q2_K
```
### Use a pre-built binary

```shell
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf zenlm/zen-agent-4b:Q2_K

# Run inference directly in the terminal:
./llama-cli -hf zenlm/zen-agent-4b:Q2_K
```
### Build from source

```shell
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf zenlm/zen-agent-4b:Q2_K

# Run inference directly in the terminal:
./build/bin/llama-cli -hf zenlm/zen-agent-4b:Q2_K
```
### Use Docker

```shell
docker model run hf.co/zenlm/zen-agent-4b:Q2_K
```
# Zen Agent 4B

Compact 4B agent model with strong function calling and tool-use capabilities.

## Overview

Built on Zen MoDE (Mixture of Distilled Experts) architecture with 4B parameters and 32K context window.

Developed by Hanzo AI and the Zoo Labs Foundation.

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zenlm/zen-agent-4b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```
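Since the model is aimed at function calling, a sketch of the surrounding plumbing may help: an OpenAI-style tool schema plus a dispatcher for the JSON tool call a model would emit. The schema shape, the `get_weather` tool, and the emitted-call format are illustrative assumptions, not documented behavior of this model:

```python
import json

# Illustrative tool schema in the common OpenAI function-calling shape
# (assumption: the chat template accepts it via apply_chat_template(..., tools=[...])).
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def get_weather(city: str) -> str:
    # Stub implementation for illustration only.
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted call like
    {"name": "get_weather", "arguments": {"city": "Tokyo"}} and run it."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

result = dispatch('{"name": "get_weather", "arguments": {"city": "Tokyo"}}')
```

In a full loop, the dispatcher's return value would be appended to `messages` as a `tool`-role turn and the model called again to produce the final answer.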

## API Access

```shell
curl https://api.hanzo.ai/v1/chat/completions \
  -H "Authorization: Bearer $HANZO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "zen-agent-4b", "messages": [{"role": "user", "content": "Hello"}]}'
```
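The same call can be made from Python with only the standard library; a minimal sketch, assuming the endpoint follows the OpenAI chat-completions response shape shown by the curl example:

```python
import json
import urllib.request

API_URL = "https://api.hanzo.ai/v1/chat/completions"

def build_request(messages, api_key, model="zen-agent-4b"):
    """Build an authenticated chat completion request for the Hanzo API."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"model": model, "messages": messages}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def chat(messages, api_key):
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(messages, api_key)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Usage (requires a valid key and network access):
# reply = chat([{"role": "user", "content": "Hello"}], api_key="YOUR_KEY")
```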

Get your API key at console.hanzo.ai; new accounts receive $5 in free credit on signup.

## Model Details

| Attribute    | Value      |
|--------------|------------|
| Parameters   | 4B         |
| Architecture | Zen MoDE   |
| Context      | 32K tokens |
| License      | Apache 2.0 |

## License

Apache 2.0
