# Zen3 Nano

Zen LM by Hanzo AI. A compact yet capable language model for fast inference and edge deployment: 8B parameters with strong multilingual support.

## Specs

| Property | Value |
|---|---|
| Parameters | 8B |
| Context Length | 40K tokens |
| Languages | 100+ |
| Architecture | Zen MoDE (Mixture of Distilled Experts) |
| Generation | Zen3 |
| Tensor Type | BF16 |
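With a 40K-token context, long chat histories eventually need trimming before they reach the model. The sketch below drops the oldest turns so the prompt fits, leaving headroom for generation. It uses a crude chars/4 token estimate for illustration; in practice you would count tokens with the model's own tokenizer. `trim_history` is an illustrative helper, not part of any Zen LM API.

```python
CONTEXT_LENGTH = 40_000  # Zen3 Nano's context window, per the specs above

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Replace with a real tokenizer count for production use.
    return max(1, len(text) // 4)

def trim_history(messages, max_new_tokens=512, context=CONTEXT_LENGTH):
    budget = context - max_new_tokens
    kept, total = [], 0
    # Walk from the newest message backwards, keeping turns while they fit.
    for msg in reversed(messages):
        cost = estimate_tokens(msg["content"])
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "x" * 200_000},  # ~50K tokens, over budget
    {"role": "user", "content": "Recent question"},
]
print(len(trim_history(history)))  # → 1: the oversized old turn is dropped
```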

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("zenlm/zen3-nano", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen3-nano")

messages = [{"role": "user", "content": "Explain quantum computing in simple terms."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## API Access

Available via the Hanzo AI API:

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.hanzo.ai/v1", api_key="YOUR_KEY")
response = client.chat.completions.create(
    model="zen3-nano",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

Get your API key at console.hanzo.ai ($5 free credit on signup).
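Because the endpoint above follows the OpenAI chat-completions convention, it can also be called with plain HTTP. This sketch only builds the request headers and JSON body without sending anything; the `/v1/chat/completions` path and `build_chat_request` helper are assumptions for illustration, not part of any Hanzo SDK.

```python
import json

# OpenAI-style completions path, assumed from the base_url shown above.
API_URL = "https://api.hanzo.ai/v1/chat/completions"

def build_chat_request(api_key, messages, model="zen3-nano", max_tokens=512):
    # Standard bearer-token auth plus a JSON body, per the OpenAI convention.
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": messages, "max_tokens": max_tokens}
    return headers, json.dumps(body)

headers, body = build_chat_request("YOUR_KEY", [{"role": "user", "content": "Hello"}])
print(body)
```

Pass `headers` and `body` to any HTTP client (e.g. `requests.post(API_URL, headers=headers, data=body)`) to make the same call without the `openai` package.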

## License

Apache 2.0


Zen LM is developed by Hanzo AI, builders of frontier AI infrastructure.
