---
language: en
license: apache-2.0
tags:
  - text-generation
  - zen
  - zenlm
  - hanzo
  - zen4
  - reasoning
  - agentic
  - moe
pipeline_tag: text-generation
library_name: transformers
---

# Zen4 Max Pro

Zen4 Max Pro is the Pro variant of Zen4 Max, with enhanced reasoning for enterprise agentic deployments.

## Overview

Built on Zen MoDE (Mixture of Distilled Experts) architecture with 1T+ MoE parameters and 256K context window.
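The MoE side of the architecture can be pictured with a toy top-k routing function. This is a generic illustration of how MoE layers gate tokens to experts, not Zen MoDE's actual (undocumented here) routing; expert counts and logits below are made up.

```python
# Toy sketch of top-k expert routing, the core idea behind MoE layers.
# Zen MoDE's real router is not specified here; numbers are illustrative.
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(router_logits, k=2):
    """Pick the top-k experts for one token and renormalize their gate weights."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)
    return [(i, probs[i] / total) for i in top]

# One token's router logits over 4 experts: experts 2 and 0 are selected.
print(route([1.0, -0.5, 2.0, 0.1], k=2))
```

Only the selected experts run for a given token, which is how a 1T+ parameter model keeps per-token compute far below its total parameter count.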

Developed by Hanzo AI and the Zoo Labs Foundation.

## Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zenlm/zen4-max-pro"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

## API Access

```bash
curl https://api.hanzo.ai/v1/chat/completions \
  -H "Authorization: Bearer $HANZO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "zen4-max-pro", "messages": [{"role": "user", "content": "Hello"}]}'
```

Get your API key at console.hanzo.ai — $5 free credit on signup.
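The same request can be built from Python with the standard library. The endpoint path above follows the OpenAI-compatible chat-completions shape, so this sketch assumes that convention; the actual send is left commented out since it needs a valid `HANZO_API_KEY`.

```python
# Sketch of the curl request above in plain Python (stdlib only).
# Assumes the api.hanzo.ai endpoint is OpenAI-compatible, as the URL suggests.
import json
import os
import urllib.request

payload = {
    "model": "zen4-max-pro",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    "https://api.hanzo.ai/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('HANZO_API_KEY', '')}",
        "Content-Type": "application/json",
    },
)
# Uncomment to send (requires HANZO_API_KEY to be set):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```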

## Model Details

| Attribute | Value |
|---|---|
| Parameters | 1T+ MoE |
| Architecture | Zen MoDE (Mixture of Distilled Experts) |
| Context | 256K tokens |
| License | Apache 2.0 |

## License

Apache 2.0