Zen 3 — Safety Research
Parameters: 8B | Architecture: Zen 3 Architecture | Context: 32K | License: Apache 2.0 | Released: 2024-12-01
Zen 3 safety research variant. Weights at zenlm/zen-pro.
The Zen 3 family (Q3–Q4 2024) introduced sparse MoE routing and expanded to vision, audio, and multimodal reasoning.
AI Safety Research Variant
This model is part of Hanzo AI and Zoo Labs Foundation's alignment research program. We study how safety constraints interact with model capability to build better, safer AI systems.
Research applications: Red-teaming, alignment evaluation, safety benchmarking, capability assessment, adversarial testing. Why this matters: Understanding model behavior without safety overlays is essential for building more robust and genuinely safe AI systems.
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("zenlm/zen-pro", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-pro")
The Zen LM Family
Joint research between Hanzo AI (Techstars '17), Zoo Labs Foundation (501c3), and Lux Partners Limited.
All weights Apache 2.0. Download, run locally, fine-tune, deploy commercially.
HuggingFace · Chat · API · Docs
Model tree for zenlm/zen-3-abliterated
Base model
zenlm/zen-pro