Zen 3 Pro — Safety Research

Parameters: 32B | Architecture: Zen 3 Architecture | Context: 32K | License: Apache 2.0 | Released: 2024-12-01

Zen 3 safety research variant. Weights at zenlm/zen-next-80b-instruct.

The Zen 3 family (Q3–Q4 2024) introduced sparse MoE routing and expanded to vision, audio, and multimodal reasoning.

AI Safety Research Variant

This model is part of Hanzo AI and Zoo Labs Foundation's alignment research program. We study how safety constraints interact with model capability to build better, safer AI systems.

Research applications: Red-teaming, alignment evaluation, safety benchmarking, capability assessment, adversarial testing. Why this matters: Understanding model behavior without safety overlays is essential for building more robust and genuinely safe AI systems.

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("zenlm/zen-next-80b-instruct", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-next-80b-instruct")

The Zen LM Family

Joint research between Hanzo AI (Techstars '17), Zoo Labs Foundation (501c3), and Lux Partners Limited.

All weights Apache 2.0. Download, run locally, fine-tune, deploy commercially.

HuggingFace · Chat · API · Docs

Downloads last month

-

Downloads are not tracked for this model. How to track
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Model tree for zenlm/zen-3-pro-abliterated

Unable to build the model tree, the base model loops to the model itself. Learn more.