Zen4 Pro — Safety Research

Parameters: 14B dense | Architecture: Zen 4 Architecture | Context: 128K | License: Apache 2.0

Professional-grade 14B model fine-tuned for AI safety research, red-teaming, and capability evaluation.

Weights at zenlm/zen4-pro.

AI Safety Research Variant

This model is part of Hanzo AI and Zoo Labs Foundation's alignment research program. We study how safety constraints interact with model capability to build better, safer AI systems.

Research applications: Red-teaming, alignment evaluation, safety benchmarking, capability assessment, adversarial testing. Why this matters: Understanding model behavior without safety overlays is essential for building more robust and genuinely safe AI systems.


The Zen LM Family

Joint research between Hanzo AI (Techstars '17), Zoo Labs Foundation (501c3), and Lux Partners Limited.

All weights Apache 2.0. Download, run locally, fine-tune, deploy commercially.

HuggingFace · Chat · API · Docs

Downloads last month

-

Downloads are not tracked for this model. How to track
Inference Providers NEW
This model isn't deployed by any Inference Provider. 🙋 Ask for provider support

Model tree for zenlm/zen4-pro-abliterated

Base model

zenlm/zen4-pro
Finetuned
(2)
this model