---
license: apache-2.0
language:
- en
- zh
tags:
- zen
- zenlm
- hanzo-ai
- zen-4
- safety-research
pipeline_tag: text-generation
library_name: transformers
base_model: zenlm/zen4-coder
---
# Zen4 Coder — Safety Research
> **Parameters**: 80B MoE | **Architecture**: Zen 4 Architecture | **Context**: 256K | **License**: Apache 2.0
Frontier code model fine-tuned for security research, vulnerability analysis, and adversarial code evaluation.
Weights are available at [zenlm/zen4-coder](https://huggingface.co/zenlm/zen4-coder).
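A minimal loading sketch with 🤗 Transformers, assuming the repository ships standard Transformers-format weights and a chat template (not confirmed by this card; `device_map="auto"` additionally requires `accelerate`):

```python
# Hedged sketch: model id comes from this card; everything else is a
# standard Transformers usage pattern, not a documented API of this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "zenlm/zen4-coder"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn through the model and return the decoded reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        torch_dtype="auto",   # let the checkpoint pick its native dtype
        device_map="auto",    # shard the 80B MoE across available devices
    )
    messages = [{"role": "user", "content": prompt}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        output[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
```

Note that an 80B-parameter MoE checkpoint needs substantial GPU memory; quantized or server-side options (e.g. the hosted API below) may be more practical for local experimentation.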
## AI Safety Research Variant
This model is part of Hanzo AI and Zoo Labs Foundation's alignment research program.
We study how safety constraints interact with model capability to build better, safer AI systems.
**Research applications**: Red-teaming, alignment evaluation, safety benchmarking, capability assessment, adversarial testing.
**Why this matters**: Understanding model behavior without safety overlays is essential for building more robust and genuinely safe AI systems.
---
## The Zen LM Family
Joint research between **Hanzo AI** (Techstars '17), **Zoo Labs Foundation** (501(c)(3)), and **Lux Partners Limited**.
All weights are released under Apache 2.0: download them, run locally, fine-tune, and deploy commercially.
[HuggingFace](https://huggingface.co/zenlm) · [Chat](https://hanzo.chat) · [API](https://api.hanzo.ai) · [Docs](https://zenlm.org)