Zen4 Ultra GGUF (Abliterated)

1.04T MoE | Q2_K Quantized | Abliterated

GGUF-quantized and abliterated version of Zen4 Ultra.

  • Abliterated (uncensored) by huihui-ai
  • Q2_K (2-bit) quantization for reduced memory usage
  • Includes vision projection model (mmproj) for multimodal input
  • deepseek2 architecture, ~1T parameters (MoE)
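As a rough back-of-the-envelope check of what Q2_K buys you, the quantized weights for a ~1.04T-parameter model land around 300 GiB. The ~2.6 bits/weight figure below is an approximation (Q2_K mixes tensor types, so the effective rate varies slightly):

```python
# Rough size estimate for the Q2_K weights.
# Assumptions: ~1.04e12 parameters (from this card) and an average
# of ~2.6 bits per weight for Q2_K (approximate, not exact).
params = 1.04e12
bits_per_weight = 2.6
size_gib = params * bits_per_weight / 8 / 2**30
print(f"~{size_gib:.0f} GiB")
```

Compare this with the full BF16 weights (16 bits/weight), which would be roughly six times larger.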

Files

    • 41-part split Q2_K quantization
    • Vision projection model (multimodal)
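A minimal usage sketch with llama.cpp, which loads the remaining shards of a split GGUF automatically when pointed at the first one. The shard filename below is illustrative; check the actual filenames in the repo:

```shell
# Download all shards plus the mmproj file (repo id from this card)
huggingface-cli download zenlm/zen4-ultra-gguf --local-dir zen4-ultra-gguf

# Point llama.cpp at the first shard; the other 40 are picked up automatically
./llama-cli -m zen4-ultra-gguf/zen4-ultra-Q2_K-00001-of-00041.gguf -p "Hello"
```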

Full Weights

For full BF16 safetensors weights, see zenlm/zen4-ultra.
