# Zen4 Mini (GGUF)
GGUF quantization of Zen4 Mini for efficient CPU and mixed CPU/GPU inference.
## Model Details
| Property | Value |
|---|---|
| Model | Zen4 Mini |
| Format | GGUF (quantized) |
| Architecture | Zen4 |
| License | Apache 2.0 |
| Authors | Zen LM Authors |
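GGUF quantization stores weights as low-bit integer codes plus per-block floating-point scales. The actual Q4_K_M scheme is more elaborate (super-blocks with per-sub-block scales and minimums), but the core idea can be sketched as simple symmetric per-block quantization; the function names and block contents below are illustrative, not part of llama.cpp:

```python
def quantize_block(block, bits=4):
    """Symmetric per-block quantization: one float scale plus low-bit integer codes.

    Illustrative sketch only -- real Q4_K_M uses super-blocks with
    per-sub-block scales and minimums.
    """
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4-bit
    scale = max(abs(v) for v in block) / qmax or 1.0
    codes = [max(-qmax - 1, min(qmax, round(v / scale))) for v in block]
    return scale, codes

def dequantize_block(scale, codes):
    """Recover approximate float weights from scale and integer codes."""
    return [scale * c for c in codes]

# Hypothetical weight block for demonstration.
block = [0.12, -0.95, 0.48, 0.0, 0.31, -0.07, 0.88, -0.53]
scale, codes = quantize_block(block)
restored = dequantize_block(scale, codes)
max_err = max(abs(a - b) for a, b in zip(block, restored))
```

Each value is reconstructed to within half a quantization step (`scale / 2`), which is why 4-bit models trade a small accuracy loss for roughly 4x smaller weights than fp16.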
## Usage

```bash
# Using llama.cpp
./llama-cli -m zen4-mini.Q4_K_M.gguf -p "Hello, how can I help you?"
```
## About
Zen4 Mini is a compact, efficient language model from the Zen4 family, optimized for fast inference while maintaining strong general-purpose capabilities.
**Developed by:** Zen LM Authors
## Base model

zenlm/zen4-mini