# Zen Coder 24B (GGUF)
GGUF quantization of Zen Coder 24B for efficient CPU and mixed CPU/GPU inference.
## Model Details
| Property | Value |
|---|---|
| Parameters | 24B |
| Format | GGUF (quantized) |
| Architecture | Zen Coder |
| Context Length | 128K tokens |
| License | Apache 2.0 |
| Authors | Zen LM Authors |
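
A 128K-token context is memory-hungry: the KV cache grows linearly with context length. The sketch below is a back-of-envelope estimate using illustrative hyperparameters (typical for a ~24B GQA model; the actual Zen Coder layer/head counts are not stated here):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem=2):
    """K and V caches: 2 tensors x layers x kv_heads x head_dim x context."""
    return 2 * n_layers * n_kv_heads * head_dim * n_ctx * bytes_per_elem

# Illustrative values only, NOT confirmed for Zen Coder 24B:
gib = kv_cache_bytes(n_layers=40, n_kv_heads=8, head_dim=128, n_ctx=128 * 1024) / 2**30
print(f"~{gib:.0f} GiB of KV cache at full 128K context (f16)")
```

With these assumed values the full-context f16 cache alone is about 20 GiB, which is why llama.cpp options such as a reduced `--ctx-size` or a quantized KV cache matter in practice.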
## Usage

```sh
# Using llama.cpp
./llama-cli -m zen-coder-24b.Q4_K_M.gguf -p "Write a Python function to sort a list"
```
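
To sanity-check a downloaded file before loading it, the fixed-size GGUF header can be inspected with a few lines of stdlib Python. This is a sketch based on the published GGUF layout (little-endian `GGUF` magic, u32 version, u64 tensor count, u64 metadata-KV count); the function and field names are mine:

```python
import struct

def read_gguf_header(path):
    """Read the fixed-size GGUF header: magic, version, tensor and KV counts."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        # little-endian: u32 version, u64 tensor_count, u64 metadata_kv_count
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensor_count": n_tensors,
            "metadata_kv_count": n_kv}
```

A truncated or mis-downloaded file typically fails the magic check immediately, which is cheaper than waiting for llama.cpp to error out mid-load.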
## About
Zen Coder 24B is a code-specialized language model optimized for software development tasks including code generation, completion, refactoring, and debugging.
**Developed by:** Zen LM Authors
## Available Quantizations

- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
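
As a rough planning aid, on-disk size scales with bits per weight. The figures below are ballpark averages for common llama.cpp quant types, not measured sizes for these particular files (real GGUFs mix layer precisions and carry metadata):

```python
def gguf_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size: params x bits/weight, converted to GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# Ballpark average bits/weight per quant type (assumed, not measured):
BPW = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0}

for name, bpw in BPW.items():
    print(f"{name}: ~{gguf_size_gib(24e9, bpw):.1f} GiB")
```

Under these assumptions a Q4_K_M file of a 24B model lands around 13-14 GiB, versus roughly 45 GiB at f16, which is the main reason to prefer the 4- and 5-bit variants for CPU or mixed CPU/GPU inference.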