Qwen3-coder-REAP-25B-A3B-Rust-GGUF
This repository provides imatrix-quantized GGUF versions of the Qwen3-Coder-REAP-25B-A3B model, built with a custom importance matrix generated by Em-80 and specifically optimized for Rust code generation.
Work in Progress
I'm working in a cursed WSL2 environment on consumer hardware, and I don't have fiber, so my apologies if the quant you want isn't available yet. I've included the .imatrix file if you want to make your own quants; please link back to me if you use it. Once all the quants are uploaded, I'll spend a weekend running benchmarks.
Highlights
Model Architecture: Based on the Cerebras REAP (Router-weighted Expert Activation Pruning) variant of Qwen3-Coder-30B-A3B-Instruct.
Custom Imatrix: Includes Qwen3-Coder-REAP-25B-A3B-Rust.imatrix, generated using a diverse and high-density calibration set.
Optimized for Logic: The quantization process focused heavily on maintaining the model's multi-lingual coding and mathematical reasoning capabilities.
Importance Matrix (Imatrix) Details
The included .imatrix file was developed to ensure that lower-bit quants retain as much intelligence as possible. Unlike standard "blind" quants that use generic calibration data, this imatrix was derived from a curated 3,000-sample dataset (21.8 MB) covering:
Programming: Deep coverage of Rust and Python syntax, logic, and idiomatic patterns.
Reasoning: Advanced mathematics and logical proofs.
Linguistic Quality: High-quality English prose.
By using this importance matrix during the quantization process, we ensure that the weights most critical for code generation and complex problem-solving are preserved with higher fidelity.
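The exact calibration command used for this repository isn't published here, but as a rough sketch, generating an importance matrix like this one with llama.cpp's `llama-imatrix` tool from an unquantized GGUF and a calibration text file looks roughly like the following (the file names are placeholders, not the actual ones used):

```shell
# Sketch: compute an importance matrix from a calibration corpus.
# Assumes a full-precision GGUF export of the model and a plain-text
# calibration file; adjust paths, context size, and GPU offload to taste.
./llama-imatrix \
  -m Qwen3-Coder-REAP-25B-A3B-f16.gguf \
  -f rust-calibration.txt \
  -o Qwen3-Coder-REAP-25B-A3B-Rust.imatrix
```

The resulting .imatrix file is then passed to the quantization step so that the quantizer knows which weights matter most on the calibration distribution.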
Files Included
Weights: Multiple quantization levels (GGUF).
Metadata: Qwen3-Coder-REAP-25B-A3B-Rust.imatrix for users who wish to perform their own custom quantization runs.
Licensing and Attribution
This work is a derivative of the Qwen3-Coder-REAP-25B-A3B model by Cerebras Systems and Qwen3-Coder-30B-A3B-Instruct by Alibaba Cloud.
Weights & Imatrix: Released under the Apache License 2.0.
Attribution: Modifications, quantization, and imatrix generation performed by Em-80.
Please refer to the LICENSE file in this repository for full legal terms and modification notices.
Usage
To run these quants in llama.cpp, or to quantize your own copy of Qwen3-Coder-REAP-25B-A3B using the provided imatrix:
Example for running a quant
./llama-cli -m Qwen3-Coder-REAP-25B-A3B-Rust-IQ3_M.gguf -p "Write a thread-safe singleton in Rust."
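If you'd rather make your own quant from the included imatrix, a typical llama.cpp invocation looks roughly like this (the full-precision GGUF file name is a placeholder; use whatever your export is called, and pick any quant type `llama-quantize` supports):

```shell
# Sketch: quantize a full-precision GGUF using the provided imatrix.
# IQ3_M is just an example target; substitute Q4_K_M, IQ2_XS, etc.
./llama-quantize \
  --imatrix Qwen3-Coder-REAP-25B-A3B-Rust.imatrix \
  Qwen3-Coder-REAP-25B-A3B-f16.gguf \
  Qwen3-Coder-REAP-25B-A3B-Rust-IQ3_M.gguf \
  IQ3_M
```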
Notice: This model is provided "as-is" without warranty of any kind. Use at your own risk.