Qwen2.5 Coder, 4-bit quantized GGUF for llama.cpp and Ollama (with a Modelfile), used as a solver for ARC-AGI. No appreciable improvement from SFT.

Qwen2.5 Coder, F16 GGUF solver for ARC-AGI. No appreciable improvement from SFT.

Both models were trained using Unsloth and converted to GGUF.
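A minimal sketch of how a GGUF like this is typically registered with Ollama via a Modelfile. The filename `qwen2.5-coder-arc-q4.gguf` and the model tag `arc-solver` are illustrative placeholders, not names taken from this repository:

```shell
# Write a minimal Modelfile pointing at the local GGUF file
# (FROM, PARAMETER, and SYSTEM are standard Modelfile directives).
cat > Modelfile <<'EOF'
FROM ./qwen2.5-coder-arc-q4.gguf
PARAMETER temperature 0.2
SYSTEM You are a code-generating solver for ARC-AGI grid puzzles.
EOF

# Register the model with Ollama under a local tag, then run it.
ollama create arc-solver -f Modelfile
ollama run arc-solver
```

The same GGUF can be loaded directly in llama.cpp (e.g. with `llama-cli -m qwen2.5-coder-arc-q4.gguf`) without a Modelfile.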