These are simple quantizations of 90dkn0ws/OpenR1-Distill-0.6B. They were initially generated with gguf-my-repo; after that repository was deleted, I re-uploaded a copy from my Ollama cache. I have always been the author of this quantization.

Format: GGUF
Model size: 0.6B params
Architecture: qwen3

Quantization: 4-bit
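Since these are GGUF files, a minimal usage sketch follows, assuming Ollama's Hugging Face GGUF integration and this repository's path; it is an illustration, not an official instruction from the card:

```shell
# Hypothetical sketch: pull and run this 4-bit GGUF quant directly from
# Hugging Face via Ollama (repo path taken from the model tree above).
ollama run hf.co/qikp/OpenR1-Distill-0.6B-GGUF
```

Equivalently, the `.gguf` file can be downloaded and loaded with any llama.cpp-compatible runtime.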


Model tree for qikp/OpenR1-Distill-0.6B-GGUF: quantized from 90dkn0ws/OpenR1-Distill-0.6B (this model).