# Kimina-Prover-Distill-1.7B-F32-GGUF
AI-MO/Kimina-Prover-Distill-1.7B is a theorem-proving model developed by the Project Numina and Kimi teams, focused on competition-style problem solving in Lean 4. It is distilled from Kimina-Prover-72B, a model trained via large-scale reinforcement learning, and achieves 72.95% Pass@32 accuracy on MiniF2F-test.
## Model Files
| File Name | Size | Format |
|---|---|---|
| Kimina-Prover-Distill-1.7B.gitattributes | 2.48 kB | Git attributes |
| Kimina-Prover-Distill-1.7B.BF16.gguf | 3.45 GB | GGUF BF16 |
| Kimina-Prover-Distill-1.7B.F16.gguf | 3.45 GB | GGUF F16 |
| Kimina-Prover-Distill-1.7B.F32.gguf | 6.89 GB | GGUF F32 |
| Kimina-Prover-Distill-1.7B.Q2_K.gguf | 778 MB | GGUF Q2_K |
| Kimina-Prover-Distill-1.7B.Q3_K_L.gguf | 1 GB | GGUF Q3_K_L |
| Kimina-Prover-Distill-1.7B.Q3_K_M.gguf | 940 MB | GGUF Q3_K_M |
| Kimina-Prover-Distill-1.7B.Q3_K_S.gguf | 867 MB | GGUF Q3_K_S |
| Kimina-Prover-Distill-1.7B.Q4_K_M.gguf | 1.11 GB | GGUF Q4_K_M |
| Kimina-Prover-Distill-1.7B.Q4_K_S.gguf | 1.06 GB | GGUF Q4_K_S |
| Kimina-Prover-Distill-1.7B.Q5_K_M.gguf | 1.26 GB | GGUF Q5_K_M |
| Kimina-Prover-Distill-1.7B.Q5_K_S.gguf | 1.23 GB | GGUF Q5_K_S |
| Kimina-Prover-Distill-1.7B.Q6_K.gguf | 1.42 GB | GGUF Q6_K |
| Kimina-Prover-Distill-1.7B.Q8_0.gguf | 1.83 GB | GGUF Q8_0 |
| README.md | 180 Bytes | Markdown |
| config.json | 29 Bytes | JSON |
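As a rough sanity check on the table above, the effective bits per weight of each quant can be derived from the file sizes alone: the F32 file stores 4 bytes per weight, so it fixes the parameter count, and each quant's size then implies its average bit width. A minimal sketch (sizes copied from the table; decimal GB, i.e. 1 GB = 1e9 bytes, is assumed, and GGUF metadata overhead is ignored):

```python
# Estimate effective bits per weight from the GGUF file sizes above.
# Assumes the listed sizes are decimal (1 GB = 1e9 bytes) and ignores
# the small metadata overhead inside each GGUF file.

SIZES_GB = {
    "F32": 6.89, "F16": 3.45, "BF16": 3.45,
    "Q8_0": 1.83, "Q6_K": 1.42, "Q5_K_M": 1.26, "Q5_K_S": 1.23,
    "Q4_K_M": 1.11, "Q4_K_S": 1.06, "Q3_K_L": 1.00, "Q3_K_M": 0.94,
    "Q3_K_S": 0.867, "Q2_K": 0.778,
}

# The F32 file stores 4 bytes per weight, so it fixes the weight count.
PARAMS = SIZES_GB["F32"] * 1e9 / 4  # ~1.72e9 weights

def bits_per_weight(quant: str) -> float:
    """Average bits per weight implied by a quant's file size."""
    return SIZES_GB[quant] * 1e9 * 8 / PARAMS

for name in ("Q2_K", "Q4_K_M", "Q8_0"):
    print(f"{name}: ~{bits_per_weight(name):.1f} bits/weight")
```

The k-quant names line up with the result: Q4_K_M lands a little above 4 bits per weight because some tensors (embeddings, attention layers) are kept at higher precision.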
## Quants Usage
The files are sorted by size, which does not necessarily reflect quality; IQ-quants are often preferable to similarly sized non-IQ quants. A graph by ikawrakow compares quality loss across some lower-bit quant types (lower is better).
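When choosing a file, a simple rule of thumb is to take the highest-bit quant whose size fits the memory you can spare, leaving headroom for the KV cache and runtime buffers. A hypothetical helper using the sizes from the table above (the 1.3x headroom factor is an assumption for illustration, not a measured figure):

```python
# Pick the largest quant that fits a memory budget, using the file
# sizes listed in the Model Files table (decimal GB assumed).

QUANT_SIZES_GB = [  # (name, file size in GB), largest first
    ("Q8_0", 1.83), ("Q6_K", 1.42), ("Q5_K_M", 1.26), ("Q5_K_S", 1.23),
    ("Q4_K_M", 1.11), ("Q4_K_S", 1.06), ("Q3_K_L", 1.00),
    ("Q3_K_M", 0.94), ("Q3_K_S", 0.867), ("Q2_K", 0.778),
]

def pick_quant(budget_gb: float, headroom: float = 1.3):
    """Return the largest quant whose size * headroom fits the budget.

    `headroom` is a rough allowance for KV cache and runtime buffers;
    1.3 is an assumed default, not a measured figure.
    """
    for name, size in QUANT_SIZES_GB:
        if size * headroom <= budget_gb:
            return name
    return None  # nothing fits at this budget

print(pick_quant(4.0))  # ample memory -> Q8_0
print(pick_quant(1.2))  # tight budget picks a 3-bit quant
```

In practice the extra memory needed on top of the file size depends on context length and backend, so treat the headroom factor as a starting point rather than a guarantee.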
## Model tree for prithivMLmods/Kimina-Prover-Distill-1.7B-F32-GGUF

- Base model: Qwen/Qwen3-1.7B-Base
- Finetuned from it: Qwen/Qwen3-1.7B
- Finetuned from that: AI-MO/Kimina-Prover-Distill-1.7B