Active filters: Distill
stepenZEN/DeepSeek-R1-Distill-Llama-70B-bitsandbytes-4bit • 72B • Updated • 8 downloads
prithivMLmods/QwQ-R1-Distill-1.5B-CoT • Text Generation • 2B • Updated • 18 downloads • 4 likes
mradermacher/QwQ-R1-Distill-1.5B-CoT-GGUF • 2B • Updated • 223 downloads • 1 like
mradermacher/QwQ-R1-Distill-1.5B-CoT-i1-GGUF • 2B • Updated • 364 downloads
stepenZEN/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo • Text Generation • 2B • Updated • 11 downloads • 3 likes
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-GGUF • 2B • Updated • 363 downloads • 5 likes
mradermacher/DeepSeek-R1-Distill-Qwen-1.5B-Abliterated-dpo-i1-GGUF • 2B • Updated • 359 downloads
adriey/QwQ-R1-Distill-1.5B-CoT-Q8_0-GGUF • Text Generation • 2B • Updated • 8 downloads
RDson/LIMO-R1-Distill-Qwen-7B • 8B • Updated • 3 downloads
mradermacher/LIMO-R1-Distill-Qwen-7B-GGUF • 8B • Updated • 214 downloads
prithivMLmods/Delta-Pavonis-Qwen-14B • Text Generation • 15B • Updated • 8 downloads • 3 likes
mradermacher/Delta-Pavonis-Qwen-14B-GGUF • 15B • Updated • 42 downloads • 1 like
mradermacher/Delta-Pavonis-Qwen-14B-i1-GGUF • 15B • Updated • 219 downloads • 1 like
prithivMLmods/Octantis-QwenR1-1.5B-Q8_0-GGUF • Text Generation • 2B • Updated • 4 downloads
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic • Text Generation • 4B • Updated • 8 downloads • 4 likes
ChiKoi7/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • Updated • 65 downloads • 1 like
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-GGUF • 4B • Updated • 105 downloads • 2 likes
mradermacher/GPT-5-Distill-Qwen3-4B-Instruct-Heretic-i1-GGUF • 4B • Updated • 66 downloads • 1 like
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic • 3B • Updated • 402 downloads • 1 like
ChiKoi7/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • Updated • 54 downloads
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-GGUF • 3B • Updated • 143 downloads • 1 like
mradermacher/GPT-5-Distill-llama3.2-3B-Instruct-Heretic-i1-GGUF • 3B • Updated • 653 downloads • 2 likes
DavidAU/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL • Text Generation • 8B • Updated • 136 downloads • 7 likes
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-GGUF • 8B • Updated • 883 downloads • 5 likes
mradermacher/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-i1-GGUF • 8B • Updated • 684 downloads • 1 like
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-2Bit • Text Generation • 0.7B • Updated • 85 downloads
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-3Bit • Text Generation • 1.0B • Updated • 78 downloads
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-4Bit • Text Generation • 1B • Updated • 91 downloads
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-5Bit • Text Generation • 1B • Updated • 89 downloads
alexgusevski/Qwen2.5-7B-Instruct-1M-Thinking-Claude-Gemini-GPT5.2-DISTILL-mlx-6Bit • Text Generation • 8B • Updated • 34 downloads