Active filters: thinking
mradermacher/Lumian2-VLR-7B-Thinking-i1-GGUF • 8B • Updated • 221 • 1
DavidAU/OpenAi-GPT-oss-20b-MODERATE-uncensored-NEO-Imatrix-gguf • Text Generation • 21B • Updated • 504 • 9
sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning • Text Generation • 4B • Updated • 16 • 8
ertghiu256/Qwen3-4b-tcomanr-merge-v2.1 • Text Generation • 4B • Updated • 54 • 2
cs2764/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-mlx-6Bit-gs32 • Text Generation • 37B • Updated • 48
mradermacher/gpt-oss-nemo-20b-GGUF • 21B • Updated • 108 • 1
mradermacher/gpt-oss-nemo-20b-i1-GGUF • 21B • Updated • 216 • 2
mradermacher/Qwen3-4b-tcomanr-merge-v2.1-GGUF • 4B • Updated • 85 • 1
mradermacher/Qwen3-4b-tcomanr-merge-v2.1-i1-GGUF • 4B • Updated • 155
mradermacher/Qwen3-4B-Thinking-2507-DAG-Reasoning-GGUF • 4B • Updated • 216
mradermacher/Qwen3-4B-Thinking-2507-DAG-Reasoning-i1-GGUF • 4B • Updated • 212 • 1
rahmanazhar/meta-claude-3.7-finetuned • Text Generation • Updated • 2
DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER • Text Generation • 42B • Updated • 188 • 37
mradermacher/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-GGUF • 42B • Updated • 464 • 8
mradermacher/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-i1-GGUF • 42B • Updated • 2.5k • 12
ertghiu256/Qwen3-4b-tcomanr-merge-v2.2 • Text Generation • 4B • Updated • 101 • 2
AdvRahul/Axion-Thinking-4B • 4B • Updated • 4
nightmedia/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-qx4-mlx • Text Generation • 42B • Updated • 841 • 4
mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B-GGUF • 19B • Updated • 61
mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B-i1-GGUF • 19B • Updated • 331 • 3
jjzha/Qwen2.5-0.5B-Instruct-fs1-2708 • Text Generation • 0.5B • Updated • 2
jjzha/Qwen2.5-1.5B-Instruct-fs1-2708 • Text Generation • 2B • Updated • 7
jjzha/Qwen2.5-3B-Instruct-fs1-2708 • Text Generation • 3B • Updated • 4
jjzha/Qwen2.5-7B-Instruct-fs1-2708 • Text Generation • 8B • Updated • 5 • 1
jjzha/Qwen2.5-14B-Instruct-fs1-2708 • Text Generation • 15B • Updated • 2
jjzha/Qwen2.5-32B-Instruct-fs1-2708 • Text Generation • 33B • Updated • 2
jjzha/Qwen2.5-0.5B-Instruct-rt-2708 • Text Generation • 0.5B • Updated • 4
jjzha/Qwen2.5-1.5B-Instruct-rt-2708 • Text Generation • 2B • Updated • 5
jjzha/Qwen2.5-3B-Instruct-rt-2708 • Text Generation • 3B • Updated • 6 • 1
jjzha/Qwen2.5-7B-Instruct-rt-2708 • Text Generation • 8B • Updated • 7