Active filters: thinking
DavidAU/OpenAi-GPT-oss-20b-MODERATE-uncensored-NEO-Imatrix-gguf • Text Generation • 21B • Updated • 517 downloads • 9 likes
ertghiu256/Qwen3-4b-tcomanr-merge-v2.1 • Text Generation • 4B • Updated • 55 downloads • 2 likes
cs2764/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-mlx-6Bit-gs32 • Text Generation • 37B • Updated • 51 downloads
mradermacher/gpt-oss-nemo-20b-GGUF • 21B • Updated • 103 downloads • 1 like
mradermacher/gpt-oss-nemo-20b-i1-GGUF • 21B • Updated • 198 downloads • 2 likes
mradermacher/Qwen3-4b-tcomanr-merge-v2.1-GGUF • 4B • Updated • 83 downloads • 1 like
mradermacher/Qwen3-4b-tcomanr-merge-v2.1-i1-GGUF • 4B • Updated • 158 downloads
mradermacher/Qwen3-4B-Thinking-2507-DAG-Reasoning-GGUF • 4B • Updated • 216 downloads
mradermacher/Qwen3-4B-Thinking-2507-DAG-Reasoning-i1-GGUF • 4B • Updated • 193 downloads • 1 like
rahmanazhar/meta-claude-3.7-finetuned • Text Generation • Updated • 2 downloads
DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER • Text Generation • 42B • Updated • 193 downloads • 37 likes
mradermacher/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-GGUF • 42B • Updated • 435 downloads • 8 likes
mradermacher/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-i1-GGUF • 42B • Updated • 2.48k downloads • 12 likes
ertghiu256/Qwen3-4b-tcomanr-merge-v2.2 • Text Generation • 4B • Updated • 101 downloads • 2 likes
AdvRahul/Axion-Thinking-4B • 4B • Updated • 9 downloads
nightmedia/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-qx4-mlx • Text Generation • 42B • Updated • 877 downloads • 4 likes
mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B-GGUF • 19B • Updated • 62 downloads
mradermacher/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-UNCensored-19B-i1-GGUF • 19B • Updated • 326 downloads • 3 likes
jjzha/Qwen2.5-0.5B-Instruct-fs1-2708 • Text Generation • 0.5B • Updated • 3 downloads
jjzha/Qwen2.5-1.5B-Instruct-fs1-2708 • Text Generation • 2B • Updated • 7 downloads
jjzha/Qwen2.5-3B-Instruct-fs1-2708 • Text Generation • 3B • Updated • 4 downloads
jjzha/Qwen2.5-7B-Instruct-fs1-2708 • Text Generation • 8B • Updated • 5 downloads • 1 like
jjzha/Qwen2.5-14B-Instruct-fs1-2708 • Text Generation • 15B • Updated • 2 downloads
jjzha/Qwen2.5-32B-Instruct-fs1-2708 • Text Generation • 33B • Updated • 2 downloads
jjzha/Qwen2.5-0.5B-Instruct-rt-2708 • Text Generation • 0.5B • Updated • 4 downloads
jjzha/Qwen2.5-1.5B-Instruct-rt-2708 • Text Generation • 2B • Updated • 5 downloads
jjzha/Qwen2.5-3B-Instruct-rt-2708 • Text Generation • 3B • Updated • 6 downloads • 1 like
jjzha/Qwen2.5-7B-Instruct-rt-2708 • Text Generation • 8B • Updated • 6 downloads
jjzha/Qwen2.5-14B-Instruct-rt-2708 • Text Generation • 15B • Updated • 9 downloads • 1 like
jjzha/Qwen2.5-32B-Instruct-rt-2708 • Text Generation • 33B • Updated • 4 downloads