Active filters: thinking
DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-NEO-Max-GGUF • Text Generation • 8B • Updated • 446 • 13
DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-HORROR-Max-GGUF • Text Generation • 8B • Updated • 122 • 8
mradermacher/Qwen3-8B-320k-Context-10X-Massive-GGUF • 8B • Updated • 75
mradermacher/Qwen3-8B-320k-Context-10X-Massive-i1-GGUF • 8B • Updated • 48
mradermacher/Qwen3-8B-256k-Context-8X-Grand-GGUF • 8B • Updated • 69
mradermacher/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored-GGUF • 8B • Updated • 232 • 1
mradermacher/Qwen3-8B-96k-Context-3X-Medium-Plus-GGUF • 8B • Updated • 93
mradermacher/Qwen3-8B-192k-Context-6X-Larger-GGUF • 8B • Updated • 27
mradermacher/Qwen3-8B-64k-Context-2X-Medium-GGUF • 8B • Updated • 115
mradermacher/Qwen3-8B-192k-Context-6X-Josiefied-Uncensored-GGUF • 8B • Updated • 134
mradermacher/Qwen3-8B-128k-Context-4X-Large-GGUF • 8B • Updated • 66
mradermacher/Qwen3-8B-128k-Context-4X-Large-i1-GGUF • 8B • Updated • 117
DavidAU/Qwen3-8B-192k-Josiefied-Uncensored-NEO-Max-GGUF • Text Generation • 8B • Updated • 3.12k • 54
mradermacher/Qwen3-8B-256k-Context-8X-Grand-i1-GGUF • 8B • Updated • 299
mradermacher/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored-i1-GGUF • 8B • Updated • 193 • 2
mradermacher/Qwen3-8B-96k-Context-3X-Medium-Plus-i1-GGUF • 8B • Updated • 62
mradermacher/Qwen3-8B-192k-Context-6X-Larger-i1-GGUF • 8B • Updated • 335
mradermacher/Qwen3-8B-64k-Context-2X-Medium-i1-GGUF • 8B • Updated • 153
ofer-tal/Qwen2.5-QwQ-37B-Eureka-Triple-Cubed-abliterated-uncensored-3.8bpw-exl2 • Text Generation • Updated • 1
mradermacher/Qwen3-8B-192k-Context-6X-Josiefied-Uncensored-i1-GGUF • 8B • Updated • 201 • 4
DavidAU/Llama-3.2-8X3B-GATED-MOE-Reasoning-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF • Text Generation • 18B • Updated • 1.12k • 20
DavidAU/Llama-3.2-8X3B-GATED-MOE-NEO-Reasoning-Dark-Champion-uncensored-18.4B-IMAT-GGUF • Text Generation • 18B • Updated • 710 • 10
DavidAU/Llama-3.2-8X3B-GATED-MOE-Horror-Reasoning-Dark-Champion-uncensored-18.4B-IMAT-GGUF • Text Generation • 18B • Updated • 596 • 9
mradermacher/Qwen3-30B-A6B-16-Extreme-i1-GGUF • 31B • Updated • 75
fernandoruiz/medgemma-27b-text-it-Q4_0-GGUF • Text Generation • 27B • Updated • 27
mlx-community/medgemma-27b-text-it-4bit • Text Generation • Updated • 158 • 3
mlx-community/medgemma-27b-text-it-8bit • Text Generation • Updated • 139 • 2
mlx-community/medgemma-27b-text-it-bf16 • Text Generation • Updated • 100 • 3
Mungert/Qwen3-30B-A6B-16-Extreme-GGUF • Text Generation • 31B • Updated • 170 • 2
mradermacher/medgemma-27b-text-it-GGUF • 27B • Updated • 172 • 2