Active filters: Qwen3

| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| NVFP4/Qwen3-Coder-30B-A3B-Instruct-FP4 | Text Generation | 16B | 31.6k | 18 |
| nvidia/Qwen3-Coder-480B-A35B-Instruct-NVFP4 | Text Generation | 241B | 2.04k | 8 |
| (name missing) | Text Generation | 17B | 77.9k | 13 |
| nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4 | Text Generation | — | 128k | 52 |
| nvidia/Qwen3-30B-A3B-NVFP4 | Text Generation | 16B | 76.6k | 25 |
| QuantTrio/Qwen3-235B-A22B-Thinking-2507-GPTQ-Int4-Int8Mix | Text Generation | 253B | 39 | 4 |
| litert-community/Qwen3-0.6B | Text Generation | — | 1.67k | 11 |
| nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4 | Text Generation | — | 67.1k | 34 |
| nvidia/Qwen3-235B-A22B-Thinking-2507-NVFP4 | Text Generation | — | 1.16k | 6 |
| nvidia/Qwen3-235B-A22B-Instruct-2507-NVFP4 | Text Generation | 120B | 4.36k | 6 |
| DavidAU/Qwen3-8B-Hivemind-Instruct-Heretic-Abliterated-Uncensored-NEO-Imatrix-GGUF | Text Generation | 8B | 9.85k | 33 |
| DavidAU/Qwen3-24B-A4B-Freedom-Thinking-Abliterated-Heretic-NEO-Imatrix-GGUF | Text Generation | 17B | 2.98k | 26 |
| JunHowie/Qwen3-0.6B-GPTQ-Int4 | Text Generation | 0.6B | 145 | 1 |
| JunHowie/Qwen3-0.6B-GPTQ-Int8 | Text Generation | 0.6B | 16 | — |
| JunHowie/Qwen3-1.7B-GPTQ-Int4 | Text Generation | 2B | 415 | 1 |
| JunHowie/Qwen3-1.7B-GPTQ-Int8 | Text Generation | 2B | — | — |
| JunHowie/Qwen3-32B-GPTQ-Int4 | Text Generation | 33B | 3.2k | 4 |
| JunHowie/Qwen3-32B-GPTQ-Int8 | Text Generation | 33B | 1.78k | 4 |
| JunHowie/Qwen3-30B-A3B-GPTQ-Int4 | Text Generation | 5B | 13 | 1 |
| (name missing) | Text Generation | — | 84 | — |
| JunHowie/Qwen3-14B-GPTQ-Int8 | Text Generation | 15B | 441 | 1 |
| JunHowie/Qwen3-14B-GPTQ-Int4 | Text Generation | 15B | 2.77k | 4 |
| JunHowie/Qwen3-8B-GPTQ-Int8 | Text Generation | 8B | 161 | — |
| JunHowie/Qwen3-8B-GPTQ-Int4 | Text Generation | 8B | 1.1k | 4 |
| (name missing) | Text Generation | — | 119 | 2 |
| (name missing) | Text Generation | — | 46 | 3 |
| JunHowie/Qwen3-4B-GPTQ-Int4 | Text Generation | 4B | 3.73k | 1 |
| JunHowie/Qwen3-4B-GPTQ-Int8 | Text Generation | 4B | 53 | — |
| prithivMLmods/Qwen3-4B-ft-bf16-Q8_0-GGUF | Text Generation | 4B | 21 | — |
| steampunque/Qwen3-8B-MP-GGUF | — | 8B | 51 | — |
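A listing like this can be handled programmatically once the abbreviated Hub counts ("31.6k", "415") are parsed into integers. Below is a minimal sketch that ranks a few rows transcribed from the listing above by download count; the `ENTRIES` subset and the `parse_count` helper are illustrative, not part of any official API.

```python
# A few (model, downloads, likes) rows transcribed from the listing above.
ENTRIES = [
    ("NVFP4/Qwen3-Coder-30B-A3B-Instruct-FP4", "31.6k", 18),
    ("nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4", "128k", 52),
    ("nvidia/Qwen3-30B-A3B-NVFP4", "76.6k", 25),
    ("nvidia/Qwen3-Next-80B-A3B-Instruct-NVFP4", "67.1k", 34),
]

def parse_count(text: str) -> int:
    """Convert a Hub-style abbreviated count such as '31.6k' or '415' to an int."""
    if text.endswith("k"):
        return int(float(text[:-1]) * 1000)
    return int(text)

# Rank the transcribed rows by downloads, highest first.
top = sorted(ENTRIES, key=lambda e: parse_count(e[1]), reverse=True)
```

With the rows above, `top[0]` is the most-downloaded entry in the subset, nvidia/Qwen3-Next-80B-A3B-Thinking-NVFP4 at 128k downloads.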