Inference Providers
Active filters: vLLM
mistralai/Mistral-Medium-3.5-128B • 128B • 18.3k downloads • 295 likes
mistralai/Mistral-Medium-3.5-128B-EAGLE • 350 downloads • 33 likes
mistralai/Mistral-Small-4-119B-2603 • 119B • 61.9k downloads • 372 likes
mistralai/Mistral-Small-4-119B-2603-NVFP4 • 1.04k downloads • 85 likes
mistralai/Mistral-Small-4-119B-2603-eagle • 278 downloads • 51 likes
QuantTrio/Qwen3.6-27B-AWQ • Image-Text-to-Text • 28B • 224k downloads • 9 likes
RecViking/Mistral-Medium-3.5-128B-NVFP4 • 74B • 5.52k downloads • 2 likes
olka-fi/Mistral-Medium-3.5-128B-MXFP4 • Text Generation • 128B • 713 downloads • 2 likes
JunHowie/Qwen3-4B-Instruct-2507-GPTQ-Int4 • Text Generation • 4B • 39.1k downloads • 4 likes
(unnamed model) • Text Generation • 2.71k downloads • 28 likes
QuantTrio/GLM-4.7-Flash-AWQ • Text Generation • 31B • 50.3k downloads • 13 likes
(unnamed model) • Image-Text-to-Text • 10B • 242k downloads • 13 likes
QuantTrio/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2-AWQ • Image-Text-to-Text • 28B • 118k downloads • 13 likes
QuantTrio/Qwen3.6-35B-A3B-AWQ • Image-Text-to-Text • 36B • 366k downloads • 17 likes
QuantTrio/MiniMax-M2.7-AWQ • Text Generation • 229B • 28.7k downloads • 7 likes
(unnamed model) • Text Generation • 754B • 978 downloads • 6 likes
QuantTrio/Qwen3.6-27B-AWQ-6Bit • Image-Text-to-Text • 28B • 12.3k downloads • 6 likes
cyankiwi/Mistral-Medium-3.5-128B-AWQ-INT4 • 25B • 255 downloads • 1 like
mradermacher/Mistral-Medium-3.5-128B-GGUF • 125B • 828 downloads • 1 like
model-scope/glm-4-9b-chat-GPTQ-Int4 • Text Generation • 9B • 99 downloads • 6 likes
model-scope/glm-4-9b-chat-GPTQ-Int8 • Text Generation • 9B • 10 downloads • 2 likes
tclf90/qwen2.5-72b-instruct-gptq-int4 • Text Generation • 73B • 94 downloads • 2 likes
tclf90/qwen2.5-72b-instruct-gptq-int3 • Text Generation • 69B • 75 downloads
prithivMLmods/Nu2-Lupi-Qwen-14B • Text Generation • 15B • 6 downloads • 2 likes
mradermacher/Nu2-Lupi-Qwen-14B-GGUF • 15B • 161 downloads • 1 like
mradermacher/Nu2-Lupi-Qwen-14B-i1-GGUF • 15B • 403 downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int4 • Text Generation • 0.6B • 375 downloads • 1 like
JunHowie/Qwen3-0.6B-GPTQ-Int8 • Text Generation • 0.6B • 18 downloads
JunHowie/Qwen3-1.7B-GPTQ-Int4 • Text Generation • 2B • 250 downloads • 1 like
JunHowie/Qwen3-1.7B-GPTQ-Int8 • Text Generation • 2B • 16 downloads
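The download counts in this listing use the Hub's abbreviated notation (e.g. 18.3k, 1.04k, 224k). A small helper to expand such strings into plain integers — a hypothetical utility for working with scraped listing data, not part of any Hugging Face API:

```python
def expand_count(s: str) -> int:
    """Expand an abbreviated count such as '18.3k' or '1.04k' to an integer."""
    s = s.strip().lower()
    # suffix multipliers as used in the listing above
    suffixes = {"k": 1_000, "m": 1_000_000, "b": 1_000_000_000}
    if s and s[-1] in suffixes:
        # round before truncating to avoid float artifacts (5.52 * 1000 -> 5519.999...)
        return int(round(float(s[:-1]) * suffixes[s[-1]]))
    return int(s)

# e.g. expand_count("18.3k") -> 18300, expand_count("713") -> 713
```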