Model search results (active filter: chat)
All models are tagged Text Generation except mradermacher/Llama3.1-70B-ShiningValiant2-GGUF, which shows no task tag. Dashes mark fields absent from the listing.

| Model | Params | Downloads | Likes |
|---|---|---|---|
| mlx-community/Qwen2.5-Coder-0.5B-Instruct-bf16 | — | 22 | 1 |
| lmstudio-community/Qwen2.5-0.5B-Instruct-MLX-4bit | 77.3M | 3.27k | 1 |
| lmstudio-community/Qwen2.5-0.5B-Instruct-MLX-8bit | 0.1B | 132 | — |
| lmstudio-community/Qwen2.5-1.5B-Instruct-MLX-4bit | 0.2B | 86 | — |
| lmstudio-community/Qwen2.5-1.5B-Instruct-MLX-8bit | 0.4B | 134 | — |
| lmstudio-community/Qwen2.5-3B-Instruct-MLX-4bit | 0.5B | 368 | — |
| lmstudio-community/Qwen2.5-3B-Instruct-MLX-8bit | 0.9B | 141 | — |
| lmstudio-community/Qwen2.5-7B-Instruct-MLX-4bit | 1B | 864 | — |
| xingyaoww/Qwen2.5-Coder-32B-Instruct-AWQ-128k | 33B | 69 | 7 |
| theo77186/Qwen2.5-Coder-1.5B-Instruct-20241106 | 2B | — | — |
| lmstudio-community/Qwen2.5-7B-Instruct-MLX-8bit | 2B | 462 | — |
| homer7676/FrierenChatbotV1 | 6B | 7 | 1 |
| lmstudio-community/Qwen2.5-14B-Instruct-MLX-4bit | 2B | 1.16k | 1 |
| lmstudio-community/Qwen2.5-14B-Instruct-MLX-8bit | 4B | 744 | — |
| lmstudio-community/Qwen2.5-32B-Instruct-MLX-4bit | 5B | 827 | 2 |
| lmstudio-community/Qwen2.5-32B-Instruct-MLX-8bit | 9B | 494 | 1 |
| lucyknada/Qwen_Qwen2.5-Coder-32B-Instruct-exl2 | — | 1 | 3 |
| tensorblock/Qwen2.5-Math-7B-Instruct-GGUF | 8B | 37 | — |
| k2rks/Qwen2.5-Coder-14B-Instruct-mlx-4bit | 2B | 10 | — |
| mradermacher/Llama3.1-70B-ShiningValiant2-GGUF | 71B | 54 | — |
| BlouseJury/Qwen2.5-Coder-32B-Instruct-EXL2-4.0bpw | — | 6 | — |
| lmstudio-community/Qwen2.5-Coder-32B-Instruct-MLX-8bit | 9B | 30.8k | 3 |
| BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q8_0-GGUF | 33B | 41 | — |
| lmstudio-community/Qwen2.5-Coder-14B-Instruct-MLX-4bit | 2B | 66.1k | 2 |
| CISCai/Qwen2.5-Coder-0.5B-Instruct-SOTA-GGUF | 0.5B | 48 | — |
| lmstudio-community/Qwen2.5-Coder-14B-Instruct-MLX-8bit | 4B | 59k | 1 |
| BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-Q6_K-GGUF | 33B | 2 | — |
| lmstudio-community/Qwen2.5-Coder-7B-Instruct-MLX-4bit | 1B | 4.39k | 3 |
| lmstudio-community/Qwen2.5-Coder-7B-Instruct-MLX-8bit | 2B | 1.47k | 1 |
| tensorblock/Qwen2.5-3B-Instruct-GGUF | 3B | 706 | 1 |
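The download and like counts above use the hub's abbreviated notation (e.g. 3.27k, 77.3M, 30.8k). A minimal sketch of a parser that turns those strings into integers, assuming the only suffixes in play are k/M/B; the helper name `parse_count` is hypothetical, not part of any hub API:

```python
def parse_count(text: str) -> int:
    """Convert an abbreviated count like '3.27k', '77.3M', or '22'
    into an integer. Missing values ('—' or empty) map to 0.
    Illustrative helper only, written for the listing above."""
    text = text.strip()
    if text in {"", "—", "-"}:
        return 0
    multipliers = {"k": 1_000, "M": 1_000_000, "B": 1_000_000_000}
    suffix = text[-1]
    if suffix in multipliers:
        # e.g. '3.27k' -> 3.27 * 1000, rounded to the nearest integer
        return round(float(text[:-1]) * multipliers[suffix])
    return int(text)
```

With this, the listing's counts can be sorted numerically, e.g. `parse_count("30.8k")` gives 30800 and `parse_count("77.3M")` gives 77300000, so string counts and plain integers compare correctly.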