Active filters: 3-bit
MaziyarPanahi/Qwen2.5-1.5B-Instruct-GGUF • Text Generation • 2B params • 144k downloads • 10 likes
MaziyarPanahi/gemma-3-4b-it-GGUF • Text Generation • 4B params • 200k downloads • 18 likes
MaziyarPanahi/Qwen3-1.7B-GGUF • Text Generation • 2B params • 220k downloads • 6 likes
MaziyarPanahi/NVIDIA-Nemotron-Nano-12B-v2-GGUF • Text Generation • 12B params • 70.3k downloads • 2 likes
MaziyarPanahi/BioMistral-7B-GGUF • Text Generation • 7B params • 1.27k downloads • 56 likes
MaziyarPanahi/gemma-7b-it-GGUF • Text Generation • 9B params • 718 downloads • 8 likes
MaziyarPanahi/Saul-Instruct-v1-GGUF • Text Generation • 7B params • 221 downloads • 10 likes
MaziyarPanahi/Meta-Llama-3.1-405B-Instruct-GGUF • Text Generation • 410B params • 141k downloads • 15 likes
MaziyarPanahi/Phi-3.5-mini-instruct-GGUF • Text Generation • 4B params • 170k downloads • 25 likes
MaziyarPanahi/Qwen2.5-0.5B-Instruct-GGUF • Text Generation • 0.5B params • 126 downloads • 2 likes
mlx-community/Llama-3.3-70B-Instruct-3bit • Text Generation • 9B params • 145 downloads • 8 likes
MaziyarPanahi/Captain-Eris_Violet_Toxic-Magnum-12B-GGUF • Text Generation • 12B params • 37 downloads • 3 likes
MaziyarPanahi/Llama-3.2-3B-ToxicKod-GGUF • Text Generation • 3B params • 34 downloads • 1 like
MaziyarPanahi/Qwen3-8B-GGUF • Text Generation • 8B params • 219k downloads • 8 likes
MaziyarPanahi/Qwen3-4B-GGUF • Text Generation • 4B params • 228k downloads • 7 likes
mlx-community/Kimi-K2-Thinking-3bit • Text Generation • 1T params • 355 downloads • 1 like
0xSero/GLM-4.7-EXL3-3bpw_H6 • Text Generation • 67B params • 17 downloads • 1 like
kaitchup/Llama-2-7b-gptq-3bit • Text Generation • 11 downloads
clibrain/Llama-2-7b-ft-instruct-es-gptq-3bit • Text Generation • 6 downloads • 3 likes
clibrain/Llama-2-13b-ft-instruct-es-gptq-3bit • Text Generation • 5 downloads • 3 likes
MiNeves-tops/opt-125m-gptq-3bit • Text Generation • 6 downloads
(model name missing from source) • Text Generation • 4 downloads
LoneStriker/Yi-6B-200K-3.0bpw-h6-exl2 • Text Generation • 2 downloads
danny0122/Llama-2-7b-hf-gptq-3bits • Text Generation • 6B params • 1 download
danny0122/Llama-2-7b-hf-gptq-3bitssafe • Text Generation • 6B params • 1 download
danny0122/stablelm-base-alpha-3b-gptq-3bits • Text Generation • 3B params • 1 download
danny0122/stablelm-base-alpha-3b-gptq-3bitssafe • Text Generation • 3B params • 1 download
SicariusSicariiStuff/Tenebra_PreAlpha_128g_3BIT • Text Generation • 31B params • 2 downloads
mahihossain666/llama-2-70b-hf-quantized-3bits-GPTQ • Text Generation • 65B params • 3 downloads
SicariusSicariiStuff/Tenebra_PreAlpha_No_Group_g_3BIT • Text Generation • 31B params • 2 downloads