Active filters: chat
paultimothymooney/Qwen2-7B-Instruct-Q8_0-GGUF • Text Generation • 8B • Updated • 12
TouchNight/gemma-2-27b-it-abliterated-Q3_K_M-GGUF • Text Generation • 27B • Updated • 32
byroneverson/Yi-1.5-34B-Chat-abliterated-gguf • Text Generation • 34B • Updated • 11 • 1
mradermacher/Qwen1.5-110B-Chat-GGUF • 111B • Updated • 29 • 1
mradermacher/Yi-1.5-34B-Chat-abliterated-GGUF • 34B • Updated • 93
mradermacher/Qwen1.5-110B-Chat-i1-GGUF • 111B • Updated • 37
win10/internlm2_5-20b-chat-abliterated-Q6_K-GGUF • Text Generation • 20B • Updated • 7
mradermacher/Yi-1.5-34B-Chat-abliterated-i1-GGUF • 34B • Updated • 470 • 3
cgus/magnum-v2.5-12b-kto-exl2 • Text Generation • Updated • 4
mradermacher/calme-2.3-llama3.1-70b-GGUF • 71B • Updated • 82
aashish1904/gemma-2-27b-it-abliterated-Q4_K_M-GGUF • Text Generation • 27B • Updated • 62 • 1
QuantFactory/gemma-2-27b-it-abliterated-GGUF • Text Generation • 27B • Updated • 1.28k • 7
TouchNight/gemma-2-27b-it-abliterated-Q2_K-GGUF • Text Generation • 27B • Updated • 2
TouchNight/gemma-2-27b-it-abliterated-Q3_K_S-GGUF • Text Generation • 27B • Updated • 2
Qwen/Qwen2.5-Math-1.5B-Instruct • Text Generation • 2B • Updated • 52.9k • 54
Qwen/Qwen2.5-Math-72B-Instruct • Text Generation • 73B • Updated • 760 • 31
bartowski/Qwen2.5-7B-Instruct-GGUF • Text Generation • 8B • Updated • 32.5k • 45
mradermacher/calme-2.3-qwen2-72b-GGUF • 73B • Updated • 21
lmstudio-community/Qwen2.5-72B-Instruct-GGUF • Text Generation • 73B • Updated • 694 • 4
bartowski/Qwen2.5-72B-Instruct-GGUF • Text Generation • 73B • Updated • 8.18k • 44
mradermacher/calme-2.3-llama3.1-70b-i1-GGUF • 71B • Updated • 34 • 2
sevenone/Qwen2-7B-Instruct-Better-Translation • Text Generation • 8B • Updated • 36 • 4
sevenone/Llama3.1-8B-Chinese-Chat-Better-Translation • Text Generation • Updated
bartowski/Llama3.1-8B-ShiningValiant2-GGUF • Text Generation • 8B • Updated • 461 • 2
mradermacher/calme-2.3-qwen2-72b-i1-GGUF • 73B • Updated • 65
Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int4 • Text Generation • 0.5B • Updated • 632 • 9
Qwen/Qwen2.5-0.5B-Instruct-GPTQ-Int8 • Text Generation • 0.5B • Updated • 336 • 10
Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4 • Text Generation • 2B • Updated • 1.59k • 3
Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int8 • Text Generation • 2B • Updated • 338 • 6
Qwen/Qwen2.5-3B-Instruct-GPTQ-Int4 • Text Generation • 3B • Updated • 14.2k • 3
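Each entry above follows the same flattened shape: a repo id, an optional pipeline tag, an optional parameter count, an "Updated" marker with its date elided, and one or two trailing counts. A minimal parsing sketch, assuming that shape; the field names ("pipeline", "params", "counts") are my labels, not part of the listing:

```python
def parse_entry(line: str) -> dict:
    """Parse a "•"-separated model-listing line into a record.

    Assumed columns: "owner/name • [pipeline] • [NB] • Updated • counts...".
    Trailing numeric tokens are kept verbatim in "counts" since the listing
    does not label them.
    """
    parts = [p.strip() for p in line.split("•")]
    record = {"model_id": parts[0], "pipeline": None, "params": None, "counts": []}
    for part in parts[1:]:
        if part == "Text Generation":
            record["pipeline"] = part
        elif part.endswith("B") and part[:-1].replace(".", "").isdigit():
            record["params"] = part  # parameter count, e.g. "8B", "0.5B"
        elif part and part != "Updated":  # "Updated" carries no date here
            record["counts"].append(part)
    return record

entry = parse_entry(
    "bartowski/Qwen2.5-7B-Instruct-GGUF • Text Generation • 8B • Updated • 32.5k • 45"
)
print(entry["model_id"], entry["params"], entry["counts"])
```

Entries without a pipeline tag or parameter count (e.g. the exl2 repo) simply leave those fields as `None`.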