Active filters: 4bit
Model                                                            Task                Params  Downloads  Likes
legraphista/Yi-9B-Coder-IMat-GGUF                                Text Generation     9B      335
legraphista/gemma-2-9b-it-IMat-GGUF                              Text Generation     9B      742        2
legraphista/gemma-2-27b-it-IMat-GGUF                             Text Generation     27B     1.01k      20
legraphista/llm-compiler-7b-IMat-GGUF                            Text Generation     7B      196
legraphista/llm-compiler-7b-ftd-IMat-GGUF                        Text Generation     7B      635        2
legraphista/llm-compiler-13b-IMat-GGUF                           Text Generation     13B     400        3
legraphista/llm-compiler-13b-ftd-IMat-GGUF                       Text Generation     13B     538
legraphista/Gemma-2-9B-It-SPPO-Iter3-IMat-GGUF                   Text Generation     9B      250        4
ModelCloud/gemma-2-9b-it-gptq-4bit                               Text Generation     10B     186        4
ModelCloud/gemma-2-9b-gptq-4bit                                  Text Generation     10B     2
legraphista/Phi-3-mini-4k-instruct-update2024_07_03-IMat-GGUF    Text Generation     4B      344
legraphista/internlm2_5-7b-chat-IMat-GGUF                        Text Generation     8B      729
legraphista/internlm2_5-7b-chat-1m-IMat-GGUF                     Text Generation     8B      2.78k      1
legraphista/codegeex4-all-9b-IMat-GGUF                           Text Generation     9B      811        8
ModelCloud/DeepSeek-V2-Lite-gptq-4bit                            Text Generation     16B     219
ModelCloud/internlm-2.5-7b-gptq-4bit                             Feature Extraction  8B
ModelCloud/internlm-2.5-7b-chat-gptq-4bit                        Feature Extraction  8B
ModelCloud/internlm-2.5-7b-chat-1m-gptq-4bit                     Feature Extraction  8B
legraphista/NuminaMath-7B-TIR-IMat-GGUF                          Text Generation     7B      858        1
legraphista/mathstral-7B-v0.1-IMat-GGUF                          Text Generation     7B      295
(name not captured)                                              Text Generation
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit                  Text Generation     12B     166        5
legraphista/Athene-70B-IMat-GGUF                                 Text Generation     71B     433        3
ModelCloud/gemma-2-27b-it-gptq-4bit                              Text Generation     28B     10         12
legraphista/Mistral-Nemo-Instruct-2407-IMat-GGUF                 Text Generation     12B     807        2
legraphista/Meta-Llama-3.1-8B-Instruct-IMat-GGUF                 Text Generation     8B      638        6
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit                  Text Generation     8B      13         4
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit                           Text Generation     8B      107
legraphista/Meta-Llama-3.1-70B-Instruct-IMat-GGUF                Text Generation     71B     4.91k      11
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit                 Text Generation     71B     1          4
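The download counts in the listing use Hugging Face's abbreviated notation ("4.91k", "1.01k"). A minimal sketch of normalizing those strings to integers and ranking a few of the repos above by downloads; the helper and variable names here are our own, and the counts are copied from the listing:

```python
# Normalize abbreviated download counts ("4.91k" -> 4910) and rank
# a subset of the listed repos. Counts taken from the listing above.

def parse_count(text: str) -> int:
    """Convert a count like '4.91k' to 4910, or '335' to 335."""
    suffixes = {"k": 1_000, "m": 1_000_000}
    text = text.strip().lower()
    if text and text[-1] in suffixes:
        # round() guards against float artifacts like 4909.999...
        return int(round(float(text[:-1]) * suffixes[text[-1]]))
    return int(text)

downloads = {
    "legraphista/Meta-Llama-3.1-70B-Instruct-IMat-GGUF": "4.91k",
    "legraphista/internlm2_5-7b-chat-1m-IMat-GGUF": "2.78k",
    "legraphista/gemma-2-27b-it-IMat-GGUF": "1.01k",
    "legraphista/codegeex4-all-9b-IMat-GGUF": "811",
}

ranked = sorted(downloads, key=lambda r: parse_count(downloads[r]), reverse=True)
print(ranked[0])  # most-downloaded repo in this subset
```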