Active filters: 1bit
legraphista/Qwen2.5-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • Updated • 257
legraphista/Qwen2.5-3B-Instruct-IMat-GGUF • Text Generation • 3B • Updated • 1.41k
legraphista/Qwen2.5-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 90
legraphista/Qwen2.5-14B-Instruct-IMat-GGUF • Text Generation • 15B • Updated • 643
legraphista/Qwen2.5-32B-Instruct-IMat-GGUF • Text Generation • 33B • Updated • 1.88k
legraphista/Qwen2.5-Coder-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • Updated • 340
legraphista/Qwen2.5-Math-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • Updated • 423
legraphista/Qwen2.5-Coder-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 192
legraphista/Qwen2.5-Math-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 353
legraphista/Qwen2.5-72B-Instruct-IMat-GGUF • Text Generation • 73B • Updated • 246
legraphista/Llama-3.2-1B-Instruct-IMat-GGUF • Text Generation • 1B • Updated • 400
legraphista/Llama-3.2-3B-Instruct-IMat-GGUF • Text Generation • 3B • Updated • 420
mradermacher/Bitnet-M7-70m-GGUF • 77.5M • Updated • 23
mradermacher/Bitnet-M7-70m-i1-GGUF • 77.5M • Updated • 78
yasserrmd/Qwen2.5-1.5B-Instruct-lutmac
Al-hin/Llama-3.2-1B-Instruct-IMat-GGUF • Text Generation • 1B • Updated • 70
lew96123/qwen3.5-0.8b-custom-packed-1bit • Image-Text-to-Text • Updated • 91
lew96123/qwen3.5-0.8b-custom-packed-turboquant_mse-true-uniform-1bit • Image-Text-to-Text • Updated • 24