Active filters: 1bit
legraphista/Qwen2.5-3B-Instruct-IMat-GGUF • Text Generation • 3B • Updated • 1.4k
legraphista/Qwen2.5-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 86
legraphista/Qwen2.5-14B-Instruct-IMat-GGUF • Text Generation • 15B • Updated • 630
legraphista/Qwen2.5-32B-Instruct-IMat-GGUF • Text Generation • 33B • Updated • 1.87k
legraphista/Qwen2.5-Coder-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • Updated • 364
legraphista/Qwen2.5-Math-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • Updated • 432
legraphista/Qwen2.5-Coder-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 176
legraphista/Qwen2.5-Math-7B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 224
legraphista/Qwen2.5-72B-Instruct-IMat-GGUF • Text Generation • 73B • Updated • 260
legraphista/Llama-3.2-1B-Instruct-IMat-GGUF • Text Generation • 1B • Updated • 413
legraphista/Llama-3.2-3B-Instruct-IMat-GGUF • Text Generation • 3B • Updated • 415
mradermacher/Bitnet-M7-70m-GGUF • 77.5M • Updated • 23
mradermacher/Bitnet-M7-70m-i1-GGUF • 77.5M • Updated • 77
yasserrmd/Qwen2.5-1.5B-Instruct-lutmac
Al-hin/Llama-3.2-1B-Instruct-IMat-GGUF • Text Generation • 1B • Updated • 77
lew96123/qwen3.5-0.8b-custom-packed-1bit • Image-Text-to-Text • Updated • 11