vision_models — all the image-related models
- microsoft/Phi-3-vision-128k-instruct (Text Generation • Updated Dec 10, 2025 • 176k • 971)
Quantize-on-huggingface
- GGUF My Repo 🦙 (Space, Running on A10G • 1.94k): quantize Hugging Face models to GGUF and publish the repo
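Quantization trades precision for size by storing weights at lower bit-widths. As a rough illustration of what converting a model to GGUF buys, here is a minimal sketch that estimates the resulting file size from a parameter count and an approximate bits-per-weight figure. The `gguf_size_gb` helper and the bits-per-weight table are illustrative assumptions, not exact llama.cpp numbers:

```python
# Rough GGUF file-size estimator. The bits-per-weight figures below are
# approximate averages for common llama.cpp quant types (assumed values,
# not taken from the llama.cpp source).
APPROX_BPW = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q5_K_M": 5.7,
    "Q4_K_M": 4.8,
}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Estimate the GGUF file size in GB for a model with n_params weights."""
    bits = n_params * APPROX_BPW[quant]
    return bits / 8 / 1e9  # bits -> bytes -> GB

if __name__ == "__main__":
    # A 7B model at each quant level
    for q in APPROX_BPW:
        print(f"7B at {q}: ~{gguf_size_gb(7e9, q):.1f} GB")
```

This is why a 7B model that needs roughly 14 GB in fp16 fits comfortably on consumer hardware once quantized to 4–5 bits per weight.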
mistral — all the Mistral models go here
- TheBloke/OpenHermes-2.5-Mistral-7B-GGUF (7B • Updated Nov 2, 2023 • 14.1k • 279)
- TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF (7B • Updated Jan 31, 2024 • 2.13k • 126)
- TheBloke/Mistral-7B-Instruct-v0.2-AWQ (Text Generation • 7B • Updated Dec 11, 2023 • 153k • 52)
- mistralai/Mistral-7B-Instruct-v0.3 (7B • Updated Dec 3, 2025 • 3.68M • 2.56k)
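The instruct variants in this collection expect prompts in Mistral's `[INST] ... [/INST]` chat format. A minimal sketch of building such a prompt string is below; `mistral_prompt` is a hypothetical helper for illustration, and in practice you should prefer `tokenizer.apply_chat_template` from transformers, which handles special tokens and model-version differences for you:

```python
def mistral_prompt(messages):
    """Build a Mistral-instruct style prompt from user/assistant turns.

    Sketch of the [INST] ... [/INST] format only; real code should use
    tokenizer.apply_chat_template, which knows the exact template shipped
    with each model revision.
    """
    out = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            out += f"[INST] {msg['content']} [/INST]"
        else:  # assistant turn, closed with the end-of-sequence token
            out += f" {msg['content']}</s>"
    return out

if __name__ == "__main__":
    print(mistral_prompt([{"role": "user", "content": "What is GGUF?"}]))
```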
Find n GPUs to run an LLM
- Can You Run It? LLM version 🚀 (Space, Running • Featured • 1.05k): calculate GPU needs for running LLMs on your hardware
- GGUF VRAM Calculator 📉 (Space, Runtime error • Agents • 25)
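The core arithmetic behind a "can you run it" calculator can be sketched as: weight memory is parameter count times bytes per parameter, plus some headroom for KV cache and activations. The `gpus_needed` helper and its 20% overhead factor are illustrative assumptions; real calculators also account for context length and batch size:

```python
import math

def gpus_needed(n_params: float, bytes_per_param: float,
                gpu_vram_gb: float, overhead: float = 1.2) -> int:
    """Estimate how many GPUs are needed to hold a model for inference.

    overhead=1.2 is an assumed rough allowance for KV cache and
    activations; actual needs grow with context length and batch size.
    """
    weights_gb = n_params * bytes_per_param / 1e9
    return math.ceil(weights_gb * overhead / gpu_vram_gb)

if __name__ == "__main__":
    # 7B model in fp16 (2 bytes/param) on a 24 GB card
    print(gpus_needed(7e9, 2.0, 24.0))
```

Swapping `bytes_per_param` to ~0.6 (a 4-5 bit GGUF quant) shows why quantized 7B models run on a single consumer GPU.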