vision_models: all the image-related models
- microsoft/Phi-3-vision-128k-instruct (Text Generation, 4B, updated Dec 10, 2025)
Quantize-on-huggingface
- GGUF My Repo 🦙 (Space, running on A10G): create quantized models from Hugging Face repos
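A quantization Space like GGUF My Repo shrinks a model by storing each weight in fewer bits, so the output file size can be roughly estimated as parameter count times bits per weight. A minimal back-of-the-envelope sketch; the bits-per-weight figures below are approximate assumptions for common llama.cpp quantization types, not exact on-disk values:

```python
# Rough GGUF file-size estimate: n_params * bits_per_weight / 8.
# The bits-per-weight numbers are approximate assumptions, not
# exact figures for any specific GGUF build.
APPROX_BPW = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q5_K_M": 5.5,
    "Q4_K_M": 4.8,
    "Q2_K": 2.6,
}

def gguf_size_gb(n_params: float, quant: str) -> float:
    """Estimated GGUF file size in GB for a model with n_params weights."""
    bits = APPROX_BPW[quant]
    return n_params * bits / 8 / 1e9

# Under these assumptions, a 7B model at Q4_K_M lands around 4 GB:
print(round(gguf_size_gb(7e9, "Q4_K_M"), 2))  # → 4.2
```

This is only a sizing heuristic; actual files also carry metadata and mixed-precision tensors, so real sizes drift a few percent from the estimate.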
mistral: all the Mistral models go here
- TheBloke/OpenHermes-2.5-Mistral-7B-GGUF (7B, updated Nov 2, 2023)
- TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF (7B, updated Jan 31, 2024)
- TheBloke/Mistral-7B-Instruct-v0.2-AWQ (Text Generation, 7B, updated Dec 11, 2023)
- mistralai/Mistral-7B-Instruct-v0.3 (7B, updated Dec 3, 2025)
Find n GPUs to run an LLM
- Can You Run It? LLM version 🚀 (Space, running, featured): determine GPU requirements for running large language models
- GGUF VRAM Calculator 📉 (Space, runtime error)
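Tools like Can You Run It? estimate GPU memory as the sum of the weights, the KV cache, and a runtime overhead. A minimal sketch of that arithmetic, using hypothetical Mistral-7B-like shape parameters; this is an illustration of the general estimate, not a reproduction of the Space's actual formula:

```python
def vram_estimate_gb(
    n_params: float,           # total weight count
    bytes_per_param: float,    # 2 for fp16, roughly 0.6 for 4-bit quant
    n_layers: int,
    n_kv_heads: int,
    head_dim: int,
    seq_len: int,
    kv_bytes: int = 2,         # fp16 KV cache
    overhead_gb: float = 1.0,  # assumed activation/runtime overhead
) -> float:
    """Rough VRAM requirement in GB: weights + KV cache + overhead."""
    weights = n_params * bytes_per_param
    # KV cache holds two tensors (K and V) per layer for the full context.
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * seq_len * kv_bytes
    return (weights + kv_cache) / 1e9 + overhead_gb

# A Mistral-7B-like shape in fp16 at a 4096-token context
# (assumed values: 32 layers, 8 KV heads, head_dim 128):
print(round(vram_estimate_gb(7.24e9, 2, 32, 8, 128, 4096), 1))  # → 16.0
```

The estimate makes the trade-off visible: at fp16 a 7B model needs a 24 GB-class GPU, while dropping `bytes_per_param` to a 4-bit figure brings the same model under 8 GB.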