Mistral's new Ministral 3 models can now be run and fine-tuned locally (16GB RAM)! Ministral 3 has vision support and best-in-class performance for their sizes.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF
🐱 Step-by-step guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8, etc. variant uploads: https://huggingface.co/collections/unsloth/ministral-3
Article: Fine-Tuning Your First Large Language Model (LLM) with PyTorch and Hugging Face • Feb 11, 2025
💻 Local SmolLMs (collection): SmolLM models in MLC, ONNX, and GGUF formats for local applications, plus in-browser demos • 14 items • Updated May 5, 2025
unsloth/Mistral-Small-3.2-24B-Instruct-2506 (image-text-to-text, 24B) • Updated Aug 26, 2025