Post 746

We collaborated with NVIDIA to teach you how we made LLM training ~25% faster! 🚀

Learn how 3 optimizations help your home GPU train models faster:
1. Packed-sequence metadata caching
2. Double-buffered checkpoint reloads
3. Faster MoE routing

Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth
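Of the three, packed-sequence metadata caching is the simplest to picture: when several short sequences are packed into one batch, variable-length attention kernels need the cumulative sequence boundaries (often called `cu_seqlens`), and recomputing them on every training step is repeated work. A minimal sketch of caching that metadata by batch shape; the function name and cache strategy here are illustrative assumptions, not Unsloth's actual implementation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def packed_metadata(seq_lens: tuple[int, ...]) -> tuple[tuple[int, ...], int]:
    """Return (cu_seqlens, max_seqlen) for a packed batch.

    Because batches with the same per-sequence lengths recur during
    training, the cumulative boundaries are computed once per unique
    length pattern and served from the cache afterwards.
    """
    cu = [0]
    for n in seq_lens:
        cu.append(cu[-1] + n)
    return tuple(cu), max(seq_lens)
```

For example, a batch packing sequences of lengths 3, 5, and 2 yields boundaries (0, 3, 8, 10); any later batch with the same lengths hits the cache instead of recomputing.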
Post 8441

We made a guide on how to run open LLMs in Claude Code, Codex and OpenClaw.

Use Gemma 4 and Qwen3.6 GGUFs for local agentic coding on 24GB RAM.

Run with self-healing tool calls, code execution, and web search via the Unsloth API endpoint and llama.cpp.

Guide: https://unsloth.ai/docs/basics/api