danielhanchen posted an update about 23 hours ago
We collaborated with NVIDIA to show you how we made LLM training ~25% faster! πŸš€

Learn how 3 optimizations help your home GPU train models faster:
1. Packed-sequence metadata caching
2. Double-buffered checkpoint reloads
3. Faster MoE routing
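As a rough illustration of the first idea, packed-sequence metadata caching: when many batches share the same sequence-length pattern, the offset metadata that packed-attention kernels consume can be computed once and reused. This is a minimal hypothetical sketch, not Unsloth's actual implementation; the function name and cache policy are assumptions.

```python
from functools import lru_cache

# Hypothetical sketch: cache packed-sequence metadata keyed by the
# batch's sequence-length pattern, so repeated patterns skip recomputation.
@lru_cache(maxsize=128)
def packed_metadata(seq_lens: tuple) -> tuple:
    """Return (cumulative offsets, max sequence length) for a packed batch."""
    offsets = [0]
    for n in seq_lens:
        offsets.append(offsets[-1] + n)
    return tuple(offsets), max(seq_lens)

cu_seqlens, max_len = packed_metadata((3, 5, 2))
print(cu_seqlens, max_len)  # (0, 3, 8, 10) 5
```

Because training data is often bucketed by length, identical patterns recur frequently, so the cache hit rate can be high.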

Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth