Post 7523

We collaborated with NVIDIA to teach you how we made LLM training ~25% faster! 🚀

Learn how 3 optimizations help your home GPU train models faster:
1. Packed-sequence metadata caching
2. Double-buffered checkpoint reloads
3. Faster MoE routing

Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth
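As an illustration of the double-buffering idea behind optimization 2 (a generic sketch, not Unsloth's actual implementation — the loader and file names here are hypothetical stand-ins), the next checkpoint can be prefetched in a background thread while the current one is being consumed:

```python
import threading
import queue

def load_checkpoint(path):
    # Hypothetical stand-in loader; a real one would deserialize tensors from disk.
    return {"path": path, "weights": [0.0] * 4}

def double_buffered_reload(paths):
    """Yield checkpoints while the next one loads in the background."""
    buf = queue.Queue(maxsize=1)  # holds at most one prefetched checkpoint

    def worker():
        for p in paths:
            buf.put(load_checkpoint(p))  # blocks until consumer takes the previous one
        buf.put(None)  # sentinel: no more checkpoints

    threading.Thread(target=worker, daemon=True).start()
    while (ckpt := buf.get()) is not None:
        yield ckpt

# Each disk load overlaps with processing of the prior checkpoint.
ckpts = list(double_buffered_reload(["ckpt_0.pt", "ckpt_1.pt"]))
```

The bounded queue is what makes this "double-buffered": one checkpoint is in use while at most one more sits prefetched, keeping memory overhead constant.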
FrontAgent: Frontend Engineering Agent Collection
A collection for FrontAgent, an LLM-powered agent system for frontend engineering. It includes the SFT dataset, LoRA planner model, and demo Space. • 3 items • Updated 5 days ago