Post 7503
We collaborated with NVIDIA to teach you how we made LLM training ~25% faster! Learn how 3 optimizations help your home GPU train models faster:
1. Packed-sequence metadata caching
2. Double-buffered checkpoint reloads
3. Faster MoE routing
Guide: https://unsloth.ai/blog/nvidia-collab
GitHub: https://github.com/unslothai/unsloth
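The post only names the optimizations; as a rough illustration of what "packed-sequence metadata caching" could mean, here is a minimal sketch under the assumption that it refers to caching per-batch sequence-boundary metadata (e.g. the cumulative sequence lengths consumed by variable-length attention kernels) instead of recomputing it each training step. All names here are illustrative, not Unsloth's actual API.

```python
# Hypothetical sketch: cache packed-sequence boundary metadata so batches
# that share a packing layout reuse it instead of recomputing offsets.
from functools import lru_cache

@lru_cache(maxsize=1024)
def packed_metadata(seq_lens: tuple[int, ...]) -> tuple[tuple[int, ...], int]:
    """Return (cu_seqlens, max_seqlen) for a packed batch of sequences."""
    cu = [0]
    for n in seq_lens:
        cu.append(cu[-1] + n)  # cumulative offsets marking sequence boundaries
    return tuple(cu), max(seq_lens)

# Repeated batches with the same layout hit the cache rather than rebuilding
# the offset table every step.
cu, mx = packed_metadata((3, 5, 2))
# cu == (0, 3, 8, 10), mx == 5
```

The cache key is the tuple of sequence lengths, so the speedup only materializes when packing layouts recur across steps; the real optimization likely operates at a lower level than this Python sketch.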
DanielRegaladoCardoso/sql-generator-qwen25-coder-7b-lora Text Generation • 8B • Updated 17 days ago
Running Agents Miami-Dade Transit Equity Simulator — Simulate transit policy impacts for Miami-Dade
Sleeping Agents CounterFlow NN – AbsorptionTower 🧪 A gas absorber, written as a differentiable layer.