- The Smol Training Playbook 📚 — The secrets to building world-class LLMs (3.06k likes)
- Article: makeMoE: Implement a Sparse Mixture of Experts Language Model from Scratch (May 7, 2024 • 118)
- Article: KV Caching Explained: Optimizing Transformer Inference Efficiency (Jan 30, 2025 • 261)
- The Ultra-Scale Playbook 🌌 — The ultimate guide to training LLMs on large GPU clusters (3.75k likes)