Article: Efficient LLM Pretraining: Packed Sequences and Masked Attention (Oct 7, 2024)
Space: The Ultra-Scale Playbook 🌌, the ultimate guide to training LLMs on large GPU clusters
Collection: ModernBERT, bringing BERT into modernity via both architecture changes and scaling (3 items, updated Dec 19, 2024)
Space: FineWeb, decanting the web for the finest text data at scale 🍷 (generate high-quality text data for LLMs using FineWeb)