- Stabilizing Reinforcement Learning with LLMs: Formulation and Practices — Paper, arXiv:2512.01374, published Dec 1, 2025
- Cerebras REAP Collection — sparse MoE models compressed with REAP (Router-weighted Expert Activation Pruning), 30 items
- REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression — Paper, arXiv:2510.13999, published Oct 15, 2025
- Dynamic Chunking for End-to-End Hierarchical Sequence Modeling — Paper, arXiv:2507.07955, published Jul 10, 2025