Paper: AMoE: Agglomerative Mixture-of-Experts Vision Foundation Model • 2512.20157 • Published Dec 23, 2025
Collection: AMoE: Agglomerative MoE Vision Foundation Models • CVPR 2026 • A family of vision encoders distilled from DINOv3 and SigLIP2, available in MoE and dense variants • 4 items
Collection: Falcon-H1-Tiny • A series of extremely small yet powerful language models redefining capabilities at small scale • 19 items
Paper: Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers • 2601.04890 • Published Jan 8
Article: Introducing Falcon-H1-Arabic: Pushing the Boundaries of Arabic Language AI with Hybrid Architecture • Published Jan 5