Cerebras REAP Collection Sparse MoE models compressed using the REAP (Router-weighted Expert Activation Pruning) method • 24 items • Updated 5 days ago • 94
mmBERT: a modern multilingual encoder Collection mmBERT is trained on 3T tokens from over 1800 languages, showing SoTA scores on benchmarks and exceptional low-resource performance • 16 items • Updated Sep 9, 2025 • 50
Article Accelerate ND-Parallel: A Guide to Efficient Multi-GPU Training • Aug 8, 2025 • 91
Llama 4 Collection Meta's new Llama 4 multimodal models, Scout & Maverick. Includes Dynamic GGUFs, 16-bit & Dynamic 4-bit uploads. Run & fine-tune them with Unsloth! • 15 items • Updated 5 days ago • 54
On Teacher Hacking in Language Model Distillation Paper • 2502.02671 • Published Feb 4, 2025 • 18
🧠 Reasoning datasets Collection Datasets with reasoning traces for math and code released by the community • 24 items • Updated May 19, 2025 • 181
Article Improving Hugging Face Training Efficiency Through Packing with Flash Attention 2 • Aug 21, 2024 • 42
Qwen2.5 Collection Qwen2.5 language models, including pretrained and instruction-tuned models in 7 sizes: 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B. • 46 items • Updated 28 days ago • 681
The Big Benchmarks Collection Collection Gathering benchmark spaces on the Hub (beyond the Open LLM Leaderboard) • 13 items • Updated Nov 18, 2024 • 257
Standard-format-preference-dataset Collection We collect open-source preference datasets and process them into a standard format. • 14 items • Updated May 8, 2024 • 26
FP8 LLMs for vLLM Collection Accurate FP8 quantized models by Neural Magic, ready for use with vLLM! • 44 items • Updated Oct 17, 2024 • 76
Korean Datasets I've released so far. Collection A collection of the Korean datasets I have uploaded so far. • 8 items • Updated May 24, 2024 • 21