MiniPLM: Knowledge Distillation for Pre-Training Language Models
Paper: arXiv:2410.17215
Quantization made by Richard Erkhov.
MiniPLM-Mamba-130M - bnb 4bits
MiniPLM-Mamba-130M is a 130M-parameter model with the Mamba architecture, pre-trained from scratch on the Pile using the MiniPLM knowledge distillation framework with the official Qwen1.5-1.8B as the teacher model. This model demonstrates the flexibility of the MiniPLM framework in conducting knowledge distillation across model families.
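Below is a minimal sketch of loading the 4-bit checkpoint with `transformers` and `bitsandbytes`. The repository id is a placeholder (not taken from this card): point it at the bnb-4bit repo for this quantization, or at the original MiniPLM-Mamba-130M weights to quantize on the fly as shown here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NOTE: placeholder repo id -- substitute the actual (quantized or original) repository.
model_id = "MiniLLM/MiniPLM-Mamba-130M"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store linear-layer weights in 4 bits
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for the actual matmuls
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # requires `accelerate`; bitsandbytes needs a CUDA device
)

prompt = "Knowledge distillation for pre-training language models"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```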
We also open-source the pre-training corpus refined by Difference Sampling in MiniPLM for reproducibility.
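A minimal sketch of streaming that corpus with the `datasets` library; the dataset id below is hypothetical, so replace it with the corpus repository released alongside MiniPLM.

```python
from itertools import islice
from datasets import load_dataset

# NOTE: hypothetical dataset id -- substitute the Difference Sampling corpus repo.
corpus = load_dataset("MiniLLM/pile-diff-sampling", split="train", streaming=True)

# Peek at a few refined pre-training documents without downloading the full corpus.
for example in islice(corpus, 3):
    print(example)
```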
MiniPLM models achieve better performance for the same training compute and scale well across model sizes; see the paper for the full evaluation results.
@article{miniplm,
  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
  journal={arXiv preprint arXiv:2410.17215},
  year={2024}
}