---
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-tokenized
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
---

# Pretrain-Qwen-200M

[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)

**Pretrain-Qwen-200M** is a 200M-parameter model with the Qwen architecture, conventionally pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) for 50B tokens.
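
Since the checkpoint is a standard `transformers` causal language model (`pipeline_tag: text-generation`), it can be loaded and sampled from as sketched below. This is a minimal example, and the repository ID `MiniLLM/Pretrain-Qwen-200M` is assumed from the model card title and organization.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository ID assumed from the card title and the MiniLLM organization.
model_name = "MiniLLM/Pretrain-Qwen-200M"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a short continuation from a prompt.
inputs = tokenizer("The Pile is a large-scale", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```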

We also open-source the tokenized [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-tokenized) for reproducibility.
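
The tokenized corpus can be inspected with the `datasets` library; the sketch below streams it to avoid a full download and assumes a `train` split, without relying on any particular column names.

```python
from datasets import load_dataset

# Stream the tokenized pre-training corpus; split name is an assumption.
ds = load_dataset("MiniLLM/pile-tokenized", split="train", streaming=True)

# Peek at one example to see which fields (e.g. pre-tokenized input ids) are stored.
for example in ds.take(1):
    print(example.keys())
```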

**It is used as the baseline for [MiniPLM-Qwen-200M](https://huggingface.co/MiniLLM/MiniPLM-Qwen-200M).**

## Evaluation

MiniPLM models achieve better performance given the same computation and scale well across model sizes:

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/EOYzajQcwQFT5PobqL3j0.png" width="1000">
</p>

## Other Baselines
+ [VanillaKD](https://huggingface.co/MiniLLM/VanillaKD-Pretrain-Qwen-200M)

## Citation

```bibtex
@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}
```