---
title: README
emoji: π
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---
## Training Small Language Models with Knowledge Distillation
Official pre-trained models and baselines from:
+ [MiniLLM](https://github.com/microsoft/LMOps/tree/main/minillm): Knowledge distillation of LLMs during instruction tuning.
+ [MiniPLM](https://github.com/thu-coai/MiniPLM): Knowledge distillation of LLMs during pre-training.
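Both projects train a small student model to match a large teacher's next-token distributions. As a rough illustration of the idea (not the exact objective of either repo — MiniLLM, for instance, optimizes a reverse-KL variant), here is a minimal sketch of a standard forward-KL distillation loss; all function names are hypothetical:

```python
# Minimal sketch of a token-level knowledge-distillation loss:
# the student is pushed toward the teacher's next-token distribution.
# Illustrative only; not the exact objective used by MiniLLM or MiniPLM.
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Forward KL(teacher || student), averaged over sequence positions."""
    p = softmax(teacher_logits, temperature)  # teacher distribution
    q = softmax(student_logits, temperature)  # student distribution
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return kl.mean()

# Identical logits give zero loss; mismatched logits give a positive loss.
t = np.array([[2.0, 0.5, -1.0]])
print(kd_loss(t, t))         # 0.0
print(kd_loss(t, -t) > 0.0)  # True
```

In practice both repos implement this with PyTorch logits from real teacher and student LMs; the NumPy version above only shows the shape of the computation.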