---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
language:
- en
metrics:
- rouge
pipeline_tag: text-generation
---
# SeqKD-Llama-7B

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)
**SeqKD-Llama-7B** is a Llama-7B model distilled from [Llama-13B](https://huggingface.co/MiniLLM/teacher-Llama-13B) on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) with sequence-level forward KLD.
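Concretely, sequence-level forward KLD (SeqKD) fine-tunes the student on sequences sampled from the teacher, so the usual cross-entropy loss on those samples is a Monte-Carlo estimate of the sequence-level forward KL. The sketch below illustrates that recipe only; it is not the repository's training code (see the [code](https://github.com/microsoft/LMOps/tree/main/minillm) link above), and the student initialization id `huggyllama/llama-7b` and the example prompt are illustrative assumptions:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("MiniLLM/teacher-Llama-13B")
teacher = AutoModelForCausalLM.from_pretrained("MiniLLM/teacher-Llama-13B").to(device).eval()
student = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b").to(device)  # assumed student init
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

prompts = ["Explain knowledge distillation in one sentence."]  # stand-in for dolly prompts
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        # 1) Sample a response from the teacher's distribution.
        seq = teacher.generate(**inputs, do_sample=True, top_p=0.95, max_new_tokens=128)
    # 2) Fine-tune the student on the teacher's sample with cross-entropy,
    #    a Monte-Carlo estimate of the sequence-level forward KLD.
    labels = seq.clone()
    labels[:, : inputs["input_ids"].shape[1]] = -100  # no loss on the prompt tokens
    loss = student(input_ids=seq, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```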
It is used as a baseline for [MiniLLM](https://huggingface.co/MiniLLM/MiniLLM-Llama-7B).
## Other Baselines
+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-Llama-7B)
+ [KD](https://huggingface.co/MiniLLM/KD-Llama-7B)
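
## Usage

A minimal loading sketch with `transformers`; the repository id `MiniLLM/SeqKD-Llama-7B` is assumed from this card's naming, and the prompt format is illustrative rather than the exact template used in training:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MiniLLM/SeqKD-Llama-7B")
model = AutoModelForCausalLM.from_pretrained("MiniLLM/SeqKD-Llama-7B")

# Prompt format is illustrative; adapt to your instruction template.
inputs = tokenizer("List three uses of model distillation.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```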
## Citation

```
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```