Merge branch 'main' of hf.co:spaces/HuggingFaceH4/general-on-policy-logit-distillation
app/src/content/article.mdx
In this blog post, we introduce **General On-Policy Logit Distillation (GOLD)**, our method for extending on-policy distillation to address a fundamental weakness: the requirement that the teacher and student models must share the *same* tokenizer vocabulary.
Building on Universal Logit Distillation (ULD) [@boizard2025crosstokenizerdistillationuniversallogit], GOLD is highly effective for complex, multi-step reasoning tasks such as math. Our results show that GOLD outperforms ULD and even GRPO.
Our key contributions are:
There are two main types of distillation: off-policy and on-policy. Off-policy distillation trains a student model on fixed data (typically the teacher's precomputed logits or text completions), while on-policy distillation has the teacher provide feedback on the student's own outputs.
Generalised Knowledge Distillation (GKD) [@agarwal2024onpolicydistillationlanguagemodels] unifies these approaches under a common framework by supporting a range of loss functions that enable training on both static teacher data and trajectories generated by the student. The GKD paper shows that on-policy distillation typically outperforms off-policy methods, a result we confirm later in this post.
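To make the loss concrete, here is a minimal NumPy sketch of the per-position divergence, following the generalised Jensen-Shannon definition from the GKD paper, where both distributions are compared against their $\beta$-weighted mixture. The function names are ours, and a practical implementation would work in log-space over the full vocabulary for numerical stability.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL(p || q) between dense probability vectors, in nats."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def jsd_beta(p_teacher, p_student, beta):
    """Generalised Jensen-Shannon divergence D_JSD(beta) at one position:
    each distribution is compared against the beta-weighted mixture."""
    m = beta * p_teacher + (1.0 - beta) * p_student
    return beta * kl(p_teacher, m) + (1.0 - beta) * kl(p_student, m)
```

At $\beta = 0.5$ this is the standard symmetric JSD; the GKD loss averages this divergence over tokens drawn from the fixed dataset (off-policy term) and from the student's own generations (on-policy term), weighted by $1-\lambda$ and $\lambda$ respectively.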
On-policy distillation's advantage is twofold. First, as the student model improves, its generations create progressively higher-quality training data, forming a positive feedback loop. Second, this “context alignment” forces the student to learn from the same types of errors and successes it will encounter during inference, rather than from completions generated only by the teacher.
The main limitation of all on-policy distillation methods is that they assume the same tokenizer for both the student and the teacher. The current AI ecosystem spans different model families such as [SmolLM](https://huggingface.co/collections/HuggingFaceTB/smollm3), [Llama](https://huggingface.co/collections/meta-llama/llama-32), [Qwen](https://huggingface.co/collections/Qwen/qwen3), and [Gemma](https://huggingface.co/collections/google/gemma-3-release), each with their own strengths and shortcomings. Each model family, and even different versions within the same family, uses its own tokenizer, so requiring a single tokenizer can be overly restrictive when selecting student-teacher pairings. Recent work, such as **Universal Logit Distillation (ULD)**, lifts the tokenizer restriction by showing that distillation can be performed without a perfect alignment between teacher and student vocabularies, albeit in an offline setting.
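To see why the shared-tokenizer assumption bites, consider a toy illustration. The two vocabularies and the greedy longest-match segmenter below are entirely made up (they are not any real model's tokenizer): the same string is split into a different number of tokens, so teacher and student logits cannot be compared position by position.

```python
# Two hypothetical vocabularies that segment the same text differently.
vocab_a = {"Hug", "ging", " Face"}
vocab_b = {"Hugging", " Face"}

def greedy_tokenize(text, vocab):
    """Greedy longest-match segmentation over a toy vocabulary."""
    tokens = []
    while text:
        match = max((t for t in vocab if text.startswith(t)), key=len, default=None)
        if match is None:
            raise ValueError(f"cannot tokenize {text!r}")
        tokens.append(match)
        text = text[len(match):]
    return tokens
```

Here `"Hugging Face"` becomes three tokens under `vocab_a` but two under `vocab_b`, so the two models emit logit sequences of different lengths (and over different vocabularies) for the same text.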
![Overview of the ULD offline distillation pipeline](attachment:25e68e8b-5c34-4641-958c-674a8fea92be:image.png)
Figure 1: Previous work, ULD by Boizard et al. [@boizard2025crosstokenizerdistillationuniversallogit], demonstrates offline distillation between student and teacher models with unmatched tokenizers. GOLD extends their method to the on-policy setting and addresses two weaknesses: token alignment in step 3 and logit alignment in step 4.
ULD showed that distillation between models with different tokenizers introduces two key challenges:
### Task Definition
We used a math game called Countdown [@gandhi2024streamsearchsoslearning], where the objective is to reach a target value using a given set of numbers and the four arithmetic operations (+, -, *, /). The model must also provide its answer in a specific format: we use a strict parser that marks the answer wrong if it can't find the expected format. We only consider an answer correct if it fulfils all of the following conditions:
- The equation uses each number only once.
- The equation given by the model evaluates to the target.
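The conditions above can be checked mechanically. The helper below is our own illustration (the strict format parsing that extracts the equation from the model's output is omitted here); it uses Python's `ast` module to evaluate the equation safely, allowing only the four Countdown operations.

```python
import ast
import operator
import re
from collections import Counter

# Allowed binary operations: the four Countdown arithmetic operators.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node):
    """Recursively evaluate a parsed expression restricted to +, -, *, /."""
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("disallowed expression")

def check_countdown(equation, numbers, target):
    """Check the two correctness conditions: every given number is used
    exactly once, and the equation evaluates to the target."""
    used = [int(tok) for tok in re.findall(r"\d+", equation)]
    if Counter(used) != Counter(numbers):
        return False
    try:
        value = _eval(ast.parse(equation, mode="eval").body)
    except (ValueError, SyntaxError, ZeroDivisionError):
        return False
    return abs(value - target) < 1e-6
```

For example, `"(4 + 2) * 5"` is accepted for numbers `[2, 4, 5]` and target 30, while an equation that reuses a number or misses the target is rejected.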
### GKD with the Same Tokenizer
Our first goal was to validate our GKD implementation by comparing our results with those reported by Agarwal et al. [@agarwal2024onpolicydistillationlanguagemodels]. We focused on comparing the performance of combining on-policy and off-policy learning through ablations of five different $\lambda$ values, as shown in Figure 5. We used `Qwen/Qwen3-4B-Instruct-2507` as the teacher and `Qwen/Qwen2.5-1.5B-Instruct` as the student. For offline learning, we generated completions to the prompts with `Qwen/Qwen3-4B-Instruct-2507` beforehand to speed up training. We set the temperature $\gamma=1$ for the student generations and used the forward KL divergence ($\beta=0$)[^f3] in $\mathcal{L}_{OD}$.
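The $\lambda$ ablation amounts to choosing, at each training step, whether the batch comes from the student's own generations or from the precomputed teacher completions. A toy sketch of that schedule (our own illustration, not the actual training loop):

```python
import random

def sample_sources(lam, steps, seed=0):
    """Simulate the lambda-mixing schedule: each training batch is drawn from
    the student's own generations with probability `lam` (on-policy) and from
    precomputed teacher completions otherwise (off-policy)."""
    rng = random.Random(seed)
    return ["student" if rng.random() < lam else "teacher" for _ in range(steps)]
```

$\lambda=0$ reduces to pure offline training on teacher data, $\lambda=1$ to fully on-policy training, and intermediate values interleave the two.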
The results confirm that using at least some degree of on-policy training outperforms the SFT setup. We also see a trend of better performance as we increase $\lambda$, with fully on-policy achieving the best overall performance. This behavior confirms the hypothesis that fully on-policy training is better than training with offline data when using models with the same tokenizer.
## On-policy distillation outperforms GRPO
On-policy distillation uses student-generated completions to progressively update the training data. Having established that this approach is superior to offline methods like SFT (when tokenizers match), we next compared it to other on-policy methods, specifically Group Relative Policy Optimization (GRPO). GRPO is an RL method introduced in the DeepSeekMath paper [@shao2024deepseekmathpushinglimitsmathematical] and later popularized by the DeepSeek-R1 release [@deepseekai2025deepseekr1incentivizingreasoningcapability].
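At the core of GRPO is a critic-free, group-relative advantage: each completion in a sampled group is scored against the group's own reward mean and standard deviation. A minimal sketch (the function name is ours; the full GRPO objective adds importance ratios, clipping, and a KL penalty on top of these advantages):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each completion's reward against its own group's statistics;
    this replaces the learned value baseline used in PPO-style methods."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

For a group in which half the completions solve the puzzle and half do not, the solvers receive positive advantages and the rest negative, and the advantages sum to zero.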
We followed [Philipp Schmid’s tutorial](https://www.philschmid.de/mini-deepseek-r1) on how to train GRPO for the Countdown task and compared it to the performance of knowledge distillation. Our reward function was a sum of three components:
These findings demonstrate that GOLD is a powerful and flexible technique for model distillation. It provides a path to distill knowledge from any high-performing teacher to any student, regardless of their tokenizer, offering a more effective and token-efficient alternative to reinforcement learning.
[^f1]: The full GKD loss is then formally defined as: $$\mathcal{L}_{GKD} := (1-\lambda) \mathbb{E}_{(x,y)\sim (X,Y)}[\mathcal{D}_{JSD(\beta)}] + \lambda \mathbb{E}_{x \sim X}[\mathbb{E}_{y \sim p_{S}(.|x)}[\mathcal{D}_{JSD(\beta)}]].$$
[^f2]: We can merge the softmax probabilities by summing the log probabilities of the merged positions in the sequence, because the probability of a multi-token span factorizes over its tokens: $$P(\text{"Hugging Face"} \mid \text{"<think>"}) = P(\text{"Hugging"} \mid \text{"<think>"}) \times P(\text{"Face"} \mid \text{"<think> Hugging"})$$ and therefore $$\log P(\text{"Hugging Face"} \mid \text{"<think>"}) = \log P(\text{"Hugging"} \mid \text{"<think>"}) + \log P(\text{"Face"} \mid \text{"<think> Hugging"})$$
app/src/content/bibliography.bib
@misc{boizard2025crosstokenizerdistillationuniversallogit,
  title={Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs},
  author={Nicolas Boizard and Kevin El Haddad and Céline Hudelot and Pierre Colombo},
  year={2025},
  eprint={2402.12030},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://huggingface.co/papers/2402.12030}
}

@misc{agarwal2024onpolicydistillationlanguagemodels,
  title={On-Policy Distillation of Language Models: Learning from Self-Generated Mistakes},
  author={Rishabh Agarwal and Nino Vieillard and Yongchao Zhou and Piotr Stanczyk and Sabela Ramos and Matthieu Geist and Olivier Bachem},
  year={2024},
  eprint={2306.13649},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://huggingface.co/papers/2306.13649}
}

@misc{gandhi2024streamsearchsoslearning,
  title={Stream of Search (SoS): Learning to Search in Language},
  author={Kanishk Gandhi and Denise Lee and Gabriel Grand and Muxin Liu and Winson Cheng and Archit Sharma and Noah D. Goodman},
  year={2024},
  eprint={2404.03683},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://huggingface.co/papers/2404.03683}
}

@misc{shao2024deepseekmathpushinglimitsmathematical,
  title={DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models},
  author={Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Xiao Bi and Haowei Zhang and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
  year={2024},
  eprint={2402.03300},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://huggingface.co/papers/2402.03300}
}

@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
  title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
  author={DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
  year={2025},
  eprint={2501.12948},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://huggingface.co/papers/2501.12948}
}