Towards On-Policy SFT: Distribution Discriminant Theory and its Applications in LLM Training
Abstract
A framework is presented that bridges the generalization gap between supervised fine-tuning and reinforcement learning by enabling on-policy fine-tuning through distribution discriminant theory and complementary techniques.
Supervised fine-tuning (SFT) is computationally efficient but often yields inferior generalization compared to reinforcement learning (RL). This gap is primarily driven by RL's use of on-policy data. We propose a framework to bridge this chasm by enabling On-Policy SFT. We first present \textit{Distribution Discriminant Theory (DDT)}, which explains and quantifies the alignment between data and the model-induced distribution. Leveraging DDT, we introduce two complementary techniques: (i) \textit{In-Distribution Finetuning (IDFT)}, a loss-level method that enhances the generalization ability of SFT, and (ii) \textit{Hinted Decoding}, a data-level technique that re-aligns the training corpus to the model's distribution. Extensive experiments demonstrate that our framework achieves generalization performance surpassing prominent offline RL algorithms, including DPO and SimPO, while maintaining the efficiency of an SFT pipeline. The proposed framework thus offers a practical alternative in domains where RL is infeasible. We open-source the code here: https://github.com/zhangmiaosen2000/Towards-On-Policy-SFT
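As a rough illustration of the loss-level idea behind IDFT, the sketch below reweights each token's SFT loss by the model's own (detached) probability of that token, so gradient updates concentrate on completions that are already in-distribution for the model. The abstract does not specify the actual objective; the weighting scheme, function name, and all other details here are assumptions for illustration, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def in_distribution_weighted_sft_loss(logits, target_ids, pad_id=-100):
    """Hypothetical sketch of a loss-level reweighting in the spirit of IDFT.

    Tokens the model already assigns high probability ("in-distribution"
    tokens) receive larger weight, nudging training toward the model's own
    distribution. This specific weighting is an assumption, not the paper's
    actual IDFT objective.
    """
    # Shift so that position t predicts token t+1, as in a standard causal LM loss.
    logits = logits[:, :-1, :]
    targets = target_ids[:, 1:]

    log_probs = F.log_softmax(logits, dim=-1)
    # Clamp padding targets to a valid index for gather; they are masked out below.
    token_logp = log_probs.gather(-1, targets.clamp(min=0).unsqueeze(-1)).squeeze(-1)

    mask = (targets != pad_id).float()
    # Weight each token by the model's own probability of it, detached so the
    # weight acts as a filter rather than contributing to the gradient.
    weights = token_logp.detach().exp()

    return -(weights * token_logp * mask).sum() / mask.sum().clamp(min=1.0)

# Usage (hypothetical): logits = model(input_ids).logits
# loss = in_distribution_weighted_sft_loss(logits, input_ids)
```

Hinted Decoding, the data-level counterpart, is described in the abstract only as re-aligning the training corpus to the model's distribution; no sketch is attempted here since its mechanism is not specified.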