(L = {30, 31, 32}). Since each layer has 32 attention heads, we effectively perform ablation over n = |H_L| = 96 features (attention heads) in total. For a given group L, we begin by estimating the function f_L using both LASSO and ProxySPEX, based on evaluations of f_L(S) for 5000 subsets S sampled uniformly at random. These estimates serve as surrogates for the true head-importance function. We then maximize the estimated functions to identify the most important attention heads under varying sparsity constraints (target numbers of retained heads). We use the procedure detailed in Section 4.2 to identify the heads to remove for both ProxySPEX and LASSO. We also compare against a Best-of-N baseline, in which the model is pruned by selecting the subset S that achieves the highest value of f_L(S) among 5000 randomly sampled subsets at the target sparsity level.

Evaluation. To evaluate the performance of an ablated model LLM_S, we measure its accuracy on the test set using

g_L(S) ≜ Accuracy of LLM_S on D_test. (15)

In Figure 9, we report the value of g_L(S) for the pruned models obtained by each method. We find that ProxySPEX consistently outperforms both baselines, yielding higher test accuracy across all evaluated sparsity levels.

Inference setup. All experiments are run on a single NVIDIA H100 GPU with batch size 50. The average runtime per ablation (i.e., evaluating f_L(S) once for a given S) is approximately 1.7 seconds. Therefore, collecting a training dataset {(S_i, f_L(S_i))} with 5000 training samples takes approximately 2.5 hours.
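The surrogate-then-prune pipeline described above can be sketched as follows. This is a minimal illustration under strong assumptions: f_L is replaced by a synthetic linear stand-in (the real f_L requires running the ablated LLM), only the LASSO surrogate is shown (ProxySPEX is omitted), and head selection simply keeps the k largest coefficients.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_heads = 96      # |H_L| = 96 attention heads across layers 30-32
n_samples = 5000  # subsets sampled uniformly at random

# Synthetic stand-in for f_L(S): a noisy linear function of the head mask.
# The real f_L would evaluate the LLM with the heads outside S ablated.
head_effect = rng.normal(size=n_heads)
def f_L(mask):
    return float(mask @ head_effect + 0.05 * rng.normal())

# Training set {(S_i, f_L(S_i))}: each row is a 0/1 head-inclusion mask.
X = rng.integers(0, 2, size=(n_samples, n_heads)).astype(float)
y = np.array([f_L(x) for x in X])

# Fit a first-order LASSO surrogate of head importance.
surrogate = Lasso(alpha=0.01).fit(X, y)

# Keep the k heads with the largest coefficients (target sparsity level).
k = 32
keep = np.argsort(surrogate.coef_)[-k:]

# Best-of-N baseline: sample subsets of exactly k heads, keep the best
# (a smaller candidate pool than the paper's 5000, for illustration).
def mask_of(subset):
    m = np.zeros(n_heads)
    m[subset] = 1.0
    return m

candidates = [rng.choice(n_heads, size=k, replace=False) for _ in range(500)]
best = max(candidates, key=lambda s: f_L(mask_of(s)))
```

With a linear stand-in, the first-order LASSO surrogate recovers the per-head effects well; the interesting regime for ProxySPEX is when f_L has strong head interactions.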
https://arxiv.org/abs/2505.17495v1
arXiv:2505.17496v1 [cs.CL] 23 May 2025

Analyzing Mitigation Strategies for Catastrophic Forgetting in End-to-End Training of Spoken Language Models

Chi-Yuan Hsiao1, Ke-Han Lu1, Kai-Wei Chang1, Chih-Kai Yang1, Wei-Chih Chen1, Hung-yi Lee1
1National Taiwan University, Taiwan
r12942086@ntu.edu.tw, d12942024@ntu.edu.tw, kaiwei.chang.tw@gmail.com, chihkaiyang1124@gmail.com, r12921120@ntu.edu.tw, hungyilee@ntu.edu.tw

Abstract
End-to-end training of Spoken Language Models (SLMs) commonly involves adapting pre-trained text-based Large Language Models (LLMs) to the speech modality through multi-stage training on diverse tasks such as ASR, TTS, and spoken question answering (SQA). Although this multi-stage continual learning equips LLMs with both speech understanding and generation capabilities, the substantial differences in task and data distributions across stages can lead to catastrophic forgetting, where previously acquired knowledge is lost. This paper investigates catastrophic forgetting and evaluates three mitigation strategies (model merging, discounting the LoRA scaling factor, and experience replay) to balance knowledge retention with new learning. Results show that experience replay is the most effective, with further gains achieved by combining it with other methods. These findings provide insights for developing more robust and efficient SLM training pipelines.

Index Terms: spoken language model, catastrophic forgetting, continual learning, model merging

1. Introduction
Inspired by the remarkable success of large language models (LLMs) [1, 2, 3, 4] in natural language processing (NLP), researchers have begun exploring spoken language models (SLMs)1 as powerful solutions for speech processing tasks.
For instance, textless SLMs [5] perform speech continuation without text supervision, while task-specific SLMs, such as VALL-E [6] for text-to-speech (TTS) and Seamless [7] for speech translation, leverage generative language modeling to achieve state-of-the-art performance. More recently, researchers have also investigated the instruction-following (IF) capability of SLMs, enabling them to tackle diverse speech processing tasks through natural language guidance. This advancement enhances the flexibility and adaptability of SLMs across various applications [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18].

Due to the high complexity of speech signals, advanced SLMs are typically built by incorporating pre-trained text LLMs rather than being trained from scratch. A common approach is to use a pre-trained text LLM as the backbone and adapt it to the speech modality, allowing it to understand and/or generate speech [13, 14, 15, 19]. These models can be broadly categorized based on how they incorporate speech into the LLM. One approach involves integrating a speech encoder with a text LLM through a projection network for representation alignment. The projection network is then optionally trained along with the LLM to familiarize it with the speech modality, as seen in models like Qwen-Audio [17, 18], SALMONN [14], and DeSTA [15, 16].

1Currently, there is no strict definition of SLMs. In this paper, we define SLMs as models that can process both text and speech as input and generate either text or speech as output, with at least one modality being speech. Without loss of generality, this paper focuses on analyzing SLMs capable of simultaneously accepting text and speech as input and producing both text and speech as output.

Figure 1: Continual training of a spoken language model using multi-stage speech processing tasks.

While these
SLMs demonstrate strong speech understanding capabilities, they cannot generate speech responses. Another approach directly integrates speech tokens (e.g., semantic tokens derived from self-supervised learning (SSL) speech models and acoustic tokens from speech codec models [20]) into the LLM, as seen in models [21, 22, 23, 24] such as Moshi and Mini-Omni. This usually requires vocabulary expansion [19, 22], where the LLM's vocabulary is extended to include both text and speech tokens. By jointly modeling text and speech tokens, these SLMs can effectively understand and generate both modalities.

To familiarize LLMs with the speech modality, multi-stage training is often employed. This involves training the LLM on speech processing tasks across several stages, each using a distinct dataset, such as ASR, TTS, and Spoken Question Answering (SQA), as shown in Fig. 1. For example, during the ASR stage, the LLM is equipped with speech understanding capabilities, while in the TTS stage, the LLM learns to generate speech. In the SQA stage, the LLM gains the ability to answer questions in speech based on spoken input.

However, due to substantial differences in tasks and data distributions across stages, catastrophic forgetting [25] may occur, causing the LLM to lose previously acquired knowledge or abilities. Although SLMs gain new speech understanding capabilities, they must also retain their original text-based knowledge for tasks such as SQA or following speech instructions. Catastrophic forgetting can degrade the performance of SLMs in both text and speech modalities.

To study this problem and explore potential solutions, we examine three widely used strategies for mitigating catastrophic forgetting in LLMs and SLMs. These strategies include (1) model merging [26, 27], (2) discounting the LoRA scaling factor [14, 28], and (3) experience replay [29, 30, 31].
Specifically, in this paper, we train an SLM based on LLaMA [3] and systematically analyze catastrophic forgetting at each training stage by evaluating its performance on question answering and instruction-following tasks in the text modality to assess knowledge retention. After applying mitigation strategies, we compare their effectiveness by evaluating SQA in the speech modality, along with the previously tested text-based tasks.

Overall, this study investigates catastrophic forgetting in SLM training and evaluates three commonly used mitigation strategies. Our experimental results show that among the three strategies examined, experience replay proves to be the most effective, significantly reducing knowledge loss while maintaining performance across both text and speech modalities.

2. Mitigation strategies
In this section, we present three common strategies for mitigating catastrophic forgetting in LLMs and SLMs, which are the focus of this paper: (1) model merging [26, 27], (2) discounting the LoRA scaling factor [32, 15], and (3) experience replay [29, 30, 31].

2.1. Model merging
Consider an SLM training process with N stages, where θ_i denotes the model weights after the i-th training stage. The complete set of model weights is given by M = {θ_0, θ_1, θ_2, ..., θ_N}, where θ_0 represents the original pre-trained model and θ_N the final SLM weights. To leverage information from multiple training stages,
we explore model merging techniques that aggregate the weights in M using several methods, including a naive linear combination, TIES [33], and DARE [34]. By applying these methods, we aim to preserve knowledge from different training stages, mitigating forgetting and enhancing the final performance.

2.2. Discounting the LoRA scaling factor
Given an input x ∈ R^{d_in} and a weight matrix W ∈ R^{d_out×d_in}, the forward pass with a LoRA adapter of rank r is given by:

y = Wx + (α/r)(BAx), (1)

where A ∈ R^{r×d_in} and B ∈ R^{d_out×r} are the weights of the adapter, and α is the scaling factor. The strategy is to set a lower α at inference time for a model fine-tuned with a LoRA adapter, weakening the adapter's effect.

2.3. Experience replay
Unlike the strategies above, experience replay is applied during model training. Suppose a pre-trained model θ_0 is initially trained on a dataset D_0 and will undergo N additional training stages. We define the set of training datasets as:

D = {D_1, D_2, ..., D_N}, (2)

where D_i represents the dataset used in the i-th training stage. At each stage i, experience replay constructs an augmented dataset D'_i by including random samples from all previous datasets as well as D_0, defined as:

D'_i = D_i ∪ ⋃_{j=0}^{i-1} Sample(D_j, s|D_i|), (3)

where Sample(D, k) randomly samples k examples from dataset D, and s is the sampling ratio, determining the proportion of each previous dataset D_j included in D'_i. Since each dataset D_i may correspond to a different task, multi-task learning is applied when training with the augmented dataset D'_i. Notably, the number of replayed samples scales with the size |D_i| of the i-th stage's dataset.

Figure 2: The architecture of a Spoken Language Model (SLM), which consists of a backbone LLM, a speech encoder that converts speech into speech tokens, and a vocoder that synthesizes the speech tokens into a speech waveform.

3. Experimental setup
3.1. Spoken language model
3.1.1. Model architecture
As shown in Figure 2, our SLM comprises three main components: a speech encoder, an LLM backbone, and a vocoder. The speech encoder extracts speech features from speech waveforms, which are subsequently quantized into discrete speech tokens via k-means clustering. These tokens are incorporated into the LLM's vocabulary for language modeling. Finally, the vocoder reconstructs the speech waveform from the generated speech tokens.

3.1.2. Training methods
We fine-tune the LLM in three stages of instruction tuning for different tasks: automatic speech recognition (ASR), text-to-speech synthesis (TTS), and spoken question answering (SQA). During fine-tuning, the loss is computed only on the model's response. Formally, let P denote a prompt and R the model's response. A text sentence T with L words is defined as T = [t_1, t_2, t_3, ..., t_L], where t_i represents the text token sequence of the i-th word. Similarly, a speech utterance S with L spoken words is defined as S = [s_1, s_2, s_3, ..., s_L], where s_i represents the speech token sequence of the i-th word. Each t_i and s_i can have varying lengths, containing a different number of text tokens and speech tokens, respectively. We outline the data formulation and methodology for each training stage:

ASR stage: To align speech and text tokens, we first train
the model on automatic speech recognition (ASR). In this stage, the model learns to generate a text transcription T_ASR given a text instruction T_I for ASR and a speech utterance S_ASR. The prompt P and the model response R are:

P = [T_I, S_ASR], R = [T_ASR], (4)

where S_ASR and T_ASR are a speech-text pair from the ASR dataset.

TTS stage: Next, the model learns speech generation using the same ASR dataset, as speech-text pairs are also required for text-to-speech synthesis (TTS). In this stage, the model generates text and speech tokens in an alternating, word-by-word interleaved manner, an approach crucial for successful training, as the model often struggles to converge without it. Given a text instruction T_I for TTS and a text sentence T_ASR, the prompt and model response are:

P = [T_I, T_ASR], R = [t_1, s_1, t_2, s_2, ..., t_L, s_L], (5)

where t_i ∈ T_ASR, s_i ∈ S_ASR ∀i = 1, 2, ..., L, and S_ASR and T_ASR are from the same speech-text pair in the ASR dataset.

SQA stage: Finally, the model learns spoken question answering (SQA) by leveraging its ASR and TTS capabilities. Given a spoken question S_Q, the model first predicts its text transcription T_Q, followed by the text answer T_A and its word-by-word interleaved text-speech representation. The prompt and response are structured as:

P = [S_Q], R = [T_Q, T_A, t_1, s_1, t_2, s_2, ..., t_L, s_L], (6)

where t_i ∈ T_A, s_i ∈ S_A ∀i = 1, 2, ..., L, and S_Q, T_Q, S_A, and T_A originate from the same SQA dataset example.

Experience replay: During training with experience replay, the data formulation follows the same structure as in each training stage, including samples from both the current and previous stages. The initial dataset D_0, representing the LLM's original training data, is assumed to contain text instruction-response pairs. If such a dataset is available, the prompt and response are formulated as:

P = [T_I], R = [T_R], (7)

where T_I and T_R are a text instruction-response pair in D_0.

3.1.3. Training details
We follow SeamlessM4T v2's settings [7] for speech token extraction and reconstruction.
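As a concrete illustration, the word-by-word interleaved response of Eqs. (5) and (6) can be sketched as below; the token IDs and the helper name build_tts_response are made up for this example and are not from the paper's code:

```python
def build_tts_response(text_words, speech_words):
    """Build R = [t1, s1, t2, s2, ..., tL, sL] from per-word token lists."""
    assert len(text_words) == len(speech_words)
    response = []
    for t_i, s_i in zip(text_words, speech_words):
        response.extend(t_i)  # text tokens of the i-th word
        response.extend(s_i)  # speech tokens of the i-th word
    return response

# Toy example with L = 2 words; t_i and s_i may have different lengths.
t = [[101, 102], [103]]           # text token sequences per word
s = [[9001, 9002, 9003], [9004]]  # speech token sequences per word
print(build_tts_response(t, s))   # [101, 102, 9001, 9002, 9003, 103, 9004]
```

Interleaving at the word level keeps each speech span locally conditioned on its own text, which is consistent with the paper's observation that training often fails to converge without it.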
Our model uses xlsr2-1b-v2 [35] as the speech encoder, with k-means clustering [36] to obtain 10,000 discrete speech tokens. The LLM is based on Llama-3.2-11B-Vision-Instruct, excluding the vision encoder. For vocoding, we adopt the pre-trained HiFi-GAN [37] from SeamlessM4T v2, which supports multiple speakers and languages. Our hyperparameters include LoRA adapters of rank r = 64 and α = 16 for the self-attention matrices, with full fine-tuning of the embedding layer, language model heads, and the last five self-attention layers. We optimize the model using the AdamW optimizer with a learning rate of 1e-5 and a warmup ratio of 0.1. In the ASR and TTS stages, we train on the whole dataset for 2 epochs with batch size 4 and a maximum sequence length of 800. In the SQA stage, we train on the whole dataset for 1 epoch with batch size 1 and a maximum sequence length of 1200. All experiments were conducted on four Nvidia RTX A6000 GPUs, with the full training process taking approximately five days.

3.2. Mitigation strategies
3.2.1. Model merging
Our training process consists of three stages, resulting in a set of model weights denoted as M = {θ_0, θ_1, θ_2, θ_3}, where θ_0, θ_1, θ_2, and θ_3 represent the model weights at the initial stage, after the ASR stage, after the TTS
stage, and after the SQA stage, respectively. The settings for each model merging method are as follows:

Linear Combination: weight = [0.02, 0.03, 0.05, 0.9]
TIES: weight = [-, 0.04, 0.06, 0.9], density = [-, 0.9, 0.9, 0.9], BaseModel = θ_0
DARE: weight = [-, 0.04, 0.06, 0.9], density = [-, 0.9, 0.9, 0.9], BaseModel = θ_0

Discounting the LoRA scaling factor: Restricted by the computation budget, we choose only α = 15 and α = 14 as LoRA adapter hyperparameters for evaluation.

Experience replay: We evaluate two training settings: with and without experience replay across all training stages. When applying experience replay, we use a subsampling ratio of s = 0.005. For the initial dataset D_0, the LLM's original training data, we select a text instruction-tuning dataset generated by the same LLM, assuming that its distribution closely approximates that of the original training data.

Mixed strategy: In our evaluation, we also apply additional mitigation strategies to models trained with experience replay, resulting in two additional baseline settings: "model merging after experience replay" and "discounting the LoRA scaling factor after experience replay". All mitigation strategy parameters remain consistent with the previous settings.

3.3. Datasets
This section describes the datasets, benchmarks, and preprocessing methods used for training and evaluation.

3.3.1. Training
We use LibriSpeech 960-hour [38] for ASR and TTS training and Magpie-Air [39]2 for SQA and experience replay.

In the ASR stage, LibriSpeech is used for training, with each example assigned one of ten randomly selected instructions, such as "Please repeat the following words:".

In the TTS stage, LibriSpeech is also used, with randomly assigned instructions like "Please speak out loud the following words:". To generate word-by-word text-speech interleaved sequences, we align text transcriptions and speech utterances at the word level using Whisper-Timestamped3.

In the SQA stage, Magpie-Air is used for training.
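The replay construction of Eq. (3) with the subsampling ratio s = 0.005 can be sketched as below; the dataset contents and helper names are toy stand-ins, not the paper's code:

```python
import random

def sample_k(dataset, k, rng):
    """Sample(D, k): draw k random examples from dataset D."""
    return rng.sample(dataset, min(k, len(dataset)))

def build_replay_dataset(current, previous, s, rng):
    """D'_i = D_i union of Sample(D_j, s*|D_i|) over previous stages j, per Eq. (3)."""
    augmented = list(current)
    n_replay = max(1, int(s * len(current)))  # replayed samples per previous dataset
    for d_j in previous:
        augmented.extend(sample_k(d_j, n_replay, rng))
    return augmented

rng = random.Random(0)
d0 = [("text-inst", i) for i in range(1000)]  # stand-in for D_0
d1 = [("asr", i) for i in range(1000)]        # stand-in for the ASR stage
d2 = [("sqa", i) for i in range(2000)]        # current (SQA) stage data
aug = build_replay_dataset(d2, [d0, d1], s=0.005, rng=rng)
# 2000 current examples plus 10 replayed from each previous dataset
print(len(aug))  # 2020
```

With s = 0.005, the replay overhead is tiny relative to the current stage, which is consistent with replay being cheap to apply at every stage.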
We filter examples based on length and topic, removing categories like math and coding that are challenging to describe in speech. Speech questions and answers are synthesized from text using SpeechT5 [40], creating both text and speech versions of QA pairs. Whisper-Timestamped is used to align answers and generate interleaved text-speech token sequences.

When applying experience replay, we use Magpie-Air as D_0, randomly sampling text instruction-response pairs for training. Since Magpie-Air is constructed by prompting Llama 3 8B, an LLM from the same series as ours, it serves as a suitable dataset for experience replay.

3.3.2. Evaluation
To assess the model's learned capabilities, we evaluate spoken question answering (SQA) in the speech modality. To measure catastrophic forgetting in the SLM, we evaluate question answering (QA) and instruction following in the text modality.

Spoken question answering: Following the evaluation settings of Moshi and Spectron [41], we use three datasets for SQA: (1) Spoken WebQuestions, (2) LLaMA-Questions, and (3) Audio Trivia QA. SQA is evaluated under two settings: speech-to-text (S2T), where accuracy is computed by directly matching

2Magpie-Align/Llama-3-Magpie-Air-3M-v0.1
3https://github.com/linto-ai/whisper-timestamped

Table 1: Accuracies (%) for various strategies to mitigate catastrophic forgetting. Merge: Model merging. Scaling: Scaling the LoRA factor. w/ R: With Experience Replay. The results are reported across four datasets: LLaMA, Web, Trivia, and IFEval, with each dataset further evaluated on different tasks (T2T, S2T, S2S).
text responses, and speech-to-speech (S2S), where the speech response is first transcribed with Whisper-large-v3 and considered correct if the transcription contains the correct answer.

Mitigation Strategy      LLaMA                Web                  Trivia               IFEval (T2T)
                         T2T   S2T   S2S      T2T   S2T   S2S      T2T   S2T   S2S      Prompt  Instr.
Original                 70.0  -     -        61.5  -     -        78.9  -     -        67.1    77.1
None                     14.3  7.3   8.0      3.6   1.5   0.8      6.0   3.1   1.9      9.2     20.1
Merge (Linear)           19.3  9.0   7.7      5.6   1.1   0.6      7.8   3.9   1.1      8.5     19.3
Merge (TIES)             12.7  6.3   2.7      3.7   0.7   0.0      4.9   2.0   0.4      10.9    22.2
Merge (DARE)             4.0   6.0   1.7      1.2   0.7   0.0      1.3   2.0   0.3      11.8    23.1
Scaling (α=15)           15.0  7.0   6.7      4.5   1.3   1.1      5.9   3.9   2.3      9.1     18.6
Scaling (α=14)           16.0  7.3   6.7      4.0   1.2   0.3      0.3   3.3   1.9      7.9     18.3
Replay                   66.3  50.3  28.7     55.2  24.2  9.1      66.4  25.2  11.1     47.5    57.9
Merge (Linear) w/ R      68.0  44.7  16.7     56.4  19.2  3.6      68.8  16.3  5.2      50.3    61.3
Merge (TIES) w/ R        66.7  42.0  17.0     55.3  17.6  2.7      67.5  15.9  4.8      50.1    60.0
Merge (DARE) w/ R        69.3  40.0  15.7     53.1  18.4  3.0      64.0  14.3  3.8      43.8    52.2
Scaling (α=15) w/ R      68.7  52.7  28.7     54.4  24.9  8.0      68.6  25.4  11.2     43.7    59.4
Scaling (α=14) w/ R      68.7  50.7  28.7     55.3  25.5  6.9      66.5  27.2  11.1     49.0    60.6

Figure 3: Evaluation results on instruction-following and question answering. LLaMA, Web, and Trivia denote LLaMA-Questions, Spoken WebQuestions, and Audio Trivia QA. IFEval-P and IFEval-I stand for IFEval at the prompt level and instruction level. w/ R means with experience replay.

Question answering: We use the same datasets as in SQA but evaluate only in the text modality (Text-to-Text, T2T). Accuracy is used as the evaluation metric.

Instruction following: We use IFEval [42] for evaluating instruction following in the text modality (T2T setting). IFEval computes accuracy at both the prompt and instruction level to assess the model's ability to follow instructions accurately.

4. Results
4.1. Catastrophic forgetting
Fig. 3 shows the evaluation results on instruction-following and question answering at each training stage in the T2T setting.
For the SLM without any mitigation strategy, catastrophic forgetting clearly appears during training: as training progresses, accuracy on both evaluation tasks decreases to varying degrees. A clear gap between the ASR stage and the TTS stage can be observed on both evaluation tasks, showing that the most serious forgetting occurs in the TTS stage. Interestingly, accuracy on question answering grows slightly in the SQA stage. We assume this is because the SLM recalls some of its knowledge from the text sequence at this stage, even if only to a small extent. For the SLM trained with experience replay, although some forgetting still occurs during training, its extent is mitigated significantly compared to training without experience replay.

4.2. Mitigation strategies
Table 1 shows the results for all mitigation strategies. From the results, there are several findings:

(1) Experience replay surpasses the other strategies: According to the results, experience replay surpasses all the
other single strategies on the tasks evaluated for mitigation (T2T) as well as on the new abilities (S2T, S2S).

(2) Mixed strategies can further boost performance: Compared to experience replay alone, mixed strategies can achieve better performance on the new abilities in the S2T setting in some cases. However, mixing with discounting the LoRA scaling factor is more robust than mixing with model merging.

(3) Experience replay remains robust in every setting: Although all mitigation strategies yield some improvement in almost all settings, only experience replay remains robust in the S2S setting and surpasses the other strategies.

5. Conclusion
This paper investigates mitigation strategies for continual learning in developing spoken language models (SLMs) from large language models (LLMs). The results demonstrate that experience replay is the most effective method, with further performance gains achievable by combining it with other techniques. Through a case study, we highlight catastrophic forgetting as a significant challenge and showcase the potential of these strategies to address it. Future work will involve more comprehensive studies, including diverse training pipelines, models, and various strategies, to inspire the speech community to develop more efficient training methods.

6. References
[1] OpenAI, "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023.
[2] G. Team et al., "Gemini: A family of highly capable multimodal models," arXiv preprint arXiv:2312.11805, 2023.
[3] A. Dubey et al., "The Llama 3 herd of models," arXiv preprint arXiv:2407.21783, 2024.
[4] A. Yang et al., "Qwen2.5 technical report," arXiv preprint arXiv:2412.15115, 2024.
[5] K. Lakhotia et al., "On generative spoken language modeling from raw audio," vol. 9, pp. 1336–1354, 2021.
[6] S. Chen et al., "Neural codec language models are zero-shot text to speech synthesizers," IEEE Transactions on Audio, Speech and Language Processing, vol. 33, pp. 705–718, 2025.
[7] L. Barrault et al.
, "Seamless: Multilingual expressive and streaming speech translation," arXiv preprint arXiv:2312.05187, 2023.
[8] C.-Y. Huang et al., "Dynamic-SUPERB phase-2: A collaboratively expanding benchmark for measuring the capabilities of spoken language models with 180 tasks," 2024. [Online]. Available: https://arxiv.org/abs/2411.05361
[9] C.-Y. Huang et al., "Dynamic-SUPERB: Towards a dynamic, collaborative, and comprehensive instruction-tuning benchmark for speech," in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024, pp. 12136–12140.
[10] S. Arora, K.-W. Chang et al., "On the landscape of spoken language models: A comprehensive survey," arXiv preprint arXiv:2504.08528, 2025.
[11] S. Arora et al., "UniverSLU: Universal spoken language understanding for diverse tasks with natural language instructions," in Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), K. Duh, H. Gomez, and S. Bethard, Eds., Jun. 2024, pp. 2754–2774.
[12] J. Tian et al., "ESPnet-SpeechLM: An open speech language model toolkit," in Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (System Demonstrations), Apr. 2025, pp. 116–124.
[13] Y.
Gong et al., "Joint audio and speech understanding," in 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE, 2023.
[14] C. Tang et al., "SALMONN: Towards generic hearing abilities for large language models," in The Twelfth International Conference on Learning Representations, 2024.
[15] K.-H. Lu et al., "DeSTA: Enhancing speech language models through descriptive speech-text alignment," in Proc. Interspeech 2024, 2024, pp. 4159–4163.
[16] K.-H. Lu, Z. Chen, S.-W. Fu, C.-H. H. Yang, J. Balam, B. Ginsburg, Y.-C. F. Wang, and H.-y. Lee, "Developing instruction-following speech language model without speech instruction-tuning data," arXiv preprint arXiv:2409.20007, 2024.
[17] Y. Chu et al., "Qwen-Audio: Advancing universal audio understanding via unified large-scale audio-language models," arXiv preprint arXiv:2311.07919, 2023.
[18] Y. Chu, J. Xu, Q. Yang, H. Wei, X. Wei, Z. Guo, Y. Leng, Y. Lv, J. He, J. Lin et al., "Qwen2-Audio technical report," arXiv preprint arXiv:2407.10759, 2024.
[19] D. Zhang et al., "SpeechGPT: Empowering large language models with intrinsic cross-modal conversational abilities," in Findings of the Association for Computational Linguistics: EMNLP 2023, Dec. 2023, pp. 15757–15773.
[20] Z. Borsos et al., "AudioLM: A language modeling approach to audio generation," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 2523–2533, 2023.
[21] A. Défossez et al., "Moshi: A speech-text foundation model for real-time dialogue," arXiv preprint arXiv:2410.00037, 2024.
[22] Z. Xie and C. Wu, "Mini-Omni: Language models can hear, talk while thinking in streaming," arXiv preprint arXiv:2408.16725, 2024.
[23] A. Zeng et al., "GLM-4-Voice: Towards intelligent and human-like end-to-end spoken chatbot," arXiv preprint arXiv:2412.02612, 2024.
[24] C.-K. Yang et al.
, "Building a Taiwanese Mandarin spoken language model: A first attempt," arXiv preprint arXiv:2411.07111, 2024.
[25] I. J. Goodfellow et al., "An empirical investigation of catastrophic forgetting in gradient-based neural networks," arXiv preprint arXiv:1312.6211, 2013.
[26] E. Yang et al., "Model merging in LLMs, MLLMs, and beyond: Methods, theories, applications and opportunities," arXiv preprint arXiv:2408.07666, 2024.
[27] Y. Lin et al., "Mitigating the alignment tax of RLHF," in Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024, pp. 580–606.
[28] K.-H. Lu et al., "DeSTA: Enhancing speech language models through descriptive speech-text alignment," in Interspeech 2024, 2024, pp. 4159–4163.
[29] D. Rolnick et al., "Experience replay for continual learning," Advances in Neural Information Processing Systems, vol. 32, 2019.
[30] J. Zheng et al., "Lifelong learning of large language model based agents: A roadmap," arXiv preprint arXiv:2501.07278, 2025.
[31] X. Zhang et al., "VQACL: A novel visual question answering continual learning setting," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 19102–19112.
[32] C. Tang et al., "SALMONN: Towards generic hearing abilities for large language models," in The Twelfth International Conference on Learning Representations.
[33] P. Yadav, D. Tam et al., "TIES-Merging: Resolving interference when merging models," in Advances in Neural Information Processing Systems, A. Oh et al., Eds., vol. 36. Curran Associates, Inc., 2023, pp. 7093–7115.
[34] L. Yu et al., "Language models are super Mario: Absorbing abilities from homologous models as a free lunch," in Forty-first International Conference on Machine Learning.
[35] A. Conneau et al., "Unsupervised cross-lingual representation learning for speech recognition," in Interspeech 2021, 2021, pp. 2426–2430.
[36] K. Krishna and M. Narasimha Murty, "Genetic k-means algorithm," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 29, no. 3, pp. 433–439, 1999.
[37] J. Kong et al., "HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis," Advances in Neural Information Processing Systems, vol. 33, pp. 17022–17033, 2020.
[38] V. Panayotov et al., "LibriSpeech: An ASR corpus based on public domain audio books," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 5206–5210.
[39] Z. Xu et al., "Magpie: Alignment data synthesis from scratch by prompting aligned LLMs with nothing," arXiv preprint arXiv:2406.08464, 2024.
[40] J. Ao, R. Wang et al., "SpeechT5: Unified-modal encoder-decoder pre-training for spoken language processing," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 2022, pp. 5723–5738.
[41] E. Nachmani et al., "Spoken question answering and speech continuation using spectrogram-powered LLM," arXiv preprint arXiv:2305.15255, 2023.
[42] J. Zhou et al., "Instruction-following evaluation for large language models," arXiv preprint arXiv:2311.07911, 2023.
arXiv:2505.17503v1 [cs.CL] 23 May 2025

CReSt: A Comprehensive Benchmark for Retrieval-Augmented Generation with Complex Reasoning over Structured Documents

Minsoo Khang∗ (Upstage AI, mkhang@upstage.ai), Sangjun Park∗ (Upstage AI, sangjun@upstage.ai), Teakgyu Hong (Upstage AI, teakgyu.hong@upstage.ai), Dawoon Jung† (Upstage AI, dawoon@upstage.ai)

Abstract
Large Language Models (LLMs) have made substantial progress in recent years, yet evaluating their capabilities in practical Retrieval-Augmented Generation (RAG) scenarios remains challenging. In practical applications, LLMs must demonstrate complex reasoning, refuse to answer appropriately, provide precise citations, and effectively understand document layout. These capabilities are crucial for advanced task handling, uncertainty awareness, maintaining reliability, and structural understanding. While some prior works address these aspects individually, there is a need for a unified framework that evaluates them collectively in practical RAG scenarios. To address this, we present CReSt (A Comprehensive Benchmark for Retrieval-Augmented Generation with Complex Reasoning over Structured Documents), a benchmark designed to assess these key dimensions holistically. CReSt comprises 2,245 human-annotated examples in English and Korean, designed to capture practical RAG scenarios that require complex reasoning over structured documents. It also introduces a tailored evaluation methodology to comprehensively assess model performance in these critical areas. Our evaluation shows that even advanced LLMs struggle to perform consistently across these dimensions, underscoring key areas for improvement. We release CReSt to support further research and the development of more robust RAG systems. The dataset and code are available at: https://github.com/UpstageAI/CReSt.
∗Equal contribution. †Corresponding author. Preprint.

1 Introduction

Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide range of applications (OpenAI et al., 2024; Meta, 2024), yet they still exhibit limitations. To address these shortcomings, Retrieval-Augmented Generation (RAG) has emerged as a key paradigm, enhancing LLM performance by grounding responses in external knowledge sources (Gao et al., 2023c). RAG pipelines often rely on web pages or PDF documents as external sources of knowledge (Tan et al., 2025; Tanaka et al., 2025), and these applications require complex reasoning over semi-structured documents (Fan et al., 2024). To effectively serve such use cases, LLMs must possess a combination of critical skills: (1) the ability to perform complex reasoning, (2) the ability to appropriately refuse to answer when the provided information is insufficient, (3) the ability to cite the supporting document as evidence, and (4) the ability to understand documents in structured formats, such as HTML. In this paper, we introduce a benchmark designed to holistically evaluate multiple capabilities of LLMs in document-based RAG scenarios. CReSt is constructed entirely from scratch using realistic source documents, unlike prior benchmarks that rely on refining existing datasets or draw heavily from overlapping sources such as Wikipedia. These documents are parsed into both plain-text and HTML formats, based on which we generate question-answer (QA) pairs that reflect the complexity and reasoning demands of real-world applications. For comprehensive evaluation, we also include refusal cases (Chen et al., 2024), where questions are unanswerable due to insufficient information. Additionally, each QA pair is annotated with explicit citations that identify the supporting documents, facilitating precise grounding and verifiability. To generate complex reasoning questions, we design a QA generation method that synthesizes challenging examples in both
English and Korean. These questions are subsequently revised by human annotators for quality assurance. Through our evaluation of several state-of-the-art LLMs, we observe that many models struggle with the CReSt benchmark, particularly in deciding whether or not to refuse. Furthermore, we demonstrate that incorporating recently proposed reasoning strategies leads to performance gains, underscoring CReSt's sensitivity to reasoning proficiency. We hope our work contributes to the development of advanced RAG applications.

2 Related Works

Table 1: Comparison of Document-based RAG Benchmarks. Multi-Dimensional Reasoning: benchmark requires multiple types of reasoning. Structured Input: documents are provided in forms other than plain text. * represents cases where the condition differs for different subsets of the benchmark.

Benchmark | Multi-Dimensional Reasoning | Structured Input | Citation | Refusal | Language Coverage
HotpotQA (Yang et al., 2018) | Yes | No | Yes | No | English
FEVER (Thorne et al., 2018) | No | No | Yes | Yes | English
KILT (Petroni et al., 2021) | Yes | No | Yes | No* | English
ALCE (Gao et al., 2023a) | No | No | Yes | No | English
RGB (Chen et al., 2024) | Yes | No | No | Yes | English, Chinese
CRUD-RAG (Lyu et al., 2024) | Yes | No | No | No | Chinese
UDA (Hui et al., 2024) | Yes | Yes | No | No | English
FRAMES (Krishna et al., 2025) | Yes | Yes | No | No | English
CReSt (Ours) | Yes | Yes | Yes | Yes | English, Korean

RAG has rapidly emerged as a crucial paradigm for addressing the inherent limitations of LLMs, such as hallucinations and insufficient grounding in factual knowledge (Gao et al., 2023c). While various benchmarks have been introduced to assess RAG systems, existing studies typically focus on limited subsets of criteria, without considering real-world complexities that demand holistic evaluation of RAG systems, including complex reasoning, answer refusal capability, citation accuracy, and the ability to process diverse document formats.
Table 1 shows a comparison of CReSt against other benchmarks across various criteria. Most existing benchmarks, such as HotpotQA (Yang et al., 2018), FEVER (Thorne et al., 2018), and KILT (Petroni et al., 2021), predominantly utilize plain-text documents. However, real-world applications often involve HTML documents, which encode structural and formatting information. Extending these benchmarks to include HTML-formatted documents would enhance their applicability in practical RAG scenarios. Recent benchmarks, such as ALCE (Gao et al., 2023a), RGB (Chen et al., 2024), and FRAMES (Krishna et al., 2025), have introduced tasks targeting different aspects of RAG evaluation. ALCE focuses on citation generation for long-form questions; RGB assesses robustness against noisy or misleading contexts; and FRAMES incorporates multiple reasoning dimensions, including tabular, temporal, and numerical reasoning. While these benchmarks address important aspects critical to RAG evaluation, they still fall short of providing a holistic evaluation framework for practical deployments.

Figure 1: Dataset construction pipeline.

In contrast, our benchmark addresses these significant gaps by evaluating integrated capabilities essential for realistic, complex RAG applications. We incorporate both plain-text and HTML documents to better reflect real-world knowledge sources, systematically generate complex reasoning questions, mandate citation accuracy, and include
explicit scenarios for refusal when valid answers are unavailable. By combining these elements, our work presents a more comprehensive and robust framework for evaluating advanced RAG systems, thereby contributing meaningfully towards their practical deployment.

3 Dataset

CReSt is a RAG benchmark comprising over 2,000 data instances in both English and Korean. An overview of the dataset construction pipeline is provided in Figure 1. Each instance is structured as: <Document chunk(s), Query, Answer, Citation indices>. The document chunks include both plain-text and HTML formats and are categorized into positive and negative chunks. Positive chunks contain relevant evidence necessary to answer the query, while negative chunks cover similar topics but do not contain the necessary information. CReSt comprises over 20,000 document chunks, with a well-balanced distribution between plain-text and HTML representations (~55% HTML). The benchmark includes citation labels that identify the specific positive chunks associated with each question-answer pair, enabling precise referencing and verification of the evidence. To ensure broad document-domain coverage in both English and Korean, CReSt sources raw documents from two publicly available collections: PDF files from Common Crawl (CC-MAIN) for English and crawled document images from the National Assembly Library (NANET) of Korea for Korean. From each source, 2,000 documents are randomly sampled to initiate the dataset construction process. These are subsequently converted into both HTML and plain-text representations using a publicly accessible document-to-text conversion tool [3].

3.1 Data Preprocessing

As CC-MAIN offers broad coverage in both language and document type, filtering is necessary to isolate high-quality English content from CC-MAIN. To achieve this, we employ FastText's language identification model (lid.176.bin) to detect the primary language of each CC-MAIN document based on its textual content.
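A minimal sketch of such a language-filtering step. The predictor is assumed to follow fastText's interface (text in, `(labels, probabilities)` out, as returned by `fasttext.load_model("lid.176.bin").predict`); the 0.9 confidence threshold and the `fake_predict` stand-in are illustrative assumptions, not the paper's actual settings.

```python
def filter_english(docs, predict, threshold=0.9):
    """Keep documents whose top predicted language is English with high confidence.

    `predict` follows fastText's interface: text -> (labels, probabilities),
    e.g. fasttext.load_model("lid.176.bin").predict in the real pipeline.
    The 0.9 threshold is an illustrative assumption.
    """
    kept = []
    for text in docs:
        # fastText rejects newlines in its input, so collapse them first.
        labels, probs = predict(text.replace("\n", " "))
        if labels[0] == "__label__en" and probs[0] >= threshold:
            kept.append(text)
    return kept

# Stand-in predictor so the sketch runs without the model file.
def fake_predict(text):
    if "the" in text:
        return (["__label__en"], [0.98])
    return (["__label__ko"], [0.95])

docs = ["the quick brown fox jumps", "안녕하세요, 한국어 문서입니다"]
print(filter_english(docs, fake_predict))  # -> ['the quick brown fox jumps']
```

Passing the predictor in as a callable keeps the filtering logic testable without downloading the 126 MB model file.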
Only those confidently classified as English are retained for subsequent stages of the pipeline, ensuring that the English subset of CReSt remains linguistically consistent. Once the language-consistent documents are identified, each is parsed into HTML and plain-text formats, the two most commonly used representations of document text. First, for the HTML format, the documents are processed using a document-to-text converter. This tool not only extracts the raw textual content but also preserves essential layout and structural metadata, such as font size, text positioning, and HTML element categories inferred from the visual document. Next, for the plain-text format, each HTML representation is converted into its plain-text equivalent using the html2text package [4]. This results in dual-format representations, HTML and plain text, for every document in the dataset. By maintaining both formats, CReSt better reflects the diversity and complexity of document-augmented QA scenarios encountered in practical RAG applications.

[3] Upstage's Document Parse v240910 was used in this work.
[4] https://github.com/Alir3z4/html2text

Following textualization, both the HTML and plain-text representations are segmented into randomly sized chunks, ranging from 2,048 to 16,384 in length. This chunking strategy is designed to simulate the natural variability of document lengths encountered in real-world applications, while also supporting chunk-based retrieval and generation tasks. The resulting preprocessed chunks are stored in a database and serve as the foundation for downstream stages of the dataset construction, including question generation and negative document sampling.

3.2 QA Generation

Based on the
HTML and plain-text chunks extracted from the source documents, CReSt adopts a multi-stage QA generation curriculum designed to systematically produce both 'simple' and 'complex' reasoning question-answer (QA) pairs. This curriculum facilitates a comprehensive evaluation of an LLM's document-based RAG capabilities by covering a broad range of reasoning scenarios. Before describing the stages of the QA generation process, we first formalize the definition of a reasoning QA pair in CReSt and introduce the criteria used to distinguish between 'simple' and 'complex' reasoning instances. In CReSt, a reasoning QA pair refers to a question that requires one or more of the following reasoning skills to arrive at a correct answer: numerical reasoning, tabular reasoning, multi-constraint reasoning, temporal reasoning, and format reasoning, adapted from the reasoning taxonomy introduced in Krishna et al. (2025). In addition, we introduce textual reasoning, defined as reasoning that requires advanced reading comprehension of the semantics conveyed in the given document. A QA pair is categorized as 'simple' if it involves only a single type of reasoning from the list above, whereas it is considered 'complex' if it requires a combination of different reasoning types. Generating reasoning-oriented QA pairs entirely by hand is both costly and difficult to scale, while relying solely on model-based generation introduces a risk of factual inaccuracies and reasoning errors. To address these challenges, CReSt adopts a multi-stage QA generation curriculum that guides the generation process in a structured and systematic manner. This curriculum not only facilitates the production of diverse simple and complex QA pairs, but also enables greater control over reasoning types and ensures broad coverage of reasoning patterns. To ensure the correctness and reliability of the generated QA pairs, each example is subsequently validated through human verification.
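Under these definitions, the simple/complex split reduces to counting distinct reasoning types. A small sketch of that rule (function and label names are illustrative, not from the paper's code):

```python
# The six reasoning types used in CReSt: five adapted from Krishna et al.
# (2025) plus the newly introduced textual reasoning.
REASONING_TYPES = {
    "numerical", "tabular", "multi-constraint",
    "temporal", "format", "textual",
}

def qa_difficulty(reasoning_types):
    """'simple' if exactly one reasoning type is required,
    'complex' if a combination of different types is required."""
    types = set(reasoning_types)
    unknown = types - REASONING_TYPES
    if unknown:
        raise ValueError(f"unknown reasoning types: {unknown}")
    if not types:
        raise ValueError("a reasoning QA pair needs at least one reasoning type")
    return "simple" if len(types) == 1 else "complex"

print(qa_difficulty(["tabular"]))                # -> simple
print(qa_difficulty(["numerical", "temporal"]))  # -> complex
```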
The curriculum consists of four stages: key-value pair generation, BasicQA generation, SimpleQA generation, and ComplexQA generation. The core principle of this pipeline is to begin by extracting key information or evidence from the source documents, then formulating basic (query-and-fetch) natural language questions. These basic questions serve as the foundation for the SimpleQA stage, where a single reasoning type is applied. Finally, multiple simple reasoning question-answer pairs are combined and composed to construct ComplexQA pairs that require multi-type reasoning to arrive at the correct answer. QA pairs were generated using GPT-4o (OpenAI, 2024), following a multi-stage QA generation curriculum implemented in a multi-turn conversational format. The model is prompted iteratively across different stages of the pipeline to progressively generate question-answer pairs of increasing complexity. The full set of prompts used in each stage is provided in Appendix B.2. At the beginning of the curriculum, for each generation instance, 1 to 5 chunks are randomly sampled from each document, forming the contextual basis for key-value pair generation, which serves as the foundation for downstream QA construction. Each multi-turn conversation is conducted separately for the HTML and plain-text representations of the selected document chunks.

3.3 Negative Document Chunk Retrieval

After generating simple and complex reasoning QA pairs, each conditioned on 1 to 5 chunks from a single document, negative document chunks are retrieved and appended to simulate real-world retrieval challenges, where
similar yet irrelevant content may be included. These negative chunks act as retrieval noise, challenging the model's ability to ground its answers in the correct evidence. Negative candidates are retrieved based on semantic similarity in the document embedding space. Specifically, we compute the top-k nearest neighbors using embeddings generated by a publicly available embedding model [5]. For each QA instance, the value of k is set such that the combined number of positive (relevant) and negative (irrelevant) chunks sums to 10, ensuring a consistent retrieval setting across all examples.

3.4 Human Verification

To ensure the quality, factual correctness, and evidence alignment of the generated QA pairs, each instance in CReSt undergoes a final human verification step. Human annotators are presented with the question, its corresponding answer, and the set of document chunks (both positive and negative) used during generation. For each QA pair, two types of verification are performed concurrently: (1) Answerability check, which determines whether the question can be answered based on the provided document chunks; and (2) Answer and evidence validation, where annotators assess the correctness of the QA pair and provide citation labels for the corresponding evidence in the document chunks.

Answerability Check. Annotators assess whether the question can be accurately answered using the provided document chunks. If insufficient information is available to support a valid answer, the QA pair is labeled as unanswerable. These unanswerable QA pairs are subsequently used to evaluate the model's refusal capability. To improve data curation efficiency, annotators are instructed to make minor corrections to the question, when appropriate, if such adjustments render a previously unanswerable QA pair answerable.
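The negative-chunk retrieval of Section 3.3 can be sketched as follows. The specific embedding model is not assumed here; we assume L2-normalized vectors so a dot product equals cosine similarity, and the toy query/positive setup is purely illustrative.

```python
import numpy as np

def retrieve_negatives(query_emb, chunk_embs, positive_ids, total=10):
    """Return nearest-neighbor negative chunk indices chosen so that
    len(positive_ids) + len(negatives) == total, as in CReSt."""
    k = total - len(positive_ids)
    positives = set(positive_ids)
    sims = chunk_embs @ query_emb        # cosine similarity (normalized vectors)
    ranked = np.argsort(-sims)           # most similar chunks first
    negatives = [int(i) for i in ranked if int(i) not in positives]
    return negatives[:k]

# Toy example with random normalized embeddings.
rng = np.random.default_rng(0)
embs = rng.normal(size=(50, 8))
embs /= np.linalg.norm(embs, axis=1, keepdims=True)
query = embs[3]                          # pretend the QA pair came from chunk 3
negs = retrieve_negatives(query, embs, positive_ids=[3, 7])
print(len(negs))                         # -> 8
```

Fixing the total at 10 means harder instances (more positives) automatically receive fewer distractors, keeping the retrieval setting uniform across examples.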
Answer and Evidence Validation. Annotators verify the alignment between the question and answer by reviewing all document chunks (without access to their positive or negative labels). If the question or answer is found to be inaccurate, misaligned, or in need of correction, annotators make the appropriate revision to the QA pair. Additionally, during the verification or revision process, annotators assign evidence labels to identify the supporting chunk(s) from the set of document chunks. This step not only verifies alignment between the answer and its supporting chunk but also facilitates evaluation of the model's citation capability.

4 Benchmarks and Evaluations

CReSt provides a holistic evaluation of key capabilities required in practical RAG scenarios, including answer correctness, accurate citation, and appropriate refusal when information is insufficient.

4.1 Metrics

In our dataset, answers are free-form texts rather than a single word or a selection from multiple choices. Therefore, automated lexical overlap metrics (e.g., Lexicon F1 or BLEU) are not suitable for evaluating this task. Instead, to comprehensively assess the model's answer quality, we adopt an LLM-as-a-judge (Zheng et al., 2023) framework. Specifically, given a gold answer and a model-predicted answer, we query an external LLM to evaluate the semantic equivalence between them. The answers are classified as Correct (fully aligned with the gold answer), Partially Correct (contains only part of the required information), or Wrong (missing essential content or contradictory). This 3-way judgment scheme is illustrated in Appendix B.3. To assess the model's ability to handle unanswerable questions, we explicitly
prompt it to generate a pre-defined refusal statement, "I cannot answer because the question is unanswerable with the documents.", when the information provided is not sufficient to answer the question, following the approach introduced by Chen et al. (2024). We then use the presence or absence of this statement to evaluate Refusal Accuracy. From the perspective of evaluating refusal ability, it is equally important in practical scenarios that models do not refuse when sufficient context is provided. Therefore, we propose a unified evaluation metric that assesses the model's capability across both answerable and unanswerable cases. We define six possible scenarios based on the gold and predicted response types, and assign a Unified Score to each case as shown in Table 2. This scoring scheme ensures that overly conservative models, those that frequently refuse to answer, do not receive high scores, as refusals in answerable cases are penalized. We compute the final score as the arithmetic mean of the non-refusal and refusal scores.

Table 2: Unified Score scoring scheme based on gold and predicted outputs

Gold | Predicted | Score
Non-Refusal | Correct | 1.0
Non-Refusal | Partially Correct | 0.5
Non-Refusal | Wrong | 0.0
Non-Refusal | Refusal | -1.0
Refusal | Refusal | 1.0
Refusal | Non-Refusal | 0.0

In addition to answer correctness and refusal handling, the ability to identify the document containing the supporting evidence is also crucial in practical RAG scenarios. We define citation ability as the model's capacity to generate source references (e.g., [1], [2], etc.) appended to the answer, indicating the corresponding document chunk(s) containing the evidence. We evaluate citation ability using Citation Precision = |P ∩ G| / |P| and Citation Recall = |P ∩ G| / |G|, as introduced by Gao et al. (2023b). Unlike their NLI-based method, we leverage gold citation annotations and compute these metrics through direct set-based comparison.

[5] Upstage's Embedding model was used in this work.
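The per-example scoring of Table 2 and the set-based citation metrics (with P the predicted and G the gold citation set) can be sketched as below. Function names and the empty-set convention are our assumptions; the refusal check simply tests for the pre-defined refusal statement.

```python
REFUSAL = "I cannot answer because the question is unanswerable with the documents."

def is_refusal(answer: str) -> bool:
    """A prediction counts as a refusal if it contains the fixed statement."""
    return REFUSAL in answer

def unified_score(gold: str, predicted: str) -> float:
    """Per-example score following Table 2.

    gold is 'refusal' or 'non-refusal'; for non-refusal gold answers,
    predicted is the judge's 3-way verdict or 'refusal'.
    """
    if gold == "refusal":
        return 1.0 if predicted == "refusal" else 0.0
    return {"correct": 1.0, "partially correct": 0.5,
            "wrong": 0.0, "refusal": -1.0}[predicted]

def citation_scores(predicted, gold):
    """Citation Precision = |P∩G|/|P| and Recall = |P∩G|/|G| via direct set
    comparison (0.0 for an empty set, an assumed edge-case convention)."""
    P, G = set(predicted), set(gold)
    hits = len(P & G)
    precision = hits / len(P) if P else 0.0
    recall = hits / len(G) if G else 0.0
    return precision, recall

print(unified_score("non-refusal", "partially correct"))  # -> 0.5
print(citation_scores([1, 2, 4], [1, 2]))  # precision 2/3, recall 1.0
```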
Here, G denotes the set of gold citations and P denotes the set of predicted citations.

4.2 Experimental Setups

In our experiments, we employed a diverse set of open-source and proprietary models. For the open-source models, we utilized the Qwen2.5 (Alibaba, 2024) series, which includes models of various sizes: 3B, 7B, 14B, 32B, and 72B. We also included Llama-3.3 70B (Meta, 2024) as a representative open-source model. Among proprietary models, we selected GPT-4o (gpt-4o-2024-08-06) (OpenAI, 2024) and GPT-4.1 (gpt-4.1-2025-04-14) (OpenAI, 2025a), as well as o3-mini (o3-mini-2025-01-31) (OpenAI, 2025b) and o4-mini (o4-mini-2025-04-16) (OpenAI, 2025c), which have recently attracted attention for their strong reasoning capabilities. We used GPT-4o (gpt-4o-2024-08-06) as the judge model for the LLM-as-a-judge evaluation. The prompts used for both model inference and evaluation are provided in Appendix B.1. As shown in the prompts, all models were instructed to follow a Chain-of-Thought (CoT) (Wei et al., 2022) reasoning format. However, during evaluation, we considered only the final answer enclosed between <Answer> and </Answer> tags, excluding the intermediate reasoning process from scoring.

4.3 Evaluation Results

4.3.1 Answer Correctness

Table 3: Answer correctness results across English and Korean. Here, C, P, and W denote the rates of Correct, Partially Correct, and Wrong answers, respectively.

Model | English Unified Score | English Non-Refusal (C/P/W) | English Refusal Accuracy | Korean Unified Score | Korean Non-Refusal (C/P/W) | Korean Refusal Accuracy
Qwen2.5-3B-Instruct | 0.1756 | 10.84% / 44.68% / 44.49% | 6.13% | 0.1645 | 3.40% / 53.68% / 42.92% | 9.16%
Qwen2.5-7B-Instruct | 0.2482 | 22.81% / 36.88% / 40.30% | 28.16% | 0.2037 | 10.20% / 52.12% / 37.68% | 9.57%
Qwen2.5-14B-Instruct | 0.2730 | 29.64% / 40.73% / 29.64% | 8.24% | 0.2513 | 19.03% / 53.12% / 27.84% | 6.52%
Qwen2.5-32B-Instruct | 0.3650 | 27.19% / 43.92% / 28.90% | 36.21% | 0.3309 | 17.56% / 57.22% / 25.21% | 30.35%
Qwen2.5-72B-Instruct | 0.3074 | 37.83% / 39.92% / 22.24% | 8.62% | 0.2999 | 24.08% / 61.05% / 14.87% | 6.52%
Llama-3.3-70B-Instruct | 0.3253 | 23.95% / 47.91% / 28.14% | 31.23% | 0.2598 | 12.04% / 69.97% / 17.99% | 6.92%
GPT-4o | 0.3777 | 34.79% / 40.68% / 24.52% | 32.95% | 0.3841 | 24.93% / 56.94% / 18.13% | 33.20%
GPT-4.1 | 0.3679 | 46.77% / 35.55% / 17.68% | 12.64% | 0.3826 | 43.63% / 46.74% / 9.63% | 10.79%
o3-mini | 0.3870 | 59.70% / 28.14% / 12.17% | 4.21% | 0.3753 | 50.28% / 42.63% / 7.08% | 3.46%
o4-mini | 0.4390 | 59.89% / 24.71% / 15.40% | 20.88% | 0.4458 | 47.88% / 42.78% / 9.35% | 23.01%

The results, summarized in Table 3, present answer correctness across models. Overall, proprietary models, such as the GPT and o-series, consistently outperform open-source models in both English and Korean settings. However, even the best-performing model, o4-mini, achieves a Unified Score of only 0.43 to 0.44, indicating that the benchmark presents genuinely challenging cases that remain difficult even for state-of-the-art (SoTA) models. Its advantage over a non-reasoning model such as GPT-4.1 highlights that stronger reasoning capabilities yield better performance on the CReSt benchmark. This reinforces the benchmark's alignment with practical document RAG applications, where advanced reasoning is often essential. Beyond the Unified Scores, the breakdown into Non-Refusal and Refusal accuracy offers deeper insights when comparing model performance. For example, although GPT-4o and GPT-4.1 achieve similar Unified Scores, a notable gap is observed in their Refusal Accuracy.
GPT-4o demonstrates a strong ability to identify unanswerable questions, yet its accuracy on answerable cases is comparatively lower. Conversely, GPT-4.1 performs better on answerable questions but struggles more with refusal cases. The importance of this balance is further reflected in the Qwen model series. Although there is a general trend of improved performance with increasing model size, the 32B model outperforms the 72B model in terms of the Unified Score, despite exhibiting a higher wrong rate in the non-refusal setting. A closer examination suggests that although the 72B model achieved higher scores in non-refusal cases, its limited ability to accurately capture refusal cases results in a lower overall score. This highlights a key aspect of our benchmark: performance in both refusal and non-refusal scenarios is essential and should be evaluated jointly. In terms of language robustness, Llama-3.3-70B shows a considerable performance gap between English and Korean, which is a comparatively low-resource language. This underscores the need for language-specific evaluation and positions our benchmark as a key tool for developing Korean-capable models.

4.3.2 Citation Performance

Table 4: Citation performance across models.

Model | English Precision | English Recall | English F1 | Korean Precision | Korean Recall | Korean F1
Qwen2.5-3B-Instruct | 5.24% | 7.86% | 6.29% | 6.60% | 11.00% | 8.25%
Qwen2.5-7B-Instruct | 42.06% | 55.47% | 47.84% | 35.24% | 54.41% | 42.78%
Qwen2.5-14B-Instruct | 32.65% | 36.61% | 34.52% | 46.85% | 56.23% | 51.11%
Qwen2.5-32B-Instruct | 63.47% | 74.09% | 68.37% | 61.39% | 71.29% | 65.97%
Qwen2.5-72B-Instruct | 67.63% | 78.32% | 72.58% | 61.71% | 75.71% | 68.00%
Llama-3.3-70B-Instruct | 30.85% | 46.19% | 36.99% | 32.54% | 54.95% | 40.87%
GPT-4o | 45.53% | 57.33% | 50.75% | 53.18% | 74.94% | 62.21%
GPT-4.1 | 67.81% | 84.86% | 75.38% | 66.17% | 92.73% | 77.23%
o3-mini | 76.19% | 84.27% | 80.03% | 74.26% | 82.50% | 78.16%
o4-mini | 77.82% | 80.93% | 79.34% | 75.53% | 83.01% | 79.09%

Citation is particularly important in RAG scenarios, where factual grounding is crucial. As shown in Table 4, citation capability generally correlates with answer correctness: models that cite relevant evidence more accurately tend to achieve higher answer accuracy. However, an interesting discrepancy emerges when comparing Qwen2.5-32B-Instruct and Qwen2.5-72B-Instruct. Although the 72B model achieves higher citation scores, its overall Unified Score is lower than that of the 32B model. This suggests that while the 72B model is more effective at identifying relevant documents, it struggles to accurately distinguish between answerable and unanswerable queries, ultimately lowering its overall performance. This case illustrates that, although citation capability is essential in RAG scenarios, it must be assessed alongside other critical dimensions to enable a truly holistic and meaningful comparison of model performance.

4.3.3 Performance across Different Levels of Difficulty

Table 5: Performances across reasoning difficulty levels (C: Correct, P: Partially Correct, W: Wrong).

Model | SimpleQA Unified Score | SimpleQA Non-Refusal (C/P/W) | SimpleQA Refusal | ComplexQA Unified Score | ComplexQA Non-Refusal (C/P/W) | ComplexQA Refusal
Qwen2.5-72B-Instruct | 0.3697 | 54.71% / 29.60% / 15.70% | 9.38% | 0.2630 | 25.41% / 47.52% / 27.06% | 8.38%
Llama-3.3-70B-Instruct | 0.3751 | 40.36% / 39.91% / 19.73% | 21.88% | 0.2695 | 11.88% / 53.80% / 34.32% | 34.26%
GPT-4o | 0.4366 | 51.12% / 28.25% / 20.63% | 29.69% | 0.3276 | 22.77% / 49.83% / 27.39% | 34.01%
GPT-4.1 | 0.4072 | 56.95% / 27.35% / 15.70% | 14.84% | 0.3434 | 39.27% / 41.58% / 19.14% | 11.93%
o4-mini | 0.4493 | 71.75% / 16.59% / 11.66% | 12.50% | 0.4142 | 51.16% / 30.69% / 18.15% | 23.60%

Table 5 shows the performance across task difficulty levels.
As expected, performance generally declines on ComplexQA compared to SimpleQA, highlighting the increased challenge posed by deeper reasoning requirements. Notably, the performance gap between o4-mini and GPT-4o widens under complex tasks, suggesting that models with stronger reasoning capabilities, such as o4-mini, exhibit greater robustness when faced with more demanding reasoning scenarios.

4.3.4 Performance across Reasoning Types

Reasoning types provided in CReSt enable focused analysis of model performance across distinct reasoning categories. As shown in Figure 2, o4-mini demonstrates strong and balanced performance across all reasoning categories. In contrast, other models show greater variability depending on the reasoning type. For example, both GPT-4o and Qwen2.5-72B-Instruct underperform in tabular reasoning, while o4-mini excels in numerical reasoning and GPT-4o shows particular strength in textual reasoning.

Figure 2: Reasoning capability comparison across types (multi-constraint, numerical, temporal, tabular, format, and textual) for Qwen2.5 32B, Qwen2.5 72B, Llama 3.3 70B, gpt-4o, gpt-4.1, and o4-mini.

4.3.5 Inference Methods

Table 6: Performance comparison across different inference methods for gpt-4o-mini and gpt-4o models on the English dataset. Direct Answer excels in conservative refusal accuracy, Least-to-Most achieves the highest overall performance, and other methods boost partial correctness but incur more errors, highlighting a trade-off between reasoning depth and accuracy.

Method / Model | Unified | Non-Refusal (C/P/W) | Refusal Accuracy
CoT (Baseline)
gpt-4o-mini | 0.3276 | 26.47% / 47.45% / 26.09% | 26.10%
gpt-4o | 0.3777 | 34.79% / 40.68% / 24.52% | 32.95%
Llama-3.3-70B-Instruct | 0.3844 | 26.98% / 47.17% / 25.85% | 31.23%
Qwen2.5-32B-Instruct | 0.3478 | 32.45% / 40.75% / 26.79% | 26.15%
Direct Answer
gpt-4o-mini | 0.3123 | 21.51% / 16.23% / 62.26% | 87.93%
gpt-4o | 0.4030 | 35.17% / 26.43% / 38.40% | 60.92%
Llama-3.3-70B-Instruct | 0.2778 | 16.98% / 26.98% / 56.04% | 74.90%
Qwen2.5-32B-Instruct | 0.3037 | 17.55% / 28.68% / 53.77% | 74.33%
CoD
gpt-4o-mini | 0.3288 | 23.96% / 27.74% / 48.30% | 63.22%
gpt-4o | 0.3677 | 30.94% / 39.06% / 30.00% | 41.38%
Llama-3.3-70B-Instruct | 0.3272 | 39.56% / 33.33% / 27.11% | 48.47%
Qwen2.5-32B-Instruct | 0.3647 | 26.60% / 43.21% / 30.19% | 38.51%
Plan-And-Solve
gpt-4o-mini | 0.3299 | 27.36% / 46.60% / 26.04% | 24.38%
gpt-4o | 0.3717 | 36.42% / 45.85% / 17.74% | 19.35%
Llama-3.3-70B-Instruct | 0.3198 | 22.83% / 48.49% / 28.68% | 31.61%
Qwen2.5-32B-Instruct | 0.3436 | 33.40% / 42.83% / 23.77% | 21.26%
Least-to-Most
gpt-4o-mini | 0.3666 | 29.06% / 29.81% / 41.13% | 57.85%
gpt-4o | 0.4502 | 38.49% / 29.62% / 31.89% | 59.00%
Llama-3.3-70B-Instruct | 0.4001 | 33.40% / 37.17% / 29.43% | 45.02%
Qwen2.5-32B-Instruct | 0.3638 | 33.77% / 28.87% / 37.36% | 51.15%

Alongside model comparisons, we further evaluate recent advancements in inference methods to assess their effectiveness on our benchmark. The baseline used in prior experiments is the Chain-of-Thought (CoT) method (Wei et al., 2022). For comparison, we include a Direct Answer setting, where models are instructed to respond immediately without engaging in any explicit reasoning process. We also explore several alternative approaches. Chain-of-Draft (CoD) (Xu et al., 2025) improves efficiency over CoT by replacing the thought process with a brief draft consisting of only a few words. The Plan-And-Solve method (Wang et al., 2023) similarly prompts step-by-step reasoning but first instructs the model to formulate a plan before executing it to solve the problem.
The Least-to-Most (L2M) method (Zhou et al., 2023) takes a decomposition-based approach, guiding the model to break the original task into smaller sub-problems and solve them sequentially. Table 6 presents distinct performance patterns across inference methods when applied to various LLMs. Among them, the L2M approach combined with GPT-4o demonstrates the highest overall performance, underscoring its effectiveness in decomposing complex queries into manageable subproblems. This observation may stem from alignment with the dataset construction process, where incremental question generation naturally corresponds to the decomposition strategy at inference. Interestingly, simpler inference methods, such as Direct Answer, tend to achieve higher refusal accuracy, reflecting a conservative response strategy that prioritizes avoiding incorrect outputs. However, this comes at the expense of overall response quality, as seen in the lower Unified Scores. The fact that L2M also achieves relatively strong refusal accuracy while maintaining high response quality suggests that our benchmark rewards more advanced reasoning strategies that effectively balance refusal handling and accurate response generation. Additionally, methods like CoT and Plan-And-Solve still exhibit notable reductions in wrong responses in non-refusal situations, indicating that structured reasoning steps can mitigate reasoning errors. Model-specific trends reveal that GPT-4o performs unusually well under Direct Answer, maintaining a rare balance between refusal and response accuracy, suggesting that GPT-4o can handle both aspects effectively even under straightforward usage. Meanwhile, open-source models struggle under Direct Answer, with a sharp drop in overall scores. These models seem to rely on structured reasoning to mitigate errors, as
seen in their relatively stable performance under CoT and Plan-And-Solve. This analysis underscores the importance of selecting inference methods that align with the model's reasoning capabilities.

5 Conclusion

In this paper, we propose CReSt, a benchmark designed to holistically evaluate the diverse capabilities required of LLMs for developing real-world RAG (Retrieval-Augmented Generation) applications. We construct a dataset in both English and Korean that requires complex reasoning, grounded in a diverse set of documents representative of real-world use cases. Based on this dataset, we design the CReSt benchmark to evaluate model capabilities in realistic RAG scenarios. Through extensive experiments, we demonstrate that CReSt effectively reveals the strengths and weaknesses of various models in practical settings, offering valuable insights. We believe our benchmark will serve as a useful resource for future research and production-level development of RAG systems.

6 Limitations

Although CReSt contributes significant advances toward the development of real-world RAG applications, several limitations remain. First, our benchmark is designed to evaluate the capabilities of LLMs within RAG applications, rather than to assess the end-to-end performance of fully integrated RAG pipelines. As such, evaluating the overall performance of the complete RAG pipeline, including retrieval, ranking, and generation modules, is beyond the scope of this study. Second, our experiments reveal that model performance is highly sensitive to prompt design. While this highlights the potential for further improvements through prompt optimization, we do not explore rigorous prompt engineering strategies in this work and leave such investigations for future research. Third, although our dataset includes phrase-level annotations for citations, for the sake of simplicity we evaluate citation performance at the document level in this study.
In future work, we plan to leverage these fine-grained annotations to conduct more detailed and precise evaluations of citation accuracy. Finally, although we aim to support real-world application development by utilizing diverse documents and QA pairs, the dataset may still lack sufficient coverage of the full range of real-world scenarios and language diversity. Expanding the dataset to address these limitations is also left for future work.

References

Alibaba. Qwen2.5: A party of foundation models! https://qwenlm.github.io/blog/qwen2.5/, 2024.

Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence, pages 17754–17762, 2024. doi: 10.1609/aaai.v38i16.29728. URL https://doi.org/10.1609/aaai.v38i16.29728.

Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. A survey on RAG meeting LLMs: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '24), pages 6491–6501. ACM, 2024. doi: 10.1145/3637528.3671470.

Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. Enabling large language models to generate text with citations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6465–6488, Singapore, December 2023a. Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.398. URL https://aclanthology.org/2023.emnlp-main.398/.

Tianyu Gao, Howard Yen, Jiatong Yu,
and Danqi Chen. Enabling large language models to generate text with citations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP), Singapore, 2023b. Association for Computational Linguistics.

Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Meng Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2023c. URL https://arxiv.org/abs/2312.10997.

Yulong Hui, Yao Lu, and Huanchen Zhang. UDA: A benchmark suite for retrieval augmented generation in real-world document analysis. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 67200–67217. Curran Associates, Inc., 2024. URL https://proceedings.neurips.cc/paper_files/paper/2024/file/7c06759d1a8567f087b02e8589454917-Paper-Datasets_and_Benchmarks_Track.pdf.

Satyapriya Krishna, Kalpesh Krishna, Anhad Mohananey, Steven Schwarcz, Adam Stambler, Shyam Upadhyay, and Manaal Faruqui. Fact, fetch, and reason: A unified evaluation of retrieval-augmented generation. In Proceedings of the 2025 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 4745–4759, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics. URL https://aclanthology.org/2025.naacl-long.243/.

Yuanjie Lyu, Zhiyu Li, Simin Niu, Feiyu Xiong, Bo Tang, Wenjin Wang, Hao Wu, Huanyong Liu, Tong Xu, and Enhong Chen. CRUD-RAG: A comprehensive Chinese benchmark for retrieval-augmented generation of large language models. ACM Transactions on Information Systems, 43(2):1–32, 2024. doi: 10.1145/3701228. URL https://doi.org/10.1145/3701228.

Meta. Llama 3.3. https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_3/, 2024.

OpenAI. Hello gpt-4o.
https://openai.com/index/hello-gpt-4o/, 2024.

OpenAI. Introducing gpt-4.1 in the api. https://openai.com/index/gpt-4-1/, 2025a.

OpenAI. Openai o3-mini. https://openai.com/index/openai-o3-mini/, 2025b.

OpenAI. Introducing openai o3 and o4-mini. https://openai.com/index/introducing-o3-and-o4-mini/, 2025c.

OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, et al. Gpt-4 technical report, 2024. URL https://arxiv.org/abs/2303.08774.

Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523–2544, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.200. URL https://aclanthology.org/2021.naacl-main.200/.

Jiejun Tan, Zhicheng Dou, Wen Wang, Mang Wang, Weipeng Chen, and Ji-Rong Wen. HtmlRAG: HTML is better than plain text for modeling retrieved knowledge in RAG systems. In Proceedings of the ACM on Web Conference 2025 (WWW '25), pages 1733–1746. ACM, 2025. doi: 10.1145/3696410.3714546.

Ryota Tanaka, Taichi Iki, Taku Hasegawa, Kyosuke Nishida, Kuniko Saito, and Jun Suzuki. VDocRAG: Retrieval-augmented generation over visually-rich documents. In CVPR, 2025.

James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: a large-scale dataset for fact extraction and verification.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New
Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1074. URL https://aclanthology.org/N18-1074.

Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2609–2634, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.147. URL https://aclanthology.org/2023.acl-long.147/.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the 36th International Conference on Neural Information Processing Systems, 2022.

Silei Xu, Wenhao Xie, Lingxiao Zhao, and Pengcheng He. Chain of draft: Thinking faster by writing less. arXiv preprint arXiv:2502.18600, 2025.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium, October–November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://aclanthology.org/D18-1259.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. In Proceedings of the 37th International Conference on Neural Information Processing Systems, 2023.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=WZH7099tgfM.

A Dataset Statistics

Table 7 shows the number of examples categorized by refusal status of the answers, language, and difficulty, while Table 8 illustrates the distribution of reasoning types appearing in both SimpleQA and ComplexQA. Figure 3 represents the distribution of reasoning types across QA examples.

Table 7: Number of examples grouped by refusal status, language, and difficulty.

Category        Value        Number of Examples
Refusal Status  Refusal      1013
                Non-refusal  1232
Language        English      1048
                Korean       1197
Difficulty      SimpleQA      743
                ComplexQA    1502

Table 8: Number of examples per reasoning type, grouped by difficulty type.

Reasoning Type              SimpleQA  ComplexQA   All
Multi-Constraint Reasoning       357       1362  1719
Numerical Reasoning              145        703   848
Temporal Reasoning                99        700   799
Tabular Reasoning                 86        446   532
Format Reasoning                  55        368   423
Textual Reasoning                  0        170   170
Others                             1         41    42

[Figure 3: bar chart of the number of QA pairs versus the number of reasoning types required (1, 2, 3 or more).]

Figure 3: Distribution of Reasoning Complexity in Benchmark data. Our benchmark includes various questions with reasoning types ranging from one to three or more, allowing for multidimensional evaluation of model capabilities.

B Prompts

B.1 Prompts for Inference Methods

We employed various inference methodologies for diverse reasoning, and the prompts used are as follows: Direct Answer in Figure 4, CoT in
Figure 5, CoD in Figure 6, Plan-and-Solve in Figure 7, and Least-to-Most in Figure 8.

Prompt for Direct Answer inference

User: You are a helpful assistant tasked with answering questions strictly based on the content of the provided documents. The documents may contain irrelevant or inaccurate information, so please reason carefully and critically when forming your answer. <Rules> 1. Only use information that is explicitly stated in the documents. Do not rely on prior knowledge or make assumptions beyond the content. 2. If the question cannot be answered solely based on the provided documents, respond with: "I cannot answer because the question is unanswerable with the documents." Then briefly explain why the information is insufficient. 3. Always cite the document numbers used to derive your answer, using the format [1], [2], etc. 4. If multiple documents were referenced, include all relevant numbers at the end of your answer. 5. Answer the question directly. Do not return any preamble, explanation, or reasoning. </Rules> <Question> {question} </Question> <Documents> {docs} </Documents>

Figure 4: Prompt used for Direct Answer inference in the Inference Methods experiment

Prompt for CoT Inference

User: You are a helpful assistant tasked with answering questions strictly based on the content of the provided documents. The documents may contain irrelevant or inaccurate information, so please reason carefully and critically when forming your answer. <Rules> 1. Only use information that is explicitly stated in the documents. Do not rely on prior knowledge or make assumptions beyond the content. 2. If the question cannot be answered solely based on the provided documents, respond with: "I cannot answer because the question is unanswerable with the documents." Then briefly explain why the information is insufficient. 3. Always cite the document numbers used to derive your answer, using the format [1], [2], etc. 4.
If multiple documents were referenced, include all relevant numbers at the end of your answer. 5. Think step by step to answer the following question. Return thinking steps between <Thinking> and </Thinking> and the answer between <Answer> and </Answer>. </Rules> <Question> {question} </Question> <Documents> {docs} </Documents>

Figure 5: Prompt used for CoT inference in the Inference Methods experiment

Prompt for CoD Inference

User: You are a helpful assistant tasked with answering questions strictly based on the content of the provided documents. The documents may contain irrelevant or inaccurate information, so please reason carefully and critically when forming your answer. <Rules> 1. Only use information that is explicitly stated in the documents. Do not rely on prior knowledge or make assumptions beyond the content. 2. If the question cannot be answered solely based on the provided documents, respond with: "I cannot answer because the question is unanswerable with the documents." Then briefly explain why the information is insufficient. 3. Always cite the document numbers used to derive your answer, using the format [1], [2], etc. 4. If multiple documents were referenced, include all relevant numbers at the end of your answer. 5. Think step by step, but only keep a minimum draft for each thinking step, with 5 words at most. Return thinking
steps between <Thinking> and </Thinking> and the answer between <Answer> and </Answer>. </Rules> <Question> {question} </Question> <Documents> {docs} </Documents>

Figure 6: Prompt used for CoD inference in the Inference Methods experiment

Prompt for Plan-and-Solve Inference

User: You are a helpful assistant tasked with answering questions strictly based on the content of the provided documents. The documents may contain irrelevant or inaccurate information, so please reason carefully and critically when forming your answer. <Rules> 1. Only use information that is explicitly stated in the documents. Do not rely on prior knowledge or make assumptions beyond the content. 2. If the question cannot be answered solely based on the provided documents, respond with: "I cannot answer because the question is unanswerable with the documents." Then briefly explain why the information is insufficient. 3. Always cite the document numbers used to derive your answer, using the format [1], [2], etc. 4. If multiple documents were referenced, include all relevant numbers at the end of your answer. 5. First understand the problem and devise a plan to solve the problem. Then, carry out the plan to solve the problem step by step. Return intermediate steps between <Thinking> and </Thinking> and the answer between <Answer> and </Answer>. </Rules> <Question> {question} </Question> <Documents> {docs} </Documents>

Figure 7: Prompt used for Plan-and-Solve inference in the Inference Methods experiment

Prompt for Least-to-Most inference

[Decomposition Prompt] User: You are an assistant that decomposes a question into simpler sub-questions. Context: {docs} Question: {question} Return each sub-question on its own line, without numbering.

[Solving Context Prompt] User: You are a helpful assistant tasked with answering questions strictly based on the content of the provided documents.
The documents may contain irrelevant or inaccurate information, so please reason carefully and critically when forming your answer. Rules: 1. Only use information that is explicitly stated in the documents. Do not rely on prior knowledge or make assumptions beyond the content. 2. If the question cannot be answered solely based on the provided documents, respond with: I cannot answer because the question is unanswerable with the documents. Then briefly explain why the information is insufficient. 3. Always cite the document numbers used to derive your answer, using the format [1], [2], etc. 4. If multiple documents were referenced, include all relevant numbers at the end of your answer. 5. Think step by step to answer the following question. </Rules> <Documents> {docs} </Documents>

[Solving Prompt] User: Answer the following question using the information provided in the context. <Question> {sub_question} </Question>

Figure 8: Prompt used for Least-to-Most inference in the Inference Methods experiment

B.2 Prompts for QA Generation

The QA generation process of our benchmark consists of four stages, with the prompts used for each as follows: KIE in Figure 9, BasicQA in Figure 10, SimpleQA in Figure 11, and ComplexQA in Figure 12.

KIE Generation Prompt

User: You will be provided with {chunkcount} chunk(s) of document, each in {format} format. Your mission is to come up with a series of questions with varying levels of difficulty to test students' understanding of the information contained within
these document chunks. This process will be conducted over multiple steps. Use {language} language to generate the key, value, and description. Here are the document chunks: {content}

[First task] Your first task is to generate a collection of Key-Value pairs for a Key-Information Extraction (KIE) task. The extracted key information should have sufficient coverage and be comprehensive enough to reconstruct a text passage that retains the essential ideas and meaning of the original text. Please provide the Key-Value pairs in the following format:

```json
{{
  "KIE": [
    {{
      "Key": "<Name of key 1>",
      "Value": "[List of values corresponding to the key found in the text chunks]",
      "Description": "<Description of the key to support understanding of the Key-Value pair>"
    }},
    {{
      "Key": "<Name of key 2>",
      "Value": "[List of values corresponding to the key found in the text chunks]",
      "Description": "<Description of the key to support understanding of the Key-Value pair>"
    }},
    ...
  ]
}}
```

You are to respond in JSON format only, providing the Key-Value pairs for all the chunks supplied.

Figure 9: Prompt used for KIE Generation stage.

Basic QA Generation Prompt

User: [Second task] Your task is to create a set of simple Q&A pairs using the key-value pairs extracted from the document, while also referring to the original document chunks. These Q&A pairs should align with the following characteristics: 1. Question Construction: Each question should focus on the key from the key-value pairs, while the corresponding value provides the answer. 2. Contextual Independence: The questions should not explicitly reference the document or assume its availability. This is because the students answering these questions are expected to have internalized the document's content. Phrases like "In the document. . . " should be avoided. 3.
Reasoning Type: Each question should test one of the following reasoning types:
• Form/Layout Understanding: The question assesses the student's ability to comprehend the document's layout or structure, rather than just its text content.
• Tabular Understanding: The question evaluates the student's ability to interpret and extract information from tabular data.
• Text/Semantic Understanding: The question examines the student's grasp of the textual content's meaning and implications.
Use {language} language to generate the questions and answers. The Q&A pairs should be in the following format:

```json
{{
  "simpleQA": [
    {{
      "id": "<ID of the simpleQA, e.g. simpleQA1, simpleQA2, ...>",
      "question": "<Question text>",
      "answer": "<Answer text>",
      "reasoning_type": "[List of reasoning types that the question tests, e.g. Form/Layout Understanding, Tabular Understanding, Text/Semantic Understanding]"
    }},
    ...
  ]
}}
```

You are to respond in JSON format only and ensure the questions are clear, concise, and directly tied to the key-value pairs provided.

Figure 10: Prompt used for Basic QA Generation stage.

Simple QA Generation Prompt

User: [Third task] Your next task is to refer to the simple Q&A pairs created in the previous turn and generate a new set of Q&A pairs that require application reasoning. Application reasoning refers to Q&As that require additional steps or thought processes to answer, as opposed to simple query-and-fetch Q&A. Use {language} language to generate the questions and answers. Each
question should be clear, concise, and aligned with the characteristics and reasoning types described below:
Reasoning Types: 1. Numerical reasoning: The question requires the reader to perform arithmetic operations on the information provided in the document, such as counting, comparisons, calculations, etc. 2. Tabular reasoning: The question requires the reader to compare and contrast information across different tables, rows, columns, etc. 3. Multi-constraint reasoning: The question contains multiple conditions / constraints which require readers to find the answer that satisfies all the conditions / constraints. 4. Temporal reasoning: The question requires the reader to reason about the time-based information provided in the document. 5. Format reasoning: The question requires the reader to reason about the format / post-processing of the information provided in the document (e.g. conversion of units, etc.).
Modify existing or refer to the simple Q&A pairs, or add new ones to incorporate application reasoning types. Ensure each Q&A is self-contained and does not explicitly reference the document (e.g., avoid phrases like "In the document. . . "). You are to respond in the following JSON format:

```json
{
  "simpleAppQA": [
    {
      "id": "<ID of the simpleAppQA, e.g. simpleAppQA1, simpleAppQA2, ...>",
      "question": "<Question text>",
      "answer": "<Answer text>",
      "reasoning_type": "<Reasoning type of the question, e.g. Numerical reasoning, Tabular reasoning, Multi-constraint reasoning, Temporal reasoning, Format reasoning>"
    },
    ...
  ]
}
```

You are to respond in JSON format only and ensure the questions are clear, concise, and require reasoning beyond simple query-and-fetch.

Figure 11: Prompt used for Simple QA Generation stage.

Complex QA Generation Prompt

User: [Final task] Now, we will create complex application QAs based on the existing simple application QAs.
These questions differ from simple application QAs by requiring multiple reasoning steps and incorporating a combination of reasoning types to arrive at the answer. Use {language} language to generate the questions and answers. When forming complex/multiple application QAs, follow these guidelines:
1. Merge and Modify Thoughtfully:
• Combine information from different simple application QAs to form new, complex questions.
• Avoid creating trivial questions that are merely a concatenation of existing QAs. Ensure the merged question requires deeper reasoning and processing.
2. Step-by-Step Reasoning: Frame the question so that the student must:
• Utilize the reasoning result of one step as an input for the next step.
• Apply additional reasoning (numerical, temporal, tabular, multi-constraint, or format reasoning) to arrive at the final answer.
3. Challenge and Engagement:
• Ensure the QAs challenge the student by requiring them to integrate knowledge and think critically.
• Design the reasoning flow to be logical and non-trivial.
The generated complex application QAs should be in the following JSON format:

```json
{
  "complexQA": [
    {
      "id": "<ID of the complexQA, e.g. complexQA1, complexQA2, ...>",
      "question": "<Question text>",
      "answer": "<Answer text>",
      "reasoning_type": "[List of reasoning types that the question tests, e.g. Numerical reasoning, Tabular reasoning, Multi-constraint reasoning, Temporal reasoning, Format reasoning]"
    },
    ...
  ]
}
```

You are to respond in JSON format only and ensure the questions are clear, concise, and require multiple reasoning steps to arrive at the answer. Generating these QAs could be
challenging, but it will help students develop a deeper understanding of the content.

Figure 12: Prompt used for Complex QA Generation stage.

B.3 Prompt for LLM Evaluation

Our benchmark, CReSt, utilizes LLM Evaluation for assessment in non-refusal environments, with the prompt used for this shown in Figure 13.

Rating Prompt

User: You are an evaluator for a Retrieval Question Answering (QA) task. Your task is to assess how closely the predicted answer matches the golden answer.

**Evaluation Categories:**
- **Correct**: The predicted answer is a perfect match or semantically identical to the golden answer.
- **Partially Correct**: The predicted answer contains some key information from the golden answer but may be incomplete, missing details, or only partially aligned.
- **Wrong**: The predicted answer is completely incorrect, missing essential details, or contains misleading information.

**Consider the following factors when evaluating:**
- **Exactness**: Does the predicted answer exactly match the golden answer?
- **Paraphrasing**: If reworded, does it retain the same meaning?
- **Completeness**: Is the full answer provided, or is it partial?
- **Incorrect Information**: Does the predicted answer introduce any false or misleading details?

**Error Category Guidelines:**
*If the evaluation is not **Correct** (i.e., it is either "Partially Correct" or "Wrong"), also identify the most severe error type present by providing an **ErrorType** field. This field should contain one of the following categories that best describes the main error:*
- **AnswerRefusal**: The answer refuses to provide a response or gives up on answering, despite a clear expectation to do so.
- **NumericMistakes**: The answer contains incorrect arithmetic or inaccurate numeric references (e.g., population sizes, years, differences in ages, championship tallies).
- **MissingDetail**: The answer shows a partial understanding by identifying the correct domain or background but omitting the necessary numeric or textual detail.
- **Others**: Any other error types not covered by the above categories.

**Input:**
- **Question**: {question}
- **Golden Answer**: {golden_answer}
- **Predicted Answer**: {predicted_answer}

**Your response should be formatted as follows:**
```plaintext
**Justification**: <brief explanation of your evaluation>
**Decision**: <Correct/Partially Correct/Wrong>
**ErrorType**: <AnswerRefusal/NumericMistakes/MissingDetail/Others> // Keep empty if the evaluation is **Correct**
```

Figure 13: Prompt used to automatically rate the answers of LLMs in the experiments.
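The evaluator's reply in Figure 13 is plain text with three labelled fields, so aggregating ratings requires a small parser. A minimal sketch, assuming the reply follows the format above exactly; `parse_rating` is our own illustrative helper, not part of the paper's released code:

```python
import re

def parse_rating(text: str) -> dict:
    """Extract the labelled fields from an evaluator reply formatted as in Figure 13.

    Expects one line per field, e.g. '**Decision**: Correct'.
    ErrorType may legitimately be empty when the decision is Correct.
    """
    fields = {}
    for key in ("Justification", "Decision", "ErrorType"):
        # Match '**<key>**:' and capture the rest of that line.
        m = re.search(rf"\*\*{key}\*\*:\s*(.*)", text)
        fields[key] = m.group(1).strip() if m else ""
    return fields

reply = """**Justification**: Matches the golden answer exactly.
**Decision**: Correct
**ErrorType**: """
print(parse_rating(reply)["Decision"])  # Correct
```

Because `re.search` without `re.DOTALL` stops each capture at the end of the line, the brief one-line justification mandated by the prompt is captured whole.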
arXiv:2505.17505v1 [cs.CL] 23 May 2025

L-MTP: Leap Multi-Token Prediction Beyond Adjacent Context for Large Language Models

A Preprint

Xiaohao Liu¹, Xiaobo Xia¹, Weixiang Zhao², Manyi Zhang³, Xianzhi Yu⁴, Xiu Su⁵, Shuo Yang², See-Kiong Ng¹, Tat-Seng Chua¹
¹National University of Singapore ²Harbin Institute of Technology ³Tsinghua University ⁴Chinese Academy of Sciences ⁵Central South University
{xiaohao.liu@u.nus.edu, xiaoboxia.uni@gmail.com}

Abstract

Large language models (LLMs) have achieved notable progress. Despite their success, next-token prediction (NTP), the dominant method for LLM training and inference, is constrained in both contextual coverage and inference efficiency due to its inherently sequential process. To overcome these challenges, we propose leap multi-token prediction (L-MTP), an innovative token prediction method that extends the capabilities of multi-token prediction (MTP) by introducing a leap-based mechanism. Unlike conventional MTP, which generates multiple tokens at adjacent positions, L-MTP strategically skips over intermediate tokens, predicting non-sequential ones in a single forward pass. This structured leap not only enhances the model's ability to capture long-range dependencies but also enables a decoding strategy specially optimized for non-sequential leap token generation, effectively accelerating inference. We theoretically demonstrate the benefit of L-MTP in improving inference efficiency. Experiments across diverse benchmarks validate its merit in boosting both LLM performance and inference speed. The source code will be publicly available.

1 Introduction

Large language models (LLMs) have demonstrated rapid and remarkable progress, driven by advances in data, computing, and architectural innovation [1, 2, 3, 4, 5, 6, 7]. They exhibit strong capabilities in world knowledge acquisition [8, 9] and enable breakthroughs across a wide range of research domains, such as chemistry [10, 11], biology [12, 13], and medicine [14, 15].
As model scales and training data continue to increase, LLMs are attaining ever more powerful generalization and reasoning abilities [16, 17, 18, 19]. Next-token prediction (NTP) remains the mainstream strategy for both training and inference in LLMs [20, 21, 22, 23, 24, 25]. It generates tokens in an autoregressive manner, where each token is predicted based only on the preceding context (see Figure 1(a)). However, despite its conceptual simplicity, NTP results in inefficient generation, limits the model to a focused yet short contextual horizon, and overlooks "hard" decisions [26, 27]. Intriguingly, LLMs have been verified to possess inherent pre-planning capabilities, which indicates the potential of extending NTP to predict multiple tokens at once [27]. This gives rise to the multi-token prediction (MTP) paradigm [27] (see Figure 1(b)). Specifically, by incorporating additional language model heads, MTP enables the parallel prediction of a sequence of adjacent tokens and brings two key benefits. First, it provides a broader training signal by supervising multiple upcoming tokens at each step, which can enhance performance in tasks requiring long-range reasoning or planning [27, 28, 29]. Second,
it enables faster inference by generating multiple tokens in a single forward pass, reducing latency and increasing throughput in applications with efficiency constraints [27].

Figure 1: Illustrations of LLM architectures with three prediction paradigms, including NTP (a), MTP (b), and L-MTP (c). NTP utilizes a single output head for sequential token prediction. MTP employs multiple output heads for adjacent multi-token forecasting. As a comparison, L-MTP reassigns prediction heads to leaping positions. For instance, given 4 heads, L-MTP predicts tokens [1, 3, 5, 7] instead of the adjacent sequence [1, 2, 3, 4] in MTP, with a stride of 2 and the initial input token. The top depicts the training difference, while the bottom showcases the inference.¹

In this paper, motivated by the philosophy of going broader and faster, we extend the MTP paradigm and propose leap multi-token prediction (L-MTP), which further amplifies both contextual coverage and inference efficiency of LLMs. As illustrated in Figure 1(c), L-MTP introduces a leaping mechanism that skips intermediate tokens and directly predicts non-adjacent future tokens. Structurally, L-MTP retains the core architecture of MTP, where multiple prediction heads are applied in parallel. Nevertheless, instead of targeting consecutive positions, each head in L-MTP is reassigned to predict tokens at leaping intervals (e.g., positions 1, 3, 5, and 7). This yields a broader training signal than MTP, as the model learns to capture longer-range dependencies beyond adjacent-token contexts. During inference, L-MTP further improves generation speed by reusing overlapping context across decoding steps. By jointly predicting multiple and strategically spaced tokens, L-MTP enables each forward pass to generate more tokens per step, which helps reduce the total number of decoding iterations required.
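The head-to-position assignment described above (e.g., four heads with stride 2 covering offsets 1, 3, 5, 7) can be sketched as a one-line schedule; `leap_offsets` is our illustrative name, not from the paper's code:

```python
def leap_offsets(num_heads: int, stride: int) -> list[int]:
    """Token offsets predicted in one forward pass.

    Head i (1-indexed) is assigned position t + 1 + (i - 1) * stride,
    so 4 heads with stride 2 cover t+1, t+3, t+5, t+7.
    """
    return [1 + (i - 1) * stride for i in range(1, num_heads + 1)]

# Conventional MTP is the special case stride = 1.
print(leap_offsets(4, 1))  # [1, 2, 3, 4]
print(leap_offsets(4, 2))  # [1, 3, 5, 7]
```

With stride s and n heads, one pass spans an (n - 1) * s + 1 token horizon instead of n, which is the "broader" coverage the method targets.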
This leads to faster inference compared to standard MTP, while maintaining consistency in the generated outputs. The study of L-MTP can be justified from both human thinking and recent trends in language model reasoning. In human thinking, we rarely reason in a strictly sequential fashion. Instead, we often skip over intermediate elements to complete reasoning more efficiently [30, 31, 32]. This leap-wise reasoning aligns naturally with L-MTP's mechanism of skipping intermediate tokens and predicting non-adjacent ones. Similarly, in language model reasoning, recent advances in efficient reasoning have revealed that many intermediate reasoning steps can be compressed or abstracted without loss of correctness [33, 34, 35]. By predicting tokens at leaping positions, L-MTP mimics this abstraction process, not by explicitly modeling token importance, but by altering the prediction pattern to skip intermediate positions, to accelerate LLM inference.

¹ To avoid dense or confusing presentation, we here omit the prediction, verification, and acceptance sub-procedures during the inference of MTP and L-MTP. More details are given in Section 2.

We provide a theoretical analysis to demonstrate the inference acceleration of our L-MTP, focusing on the attenuation and consistency of output token probabilities. Besides, through comprehensive experiments, we show that L-MTP can improve the performance of LLMs in a series of tasks (e.g., math and code tasks) and meanwhile improve inference speed. For instance, L-MTP achieves competitive performance and outperforms MTP at most tasks. L-MTP also boosts existing
https://arxiv.org/abs/2505.17505v1
MTP models with a 22% additional inference speed-up using the same number of heads. Furthermore, we provide experimental evidence that L-MTP is extendable to speculative decoding techniques, making models up to 4 times faster at inference time across a wide range of settings.

2 Preliminaries

In this section, we formulate and detail the training and inference procedures of next-token prediction (NTP) and multi-token prediction (MTP) for large language models (LLMs).

NTP. Given input tokens x_{≤t}, where the subscript ≤t abbreviates {1, 2, . . . , t}, the LLM with parameters θ predicts the next token x_{t+1} and is optimized via the following objective:

\mathcal{L}_{\text{NTP}} = -\sum_{t=1}^{T} \log p(x_{t+1} \mid x_{\le t}; \theta),  (1)

where T denotes the number of tokens. The decoding process of NTP at inference also follows an autoregressive manner, generating tokens one by one; the next token is sampled from p(x_{t+1} | x_{≤t}; θ).

MTP. A natural extension of NTP is MTP [27], which predicts multiple tokens at once. Given input tokens x_{≤t}, the LLM with parameters θ̄ predicts the following n tokens by involving more output heads (e.g., 4 output heads in total). The optimization objective is therefore derived from Eq. (1) as:

\mathcal{L}_{\text{MTP}} = -\sum_{t=1}^{T} \log p(x_{[t+n,\ldots,t+2,t+1]} \mid x_{\le t}; \bar{\theta}).  (2)

In this case, the LLM is requested to pre-plan the adjacent context rather than only the single next token. For MTP, during decoding, the next n tokens can be sampled independently, following p(x_{t+i} | x_{≤t}; θ̄), i ∈ {1, 2, . . . , n}. Recent research [27, 36] disentangles the LLM into an LLM backbone and output heads, where the former yields hidden states and the latter maps them into a vocabulary distribution. The LLM backbone can therefore be equipped with multiple heads that predict the tokens independently from the shared hidden states. To this end, we have

p(x_{[t+n,\ldots,t+2,t+1]} \mid x_{\le t}; \bar{\theta}) = \prod_{i=1}^{n} \big[\, p(x_{t+i} \mid z_{\le t}; \theta_i) \cdot p(z_{\le t} \mid x_{\le t}; \theta') \,\big],  (3)
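The factorization in Eq. (3) — one shared backbone state mapped to a vocabulary distribution by each head — can be sketched numerically. This is a minimal sketch under toy assumptions (random weights, linear heads standing in for the paper's MLP heads, variable names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, n = 50, 16, 4  # toy vocab size, hidden dim, and number of heads

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

z = rng.normal(size=D)                  # shared hidden state z_{<=t} from the backbone
heads = [rng.normal(size=(D, V)) * 0.1  # independent output heads theta_1..theta_n
         for _ in range(n)]
targets = rng.integers(0, V, size=n)    # future tokens x_{t+1}, ..., x_{t+n}

# Each head maps the *same* hidden state to its own vocabulary distribution,
# so the joint probability in Eq. (3) factorizes into per-head terms.
per_head_nll = [-np.log(softmax(z @ W)[y]) for W, y in zip(heads, targets)]
loss_mtp = sum(per_head_nll)            # MTP objective (Eq. (2)) at a single step

assert loss_mtp > 0
```

With a single head and only the x_{t+1} target, the same computation reduces to the NTP loss of Eq. (1).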
Figure 2: MTP with self-speculative decoding (1. prediction; 2. verification in parallel; 3. acceptance).

In Eq. (3), z denotes the hidden states, θ′ represents the parameters of the LLM backbone, and θ_i the parameters of the i-th output head. Following token prediction, MTP typically incorporates verification and acceptance sub-procedures to determine which tokens are eligible for output (cf. Figure 2). Specifically, the verification sub-procedure invokes the LLM to evaluate the predictions in parallel and records their probabilities for determining acceptance. Accepted tokens are used to update the KV cache, and the process proceeds to the next iteration. Such decoding is provably a lossless LLM acceleration technique, as its sampling is consistent with the distribution of vanilla autoregressive decoding [37].

Figure 3: Training recipe for L-MTP with objective L_{L-MTP}. We warm up the additional heads {θ_i}_{i>1}, and then optimize the whole model, with multiple leap tokens as supervision.

Figure 4: Incorporation with tree attention. We take multiple candidates concurrently to construct the tree paths (left), thus exploring the accepted one. Our backward decoding strategy offers consecutive sequences, which can be verified with crafted tree attention (right).

3 L-MTP: Leap Multi-Token Prediction

Overview. Despite the potential of MTP, we go beyond it with an innovative solution to achieve a broader prediction
range and faster inference. Unlike conventional MTP, which focuses on consecutive tokens, L-MTP introduces a leap-based strategy, allowing it to predict tokens at non-sequential positions within the context window. This design enables the model to efficiently capture long-range dependencies without the need for dense token predictions. During inference, L-MTP reuses partially overlapping context across prediction steps, maximizing information utilization while minimizing redundant computation.

Given input tokens x_{≤t}, L-MTP aims to predict a sequence of tokens at leaping intervals, i.e., at positions x_{[t+k(n−1)+1, ..., t+k+1, t+1]}, where k denotes the leap stride (k − 1 tokens are skipped between consecutive predicted positions). For example, with t input tokens and k = 2, the model is expected to predict the tokens at positions [t+1, t+3, t+5, . . . , t+2n−1], effectively skipping intermediate tokens in each prediction step. We detail L-MTP below.

3.1 L-MTP Training Recipe

We equip an LLM with multiple output heads for predicting tokens at different positions. Each head is a multilayer perceptron (MLP) whose last layer transforms hidden states to the vocabulary (see implementation details in Appendix B.5). We train the LLM with multiple heads in two stages: (1) head warm-up and (2) full model tuning.

Head warm-up. We first construct the self-distillation data by inputting the questions and collecting the outputs of the original LLM. These outputs follow the original distribution of the LLM's predictions. We optimize the new heads by assigning them different supervisions, adhering to the leaping pattern x_{t+k(i−1)+1} for head i. The primary goal of this stage is to adapt the new heads to the LLM; therefore, the original head and the LLM backbone are frozen. The training objective is formulated as

\mathcal{L}^{(1)}_{\text{L-MTP}} = -\sum_{t=1}^{T} \log p(x_{[t+k(n-1)+1,\ldots,t+k+1]} \mid z_{\le t}; \{\theta_i\}_{i>1}).  (4)

Full model tuning. After that, we use the curated data to continue training the model.
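The per-head supervision of the warm-up stage (Eq. (4)) can be sketched concretely; the helper name and toy sequence below are ours, for illustration only:

```python
def head_targets(tokens, t, k, n):
    """Warm-up supervision at step t: head i (1-indexed) is trained on
    the token at position t + k*(i-1) + 1, following the leaping pattern."""
    positions = [t + k * (i - 1) + 1 for i in range(1, n + 1)]
    return [tokens[p] for p in positions if p < len(tokens)]

# Toy sequence: with k=2 and 4 heads, the heads at t=0 are supervised by
# the tokens at positions 1, 3, 5, 7.
toks = list("abcdefghij")
assert head_targets(toks, 0, 2, 4) == ["b", "d", "f", "h"]
assert head_targets(toks, 0, 1, 4) == ["b", "c", "d", "e"]  # MTP-style adjacent targets
```

Heads whose leaping position falls past the end of the sequence simply receive no supervision at that step.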
At this stage, all the components of our specialized LLM, including the LLM backbone and output heads, are optimized. The optimization objective is defined as

\mathcal{L}^{(2)}_{\text{L-MTP}} = -\sum_{t=1}^{T} \big[ \log p(x_{t+1} \mid x_{\le t}; \theta', \theta_1) + \beta \cdot \log p(x_{[t+k(n-1)+1,\ldots,t+k+1]} \mid x_{\le t}; \theta', \{\theta_i\}_{i>1}) \big],  (5)

where β controls the contribution of the additional heads.

3.2 L-MTP Inference Procedure

Although L-MTP offers a broader prediction range, each pass can only predict an incomplete sequence. Fortunately, we can step backward to leverage prior predictions, or forward to utilize posterior predictions, to compensate for the incompleteness (an alternative solution is explained in Appendix C.1). Furthermore, by exploiting speculative decoding techniques (i.e., parallel tree decoding), L-MTP can achieve a higher acceptance rate and thus further inference acceleration.

Looking backward. Given tokens x_{≤t},² L-MTP predicts the tokens {x_{t+k(i−1)+1}}_{i∈[n]}, leaving gaps between them (i.e., k − 1 tokens are skipped). However, if we look backward, the desired tokens have already been predicted by prior steps. For instance, the tokens {x_{t+k(i−1)}}_{i∈[n]} are predicted given x_{≤t−1}. In this case, we have

\{\, p(x_{t+i} \mid x_{\le t-(i-1) \bmod k}) \mid i \in \{1, 2, \ldots, k(n-1)+1\} \,\}.  (6)

The continuous token sequence is sampled by looking backward up to k − 1 steps; the (i − 1) mod k term switches the conditioning context. Since the prior predictions are generated beforehand (or in parallel), we do not need to infer again, but simply retrieve them.

Combining with tree attention. L-MTP seamlessly integrates with speculative decoding by sampling consecutive token sequences for verification. Drawing inspiration
from parallel decoding [38, 39, 36, 40], we combine L-MTP with tree attention to enable efficient decoding. We construct a hierarchical tree structure, where the i-th layer represents candidate tokens generated by the i-th prediction head. Paths in the tree are explored to identify the accepted one. To facilitate parallel verification, we design a tree attention mask that restricts each hidden state to attend only to its ancestors [38, 36]. Figure 4 illustrates the implementation of tree attention with L-MTP decoding. Further details are provided in Appendix C.2.

4 Theoretical Analyses

Given the input x_{≤t}, LLMs are capable of predicting further future tokens, such as x_{t+i} with i > 1. This motivates the development of MTP, where future tokens can be predicted from far earlier tokens, rather than merely the last one. By observing the acceptance rates of the generated multiple tokens, we draw two properties:

Definition 1 (Attenuation). For a language model predicting multiple tokens conditioned on x_{≤t}, the marginal probability of predicting each subsequent token decreases as the prediction horizon increases. Formally,

p(x_{t+1} \mid x_{\le t}) > p(x_{t+2} \mid x_{\le t}) > \cdots > p(x_{t+n} \mid x_{\le t}),

where n is the maximum prediction horizon (the number of heads), assuming the probabilities are well-defined and non-zero.

Remark. Attenuation reflects the increasing uncertainty in the language model's predictions as it forecasts tokens further into the future. This behavior arises because the prediction of x_{t+i} relies on the fixed context x_{≤t}, and the influence of this context diminishes with increasing i. As a result, the model's confidence, as measured by the marginal probability p(x_{t+i} | x_{≤t}), decreases monotonically.

Assumption 2 (Consistency). The expected marginal probability of predicting x_{t+i} is stable across arbitrary inputs and follows a predictable function of the prediction horizon i. Formally,

\mathbb{E}_{x_{\le t} \sim \mathcal{D}}[p(x_{t+i} \mid x_{\le t})] = f(i), \quad \forall i \in \{1, 2, \ldots, n\},

where f(i) is a function characterizing the expected probability of x_{t+i}.

Remark. Consistency ensures that the language model's predictions for future tokens x_{t+i} exhibit stable statistical behavior in expectation, regardless of the variability in the input sequences x_{≤t}. The function f(i) encapsulates the expected confidence in predicting the i-th token ahead, which may decrease with i in accordance with the Attenuation definition (i.e., f(i) > f(i+1)).

²Typically, t > 1, with the input containing at least one start token, such as “<|begin_of_text|>” in the Llama series.

Figure 5: Different curves of the expected marginal probability (a) exp[−γ·(i + (i−1) mod k)], joint probability (b) ∏_{i=1}^{m} exp[−γ·(i + (i−1) mod k)], and accepted length (c) ∑_{m=1}^{n} ∏_{i=1}^{m} exp[−γ·(i + (i−1) mod k)], for k ∈ {1, 2} and γ ∈ {0.01, 0.05, 0.1}. The leap strategy extends the range of prediction at position i and achieves a higher expected length.

Acceptance length. The expected length of accepted tokens can be expressed as E[L] = \sum_{m=1}^{n} p(L > m). According to the different prediction strategies, we have the following expectations:

E[L]_s = \sum_{m=1}^{n} \prod_{i=1}^{m} \mathbb{E}_{x_{\le t} \sim \mathcal{D}}[p(x_{t+i} \mid x_{\le t})]  (vanilla strategy)
E[L]_l = \sum_{m=1}^{k(n-1)+1} \prod_{i=1}^{m} \mathbb{E}_{x_{\le t} \sim \mathcal{D}}[p(x_{t+i} \mid x_{\le t-(i-1) \bmod k})]  (L-MTP strategy)  (7)

where vanilla
strategy predicts the tokens x_{t+1}, x_{t+2}, . . . , x_{t+n} sequentially using the hidden state at t. L-MTP predicts two interleaved sequences; specifically, it uses the hidden state at t−1 to compensate for the non-predicted tokens.

Theorem 3 (Less attenuation, more speed-up). Let γ represent the attenuation coefficient, and let f(i) := exp[−γ·(i−1)] be the probability decay function modeling the predictive confidence at step i. Then there exists a constant C > 0 such that E[L]_l > E[L]_s holds asymptotically as n → ∞, provided that γn² ≤ C, i.e., γ = O(1/n²). See proof in Appendix A.

Remark. L-MTP introduces a longer prediction range (k(n−1) + 1) to compensate for the loss of confidence at leaping positions. Theorem 3 reveals the relation between attenuation and the number of prediction heads: less attenuation implies a higher speed-up of L-MTP over the vanilla strategy. In practice, n is not too large; less attenuation (smaller γ) combined with the smaller overhead of fewer heads (n) leads to an even higher speed-up. We illustrate the simulated curves for a better understanding of the superiority of L-MTP (see Figure 5).

Illustration of analyses. To intuitively demonstrate the effectiveness of our method, we illustrate the above theoretical analyses in Figure 5. We simulate the probabilities and expectations of length and observe that L-MTP (k > 1) outperforms MTP (k = 1) under different attenuation settings. Less attenuation leads to a higher speedup of L-MTP.

5 Experiments

In this section, we conduct experiments to address the following research questions:

• RQ1: How does L-MTP perform on different LLM tasks compared to other prediction paradigms? Can it benefit the training of LLMs, thus boosting model performance?
• RQ2: Can L-MTP bring further inference acceleration through its looking-backward decoding strategy? Is L-MTP's decoding strategy extendable to further models?
• RQ3: What is the prediction accuracy of each output head?
Does it satisfy our theoretical analyses in Section 4?
• RQ4: What is the potential of L-MTP? Does it suggest further findings on different scales of models and data?

5.1 Experimental Setup

Base LLMs. The experiments utilize the following base large language models: Qwen 2.5 (3B and 7B parameters), Llama 3.2 (3B parameters), Llama 3.1 (8B parameters), and Gemma 3 (4B and 12B parameters). These models are selected to represent a diverse range of architectures and parameter scales for comprehensive evaluation. We elaborate on more details in Appendix B.2.

Baselines. For efficacy comparison, we evaluate two prediction paradigms: next-token prediction (NTP) and multi-token prediction (MTP). These paradigms assess the models' ability to generate accurate and contextually relevant outputs under different prediction strategies. For efficiency, we use NTP with autoregressive decoding as the basis. To analyze inference efficiency, we compare L-MTP with MTP, which leverages self-speculative decoding, and with a trivial forward decoding approach (see the implementation in Appendix C.1).

Datasets. We curate the training dataset from Math [41], Evol-Instruct-Code [42, 43], and Alpaca-GPT4 [44]. In the first stage, we use the full data for self-distillation. For the second stage, we randomly select 10,000 examples with a ratio of 4:4:2, corresponding to math, code, and general data, respectively. To
benchmark the methods, we select Math500 [45] (4-shot) and GSM8K [46] (4-shot) for math evaluation; MBPP, MBPP+ [47, 48], HumanEval, and HumanEval+ [49, 48] for code evaluation; and MMLU [50] and IFEval [51] for general evaluation. We detail the statistics and utilization of these datasets in Appendix B.3, Appendix B.4, and Appendix B.6.

Evaluation metrics. For performance comparison, we use accuracy for both math and general tasks and pass@1 for code tasks. For efficiency analysis, we employ the speedup ratio, computed as the number of generated tokens per second relative to the original model; higher values indicate better performance.

Implementation details. To adapt L-MTP to NTP-based models, we employ a two-stage training procedure. At the head warm-up stage, we freeze the LLM backbone and train the heads with a learning rate of 1×10⁻³ for 5 epochs, using a cosine scheduler with a warmup ratio of 0.1. At the next stage, we use LoRA [52] with rank 32 and alpha 16 to tune the full model, training for 3 epochs with a learning rate of 1×10⁻⁵. We set k = 2 and n = 4 by default. The same training setting is used for the MTP implementation to ensure fairness. We also provide the pseudo-code of L-MTP in Appendix B.1. All experiments are conducted on 2× NVIDIA H100-80G GPUs.

5.2 Results and Discussions

Overall performance (RQ1). To answer RQ1, we compare L-MTP with MTP and NTP across diverse datasets, using a range of base models as backbones, as shown in Table 1. Through this comparison, we observe the improvement brought by L-MTP across different scales and series of models, especially on math tasks for the Llama and Gemma series and on code tasks for the Qwen series. Furthermore, all models gain improvement on general tasks, exemplified by IFEval. Notably, L-MTP achieves better performance than MTP on most tasks.
Intriguingly, we observe that in some cases even NTP brings worse results. Although L-MTP can compensate for part of the margin, the deterioration still cannot be fully mitigated. Carefully choosing higher-quality data would be beneficial. However, in this paper we do not focus on how to select data, but on investigating the effect of L-MTP compared to MTP. Such a phenomenon also motivates us to explore more in-depth analyses and discussions.

Table 1: Performance comparison with different prediction paradigms across diverse tasks and benchmarks. In each case, the best average result (Avg.) among NTP, MTP, and L-MTP is shown in bold.

Model        Method  Math500  GSM8K  MBPP   MBPP+  HumanEval  HumanEval+  MMLU   IFEval  Avg.
Llama3.2-3B  Base    2.20     1.06   50.00  40.48  27.44      24.39       54.23  18.23   27.25
             NTP     3.00     3.71   47.09  36.24  21.34      17.68       54.34  20.74   25.52
             MTP     3.40     3.87   46.83  36.51  21.95      18.29       54.22  18.59   25.46
             L-MTP   4.80     5.91   46.56  36.51  24.39      20.73       54.17  20.38   26.68
Llama3.1-8B  Base    4.20     9.86   61.38  51.32  39.02      31.71       63.26  18.23   34.87
             NTP     5.60     11.30  61.38  51.06  42.68      35.37       63.64  20.14   36.40
             MTP     6.40     10.08  60.32  49.74  41.46      35.98       63.52  19.42   35.87
             L-MTP   6.40     10.92  61.38  50.53  42.68      36.59       63.70  22.18   36.80
Qwen2.5-3B   Base    35.40    53.75  62.70  53.97  68.29      61.59       65.13  32.73   54.20
             NTP     25.40    49.13  66.93  57.94  67.68      60.98       65.17  34.17   53.43
             MTP     25.40    45.79  67.72  57.67  65.85      59.15       65.21  35.49   52.79
             L-MTP   28.20    46.25  67.99  59.26  67.68      60.37       65.23  35.01   53.75
Qwen2.5-7B   Base    63.00    56.79  75.93  65.34  78.05      71.34       71.93  42.69   65.63
             NTP     49.40    52.99  78.31  67.46  78.05      69.51       71.78  43.41   63.86
             MTP     49.00    52.62  78.04  67.99  76.22      69.51       71.85  41.49   63.34
             L-MTP   46.00    56.03  78.04  67.72  77.44      71.95       71.98  44.12   64.16
Gemma3-4B    Base    0.00     0.00   60.58  51.59  33.54      28.05       38.21  26.50   29.81
             NTP     6.20     4.70   58.20  51.06  46.34      39.02       58.29  35.49   37.41
             MTP     6.00     4.32   58.47  50.53  43.29      37.20       58.25  34.65   36.59
             L-MTP   7.60     4.25   57.67  49.47  45.73      38.41       58.33  34.65   37.01
Gemma3-12B   Base    0.00     9.78   73.28  59.52  45.73      36.59       23.79  29.38   34.76
             NTP     10.00    13.42  71.16  59.79  63.41      56.10       71.69  29.38   46.87
             MTP     9.20     5.61   70.11  58.47  61.59      54.27       71.67  30.46   45.17
             L-MTP   17.20    26.38  70.11  60.05  62.20      55.49       72.10  33.09   49.58

Inference acceleration (RQ2). L-MTP implements decoding by looking backward to achieve inference acceleration without any architecture modifications or complex operations. We provide the inference speedup comparison in Figure 6. We also implement a trivial solution for leaping prediction by looking forward, denoted F-MTP; more implementation details are in Appendix C.1. Compared to MTP, L-MTP achieves comparable yet sometimes higher speedup, especially on GSM8K. L-MTP predicts farther positions while leaving the gaps filled by previous predictions, thus achieving faster inference.

We also explore the potential of extending L-MTP decoding to existing models, like Medusa [36], which is specialized for improving the acceptance rate of multiple heads. We equip these models with L-MTP decoding and showcase the results in Table 2.

Table 2: The speed-up ratio comparison when extending L-MTP to Medusa on different scales of models.

Model       Method  GSM8K   MBPP
Vicuna 7B   MTP     1.83×   1.97×
            L-MTP   2.32×   2.01×
Vicuna 13B  MTP     2.24×   1.98×
            L-MTP   2.43×   2.02×
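The acceptance-length gap predicted by Theorem 3 can be reproduced with a few lines of simulation. The sketch below assumes the decay model of Figure 5, f(i) = exp[−γ·(i + (i−1) mod k)], as the expected marginal probability in Eq. (7); the function and variable names are ours:

```python
import math

def expected_len(k, n, gamma):
    """Expected accepted length E[L] from Eq. (7) under the decay model
    f(i) = exp(-gamma * (i + (i-1) % k)); k = 1 recovers the vanilla strategy."""
    horizon = k * (n - 1) + 1   # L-MTP's extended prediction range
    total, joint = 0.0, 1.0
    for i in range(1, horizon + 1):
        joint *= math.exp(-gamma * (i + (i - 1) % k))  # running product of marginals
        total += joint
    return total

# Small attenuation (gamma = O(1/n^2)): the leap strategy (k = 2) yields a longer
# expected accepted length than the vanilla one (k = 1), mirroring Figure 5(c).
assert expected_len(2, 4, 0.01) > expected_len(1, 4, 0.01)
```

With large γ the advantage disappears (e.g., at γ = 1 the vanilla strategy wins), consistent with the γn² ≤ C condition in the theorem.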
Directly changing the decoding strategy to the leaping paradigm brings up to 1.3× speed-up (a 22% relative boost). These results demonstrate the potential of L-MTP, especially for models with higher acceptance rates.

The expected distribution at each position (RQ3). We calculate the prediction accuracy at each position to verify our theoretical analyses in Section 4. We plot the accuracies for different models (boxes) and the average accuracy at each position (line) for both MTP and L-MTP in Figure 7. These empirical results manifest the properties of Attenuation (cf. Definition 1) and Consistency (cf. Assumption 2) and resemble our simulated illustration, providing strong support for our theoretical analyses.

Figure 6: Speedup with self-speculative decoding for different series of LLMs (“G” ↔ Gemma, “L” ↔ Llama, and “Q” ↔ Qwen) on (a) GSM8K, (b) MBPP, and (c) IFEval, comparing NTP, F-MTP, MTP, and L-MTP. The Z-axis represents the speedup ratio.

Figure 7: The prediction accuracy at different positions, estimated on the alpaca-eval split.

Figure 8: The prediction accuracy for different models. Myopia worsens as scale increases.

Figure 9: The prediction accuracy improves as training data increases (from 1K to the full set).

Potential analysis (RQ4). We emphasize the potential of L-MTP by investigating the myopia of LLMs and the effect of data amount. (1) Myopic generation. We demonstrate the prediction accuracy across different scales of models, as shown in Figure 8. The accuracy drops consistently when moving from the smaller model to the larger one for all series of LLMs, indicating the inherent myopia imposed by NTP pre-training. We also provide the loss curves during training in Appendix D, which show an inflection when warming up the heads. Recent work [27] suggests training a model from scratch with the MTP objective; this is also promising for L-MTP, to inherently endow the model with a broader prediction range and faster inference. (2) Data scales. Figure 9 illustrates the increasing prediction accuracy at the head warm-up stage as more data is added. Large-scale data introduces more diversity, helping the additional heads adapt to the LLM backbone. However, we also observe that the increase is not linear. To obtain higher accuracy, equipping L-MTP with more sophisticated training or model-architecture techniques [53, 40] will also be promising.

6 Related Work

6.1 Multi-Token Prediction

Previous studies demonstrate that multi-token prediction (MTP) encourages pre-planning capabilities in large language models (LLMs). Qi et al.
[26] pioneer n-step-ahead prediction to optimize language models, mitigating overfitting to strong local dependencies. Gloeckle et al. [27] pretrain LLMs with additional prediction heads to achieve significant performance improvements, particularly on code-related tasks. Industrial deployments have also adopted MTP to improve both training efficiency and pre-planning [28, 29]. MTP has sparked growing interest in exploring its potential, including adapting next-token prediction (NTP) models for MTP [54] and applying it to domains such as speech [55]. Furthermore, recent work investigates MTP's potential for inference acceleration by incorporating additional prediction heads, as exemplified by Medusa [36]; we discuss LLM inference acceleration further in the next subsection. In addition, recent research on MTP shows significant promise, supported by evidence that LLMs inherently maintain a degree of pre-planning [56]. These methods typically assume prediction within an adjacent context, predicting the next n tokens simultaneously at each time step. We go beyond this prediction pattern and introduce leaps between predicted tokens, thereby providing a broader training signal.

6.2 LLM Inference Acceleration

A substantial body of work focuses on accelerating LLMs, especially their inference procedure [57, 58]. Notable techniques include quantization [59, 60], pruning [61, 62], knowledge distillation [63, 44], compact architecture design [64, 65, 66], and dynamic networks [67,
68]. Production deployments have also advanced inference efficiency, e.g., through memory management [69, 70] and parallelism [71, 72]. In this paper, we focus on inference acceleration obtained through LLM decoding.

Prior works accelerate inference under greedy decoding [73, 74], while recent speculative decoding extends this with provable losslessness [37, 75, 76]. Speculative decoding follows the principle of draft-then-verify, where a smaller draft model efficiently generates multiple tokens for parallel verification by the larger target model. The drafting procedure can employ an independent model [38, 77] or enhance the model itself [78], e.g., by adding additional FFN heads [73, 36, 40]. During verification, vanilla sampling only processes tokens in a single draft sequence [37, 79], while recent methods use a token tree to verify multiple draft sequences in parallel [38, 39, 36, 40], further improving the token acceptance rate. In L-MTP, we apply our looking-backward decoding by utilizing the additional LLM heads for self-speculative decoding, paired with tree-based verification.

7 Conclusion

In this paper, we propose leap multi-token prediction as an improvement over vanilla multi-token prediction in the training and inference of large language models for generative and reasoning tasks. Both theoretical insights and empirical evidence are offered to justify the superiority of the proposed method, where model performance and inference speed can be enhanced simultaneously in a range of scenarios. In future work, we would like to better understand how to adaptively choose n and k in the leap multi-token prediction losses. One possibility is to determine their values based on the local uncertainty or entropy of the predicted tokens, allowing the model to leap more aggressively in low-entropy regions while maintaining finer granularity in more ambiguous contexts.
Also, reinforcement fine-tuning has emerged as a promising paradigm for training large language models. Incorporating our method into this training framework opens up exciting opportunities and is worth further exploration.

References

[1] Luciano Floridi and Massimo Chiriatti. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681–694, 2020.
[2] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
[3] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025.
[4] Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al. Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271, 2024.
[5] Bingning Wang, Haizhou Zhao, Huozhi Zhou, Liang Song, Mingyu Xu, Wei Cheng, Xiangrong Zeng, Yupeng Zhang, Yuqi Huo, Zecheng Wang, et al. Baichuan-M1: Pushing the medical capability of large language models. arXiv preprint arXiv:2502.12671, 2025.
[6] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth,
Katie Millican, et al. Gemini: A family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
[7] Run Luo, Haonan Zhang, Longze Chen, Ting-En Lin, Xiong Liu, Yuchuan Wu, Min Yang, Minzheng Wang, Pengpeng Zeng, Lianli Gao, et al. MMEvol: Empowering multimodal large language models with Evol-Instruct. arXiv preprint arXiv:2409.05840, 2024.
[8] Zhiqi Ge, Hongzhe Huang, Mingze Zhou, Juncheng Li, Guoming Wang, Siliang Tang, and Yueting Zhuang. WorldGPT: Empowering LLM as multimodal world model. In ACM MM, pages 7346–7355, 2024.
[9] Ilker Yildirim and LA Paul. From task structures to world models: what do LLMs know? Trends in Cognitive Sciences, 2024.
[10] Daniil A Boiko, Robert MacKnight, Ben Kline, and Gabe Gomes. Autonomous chemical research with large language models. Nature, 624(7992):570–578, 2023.
[11] Kevin Maik Jablonka, Philippe Schwaller, Andres Ortega-Guerrero, and Berend Smit. Leveraging large language models for predictive chemistry. Nature Machine Intelligence, 6(2):161–169, 2024.
[12] Hilbert Yuen In Lam, Xing Er Ong, and Marek Mutwil. Large language models in plant biology. Trends in Plant Science, 2024.
[13] Shanghua Gao, Ada Fang, Yepeng Huang, Valentina Giunchiglia, Ayush Noori, Jonathan Richard Schwarz, Yasha Ektefaie, Jovana Kondic, and Marinka Zitnik. Empowering biomedical discovery with AI agents. Cell, 187(22):6125–6151, 2024.
[14] Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. Large language models in medicine. Nature Medicine, 29(8):1930–1940, 2023.
[15] Jan Clusmann, Fiona R Kolbinger, Hannah Sophie Muti, Zunamys I Carrero, Jan-Niklas Eckardt, Narmin Ghaffari Laleh, Chiara Maria Lavinia Löffler, Sophie-Caroline Schwarzkopf, Michaela Unger, Gregory P Veldhuizen, et al. The future landscape of large language models in medicine. Communications Medicine, 3(1):141, 2023.
[16] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
[17] Z. Z. Ren, Zhihong Shao, Junxiao Song, Huajian Xin, Haocheng Wang, Wanjia Zhao, Liyue Zhang, Zhe Fu, Qihao Zhu, Dejian Yang, Z. F. Wu, Zhibin Gou, Shirong Ma, Hongxuan Tang, Yuxuan Liu, Wenjun Gao, Daya Guo, and Chong Ruan. DeepSeek-Prover-V2: Advancing formal mathematical reasoning via reinforcement learning for subgoal decomposition. arXiv preprint arXiv:2504.21801, 2025.
[18] Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui Ding, Kai Dong, Qiushi Du, Zhe Fu, et al. DeepSeek LLM: Scaling open-source language models with longtermism. arXiv preprint arXiv:2401.02954, 2024.
[19] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. GLM-130B: An open bilingual pre-trained model. In ICLR, 2023.
[20] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[21] Liang Chen, Zekun Wang, Shuhuai Ren, Lei Li, Haozhe Zhao, Yunshui Li, Zefan Cai, Hongcheng Guo, Lei Zhang, Yizhe Xiong, et al. Next token prediction towards multimodal intelligence: A comprehensive survey. arXiv preprint arXiv:2412.18619, 2024.
[22]
Lei Zhang, Yunshui Li, Jiaming Li, Xiaobo Xia, Jiaxi Yang, Run Luo, Minzheng Wang, Longze Chen, Junhao Liu, Qiang Qu, et al. Hierarchical context pruning: Optimizing real-world code completion with repository-level pretrained code LLMs. In AAAI, pages 25886–25894, 2025.
[23] Hangfeng He and Weijie J Su. A law of next-token prediction in large language models. arXiv preprint arXiv:2408.13442, 2024.
[24] James Flemings, Meisam Razaviyayn, and Murali Annavaram. Differentially private next-token prediction of large language models. arXiv preprint arXiv:2403.15638, 2024.
[25] Minh Nguyen, Andrew Baker, Clement Neo, Allen Roush, Andreas Kirsch, and Ravid Shwartz-Ziv. Turning up the heat: Min-p sampling for creative and coherent LLM outputs. In ICLR, 2025.
[26] Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. ProphetNet: Predicting future n-gram for sequence-to-sequence pre-training. In EMNLP, pages 2401–2410, 2020.
[27] Fabian Gloeckle, Badr Youbi Idrissi, Baptiste Rozière, David Lopez-Paz, and Gabriel Synnaeve. Better & faster large language models via multi-token prediction. arXiv preprint arXiv:2404.19737, 2024.
[28] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[29] Xiaomi LLM-Core Team. MiMo: Unlocking the reasoning potential of language model, from pretraining to posttraining, 2025.
[30] Valerie F Reyna and Charles J Brainerd. Fuzzy-trace theory: An interim synthesis. Learning and Individual Differences, 7(1):1–75, 1995.
[31] Charles J Brainerd and Valerie F Reyna. Fuzzy-trace theory and false memory. Current Directions in Psychological Science, 11(5):164–169, 2002.
[32] Susan T Fiske and Shelley E Taylor. Social cognition, 2nd. NY: McGraw-Hill, pages 16–15, 1991.
[33] Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu Yang, Jian Jiao, Nan Duan, et al. Rho-1: Not all tokens are what you need. arXiv preprint arXiv:2404.07965, 2024.
[34] Zhipeng Chen, Kun Zhou, Wayne Xin Zhao, Jingyuan Wang, and Ji-Rong Wen. Not everything is all you need: Toward low-redundant optimization for large language model alignment. arXiv preprint arXiv:2406.12606, 2024.
[35] Heming Xia, Yongqi Li, Chak Tou Leong, Wenjie Wang, and Wenjie Li. Tokenskip: Controllable chain-of-thought compression in llms. arXiv preprint arXiv:2502.12067, 2025.
[36] Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D Lee, Deming Chen, and Tri Dao. Medusa: Simple llm inference acceleration framework with multiple decoding heads. In ICML, pages 5209–5235. PMLR, 2024.
[37] Yaniv Leviathan, Matan Kalman, and Yossi Matias. Fast inference from transformers via speculative decoding. In ICML, pages 19274–19286. PMLR, 2023.
[38] Xupeng Miao, Gabriele Oliaro, Zhihao Zhang, Xinhao Cheng, Zeyu Wang, Zhengxin Zhang, Rae Ying Yee Wong, Alan Zhu, Lijie Yang, Xiaoxiang Shi, et al. Specinfer: Accelerating large language model serving with tree-based speculative inference and verification. In ASPLOS, pages 932–949, 2024.
[39] Ziteng Sun, Ananda Theertha Suresh, Jae Hun Ro, Ahmad Beirami, Himanshu Jain, and Felix Yu. Spectr: Fast speculative decoding via optimal transport. NeurIPS, 36:30222–30242, 2023.
[40] Yuhui Li, Fangyun Wei, Chao Zhang, and Hongyang Zhang. Eagle: Speculative sampling requires
rethinking feature uncertainty. ICML, 2024.
[41] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
[42] Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. Wizardcoder: Empowering code large language models with evol-instruct. In ICLR, 2024.
[43] Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation. https://github.com/sahil280114/codealpaca, 2023.
[44] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
[45] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In ICLR, 2023.
[46] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[47] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
[48] Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. NeurIPS, 36:21558–21572, 2023.
[49] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[50] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. ICLR, 2020.
[51] Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911, 2023.
[52] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In ICLR, 2022.
[53] Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, and William Brandon. Hydra: Sequentially-dependent draft heads for medusa decoding. In CoLM, 2024.
[54] Somesh Mehra, Javier Alonso Garcia, and Lukas Mauch. On multi-token prediction for efficient llm inference. arXiv preprint arXiv:2502.09419, 2025.
[55] Yuhao Wang, Heyang Liu, Ziyang Cheng, Ronghua Wu, Qunshan Gu, Yanfeng Wang, and Yu Wang. Vocalnet: Speech llm with multi-token prediction for faster and high-quality generation. arXiv preprint arXiv:2504.04060, 2025.
[56] Wilson Wu, John Xavier Morris, and Lionel Levine. Do language models plan ahead for future tokens? In CoLM, 2024.
[57] Zixuan Zhou, Xuefei Ning, Ke Hong, Tianyu Fu, Jiaming Xu, Shiyao Li, Yuming Lou, Luning Wang, Zhihang Yuan, Xiuhong Li, et al. A survey on efficient inference for large language models. arXiv preprint arXiv:2404.14294, 2024.
[58] Bill Yuchen Lin, Abhilasha Ravichander, Ximing Lu, Nouha Dziri, Melanie Sclar,
Khyathi Chandu, Chandra Bhagavatula, and Yejin Choi. The unlocking spell on base llms: Rethinking alignment via in-context learning. In ICLR, 2024.
[59] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18(187):1–30, 2018.
[60] Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W Mahoney, and Kurt Keutzer. Squeezellm: Dense-and-sparse quantization. arXiv preprint arXiv:2306.07629, 2023.
[61] Xinyin Ma, Gongfan Fang, and Xinchao Wang. Llm-pruner: On the structural pruning of large language models. NeurIPS, 36:21702–21720, 2023.
[62] Shangqian Gao, Chi-Heng Lin, Ting Hua, Zheng Tang, Yilin Shen, Hongxia Jin, and Yen-Chang Hsu. Disp-llm: Dimension-independent structural pruning for large language models. NeurIPS, 37:72219–72244, 2024.
[63] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[64] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
[65] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In ICML, pages 5156–5165. PMLR, 2020.
[66] Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David Woodruff, and Amir Zandieh. Hyperattention: Long-context attention in near-linear time. In ICLR, 2024.
[67] Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, and Yuxiong He. Deepspeed-moe: Advancing mixture-of-experts inference and training to power next-generation ai scale. In ICML, pages 18332–18346. PMLR, 2022.
[68] Shiyi Cao, Shu Liu, Tyler Griggs, Peter Schafhalter, Xiaoxuan Liu, Ying Sheng, Joseph E Gonzalez, Matei Zaharia, and Ion Stoica. Moe-lightning: High-throughput moe inference on memory-constrained gpus. In ASPLOS, pages 715–730, 2025.
[69] Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W Mahoney, Sophia Shao, Kurt Keutzer, and Amir Gholami. Kvquant: Towards 10 million context length llm inference with kv cache quantization. NeurIPS, 37:1270–1303, 2024.
[70] Zichang Liu, Aditya Desai, Fangshuo Liao, Weitao Wang, Victor Xie, Zhaozhuo Xu, Anastasios Kyrillidis, and Anshumali Shrivastava. Scissorhands: Exploiting the persistence of importance hypothesis for llm kv cache compression at test time. In NeurIPS, pages 52342–52364, 2023.
[71] Hyungjun Oh, Kihong Kim, Jaemin Kim, Sungkyun Kim, Junyeol Lee, Du-seong Chang, and Jiwon Seo. Exegpt: Constraint-aware resource scheduling for llm inference. In ASPLOS, pages 369–384, 2024.
[72] Yixuan Mei, Yonghao Zhuang, Xupeng Miao, Juncheng Yang, Zhihao Jia, and Rashmi Vinayak. Helix: Serving large language models over heterogeneous gpus and network via max-flow. In ASPLOS, pages 586–602, 2025.
[73] Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. Blockwise parallel decoding for deep autoregressive models. In NeurIPS, 2018.
[74] Xin Sun, Tao Ge, Furu Wei, and Houfeng Wang. Instantaneous grammatical error correction with shallow aggressive decoding. arXiv preprint arXiv:2106.04970, 2021.
[75] Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, and Zhifang Sui. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. arXiv preprint arXiv:2401.07851, 2024.
[76] Ming Yin, Minshuo
Chen, Kaixuan Huang, and Mengdi Wang. A theoretical perspective for speculative decoding algorithm. NeurIPS, 37:128082–128117, 2024.
[77] Sen Yang, Shujian Huang, Xinyu Dai, and Jiajun Chen. Multi-candidate speculative decoding. arXiv preprint arXiv:2401.06706, 2024.
[78] Zhihao Zhang, Alan Zhu, Lijie Yang, Yihua Xu, Lanting Li, Phitchaya Mangpo Phothilimthana, and Zhihao Jia. Accelerating retrieval-augmented language model serving with speculation. ICLR, 2024.
[79] Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-François Kagy, and Rishabh Agarwal. Distillspec: Improving speculative decoding via knowledge distillation. arXiv preprint arXiv:2310.08461, 2023.
[80] Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, et al. Gemma 3 technical report. arXiv preprint arXiv:2503.19786, 2025.
[81] Zhaorui Yang, Tianyu Pang, Haozhe Feng, Han Wang, Wei Chen, Minfeng Zhu, and Qian Liu. Self-distillation bridges distribution gap in language model fine-tuning. In ACL, pages 1028–1043, 2024.
[82] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. Lima: Less is more for alignment. In NeurIPS, pages 55006–55021, 2023.
[83] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Appendix
A Detailed Proof of Theorem 3 17
B Implementation Details 20
B.1 Pseudo-Code for L-MTP 20
B.2 Base LLMs 21
B.3 Training Datasets
21
B.4 Evaluation Benchmarks 21
B.5 Head Architecture 22
B.6 Data Curation
22
C Decoding Strategy 22
C.1 Forward Decoding 22
C.2 Tree Attention 23
D Additional Experimental Results 23
E Broader Impact Statement 23
F Reproducibility 24
G Limitations 25

A Detailed Proof of Theorem 3

For LLMs with n output heads that predict multiple tokens at once, the expectation of the accepted length can be represented as

E[L] = \sum_{m=1}^{n} \prod_{i=1}^{m} E_{x_{\le t} \sim D}\big[p(x_{t+i} \mid x_{\le t})\big]   (Vanilla). (8)

In this case, we only utilize the last hidden state, resulting in the n tokens x_{t+1}, ..., x_{t+n}. Without changing the original probabilities, we propose a decoding strategy that looks backward and yields the reorganized expectation

E[L]_b = \sum_{m=1}^{k(n-1)+1} \prod_{i=1}^{m} E_{x_{\le t} \sim D}\big[p(x_{t+i} \mid x_{\le t-(i-1) \bmod k})\big]   (L-MTP). (9)

Proof of Theorem 3.

Proof. First,

E[L]_b = \sum_{m=1}^{k(n-1)+1} \prod_{i=1}^{m} E_{x_{\le t} \sim D}\big[p(x_{t+i} \mid x_{\le t-(i-1) \bmod k})\big] (10)
= \sum_{m=1}^{k(n-1)+1} \prod_{i=1}^{m} f\big(i + (i-1) \bmod k\big) (11)
= \sum_{m=1}^{n} \prod_{i=1}^{m} f\big(i + (i-1) \bmod k\big) + \sum_{m=n+1}^{k(n-1)+1} \prod_{i=1}^{m} f\big(i + (i-1) \bmod k\big). (12)

Then,

\Delta_b := E[L]_b - E[L] = \sum_{m=1}^{k(n-1)+1} \prod_{i=1}^{m} p(x_{t+i} \mid x_{\le t-(i-1) \bmod k}) - \sum_{m=1}^{n} \prod_{i=1}^{m} p(x_{t+i} \mid x_{\le t}) (13)
= \underbrace{\sum_{m=1}^{n} \Big[\prod_{i=1}^{m} f\big(i + (i-1) \bmod k\big) - \prod_{i=1}^{m} f(i)\Big]}_{\Delta_b^1} + \underbrace{\sum_{m=n+1}^{k(n-1)+1} \prod_{i=1}^{m} f\big(i + (i-1) \bmod k\big)}_{\Delta_b^2}. (14)

Here \Delta_b is the difference between the two expectations, expressed as sums involving products of probabilities, with a function f(i) that decreases as i increases. Besides, k \le n, which means the stride k is at most the number of heads n. Since f(i) is monotonically decreasing, f(i) \ge f(j) for i < j. Formally,

f\big(i + (i-1) \bmod k\big) \le f(i),  with equality only when i \bmod k \equiv 1. (15)

Therefore,

\Delta_b = \Delta_b^1 + \Delta_b^2,  with \Delta_b^1 \le 0 and \Delta_b^2 > 0. (16)

For \Delta_b > 0, the positive \Delta_b^2 must outweigh the negative \Delta_b^1. The decay rate, controlled by \gamma, and the number of terms, controlled by n, will determine this balance.
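This balance can be checked numerically. The sketch below is illustrative (not from the paper): it assumes the exponential decay f(i) = exp(−γ·i) used in the next step of the proof, fixes the stride k = 2, and evaluates Δ_b = Δ_b^1 + Δ_b^2 from Eq. (14) directly. For a small attenuation coefficient γ the leap reorganization gains expected length (Δ_b > 0), while for a large γ the extra terms in Δ_b^2 decay too quickly and Δ_b turns negative.

```python
import math

def f(i, gamma):
    # assumed exponentially decaying acceptance probability with position
    return math.exp(-gamma * i)

def delta_b(gamma, n, k=2):
    """Evaluate Delta_b = Delta_b^1 + Delta_b^2 of Eq. (14) for f(i) = exp(-gamma*i)."""
    # Delta_b^1: loss over m = 1..n from the leap-shifted conditioning
    d1 = 0.0
    for m in range(1, n + 1):
        shifted = math.prod(f(i + (i - 1) % k, gamma) for i in range(1, m + 1))
        vanilla = math.prod(f(i, gamma) for i in range(1, m + 1))
        d1 += shifted - vanilla
    # Delta_b^2: extra positive terms for m = n+1 .. k(n-1)+1
    d2 = sum(
        math.prod(f(i + (i - 1) % k, gamma) for i in range(1, m + 1))
        for m in range(n + 1, k * (n - 1) + 2)
    )
    return d1 + d2

print(delta_b(0.01, 4))  # slow decay: positive, the leap gains expected length
print(delta_b(5.0, 4))   # fast decay: negative, the extra terms vanish too quickly
```

The sign flip as γ grows is exactly the trade-off the proof resolves with the condition γ = O(1/n²).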
To resolve the conditions for \Delta_b > 0, we assume f(i) decays exponentially, i.e., f(i) = 1/\exp[\gamma (i-1)], where \gamma > 0 is the attenuation coefficient. Then

\Delta_b^1 = \sum_{m=1}^{n} \Big[\prod_{i=1}^{m} \frac{1}{\exp[\gamma (i + (i-1) \bmod k)]} - \prod_{i=1}^{m} \frac{1}{\exp[\gamma i]}\Big], (17)

\Delta_b^2 = \sum_{m=n+1}^{k(n-1)+1} \prod_{i=1}^{m} \frac{1}{\exp[\gamma (i + (i-1) \bmod k)]}, (18)

so that

\Delta_b = \sum_{m=1}^{n} \Big[\prod_{i=1}^{m} \frac{1}{\exp[\gamma (i + (i-1) \bmod k)]} - \prod_{i=1}^{m} \frac{1}{\exp[\gamma i]}\Big] + \sum_{m=n+1}^{k(n-1)+1} \prod_{i=1}^{m} \frac{1}{\exp[\gamma (i + (i-1) \bmod k)]} (19)
= \sum_{m=1}^{n} \Big[\exp\Big[-\gamma \sum_{i=1}^{m} \big(i + (i-1) \bmod k\big)\Big] - \exp\Big[-\gamma \sum_{i=1}^{m} i\Big]\Big] + \sum_{m=n+1}^{k(n-1)+1} \exp\Big[-\gamma \sum_{i=1}^{m} \big(i + (i-1) \bmod k\big)\Big]. (20)

We resolve it in the case k = 2, where \sum_{i=1}^{m} (i-1) \bmod 2 = \lfloor m/2 \rfloor. Specifically,

\Delta_b = \sum_{m=1}^{n} \Big(\exp\Big[-\gamma \sum_{i=1}^{m} \big(i + (i-1) \bmod 2\big)\Big] - \exp\Big[-\gamma \sum_{i=1}^{m} i\Big]\Big) + \sum_{m=n+1}^{2n-1} \exp\Big[-\gamma \sum_{i=1}^{m} \big(i + (i-1) \bmod 2\big)\Big] (21)
= \sum_{m=1}^{n} \Big(\exp\Big[-\gamma \Big(\frac{m(m+1)}{2} + \Big\lfloor \frac{m}{2} \Big\rfloor\Big)\Big] - \exp\Big[-\gamma \frac{m(m+1)}{2}\Big]\Big) + \sum_{m=n+1}^{2n-1} \exp\Big[-\gamma \Big(\frac{m(m+1)}{2} + \Big\lfloor \frac{m}{2} \Big\rfloor\Big)\Big] (22)
= \sum_{m=1}^{n} \exp\Big[-\gamma \frac{m(m+1)}{2}\Big] \Big(\exp\Big[-\gamma \Big\lfloor \frac{m}{2} \Big\rfloor\Big] - 1\Big) + \sum_{m=n+1}^{2n-1} \exp\Big[-\gamma \Big(\frac{m(m+1)}{2} + \Big\lfloor \frac{m}{2} \Big\rfloor\Big)\Big]. (23)

Consider the upper bound of |\Delta_b^1|:

|\Delta_b^1| = \sum_{m=1}^{n} \exp\Big[-\gamma \frac{m(m+1)}{2}\Big] \Big(1 - \exp\Big[-\gamma \Big\lfloor \frac{m}{2} \Big\rfloor\Big]\Big) (24)
\le \sum_{m=1}^{n} \gamma \Big\lfloor \frac{m}{2} \Big\rfloor \exp\Big[-\gamma \frac{m(m+1)}{2}\Big]   (1 - e^{-x} \le x, \forall x \ge 0) (25)
\le \frac{\gamma}{2} \sum_{m=1}^{n} m \exp\Big[-\gamma \frac{m(m+1)}{2}\Big]   (\lfloor m/2 \rfloor \le m/2) (26)
\le \frac{\gamma}{2} \int_{0}^{n+1} x \exp\Big[-\frac{\gamma x^2}{2}\Big] dx (27)
= \frac{\gamma}{2} \cdot \frac{1}{\gamma} \int_{0}^{\sqrt{\gamma}(n+1)} y \exp\Big[-\frac{y^2}{2}\Big] dy   (let y = \sqrt{\gamma} x) (28)
= \frac{\gamma}{2} \cdot \frac{1}{\gamma} \Big(1 - \exp\Big[-\frac{\gamma (n+1)^2}{2}\Big]\Big)   (\int_0^a y e^{-y^2/2} dy = 1 - e^{-a^2/2}) (29)
= \frac{1}{2} \Big(1 - \exp\Big[-\frac{\gamma (n+1)^2}{2}\Big]\Big). (30)

Afterward, consider the lower bound of \Delta_b^2:

\Delta_b^2 = \sum_{m=n+1}^{2n-1} \exp\Big[-\gamma \Big(\frac{m(m+1)}{2} + \Big\lfloor \frac{m}{2} \Big\rfloor\Big)\Big] (31)
\ge \sum_{m=n+1}^{2n-1} \exp\Big[-\frac{\gamma m^2}{2}\Big] (32)
\ge \int_{n+1}^{2n} \exp\Big[-\frac{\gamma x^2}{2}\Big] dx (33)
\ge \frac{1}{\sqrt{\gamma}} \int_{\sqrt{\gamma}(n+1)}^{2\sqrt{\gamma} n} \exp\Big[-\frac{y^2}{2}\Big] dy   (y = \sqrt{\gamma} x) (34)
\ge \frac{1}{\sqrt{\gamma}} \Big(\frac{\exp[-2\gamma n^2]}{2\sqrt{\gamma}\, n} - \frac{\exp[-\gamma (n+1)^2/2]}{\sqrt{\gamma}\,(n+1)}\Big)   \Big(\int_a^b e^{-y^2/2} dy \ge \frac{e^{-b^2/2}}{b} - \frac{e^{-a^2/2}}{a}, for b > a > 0\Big) (35)
\gtrsim \frac{1}{\sqrt{\gamma}} \cdot \frac{\exp[-2\gamma n^2]}{2n}. (36)

Substituting the bounds, \Delta_b^2 > |\Delta_b^1| is implied by

\frac{1}{\sqrt{\gamma}} \cdot \frac{\exp[-2\gamma n^2]}{2n} > \frac{1}{2}\Big(1 - \exp\Big[-\frac{\gamma (n+1)^2}{2}\Big]\Big).

Introducing \beta = \gamma n^2, this becomes

\frac{\exp[-2\beta]}{n \sqrt{\beta}} > \frac{\beta}{2} \;\Rightarrow\; \exp[-2\beta] > \frac{n \beta^{3/2}}{2} \;\Rightarrow\; \frac{\exp[-2\beta]}{\sqrt{\beta}} > \beta \;\Rightarrow\; \exp[-2\beta] > \beta^{3/2} \;\Rightarrow\; 1 > \beta^{3/2} e^{2\beta}.

For the inequality to hold, it suffices that \beta = O(1), which implies \gamma = O(1/n^2).

B Implementation Details

B.1 Pseudo-Code for L-MTP

We provide the training and inference pseudo-code for L-MTP. For training, we only need to add a leap stride k that reassigns the prediction positions, turning MTP into L-MTP.

Training of L-MTP

# multi-head forward computing >>>
for i in range(self.n_head):
    logits.append(self.heads[i](hidden_states))
# multi-head forward computing <<<
# ...
# Leap Multi-token Prediction >>>
# if k == 1: L-MTP = MTP
for i, logits_ in enumerate(logits):
    h_logits = logits_[:, :-(k * (i + 1))].contiguous()
    h_labels = labels[..., k * (i + 1):].contiguous()
    loss_i = self.loss_fct(logits=h_logits, labels=h_labels,
                           vocab_size=self.config.vocab_size)
    loss += loss_i
# Leap Multi-token Prediction <<<
# ...

For inference, we add a cache that stores the previous hidden state. Decoding then uses both the previous and current hidden states to yield k(n-1) additional tokens.

Inference of L-MTP

# multi-head forward computing >>>
for i in range(self.n_head):
    logits.append(self.heads[i](hidden_states))
# multi-head forward computing <<<
# ...
# get the previous hidden state
if self.model.past_hidden_states is None:
    self.model.past_hidden_states = hidden_states[:, -2].unsqueeze(1)
past_hidden_states = torch.stack(
    [self.model.past_hidden_states, hidden_states[:, -1].unsqueeze(1)], dim=0)
logits_ = self.heads(past_hidden_states)
lmtp_logits = logits_.flatten(start_dim=0, end_dim=1)
# update the past hidden states
self.model.past_hidden_states = past_hidden_states[-1]
# ...

B.2 Base LLMs

The experiments leverage a diverse set of base large language models (LLMs) to ensure a comprehensive evaluation across varying architectures and parameter scales. The selected models are: Qwen 2.5 (3B and 7B) [20], developed by Alibaba Cloud; Llama 3.2 (3B) and Llama 3.1 (8B) [2], developed by Meta AI; and Gemma 3 (4B and 12B) [80], developed by Google.

B.3 Training Datasets

Math [41]: This dataset comprises a curated collection of mathematical problems and solutions, spanning topics such as algebra, calculus, geometry, and discrete mathematics. It is designed to enhance the reasoning and problem-solving capabilities of large language models, particularly in numerical and symbolic computation tasks. We utilize the training split with 7.5K
problems.

Evol-Instruct-Code [42, 43]: This dataset is an evolved version of instruction-based code generation data, built upon iterative refinement and augmentation techniques. It contains a wide range of programming tasks, solutions, and explanatory instructions across multiple languages (e.g., Python, Java, C++). The dataset is curated following the code generation instruction process described in WizardCoder [42]. It is based on the Code Alpaca 20k dataset [43] and evolves each instruction through a randomly chosen evolution prompt; see its public repository for more details. As a result, the dataset contains 80K examples, merging the seed dataset with three rounds of evolution.

Alpaca-GPT4 [44]: This dataset is a collection of English instruction-following data generated by GPT-4 using Alpaca prompts, specifically designed for fine-tuning LLMs. It is a derivative of the original Alpaca dataset: it uses the same prompts, but the completions are generated by GPT-4 instead of text-davinci-003, resulting in more detailed and higher-quality responses. The dataset consists of 52K unique instruction-following examples.

B.4 Evaluation Benchmarks

MATH500 [45] is a subset of the MATH dataset, comprising 500 challenging mathematical problems designed to test advanced mathematical reasoning and problem-solving skills. It includes problems from various domains such as algebra, calculus, geometry, and number theory, primarily at high school and early undergraduate levels.

GSM8K [46] is a dataset of grade-school-level math word problems. It focuses on elementary arithmetic, basic algebra, and logical reasoning, requiring models to understand natural language descriptions and perform multi-step calculations. We utilize its test split, which contains 1,319 examples in total.
MBPP & MBPP+ [47, 48]: MBPP is a dataset of 974 Python programming problems designed to evaluate code generation and problem-solving abilities. Tasks range from simple functions to moderately complex algorithms, requiring correct implementation in Python. MBPP+ adds more unique test cases (30×) to the original MBPP [48].

HumanEval & HumanEval+ [49, 48]: HumanEval is a dataset of 164 hand-crafted Python programming problems, focusing on evaluating the functional correctness of code generation. Each problem includes a function signature, description, and test cases to verify the solution. HumanEval+ adds more unique test cases (80×) and fixes incorrect ground-truth solutions in HumanEval [48].

MMLU [50] is a comprehensive benchmark consisting of 57 tasks covering topics from STEM (science, technology, engineering, and mathematics), humanities, social sciences, and professional fields. The tasks are multiple-choice questions at high school, college, and professional levels, designed to evaluate the model's broad knowledge and reasoning capabilities. Performance is measured by accuracy across all tasks.

IFEval [51] is a dataset designed to assess a model's ability to follow explicit instructions in natural language. It includes a variety of tasks where models must generate responses that adhere strictly to given guidelines, such as formatting, content constraints, or specific reasoning steps.

Footnotes: 3) https://github.com/hendrycks/math; 4) https://huggingface.co/datasets/ise-uiuc/Magicoder-Evol-Instruct-110K; 5) https://github.com/nickrosh/evol-teacher; 6) https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM

B.5 Head Architecture

We describe the head architecture of Medusa [36], which is also adopted in our implementation. Specifically, given the hidden state z at the last layer of the LLM, the head will first
transform it via z' = z + SiLU(Wz + b), where W ∈ R^{d×d}, b ∈ R^{d}, d is the dimension of the hidden state, and SiLU is the Sigmoid Linear Unit function, SiLU(x) = x · σ(x). After that, the transformed hidden state is mapped to logits whose output dimension is the vocabulary size. This process can be formulated as W_head z'. Notably, W_head is initialized with the weight of the original head of the backbone LLM, and W is initialized with zeros, so that at initialization the added head reproduces the original head's outputs.

B.6 Data Curation

Self-distillation [81]. At the head warm-up stage, the main goal is to align the additional heads with the original head to improve acceptance rates, as also suggested in [36]. Therefore, we employ a self-distillation strategy for the different backbone LLMs. We use vLLM (https://docs.vllm.ai) to efficiently generate a response for every data point. The generated dataset is stored and then serves as the training data for warm-up training.

Downsampling [82, 58]. At the continued-training stage, we downsample the dataset randomly, taking 4,000 examples from each of the code and math datasets and 2,000 examples from the general dataset, for a total of 10K examples used to continue training the model. We keep the curated dataset fixed for a fair comparison in our experiments.

C Decoding Strategy

C.1 Forward Decoding

We also consider another trivial alternative that looks forward, denoted F-MTP. F-MTP predicts the tokens {x_{t+k(i-1)+2}}_{i∈[n]} given x_{≤t+1}. In this case, we have

{ p(x_{t+i} | x_{<t+(i-1) \bmod k}) : i ∈ {0, 1, ..., kn} }, (37)

where the token sequence is sampled by looking forward (+) k-1 steps. Forward decoding prioritizes early tokens, so the tokens {x_{t+k}} are all predicted by the original LLM head. This decoding strategy serves as our baseline for the efficiency analysis.

C.2 Tree Attention

Tree construction.
Following the token-tree verification of [38, 36], we merge the candidate tokens generated by the multiple LLM heads to construct the tree. We employ a top-down greedy search over a layered graph to find the node sequences (paths) with maximum cumulative expectation, where the expectation is computed from the estimated accuracy of each head. The search starts from a root node and iteratively expands paths by selecting the neighbor with the highest expectation, computed as the product of node accuracies along the path. Neighbors are generated either by moving to the next node in the same layer or by extending to the next layer, up to a specified maximum depth and child count per layer. We cache computed expectations to avoid redundant calculations and return a list of selected node sequences. We illustrate a tree structure example in Figure 10; the token tree provides multiple token sequences (paths) for the subsequent verification, thus improving the token acceptance rate.
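The greedy expansion described above can be sketched as follows. This is an illustrative best-first variant, not the authors' code: head_topk_probs and num_nodes are hypothetical inputs giving the estimated acceptance probability of each ranked candidate per head, and a path's score is the product of the probabilities along it.

```python
import heapq

def build_tree(head_topk_probs, num_nodes):
    """Greedy (best-first) token-tree construction sketch.

    head_topk_probs[d][i] -- assumed estimate of the acceptance probability
    of the i-th ranked candidate from head d.  A path (i0, i1, ...) is scored
    by the product of its node probabilities; the num_nodes highest-scoring
    paths are selected.
    """
    heap = [(-1.0, ())]            # (negated cumulative expectation, path)
    selected, seen = [], {()}
    while heap and len(selected) < num_nodes:
        neg_score, path = heapq.heappop(heap)
        if path:                   # skip the empty root path
            selected.append(path)
        depth = len(path)
        if depth < len(head_topk_probs):
            for i, p in enumerate(head_topk_probs[depth]):
                child = path + (i,)
                if child not in seen:
                    seen.add(child)
                    # child score = parent score * p, kept negated for the min-heap
                    heapq.heappush(heap, (neg_score * p, child))
    return selected

# Two heads with two ranked candidates each: the three best paths by
# cumulative expectation are (0,), (1,), and (0, 0).
print(build_tree([[0.8, 0.5], [0.6, 0.3]], 3))
```

Since each probability is at most 1, a child's score never exceeds its parent's, so popping from the heap yields paths in non-increasing order of expectation.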
Figure 10: Token tree constructed according to the accuracy of heads. We use 3 heads for the Vicuna 7B model. The head accuracy is estimated when employing L-MTP decoding.

Tree decoding. Upon the pre-defined token tree structure, we employ tree decoding to process the generated multiple predictions. First, we initialize the tree attention and indices given the tree paths (cf. Figure 4). Once we generate the logits for multiple tokens, we select the top-k candidates, which are assembled as input for the target model. By utilizing the tree attention, the target model yields the prediction of the original head for verification in parallel, finally accepting the candidates and starting the next iteration.

D Additional Experimental Results

Training procedure. We present the training loss trends in Figure 11 for head warm-up and Figure 12 for full model tuning; the losses steadily decrease and converge to stability.

E Broader Impact Statement

Leap multi-token prediction (L-MTP) redefines the traditional autoregressive paradigm by skipping intermediate tokens and predicting non-adjacent ones, mirroring human-like reasoning. This leap-based strategy broadens the contextual window, enhances inference efficiency, and reduces computational costs, making LLMs more suitable for complex

Figure 11: The illustration of the warm-up procedure of adapting multiple output heads to LLMs. The curves showcase the loss changes along with the head warm-up training for MTP and L-MTP. (Panels: Gemma3-4B, Llama3.2-3B, Qwen2.5-3B, Gemma3-12B, Llama3.1-8B, Qwen2.5-7B.)
Figure 12: Full model fine-tuning with respect to different prediction paradigms. The curves showcase the loss changes along with the full model tuning for NTP, MTP, and L-MTP. (Panels: Gemma3-4B, Llama3.2-3B, Qwen2.5-3B, Gemma3-12B, Llama3.1-8B, Qwen2.5-7B.)

decision-making tasks. Environmentally, L-MTP's efficient decoding lowers energy consumption and supports greener AI deployments. Its design also expands access to high-performance LLMs, reducing barriers for resource-constrained developers. Furthermore, L-MTP introduces new possibilities for non-sequential reasoning, paving the way for more efficient and scalable language models.

F Reproducibility

We provide implementation details, including illustrative algorithm descriptions and pseudo-code. The source code will be publicly released for reproducibility.

G Limitations

Modern large language models are rapidly scaling up, with recent models reaching tens or even hundreds of billions of parameters (e.g., DeepSeek-R1-70B/671B [83] and Llama-3.1-405B [2]). Our experiments are conducted on models of up to 12B parameters due to computational constraints. Despite this limitation, the results effectively validate our core ideas. In future work, we plan to extend our method to larger models to further assess its scalability and effectiveness at greater capacity.
arXiv:2505.17508v1 [cs.LG] 23 May 2025

On the Design of KL-Regularized Policy Gradient Algorithms for LLM Reasoning

Yifan Zhang*1, Yifeng Liu*1, Huizhuo Yuan1, Yang Yuan2,3, Quanquan Gu1†, Andrew C Yao2,3†
1University of California, Los Angeles  2IIIS, Tsinghua University  3Shanghai Qi Zhi Institute

Abstract

Policy gradient algorithms have been successfully applied to enhance the reasoning capabilities of large language models (LLMs). Despite the widespread use of Kullback-Leibler (KL) regularization in policy gradient algorithms to stabilize training, how different KL divergence formulations can be estimated and integrated into surrogate loss functions for online reinforcement learning (RL) remains a nuanced and systematically explorable design space. In this paper, we propose Regularized Policy Gradient (RPG), a systematic framework for deriving and analyzing KL-regularized policy gradient methods in the online RL setting. We derive policy gradients and corresponding surrogate loss functions for objectives regularized by both forward and reverse KL divergences, considering both normalized and unnormalized policy distributions. Furthermore, we present derivations for fully differentiable loss functions as well as REINFORCE-style gradient estimators, accommodating diverse algorithmic needs. We conduct extensive experiments on RL for LLM reasoning using these methods, showing improved or competitive results in terms of training stability and performance compared to strong baselines such as GRPO, REINFORCE++, and DAPO. The code is available at https://github.com/complex-reasoning/RPG.

1 Introduction

Reinforcement learning (RL), particularly policy gradient (PG) methods, provides a powerful framework for solving sequential decision-making problems in complex environments.
These methods have been successfully applied in diverse domains, ranging from robotics to game playing, and have recently become instrumental in fine-tuning large language models (LLMs) to align with human preferences and instructions (Ouyang et al., 2022) and in enhancing the reasoning capabilities of LLMs (Shao et al., 2024; Guo et al., 2025). Classical PG algorithms like REINFORCE (Williams, 1992) optimize policies directly but often suffer from high gradient variance. Advanced methods like Proximal Policy Optimization (PPO) (Schulman et al., 2017) improve stability and sample efficiency, enabling large-scale applications, often by operating in an off-policy manner and employing techniques like training critic models for the estimation of value functions.

*Equal contribution; †Corresponding authors.

Figure 1: Overview of the iterative Regularized Policy Gradient (RPG) framework proposed in this work. At each iteration t, the central RPG Core Engine processes inputs: the current policy π_θ^{(t)}, a reference policy π_old^{(t)}, and associated rewards R(x). The engine's operation encompasses four main steps: (1) constructing the KL-regularized objective J(θ^{(t)}) = E_{π_θ^{(t)}}[R] − β · KL, which combines the expected reward with a KL divergence term; (2) deriving the off-policy policy gradient ∇_{θ^{(t)}} J(θ^{(t)}); (3) formulating a corresponding surrogate loss function L(θ^{(t)}); and (4) optimizing the policy parameters to yield an updated policy π_θ^{(t+1)}, aimed at enhancing LLM reasoning capabilities. The specific behavior of the RPG Core Engine is configured by three key design choices: (i) the KL divergence type (Forward KL(π_old ∥ π_θ) or Reverse KL(π_θ ∥ π_old)); (ii) the KL form (normalized or unnormalized, e.g., using UKL / k3 estimators); and (iii) the loss estimator type (fully differentiable or REINFORCE-style with stop-gradient). The framework operates iteratively, with the updated policy π_θ^{(t+1)} from one iteration informing the inputs for the next, including the update of the reference policy π_old^{(t+1)}, to facilitate continuous learning and performance improvement.

A crucial technique for stabilizing policy optimization, especially when deviating from strictly on-policy updates or aiming to control policy complexity, is regularization. Kullback-Leibler (KL) divergence is a commonly used regularizer, penalizing the deviation of the learned policy π_θ from a reference policy π_ref (e.g., the policy from the previous iteration π_{θ_old}, or a fixed prior policy π_SFT). KL regularization helps prevent destructive policy updates, encourages exploration around known good policies, and can prevent catastrophic forgetting or overly confident outputs (Ouyang et al., 2022). Despite the widespread use of KL regularization in methods such as PPO (often implicitly through reward penalties) and in explicit formulations like GRPO (Shao et al., 2024), there exists considerable variety in how the KL divergence is formulated and estimated. Different choices include forward KL and reverse KL, handling potentially unnormalized distributions (Minka et al., 2005) (leading to unnormalized KL (UKL) and unnormalized reverse KL (URKL) formulations), and the use of various estimators like the k2 and k3 estimators (Schulman, 2020), designed to potentially reduce variance or offer different properties compared to the standard log-ratio (k1) estimator.
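As a concrete illustration (not taken from the paper), the estimators can be compared on a toy example. With samples x ~ q and ratio r = p(x)/q(x), the estimators for KL(q ∥ p) are k1 = −log r, k2 = (log r)²/2, and k3 = (r − 1) − log r; k1 and k3 are unbiased, while k3 is additionally non-negative and typically much lower variance than k1. The Gaussians below are hypothetical choices for the demonstration.

```python
import numpy as np

# Compare the k1, k2, k3 KL estimators (Schulman, 2020) on two unit-variance
# Gaussians q = N(0, 1) and p = N(0.5, 1), for which KL(q || p) = 0.5**2 / 2 = 0.125.
rng = np.random.default_rng(0)

def logpdf(x, mu):
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2.0 * np.pi)

x = rng.normal(0.0, 1.0, size=1_000_000)   # samples from q
logr = logpdf(x, 0.5) - logpdf(x, 0.0)     # log r = log p(x) - log q(x)
r = np.exp(logr)

k1 = -logr              # unbiased, high variance, can be negative
k2 = 0.5 * logr ** 2    # lower variance, but biased
k3 = (r - 1.0) - logr   # unbiased and always non-negative

print(k1.mean(), k2.mean(), k3.mean())  # k1, k3 near 0.125; k2 slightly biased
print(k1.var(), k3.var())               # k3 has much lower variance than k1
```

The variance gap is why k3-style estimates are attractive for the unnormalized KL formulations discussed above.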
Furthermore, the interplay between the choice of KL formulation, the policy optimization setting (on-policy vs. off-policy), and the derivation of appropriate surrogate loss functions (fully differentiable vs. REINFORCE-style gradient estimators) can lead to subtle differences. This paper provides a systematic derivation and analysis of KL-regularized policy gradient methods. Our main contributions are:
1. We derive policy gradients and corresponding surrogate loss functions for objectives regularized by forward and reverse KL divergences, considering both standard normalized (KL) and unnormalized (UKL) forms.
2. Our methods operate within an iterative training framework where the reference model π_ref for KL regularization is the policy from the last iteration, π_old, providing a dynamic and adaptive regularization target.
3. We systematically provide derivations for fully differentiable loss functions (offering connections to variational inference) and REINFORCE-style gradient estimators (employing the stop-gradient operator). These are developed for the online setting, using off-policy gradient estimation via importance sampling from a prior policy π_old. We explicitly detail the connection between the k3 estimator and our unnormalized KL (UKL) framework.
4. Based on our derivations, we identify a theoretical inconsistency in the GRPO objective's KL term estimation and propose a corrected gradient estimator and corresponding loss function that properly incorporates importance weighting. We also analyze the KL handling in REINFORCE++ (Hu, 2025), examining its non-standard KL penalty term and its implications for off-policy regularization.
5. We present extensive experimental results on RL for LLM reasoning, demonstrating that our proposed methods achieve stable training dynamics and competitive or improved performance compared to strong baselines such as
GRPO, REINFORCE++, and DAPO (Yu et al., 2025).

2 Background

Policy gradient (PG) methods are a cornerstone of modern reinforcement learning (RL), optimizing parameterized policies π_θ by estimating the gradient of an expected objective function J(θ) with respect to the policy parameters θ. Typically, J(θ) represents the expected cumulative discounted reward over trajectories τ = (s_0, a_0, r_0, s_1, ..., s_T, a_T, r_T) generated by the policy: J(θ) = E_{τ∼π_θ}[G(τ)], where G(τ) = Σ_{t=0}^T γ^t r_t is the trajectory return (with discount factor γ), and the expectation is taken over trajectories sampled according to the policy π_θ(a|s) and the environment dynamics p(s'|s, a). The Generalized Policy Gradient Theorem (GPGT) provides a foundation for deriving these gradients (see Appendix G for the proof).

Theorem 2.1 (Generalized Policy Gradient Theorem). Let π_θ(x) be a probability density or mass function parameterized by θ, representing the probability of sampling item x. Let f(x, θ) be a scalar-valued function associated with x, potentially depending on θ. Under suitable regularity conditions, the gradient of the expectation E_{x∼π_θ}[f(x, θ)] with respect to θ is:

∇_θ E_{x∼π_θ}[f(x, θ)] = E_{x∼π_θ}[f(x, θ) ∇_θ log π_θ(x) + ∇_θ f(x, θ)].   (2.1)

The term E[f ∇ log π] reflects how changes in θ affect the probability of sampling x, while E[∇f] reflects how changes in θ directly affect the function value f. The classic REINFORCE algorithm (Williams, 1992) applies the GPGT to the standard RL objective J(θ) = E_{τ∼π_θ}[G(τ)]. In this case, f(τ, θ) = G(τ), the total trajectory return, which does not depend directly on θ (i.e., ∇_θ G(τ) = 0). The theorem simplifies, and the gradient can be expressed using per-timestep contributions (Sutton et al., 1998):

∇_θ J(θ) = E_{τ∼π_θ}[ Σ_{t=0}^T G_t ∇_θ log π_θ(a_t|s_t) ],

where G_t = Σ_{k=t}^T γ^{k−t} r_k is the return-to-go from timestep t. Due to space limits, we defer the detailed introduction of REINFORCE to Appendix A.1.
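Theorem 2.1 can be checked numerically on a toy example. The sketch below (all numbers hypothetical) takes the score-function case, where f does not depend on θ, and verifies for a small softmax policy that the analytic identity ∇_θ E_{x∼π_θ}[f(x)] = E_{x∼π_θ}[f(x) ∇_θ log π_θ(x)] matches a central finite-difference approximation.

```python
import numpy as np

# Finite-difference check of the score-function identity (Theorem 2.1 with
# f independent of theta) for a softmax policy over 3 actions.
# All numbers are hypothetical toy values.
f = np.array([1.0, -0.5, 2.0])      # fixed per-action "returns"
theta = np.array([0.2, -0.1, 0.4])  # softmax logits

def pi(th):
    z = np.exp(th - th.max())       # numerically stable softmax
    return z / z.sum()

def J(th):
    return float(pi(th) @ f)        # E_{x ~ pi_theta}[f(x)]

p = pi(theta)
# For softmax logits, grad_theta log pi_k = e_k - pi, so the score-function
# gradient E[f * grad log pi] reduces to p*f - (p @ f) * p componentwise.
grad_score = p * f - (p @ f) * p

# Compare against central finite differences of J.
eps = 1e-6
grad_fd = np.array([
    (J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
assert np.allclose(grad_score, grad_fd, atol=1e-6)
```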
2.1 KL Regularization in Policy Gradients

A common technique to stabilize policy optimization, especially in off-policy settings or when fine-tuning large models, is regularization. The Kullback-Leibler (KL) divergence is frequently used to penalize the deviation of the learned policy π_θ from a reference policy π_ref (which could be π_θ_old, an initial supervised fine-tuned model, or another prior). KL(P∥Q) ≥ 0, with equality iff P = Q almost everywhere. It is asymmetric (i.e., KL(P∥Q) ≠ KL(Q∥P)). Minimizing the forward KL, KL(π_ref∥π_θ), encourages π_θ to cover the support of π_ref (mass-covering), while minimizing the reverse KL, KL(π_θ∥π_ref), encourages π_θ to concentrate where π_ref has high probability mass (mode-seeking, or zero-forcing). Adding a KL penalty to the RL objective, such as J(θ) = E_{π_θ}[R] − β KL(π_θ∥π_ref), helps control the policy update size, prevents large deviations from π_ref, encourages exploration near known good policies, and can mitigate issues like catastrophic forgetting or overly confident outputs, which is particularly relevant in LLM fine-tuning (Ouyang et al., 2022). For PPO (see Appendix A.2), this penalty can be incorporated implicitly via reward shaping: r'_t = r_t − β log(π_θ(a_t|s_t)/π_ref(a_t|s_t)). Alternatively, it can be added explicitly to the objective function, as in GRPO. The specific form of the KL divergence (forward/reverse), whether distributions are normalized (KL vs. UKL), and the choice of estimator (e.g., the standard log-ratio vs. the k3 estimator (Schulman, 2020)) can vary, leading to different properties (mode-seeking vs. mass-covering) and gradient estimators, as explored later in this paper (Sections 3 and 4).

2.2 Group Relative Policy Optimization (GRPO)

Group Relative Policy Optimization (GRPO) (Shao et al., 2024)
adapts the PPO framework for training LLMs, notably by eliminating the need for a learned value function (critic). Instead of using GAE, GRPO estimates the advantage Â_{i,t} at token t of output o_i based on the relative rewards within a group of G outputs {o_1, ..., o_G} sampled from the old policy π_θ_old for the same prompt q. Crucially, GRPO modifies the PPO objective by explicitly adding a KL regularization term directly to the objective function. Its objective (simplified notation) is:

J_GRPO(θ) = E_{q∼P(Q), {o_i}∼π_old}[ (1/G) Σ_{i=1}^G (1/|o_i|) Σ_{t=1}^{|o_i|} ( J^Clip_{i,t}(θ) − β · KL_est(π_θ(·|h_{i,t}) ∥ π_ref(·|h_{i,t})) ) ],

where h_{i,t} = (q, o_{i,<t}) is the history, J^Clip_{i,t}(θ) represents the PPO-Clip term from (A.3) applied using the group-relative advantage estimate Â_{i,t}, and π_ref is a reference model (e.g., the initial SFT model). For the KL penalty, GRPO employs the k3 estimator form (Schulman, 2020), evaluated per token o_{i,t}:

KL_est(π_θ∥π_ref) ≈ k3( π_ref(o_{i,t}|h_{i,t}) / π_θ(o_{i,t}|h_{i,t}) ) = π_ref(o_{i,t}|h_{i,t})/π_θ(o_{i,t}|h_{i,t}) − log( π_ref(o_{i,t}|h_{i,t})/π_θ(o_{i,t}|h_{i,t}) ) − 1.

This uses the functional form k3(y) = y − log y − 1 discussed in Schulman (2020), applied with y = π_ref(o_{i,t}|h_{i,t})/π_θ(o_{i,t}|h_{i,t}). This form is related to the unnormalized reverse KL divergence, UKL(π_θ∥π_ref) (see Section 3.4 and Appendix B for a detailed discussion). However, a key observation regarding GRPO's KL penalty is its estimation. If the KL penalty in GRPO is intended to approximate β · UKL(π_θ(·|h_{i,t}) ∥ π_ref(·|h_{i,t})), its off-policy estimation (sampling o_{i,t} from π_old) would generally involve an importance weight w_{i,t} multiplying the k3 term. The direct subtraction without this weight means the gradient derived from GRPO's objective may not precisely correspond to the gradient of the intended target objective J^Clip − β UKL(π_θ∥π_ref) in the off-policy setting. The practical impact depends on factors such as the similarity between π_θ and π_old and the magnitude of β.
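The role of the missing importance weight can be illustrated with a short sketch. The per-token probabilities below are hypothetical; the point is only that averaging the k3 term directly (as in GRPO's objective) and averaging w·k3 (an importance-weighted off-policy estimate) give different values whenever π_θ has drifted from π_old.

```python
import numpy as np

# Sketch of the off-policy estimation issue: tokens are sampled under pi_old,
# so estimating a KL involving pi_theta generally needs the importance weight
# w = pi_theta / pi_old. Token probabilities below are hypothetical.
logp_theta = np.log(np.array([0.30, 0.15, 0.40]))
logp_old = np.log(np.array([0.25, 0.20, 0.35]))
logp_ref = np.log(np.array([0.33, 0.10, 0.45]))

y = np.exp(logp_ref - logp_theta)        # pi_ref / pi_theta per token
k3 = y - np.log(y) - 1.0                 # per-token k3 estimate, always >= 0
w = np.exp(logp_theta - logp_old)        # importance weight pi_theta / pi_old

grpo_penalty = k3.mean()                 # unweighted average (as in GRPO)
weighted_penalty = (w * k3).mean()       # importance-weighted average

assert np.all(k3 >= 0)                   # k3(y) = y - log y - 1 >= 0 for y > 0
assert not np.isclose(grpo_penalty, weighted_penalty)
# With w == 1 everywhere (pi_theta == pi_old, i.e. strictly on-policy
# updates), the two averages would coincide.
```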
Our results in Section 3 provide derivations for KL-regularized objectives that explicitly account for off-policy sampling via importance weights.

2.3 REINFORCE++

REINFORCE++ (Hu, 2025) is an RLHF algorithm that avoids critic overhead and aims to mitigate issues of prompt-specific baselines. It operates without a critic (γ = 1), sampling one response per prompt. Advantages are computed by first calculating a per-token pre-normalization advantage A_{q,o_t}:

A_{q,o_t} = r(o_{1:t}, q) − β · Σ_{i=t}^T KL(i),  where  KL(t) = log( π^RL_old(o_t|q, o_{<t}) / π_SFT(o_t|q, o_{<t}) ).   (2.2)

The normalized advantage A^norm_{q,o_t} ≜ (A_{q,o_t} − mean(A_{q,o_t})) / std(A_{q,o_t}) is then used as the advantage estimate Â_t within the PPO-Clip objective structure in (A.3). The KL formulation in (2.2) is a log-ratio involving the probabilities from the sampling/old policy π^RL_old and the reference SFT policy π_SFT. This differs from typical KL regularization terms like KL(π_θ∥π_ref) or KL(π_ref∥π_θ), which directly involve the current policy π_θ that is being optimized. Additionally, this KL(t) term is incorporated into the advantage estimate A_{q,o_t} before this advantage is used within the PPO-Clip objective structure. The PPO-Clip objective itself applies an importance weight w_t(θ) = π_θ(a_t|s_t)/π_old(a_t|s_t) to the advantage estimate. This formulation means the KL(t) term acts as a fixed reward-shaping component based on π^RL_old and π_SFT, rather than a dynamic KL regularization of π_θ (e.g., towards π_SFT). The regularization of π_θ is thus indirect. If this term were intended as a KL divergence involving π_θ, its placement within Â_t (subsequently multiplied by w_t(θ)) would be non-standard for estimating the gradient of a KL-regularized objective. Despite its name, REINFORCE++ uses the PPO-Clip objective (cf. (A.3)), deviating from the classic REINFORCE gradient estimator form
(e.g., E[Â_t ∇_θ log π_θ(a_t|s_t)] with a loss like (A.2)). This implies an off-policy setup with importance sampling and clipping.

Table 1: Summary of fully differentiable surrogate loss functions L(θ) for policy gradient estimation. Minimizing L(θ) corresponds to maximizing the regularized objective J(θ) = E_{π_θ}[R(x)] − β · Divergence, where π_θ is the policy being optimized. Samples x are drawn from last iteration's old policy π_old (or its normalized version π̃_old). These losses yield the target gradient −∇_θ J(θ) directly via differentiation. Notation: w(x) = π_θ(x)/π_old(x) is the importance weight, R(x) is the reward, β is the regularization strength, π_old is the old policy with total mass Z_old, and π̃_old = π_old/Z_old.

Regularization | Normalized (E w.r.t. π_old) | Unnormalized (E w.r.t. π̃_old)
Forward KL | E[−w(x)R(x) − β log π_θ(x)] | Z_old E[−w(x)R(x) + β(w(x) − log w(x) − 1)]
Reverse KL | E[w(x)(−R(x) + β log w(x))] | Z_old E[−w(x)R(x) + β(w(x) log w(x) − w(x) + 1)]

3 Regularized Policy Gradients

We now derive policy gradient estimators for objectives regularized by KL divergence, assuming an online and off-policy setting where expectations are estimated using samples drawn from an old policy π_old via importance sampling. We derive the corresponding surrogate loss functions suitable for gradient-based optimization and summarize them in Table 1. All proofs are deferred to Appendix H.

3.1 Forward KL Regularization

Consider the objective function with forward KL regularization:

J_FKL(θ) = E_{x∼π_θ}[R(x)] − β KL(π_old∥π_θ),   (3.1)

where KL(π_old∥π_θ) = E_{x∼π_old}[ log(π_old(x)/π_θ(x)) ] and β > 0 is the regularization parameter. Using importance sampling with importance weight w(x) = π_θ(x)/π_old(x), the gradient and the corresponding fully differentiable surrogate loss L_FKL(θ) (minimized via gradient descent) are given by Theorem 3.1.

Theorem 3.1 (Policy Gradient and Differentiable Loss for Forward KL). Consider the forward KL regularized objective function in (3.1).
The gradient of J_FKL(θ) with respect to θ is:

∇_θ J_FKL(θ) = E_{x∼π_old}[ (w(x)R(x) + β) ∇_θ log π_θ(x) ],

where w(x) = π_θ(x)/π_old(x) is the importance weight. The corresponding surrogate loss function for gradient descent optimization is:

L_FKL(θ) = E_{x∼π_old}[ −w(x)R(x) − β log π_θ(x) ],

which satisfies ∇_θ L_FKL(θ) = −∇_θ J_FKL(θ).

Remark 3.2 (Connection to Maximum Likelihood Estimation). If the reward R(x) = 0 for all x, maximizing J_FKL(θ) reduces to minimizing β KL(π_old∥π_θ), which is equivalent to maximizing E_{x∼π_old}[log π_θ(x)]. In this case, minimizing the corresponding loss L_FKL(θ) = E_{x∼π_old}[−β log π_θ(x)] is equivalent to Maximum Likelihood Estimation (MLE) of the parameters θ using data sampled from π_old, which is used as the pretraining/SFT loss in RL methods such as InstructGPT (Ouyang et al., 2022) and VAPO (Yuan et al., 2025).

3.2 Unnormalized Forward KL Regularization

In scenarios where distributions might not be normalized (i.e., ∫ π(x) dx ≠ 1), the standard KL divergence might not fully capture the dissimilarity. The unnormalized forward KL divergence addresses this by adding a mass-correction term. Let π_old(x) be a potentially unnormalized reference measure with total mass Z_old = ∫ π_old(x) dx. Let π̃_old(x) = π_old(x)/Z_old be the corresponding normalized probability distribution, such that ∫ π̃_old(x) dx = 1.

Definition 3.3 (Unnormalized Forward KL). The unnormalized forward KL divergence (Minka et al., 2005; Zhu & Rohwer, 1995) between the measure π_old and the density π_θ is defined as:

UKL(π_old∥π_θ) = ∫ π_old(x) log(π_old(x)/π_θ(x)) dx  [generalized KL]  +  ∫ (π_θ(x) − π_old(x)) dx  [mass correction].

This form is particularly relevant when dealing with reference measures that may not be perfectly normalized or when connecting to certain KL estimators like k3 (see Remark 3.8). Consider the objective using UKL regularization:

J_UFKL(θ) = E_{x∼π_θ}[R(x)] − β UKL(π_old∥π_θ).   (3.2)

To
estimate this off-policy using samples from the normalized reference π̃_old(x) = π_old(x)/Z_old, we define the importance weight w(x) = π_θ(x)/π_old(x) (using the unnormalized π_old). The gradient and corresponding loss function, incorporating the total mass Z_old of the reference measure, are given in Theorem 3.4.

Theorem 3.4 (Policy Gradient and Differentiable Loss for Unnormalized Forward KL). Consider the unnormalized KL regularized objective function in (3.2). The gradient of J_UFKL(θ) is:

∇_θ J_UFKL(θ) = Z_old E_{x∼π̃_old}[ (w(x)R(x) − β(w(x) − 1)) ∇_θ log π_θ(x) ].

The corresponding surrogate loss for gradient descent optimization, estimated using samples {x_i} ∼ π̃_old, is:

L_UFKL(θ) = Z_old E_{x∼π̃_old}[ −w(x)R(x) + β(w(x) − log w(x) − 1) ],

satisfying ∇_θ L_UFKL(θ) = −∇_θ J_UFKL(θ).

Remark 3.5 (Interpretation of UFKL Loss and Gradient). The regularization component of the surrogate loss L_UFKL(θ), specifically Z_old E_{x∼π̃_old}[β(w(x) − log w(x) − 1)], corresponds to an off-policy estimate of the unnormalized forward KL divergence term β · UKL(π_old∥π_θ) present in the objective J_UFKL(θ). This connection is established via the k3 estimator (see Remark 3.8 and Appendix B). Furthermore, the gradient term −β(w(x) − 1) effectively modifies the reward, guiding π_θ to match not only the shape of π_old but also its overall mass Z_old, due to the mass-correction component in UKL(π_old∥π_θ).

3.3 Reverse KL Regularization

Now, consider the objective regularized by reverse KL divergence:

J_RKL(θ) = E_{x∼π_θ}[R(x)] − β KL(π_θ∥π_old),   (3.3)

where KL(π_θ∥π_old) = E_{x∼π_θ}[ log(π_θ(x)/π_old(x)) ]. Again, we use importance sampling with w(x) = π_θ(x)/π_old(x). This objective and its KL term can be rewritten entirely using expectations over π_old. The resulting gradient and corresponding surrogate loss L_RKL(θ) are given in Theorem 3.6.

Theorem 3.6 (Policy Gradient and Differentiable Loss for Reverse KL). Consider the reverse KL regularized objective function in (3.3). The gradient of J_RKL(θ) is:

∇_θ J_RKL(θ) = E_{x∼π_old}[ w(x)(R(x) − β(log w(x) + 1)) ∇_θ log π_θ(x) ].
A corresponding surrogate loss function for gradient descent optimization is:

L_RKL(θ) = E_{x∼π_old}[ w(x)(−R(x) + β log w(x)) ],

satisfying ∇_θ L_RKL(θ) = −∇_θ J_RKL(θ).

3.4 Unnormalized Reverse KL Regularization

Similar to the forward case, we can define an unnormalized reverse KL divergence, relaxing the normalization constraint on the reference distribution π_old. Let π_old(x) be a potentially unnormalized reference measure with total mass Z_old = ∫ π_old(x) dx. Let π̃_old(x) = π_old(x)/Z_old be the corresponding normalized probability distribution.

Definition 3.7 (Unnormalized Reverse KL). The unnormalized reverse KL divergence between the density π_θ and the measure π_old is defined as:

UKL(π_θ∥π_old) = ∫ π_θ(x) log(π_θ(x)/π_old(x)) dx  [generalized KL]  +  ∫ (π_old(x) − π_θ(x)) dx  [mass correction].

The mass-correction term simplifies to Z_old − ∫ π_θ(x) dx.

Remark 3.8 (Equivalence to the k3 estimator). The k3 estimator (Schulman, 2020), often used for its empirical properties (e.g., in GRPO (Shao et al., 2024)), is defined for a density ratio y(x) as:

k3(y) := y − 1 − log y.   (3.4)

As shown in Appendix B, this functional form directly relates to unnormalized KL divergences. For instance, KL_k3(π_θ∥π_old) := E_{x∼π_θ}[k3(π_old(x)/π_θ(x))] is equivalent to UKL(π_θ∥π_old). This equivalence justifies the exploration of UKL/URKL formulations within our framework.

Consider the objective using URKL:

J_URKL(θ) = E_{x∼π_θ}[R(x)] − β UKL(π_θ∥π_old),   (3.5)

where UKL is defined above. As with UFKL, we derive the gradient and loss using expectations over the normalized reference π̃_old and the importance weight w(x) = π_θ(x)/π_old(x) (with unnormalized π_old). The results are summarized in Theorem 3.9.

Theorem 3.9 (Policy Gradient and Differentiable Loss for Unnormalized Reverse KL). Consider the unnormalized reverse KL regularized objective
function in (3.5). The gradient of J_URKL(θ) is:

∇_θ J_URKL(θ) = Z_old E_{x∼π̃_old}[ w(x)(R(x) − β log w(x)) ∇_θ log π_θ(x) ].

A corresponding surrogate loss for gradient descent optimization, estimated using samples {x_i} ∼ π̃_old, is:

L_URKL(θ) = Z_old E_{x∼π̃_old}[ −w(x)R(x) + β(w(x) log w(x) − w(x)) ],

satisfying ∇_θ L_URKL(θ) = −∇_θ J_URKL(θ). The constant Z_old scales the loss and gradient and may be omitted in practice.

Remark 3.10 (URKL Loss and Mass Correction). The surrogate loss L_URKL(θ) is designed such that its gradient is −∇_θ J_URKL(θ). Specifically, the term Z_old E_{x∼π̃_old}[β(w(x) log w(x) − w(x))] in the loss directly relates to the off-policy estimation of the unnormalized reverse KL divergence β UKL(π_θ∥π_old), omitting a constant related to the total mass Z_old which does not affect the gradient. The policy gradient's effective reward-scaling factor, (R(x) − β log w(x)), is simpler than its normalized RKL counterpart.

4 REINFORCE-Style Regularized Policy Gradients

In Section 3, we derived policy gradient estimators and corresponding fully differentiable surrogate losses L(θ) for KL-regularized objectives. Those losses were constructed such that ∇_θ L(θ) = −∇_θ J(θ) directly, typically by setting L(θ) = −J_IS(θ) (where J_IS is the importance-sampled objective) up to constants. Notice that the gradients derived in Section 3 (Theorems 3.1 through 3.9) share a structural similarity with the REINFORCE estimator:

∇_θ J(θ) = E_{x∼π_sampling}[ Weight(x, θ) ∇_θ log π_θ(x) ],

where π_sampling is π_old or its normalized version π̃_old, and Weight(x, θ) encapsulates the reward and KL regularization terms, differing for each specific objective. This structural similarity motivates an alternative REINFORCE-style implementation using the stop-gradient operator SG. The general form of such losses and the detailed rationale for how they yield the target gradient via automatic differentiation are presented in Appendix C.1 (see (C.1)).
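The stop-gradient construction can be checked numerically. The sketch below (all numbers hypothetical) mimics SG by freezing the weight w(x)R(x) + β at the current parameters of a small softmax policy, then verifies that the gradient of the resulting REINFORCE-style forward-KL loss matches −∇_θ J_FKL computed by finite differences.

```python
import numpy as np

# Numerical check that -E_{x~pi_old}[SG(w R + beta) log pi_theta(x)] has
# gradient -grad J_FKL, where J_FKL = E_{pi_theta}[R] - beta KL(pi_old||pi_theta).
# SG is mimicked by treating c = w R + beta as a constant. Toy numbers only.
R = np.array([1.0, 0.0, -1.0])
pi_old = np.array([0.3, 0.5, 0.2])
beta = 0.1
theta = np.array([0.1, -0.2, 0.3])   # softmax logits for pi_theta

def pi(th):
    z = np.exp(th - th.max())
    return z / z.sum()

def J(th):
    p = pi(th)
    return float(p @ R - beta * np.sum(pi_old * np.log(pi_old / p)))

p = pi(theta)
c = (p / pi_old) * R + beta          # SG'ed weight, frozen at current theta
# Gradient of -sum_x pi_old(x) c(x) log pi_theta(x) with c held constant,
# using d log p_k / d theta_j = delta_kj - p_j for softmax logits:
grad_loss = -(pi_old * c - np.sum(pi_old * c) * p)

eps = 1e-6
grad_J = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
                   for e in np.eye(3)])
assert np.allclose(grad_loss, -grad_J, atol=1e-6)
```

In an autodiff framework the freezing step would be the stop-gradient / detach operation applied to the weight term; here it is applied by hand.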
We explore these REINFORCE-style estimators as part of our framework, as they offer an alternative implementation path and demonstrate competitive empirical performance (Section 5). Proofs are in Appendix I.

Table 2: Alternative surrogate loss functions L(θ) for estimation of regularized policy gradients, using the REINFORCE-style structure with the stop-gradient operator (SG). Minimizing L(θ) corresponds to maximizing the objective J(θ) = E_{π_θ}[R(x)] − β · Divergence. Expectations are taken w.r.t. the sampling distribution (x ∼ π_old or x ∼ π̃_old). These losses yield the target gradient via automatic differentiation because the SG operator treats its argument as constant during backpropagation. Compare with the fully differentiable losses in Table 1. Here w(x) = π_θ(x)/π_old(x) is the importance weight relative to π_old, Z_old is the mass of π_old, R(x) is the reward, and β is the regularization strength.

Regularization | Normalized (π_sampling = π_old) | Unnormalized (π_sampling = π̃_old)
Forward KL | −E[ SG(w(x)R(x) + β) log π_θ(x) ] | −E[ SG(Z_old(w(x)R(x) − β(w(x) − 1))) log π_θ(x) ]
Reverse KL | −E[ SG(w(x)(R(x) − β log w(x) − β)) log π_θ(x) ] | −E[ SG(Z_old w(x)(R(x) − β log w(x))) log π_θ(x) ]

4.1 REINFORCE-Style RPG with Forward KL Regularization

We can convert the forward KL regularization of RPG to REINFORCE style using the stop-gradient operator:

Theorem 4.1 (REINFORCE-Style Loss for Forward KL). For the forward KL regularized objective function in (3.1), the corresponding REINFORCE-style surrogate loss function for gradient descent optimization via automatic differentiation is:

L^REINFORCE-style_FKL(θ) = −E_{x∼π_old}[ SG(w(x)R(x) + β) log π_θ(x) ],

where w(x) = π_θ(x)/π_old(x). This loss produces the gradient −∇_θ J_FKL(θ) via automatic differentiation.

Remark 4.2. This REINFORCE-style loss requires SG to prevent backpropagation through w(x) in the weight term. Baselines can be added to R(x) inside SG for variance reduction (see Appendix D).

We present the corresponding REINFORCE-style loss formulations
for unnormalized forward KL, normalized reverse KL, and unnormalized reverse KL regularized objectives in Appendix C.

5 Experiments

In this section, we empirically evaluate our proposed Regularized Policy Gradient (RPG) framework, including both its fully differentiable (RPG) and REINFORCE-style (RPG-REINFORCE) variants. We compare their performance against established baselines on challenging mathematical reasoning tasks using large language models, including GRPO (Shao et al., 2024), DAPO (Yu et al., 2025), REINFORCE++ (Hu, 2025), and its variant REINFORCE++-Baseline. Our evaluation focuses on task-specific accuracy, training stability, and key training dynamics such as reward, policy entropy, and response length.

Base Models and Datasets. We conduct experiments using the Qwen2.5-7B-Instruct and Qwen2.5-Math-7B large language models (Yang et al., 2024a,b). For training, we utilize the DAPO-Math-17k dataset (Yu et al., 2025), filtered to include only English samples, resulting in a 13.9k-sample training set. Model performance is evaluated on several mathematical reasoning benchmarks: AIME2024 (MAA, 2024a,b), AIME2025 (MAA, 2025a,b), and AMC23 (MAA, 2023).

Figure 2: Training dynamics and benchmark performance for fully differentiable Regularized Policy Gradient (RPG) compared to baselines (GRPO, DAPO, REINFORCE++, REINFORCE++-Baseline). Panels: (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score), (e) Entropy, (f) Response Length.

Implementation and Framework.
https://arxiv.org/abs/2505.17508v1

Experiments are implemented using the verl framework (Sheng et al., 2025) with the vLLM engine (Kwon et al., 2023) for efficient LLM serving and inference. For a practical implementation of our RPG methods, we emphasize that the probabilities (or log-probabilities) from the last iteration's model (π_old) for the sampled data can be pre-computed and stored. This allows the KL regularization terms to be calculated without keeping π_old in GPU memory during the training step of the current policy π_θ. Consequently, only one model (π_θ) needs to be actively managed in GPU memory during training, which is faster and more memory-efficient than approaches like GRPO and REINFORCE++ that typically require access to at least two models (the current policy and a reference/sampling policy) during optimization. Further details on the implementation, including specific hyperparameter settings (e.g., learning rate, KL coefficient), are provided in Appendix E.

Stabilization and Advanced RL Techniques. Our RPG implementations (both fully differentiable and REINFORCE-style) incorporate stabilization techniques such as baseline subtraction and PPO-style objective clipping (specifically, Dual-Clip (Ye et al., 2020; Schulman et al., 2017)), which are crucial for robust off-policy learning. Detailed algorithmic descriptions are provided in Appendix D (see Algorithm 1 for RPG with Dual-Clip and Algorithm 2 for the REINFORCE-style equivalent, along with Figures 4 and 5 for visualization). For PPO-style clipping, we set (ϵ1, ϵ2) = (0.2, 0.28) for RPG, DAPO, REINFORCE++, and REINFORCE++-Baseline. For RPG-REINFORCE and GRPO, we use (ϵ1, ϵ2) = (0.1, 0.1). This choice for RPG-REINFORCE is informed by ablation studies detailed in Appendix F.9. Furthermore, to enhance training efficiency and data quality, we adopted techniques introduced by DAPO (Yu et al., 2025), including a dynamic sampling strategy with a group filtering mechanism (which oversamples challenging prompts and filters out those with near-perfect or near-zero accuracy based on initial rollouts) and an overlong punishment component in the reward shaping to discourage excessively verbose outputs.

Table 3: Combined performance metrics on the AMC23, AIME24, and AIME25 mathematical reasoning benchmarks, showing “Last” and “Best” scores. The “Last” score is from the 400th training step, assuming the training process remained stable to that point. The highest score in each column is bolded, and the second highest is underlined. RPG and RPG-REINFORCE methods are highlighted with light cyan and light green backgrounds, respectively.

Method               | AMC23 Last | AMC23 Best | AIME24 Last | AIME24 Best | AIME25 Last | AIME25 Best
GRPO                 | 0.6266     | 0.7250     | 0.1094      | 0.1406      | 0.0281      | 0.0948
REINFORCE++          | 0.7625     | 0.7664     | 0.0521      | 0.1177      | 0.0302      | 0.0740
REINFORCE++-Baseline | 0.8711     | 0.8711     | 0.0990      | 0.1510      | 0.0656      | 0.0969
DAPO                 | 0.8039     | 0.8734     | 0.0760      | 0.1240      | 0.0531      | 0.1063
RPG-FKL              | 0.8695     | 0.8836     | 0.1083      | 0.1490      | 0.0427      | 0.1083
RPG-RKL              | 0.8648     | 0.8672     | 0.1167      | 0.1469      | 0.0677      | 0.1240
RPG-UFKL             | 0.8703     | 0.8703     | 0.0885      | 0.1427      | 0.0927      | 0.1177
RPG-URKL             | 0.8258     | 0.8641     | 0.0875      | 0.1271      | 0.0677      | 0.0917
RPG-REINFORCE-FKL    | 0.8727     | 0.8727     | 0.1208      | 0.1667      | 0.0573      | 0.0875
RPG-REINFORCE-RKL    | 0.8305     | 0.8516     | 0.1125      | 0.1375      | 0.0490      | 0.0875
RPG-REINFORCE-UFKL   | 0.8391     | 0.8602     | 0.1229      | 0.1458      | 0.0740      | 0.0979
RPG-REINFORCE-URKL   | 0.8531     | 0.8531     | 0.1208      | 0.1500      | 0.0813      | 0.0938

Investigation of Optimizers.
We investigated AdamW (Loshchilov & Hutter, 2019) and Schedule-Free AdamW (Defazio et al., 2024). Schedule-Free optimizers aim to eliminate the need for explicit learning rate schedules after an initial warmup by maintaining a constant learning rate and relying on internal model parameter averaging. This continuous averaging contrasts with schedulers like Warmup-Stable-Decay (WSD) (Hu et al., 2024), which typically involve explicit learning rate annealing. The inherent weight averaging in Schedule-Free methods can promote more stable training dynamics. Schedule-Free AdamW improves performance on the AMC23 task, particularly for higher-variance algorithms such as GRPO and REINFORCE++. In the context of RL with iterative policy updates, this stability is particularly advantageous, as it can lead to a more consistent reference policy π_old, potentially benefiting overall policy optimization. Detailed discussion and results are in Appendix F (e.g., Figures 7, 9, 11, 13). Results and Analysis. Table 3 summarizes the performance of our RPG algorithms against baselines, reporting both the last and best scores achieved during training on these benchmarks. Figure 2 complements these results by illustrating the evaluation scores and training dynamics for the fully differentiable RPG variants and baselines when training the Qwen-2.5-7B-Instruct model with the AdamW optimizer.
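The implementation notes above (stored π_old log-probabilities, asymmetric PPO-style clipping with a Dual-Clip floor, and a sample-based KL penalty) can be sketched as a single surrogate loss. The numpy sketch below is our own illustrative composition under assumed names, not the paper's exact RPG objective: the KL term uses the k3 estimator from Schulman's KL-approximation note, and the Dual-Clip constant c = 3 is a hypothetical choice.

```python
import numpy as np

def rpg_style_surrogate(logp_new, logp_old, adv, beta=0.01,
                        eps1=0.2, eps2=0.28, dual_clip_c=3.0):
    """Illustrative off-policy surrogate loss (to be minimized).

    - logp_old holds log-probabilities stored at rollout time, so the old
      policy itself never needs to stay in GPU memory during the update.
    - PPO-style clipping with asymmetric bounds (1 - eps1, 1 + eps2) and a
      Dual-Clip floor of c * adv for negative advantages (Ye et al., 2020).
    - Sample-based KL penalty via the k3 estimator (r - 1) - log r.
    """
    logp_new, logp_old, adv = map(np.asarray, (logp_new, logp_old, adv))
    ratio = np.exp(logp_new - logp_old)
    # Standard clipped objective: pessimistic minimum of raw and clipped terms.
    obj = np.minimum(ratio * adv, np.clip(ratio, 1 - eps1, 1 + eps2) * adv)
    # Dual-Clip: bound the objective from below when the advantage is negative.
    obj = np.where(adv < 0, np.maximum(obj, dual_clip_c * adv), obj)
    kl = (ratio - 1.0) - np.log(ratio)  # k3 estimate, always >= 0
    return float(-obj.mean() + beta * kl.mean())
```

When logp_new equals the stored logp_old, the ratio is 1, the KL estimate vanishes, and the loss reduces to the mean clipped advantage term; as the policies diverge, the clipping bounds and KL penalty jointly limit the update.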
These figures display performance on the AMC23, AIME24, and AIME25 benchmarks, alongside key training metrics: reward (critic score), policy entropy, and average response length.

[Figure 3 plot panels (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score), (e) Entropy, (f) Response Length; axis data and legends omitted.]

Figure 3: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AMC23, AIME24, AIME25) and key training dynamics (reward, policy entropy, response length).

Additional results, including those with different base models and the Schedule-Free AdamW optimizer (as summarized in Section 5 and detailed in Appendix F), and ablation studies (Appendix F.9), are deferred to the appendix due to space constraints.

The quantitative results in Table 3 demonstrate the strong performance of the proposed RPG framework. For instance, on AMC23, RPG-FKL achieves the best overall score (Best: 0.8836), while RPG-REINFORCE-FKL shows the top “Last” score (Last: 0.8727). Both significantly outperform GRPO (Last: 0.6266, Best: 0.7250) and REINFORCE++ (Last: 0.7625, Best: 0.7664). On AIME24, RPG-REINFORCE variants take the lead, with RPG-REINFORCE-FKL attaining the highest “Best” score (0.1667) and RPG-REINFORCE-UFKL the best “Last” score (0.1229). REINFORCE++-Baseline is also competitive on AIME24 (Best: 0.1510). For the AIME25 benchmark, RPG-RKL yields the top “Best” score (0.1240).
Notably, RPG-REINFORCE-FKL records an exceptionally high “Last” score of 0.95729 on AIME25; this value stands out significantly, especially when compared to its own “Best” score (0.0875) and other results in the table. RPG-UFKL also shows a strong “Last” score on AIME25 (0.0927). Overall, RPG and RPG-REINFORCE methods consistently rank at or near the top across the different metrics and benchmarks, often surpassing the baseline algorithms.

Figure 2 further elucidates these findings for the fully differentiable RPG variants (RPG-FKL, RPG-RKL, RPG-UFKL, RPG-URKL). These algorithms generally exhibit stable training progressions in reward (critic score) and policy entropy, as shown in subfigures (d) and (e), compared to some baselines like GRPO, which can show more volatility. This stability likely contributes to their robust benchmark performance (subfigures a-c). The response lengths (subfigure f) for RPG methods also appear well-controlled. These observations align with the strong final scores reported in Table 3 for these variants.

Similarly, Figure 3 (further detailed in Appendix F) shows that RPG-REINFORCE formulations, particularly RPG-REINFORCE-FKL and RPG-REINFORCE-UFKL, also demonstrate robust performance, often competitive with or exceeding baselines, corroborating the strong results seen in Table 3. Their training curves generally indicate good stability and effective learning. The consistently high performance across various RPG formulations underscores the utility of the systematically derived KL-regularized objectives explored in this work.

6 Related Work

Fine-tuning large language models (LLMs) using human feedback has become a critical step in developing capable and aligned AI systems. Broadly, methods
fall into two main categories: those relying on policy optimization using an explicit reward model learned from feedback, and those directly optimizing policies based on preference data.

RLHF via Policy Optimization. Classic RLHF involves training a reward model (RM) r_φ(x, y) to predict human preferences and then using reinforcement learning to optimize the language model policy π_θ to maximize the expected reward from the RM, often regularizing against deviating too far from an initial reference policy π_ref. This approach was pioneered by Christiano et al. (2017) and gained widespread prominence with its application to LLMs like InstructGPT (Ouyang et al., 2022) and ChatGPT (OpenAI, 2022), which utilized Proximal Policy Optimization (PPO) (Schulman et al., 2017). PPO became a workhorse due to its relative stability, achieved by constraining policy updates via a clipped surrogate objective. The standard PPO setup for RLHF involves the policy π_θ, a value function V_ψ, the RM r_φ, and the reference policy π_ref.

RLHF via Direct Preference Optimization. An alternative and increasingly popular approach bypasses explicit reward modeling by directly optimizing the policy π_θ based on preference data, typically pairwise comparisons (y_w, y_l) indicating that response y_w is preferred over y_l for a given prompt x. Inspired by the Bradley-Terry model (Bradley & Terry, 1952), Direct Preference Optimization (DPO) (Rafailov et al., 2023) derived a simple loss function directly relating preference probabilities to policy likelihoods under π_θ and a reference policy π_ref. DPO maximizes the relative likelihood of preferred responses using a logistic loss: L_DPO ∝ −E[log σ(β Δlog p)], where Δlog p is the difference in log-probabilities of y_w and y_l between π_θ and π_ref. DPO's simplicity and effectiveness led to its wide adoption in models like Llama-3 (Grattafiori et al., 2024), Qwen2 (Yang et al., 2024a), and Phi-3 (Abdin et al., 2024).
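The DPO logistic loss above can be written in a few lines. The following is a minimal numpy sketch over precomputed sequence log-probabilities; the function and variable names are ours for illustration, not from any particular library.

```python
import numpy as np

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss: -E[log sigmoid(beta * delta_logp)], where delta_logp compares
    the preferred-minus-dispreferred log-probability margin of pi_theta
    against the same margin under the frozen reference policy pi_ref."""
    logp_w, logp_l = np.asarray(logp_w, float), np.asarray(logp_l, float)
    ref_logp_w, ref_logp_l = np.asarray(ref_logp_w, float), np.asarray(ref_logp_l, float)
    delta = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    sigmoid = 1.0 / (1.0 + np.exp(-beta * delta))
    return float(-np.mean(np.log(sigmoid)))
```

When π_θ coincides with π_ref the margin Δlog p is zero and the loss sits at log 2; it decreases as the policy raises the likelihood of y_w relative to y_l.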
Numerous variants have followed: SLiC-HF (Zhao et al., 2023) uses a pairwise hinge loss for calibration; IPO (Azar et al., 2024) uses an identity link function; SimPO (Meng et al., 2024) offers a simpler objective focusing on the margin; KTO (Ethayarajh et al., 2024) handles binary (good/bad) feedback; DQO (Ji et al., 2024) incorporates direct Q-value modeling; and RAFT (Dong et al., 2023), RSO (Liu et al., 2024), and RFT (Yuan et al., 2023) take a rejection-sampling perspective. Recognizing that preferences might evolve, iterative methods like Iterative DPO (Xiong et al., 2024), PCO (Xu et al., 2023), and SPIN (Chen et al., 2024) alternate between generation/preference learning and policy updates, often using the current policy's outputs in a self-improvement loop. Game theory offers another lens, with Nash Learning from Human Feedback (NLHF) (Munos et al., 2024) framing RLHF as finding a Nash equilibrium between policies. Self-play ideas appear in SPPO (Wu et al., 2025) and GPO (Zhang et al., 2025), where the policy generates pairs for comparison. Methods like GPM (Zhang et al., 2025) aim to handle more general preference structures efficiently, using latent embeddings to go beyond pairwise comparisons.

RL for Enhancing LLM Reasoning. Beyond general alignment with human preferences, RL techniques are increasingly explored to specifically enhance the multi-step reasoning capabilities of LLMs in domains like mathematics, coding, and complex instruction following. In these contexts, RL optimizes the policy to
generate sequences (e.g., chain-of-thought, code blocks) that lead to successful outcomes, often using rewards derived from external feedback such as unit test results, execution outcomes, or correctness checks by an automated judge or a specialized reward model trained on reasoning quality. For instance, the DeepSeekMath model (Shao et al., 2024) employed the GRPO algorithm, a value-free PPO variant, demonstrating significant improvements on mathematical problem-solving benchmarks through RL fine-tuning. DeepSeek-R1 (Guo et al., 2025) represents efforts to apply advanced techniques, potentially involving RL, to complex tasks, although specific methods may vary. Furthermore, preference-based methods like SPPO and GPO have been applied to reasoning-specialized models such as Kimi-1.5 (Team et al., 2025), and the resulting improvements observed on benchmarks involving coding and math suggest that preference-based RLHF can also contribute to refining reasoning abilities, potentially by optimizing implicit properties related to logical consistency and correctness within the preference data.

The value function (critic model) required by PPO incurs significant computational costs, and standard PPO can face stability challenges with the sparse rewards common in LLM tasks. Addressing these issues has driven recent work. Several methods aim to improve efficiency by removing the value network: ReMax (Li et al., 2024) adapts REINFORCE (Williams, 1992) using Monte Carlo returns and normalization; GRPO (Shao et al., 2024) uses a group-average reward baseline and adds a k3-based KL penalty to the objective; and VinePPO (Kazemnejad et al., 2024) uses MC sampling from intermediate steps. Other approaches focus on stability and alternative baselines, such as RLOO (Ahmadian et al., 2024), which uses leave-one-out statistics within a group, and REINFORCE++ (Hu, 2025), which enhances REINFORCE with token-level KL penalties (using the k2 estimator) and normalization. Dr.
GRPO (Liu et al., 2025) identifies and corrects a bias found in GRPO's advantage estimators; DAPO (Yu et al., 2025) introduces strategies like Clip-Higher, reward over-sampling, and a token-level loss to handle long sequences and entropy collapse; while VAPO (Yuan et al., 2025) builds upon it with length-adaptive advantage estimation.

7 Conclusion

We introduced RPG, a framework for deriving and analyzing KL-regularized policy gradient algorithms for online, off-policy RL. We provided derivations for policy gradients and surrogate loss functions covering forward/reverse KL, normalized/unnormalized distributions, and both fully differentiable and REINFORCE-style estimators. Our experiments on LLM reasoning tasks demonstrate that methods from this framework can achieve stable training and improved or competitive performance against strong baselines.

References

Marah Abdin, Jyoti Aneja, Hany Awadalla, Ahmed Awadallah, Ammar Ahmad Awan, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Jianmin Bao, Harkirat Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219, 2024.

Arash Ahmadian, Chris Cremer, Matthias Gallé, Marzieh Fadaee, Julia Kreutzer, Olivier Pietquin, Ahmet Üstün, and Sara Hooker. Back to basics: Revisiting reinforce-style optimization for learning from human feedback in llms. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 12248–12267, 2024.

Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. A general theoretical paradigm to understand learning from human preferences.
In International Conference on Artificial Intelligence and Statistics, pp. 4447–4455. PMLR, 2024.

Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.

Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. Self-play fine-tuning converts weak language models to strong language models. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024, 2024.

Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 4299–4307, 2017.

Aaron Defazio, Xingyu Yang, Ahmed Khaled, Konstantin Mishchenko, Harsh Mehta, and Ashok Cutkosky. The road less scheduled. Advances in Neural Information Processing Systems, 37:9974–10007, 2024.

Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. RAFT: Reward ranked finetuning for generative foundation model alignment. Trans. Mach. Learn. Res., 2023, 2023.

Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. KTO: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306, 2024.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al.
Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

Jian Hu. Reinforce++: A simple and efficient approach for aligning large language models. arXiv preprint arXiv:2501.03262, 2025.

Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang, Yuxiang Huang, Weilin Zhao, et al. Minicpm: Unveiling the potential of small language models with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024.

Kaixuan Ji, Guanlin Liu, Ning Dai, Qingping Yang, Renjie Zheng, Zheng Wu, Chen Dun, Quanquan Gu, and Lin Yan. Enhancing multi-step reasoning abilities of language models through direct q-function optimization. arXiv preprint arXiv:2410.09302, 2024.

Amirhossein Kazemnejad, Milad Aghajohari, Eva Portelance, Alessandro Sordoni, Siva Reddy, Aaron Courville, and Nicolas Le Roux. Vineppo: Unlocking RL potential for LLM reasoning through refined credit assignment. arXiv preprint arXiv:2410.01679, 2024.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles, pp. 611–626, 2023.

Ziniu Li, Tian Xu, Yushun Zhang, Zhihang Lin, Yang Yu, Ruoyu Sun, and Zhi-Quan Luo. Remax: A simple, effective, and efficient reinforcement learning method for aligning large language models. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024, 2024.

Tianqi
https://arxiv.org/abs/2505.17508v1
Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net, 2024.

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding R1-Zero-like training: A critical perspective. arXiv preprint arXiv:2503.20783, 2025.

Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, 2019.

Mathematical Association of America's American Mathematics Competitions MAA. 2023 AMC, 2023. URL https://artofproblemsolving.com/wiki/index.php/AMC_12_Problems_and_Solutions .

Mathematical Association of America's American Mathematics Competitions MAA. 2024 AIME-I, 2024a. URL https://artofproblemsolving.com/wiki/index.php/2024_AIME_I . Accessed: 2025-05-08.

Mathematical Association of America's American Mathematics Competitions MAA. 2024 AIME-II, 2024b. URL https://artofproblemsolving.com/wiki/index.php/2024_AIME_II . Accessed: 2025-05-08.

Mathematical Association of America's American Mathematics Competitions MAA. 2025 AIME-I, 2025a. URL https://artofproblemsolving.com/wiki/index.php/2025_AIME_I .

Mathematical Association of America's American Mathematics Competitions MAA. 2025 AIME-II, 2025b. URL https://artofproblemsolving.com/wiki/index.php/2025_AIME_II .

Yu Meng, Mengzhou Xia, and Danqi Chen. SimPO: Simple preference optimization with a reference-free reward. Advances in Neural Information Processing Systems, 37:124198–124235, 2024.

Tom Minka et al. Divergence measures and message passing. Technical report, Microsoft Research, 2005.
Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J. Mankowitz, Doina Precup, and Bilal Piot. Nash learning from human feedback. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024, 2024.

OpenAI. ChatGPT, 2022. URL https://chat.openai.com/ .

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.

John Schulman. Approximating KL divergence. http://joschu.net/blog/kl-approx.html , March 2020. Accessed on May 26, 2025.

John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. In Yoshua Bengio and Yann LeCun (eds.), 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient RLHF framework. In Proceedings of the Twentieth European Conference on Computer Systems, EuroSys 2025,
Rotterdam, The Netherlands, 30 March 2025 - 3 April 2025, pp. 1279–1297. ACM, 2025.

Richard S. Sutton, Andrew G. Barto, et al. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.

Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.

Yue Wu, Zhiqing Sun, Huizhuo Yuan, Kaixuan Ji, Yiming Yang, and Quanquan Gu. Self-play preference optimization for language model alignment. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025, 2025.

Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: bridging theory and practice for RLHF under KL-constraint. In Proceedings of the 41st International Conference on Machine Learning, pp. 54715–54754, 2024.

Jing Xu, Andrew Lee, Sainbayar Sukhbaatar, and Jason Weston. Some things are more cringe than others: Preference optimization with the pairwise cringe loss. arXiv preprint arXiv:2312.16682, 18, 2023.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024a.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024b.

Deheng Ye, Zhao Liu, Mingfei Sun, Bei Shi, Peilin Zhao, Hao Wu, Hongsheng Yu, Shaojie Yang, Xipeng Wu, Qingwei Guo, et al. Mastering complex control in MOBA games with deep reinforcement learning.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 6672–6679, 2020.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.

Yufeng Yuan, Qiying Yu, Xiaochen Zuo, Ruofei Zhu, Wenyuan Xu, Jiaze Chen, Chengyi Wang, TianTian Fan, Zhengyin Du, Xiangpeng Wei, et al. VAPO: Efficient and reliable reinforcement learning for advanced reasoning tasks. arXiv preprint arXiv:2504.05118, 2025.

Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.

Yifan Zhang, Ge Zhang, Yue Wu, Kangping Xu, and Quanquan Gu. Beyond Bradley-Terry models: A general preference model for language model alignment. In Proceedings of the 42nd International Conference on Machine Learning, 2025.

Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J. Liu. SLiC-HF: Sequence likelihood calibration with human feedback. arXiv preprint arXiv:2305.10425, 2023.

Huaiyu Zhu and Richard Rohwer. Information geometric measurements of generalisation. Preprint, 1995.

Appendix

A REINFORCE and Proximal Policy Optimization (PPO)  22
  A.1 REINFORCE  22
  A.2 Proximal Policy Optimization (PPO)  22
B Equivalence of $k_3$ Estimator and Unnormalized KL Divergence  23
C REINFORCE-Style Regularized Policy Gradients with Various KL Regularization Forms  23
  C.1 Rationale for REINFORCE-Style Loss Formulation  23
  C.2 REINFORCE-Style RPG with Unnormalized Forward KL Regularization  24
  C.3 REINFORCE-Style RPG with Reverse KL Regularization  24
  C.4 REINFORCE-Style RPG with Unnormalized Reverse KL Regularization  24
D More on Algorithmic Details  25
  D.1 Stabilization Techniques for Regularized Policy Gradients  25
  D.2 Stabilization Techniques for REINFORCE-Style Regularized Policy Gradients  26
E Detailed Experimental Setup  29
F More Experimental Results  32
  F.1 Regularized Policy Gradient using Qwen-2.5-7B-Instruct and AdamW Optimizer  32
  F.2 Regularized Policy Gradient using Qwen-2.5-7B-Instruct and Schedule-Free Optimizer  32
  F.3 Regularized Policy Gradient using Qwen-2.5-Math-7B and AdamW Optimizer  34
  F.4 Regularized Policy Gradient using Qwen-2.5-Math-7B and Schedule-Free Optimizer  34
  F.5 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.1$, $\epsilon_2 = 0.1$ using Qwen-2.5-7B-Instruct and AdamW Optimizer  36
  F.6 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.1$, $\epsilon_2 = 0.1$ using Qwen-2.5-7B-Instruct and Schedule-Free Optimizer  36
  F.7 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.1$, $\epsilon_2 = 0.1$ using Qwen-2.5-Math-7B and AdamW Optimizer  38
  F.8 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.1$, $\epsilon_2 = 0.1$ using Qwen-2.5-Math-7B and Schedule-Free Optimizer  38
  F.9 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.2$, $\epsilon_2 = 0.28$ using Qwen-2.5-7B-Instruct and AdamW Optimizer  40
  F.10 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.2$, $\epsilon_2 = 0.28$ using Qwen-2.5-7B-Instruct and Schedule-Free Optimizer  40
  F.11 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.2$, $\epsilon_2 = 0.28$ using Qwen-2.5-Math-7B and AdamW Optimizer  42
  F.12 REINFORCE-Style Regularized Policy Gradient with $\epsilon_1 = 0.2$, $\epsilon_2 = 0.28$ using Qwen-2.5-Math-7B and Schedule-Free Optimizer  42
G Proofs of Theorem 2.1 (Generalized Policy Gradient Theorem)  44
H Proofs for Regularized Policy Gradients  44
  H.1 Proof of Theorem 3.1 (Policy Gradient and Differentiable Loss for Forward KL)  44
  H.2 Proof of Theorem 3.4 (Policy Gradient and Differentiable Loss for Unnormalized Forward KL)  45
  H.3 Proof of Theorem 3.6 (Policy Gradient and Differentiable Loss for Reverse KL)  47
  H.4 Proof of Theorem 3.9 (Policy Gradient and Differentiable Loss for Unnormalized Reverse KL)  48
I Proofs for REINFORCE-Style Regularized Policy Gradients  50
  I.1 Proof of Theorem 4.1 (REINFORCE-style Policy Gradient for Forward KL)  50
  I.2 Proof of Theorem C.1 (REINFORCE-style Policy Gradient for Unnormalized Forward KL)  51
  I.3 Proof of Theorem C.2 (REINFORCE-Style Loss)  52
  I.4 Proof of Theorem C.3 (REINFORCE-Style Loss for Unnormalized Reverse KL)  52
  I.5 Summary of REINFORCE-style Algorithms  53

A REINFORCE and Proximal Policy Optimization (PPO)

A.1 REINFORCE

REINFORCE performs Monte Carlo (MC) updates after sampling a complete trajectory, using the sampled return $G_t$ as an unbiased estimate of the state-action value function $Q^{\pi_\theta}(s_t, a_t)$. However, these MC estimates often exhibit high variance, leading to slow and unstable learning.
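The return-to-go used in these MC estimates can be sketched as follows. This is an illustrative computation, not code from the paper; the function name `returns_to_go` and the discount parameter are our own.

```python
def returns_to_go(rewards, gamma=1.0):
    """Compute G_t = sum_{k >= t} gamma^(k - t) * r_k for every timestep t,
    via a single backward pass over the trajectory."""
    G = 0.0
    out = []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

# Three-step trajectory, undiscounted:
assert returns_to_go([1.0, 0.0, 2.0]) == [3.0, 2.0, 2.0]
```

Because each $G_t$ sums many random rewards, a single-trajectory estimate of $Q^{\pi_\theta}(s_t, a_t)$ can vary widely from rollout to rollout, which is the variance problem that baselines address next.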
To reduce variance, a state-dependent baseline $b(s_t)$ (commonly an estimate of the state value function $V^{\pi_\theta}(s_t)$) is subtracted from the return-to-go:
$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T} \big(G_t - b(s_t)\big)\,\nabla_\theta \log \pi_\theta(a_t\mid s_t)\right] = \mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T} \hat{A}_t\,\nabla_\theta \log \pi_\theta(a_t\mid s_t)\right]. \quad (A.1)$$
Here, $\hat{A}_t = G_t - b(s_t)$ is an estimate of the advantage function $A^{\pi_\theta}(s_t, a_t) = Q^{\pi_\theta}(s_t, a_t) - V^{\pi_\theta}(s_t)$. Subtracting a baseline that depends only on the state $s_t$ does not bias the gradient
estimate, since $\mathbb{E}_{a_t\sim\pi_\theta(\cdot\mid s_t)}\big[b(s_t)\,\nabla_\theta\log\pi_\theta(a_t\mid s_t)\big] = b(s_t)\,\nabla_\theta \sum_{a_t}\pi_\theta(a_t\mid s_t) = b(s_t)\,\nabla_\theta 1 = 0$.

REINFORCE with baseline is typically implemented by minimizing the loss
$$\mathcal{L}_{\text{REINFORCE}}(\theta) = -\,\mathbb{E}_{\tau\sim\pi_\theta}\left[\sum_{t=0}^{T} \mathrm{SG}(\hat{A}_t)\,\log\pi_\theta(a_t\mid s_t)\right], \quad (A.2)$$
using the stop-gradient operator $\mathrm{SG}(\cdot)$ to prevent gradients from flowing into the advantage estimate $\hat{A}_t$. Because REINFORCE uses samples collected under the current policy $\pi_\theta$ for gradient estimation, it is an on-policy algorithm.

A.2 Proximal Policy Optimization (PPO)

On-policy methods like REINFORCE can be sample-inefficient, requiring new trajectories for each gradient update. Proximal Policy Optimization (PPO) (Schulman et al., 2017) improves stability and sample efficiency by enabling multiple updates on the same batch of data collected under a slightly older policy $\pi_{\theta_{\text{old}}}$. This makes PPO effectively off-policy. PPO achieves this by optimizing a surrogate objective that discourages large deviations between the current policy $\pi_\theta$ and the old policy $\pi_{\theta_{\text{old}}}$. The most widely used variant, PPO-Clip, employs a clipped objective:
$$J^{\text{PPO-Clip}}(\theta) = \mathbb{E}_t\Big[\min\big(w_t(\theta)\,\hat{A}_t,\ \mathrm{clip}(w_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\Big], \quad (A.3)$$
where the expectation $\mathbb{E}_t$ is taken over timesteps in the batch collected from $\pi_{\text{old}}$, and $w_t(\theta) = \pi_\theta(a_t\mid s_t)/\pi_{\text{old}}(a_t\mid s_t)$ is the importance sampling ratio. $\hat{A}_t$ is an advantage estimate, typically computed using Generalized Advantage Estimation (GAE) (Schulman et al., 2016), which leverages observed rewards and a learned state-value function $V(s)$ to reduce variance. Notably, in many practical implementations, especially in Reinforcement Learning from Human Feedback (RLHF) for large language models (Ouyang et al., 2022), a KL divergence penalty against a reference policy $\pi_{\text{ref}}$ (e.g., the initial supervised model) is often incorporated implicitly by modifying the reward signal before calculating the advantage. For example, the reward used for the GAE calculation might become $r'_t = r_t - \beta\log\big(\pi_\theta(a_t\mid s_t)/\pi_{\text{ref}}(a_t\mid s_t)\big)$.
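This reward shaping can be sketched numerically as follows. The function name and values are illustrative (our own, not from the paper), assuming per-token log-probabilities under the policy and the reference model are available.

```python
import math

def kl_shaped_reward(reward, logp_policy, logp_ref, beta=0.1):
    """Subtract the per-token KL penalty beta * log(pi_theta / pi_ref)
    from the raw reward: r'_t = r_t - beta * (logp_policy - logp_ref)."""
    return reward - beta * (logp_policy - logp_ref)

# A token the current policy prefers more than the reference is penalized:
r = kl_shaped_reward(1.0, logp_policy=math.log(0.8), logp_ref=math.log(0.4), beta=0.1)
# r equals 1.0 - 0.1 * log(2) ~= 0.931
```

If the policy and reference assign the same probability, the penalty vanishes and the raw reward passes through unchanged.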
When this $r'_t$ is used within GAE to compute $\hat{A}_t$, the KL penalty term is effectively folded into the advantage estimate that multiplies the importance weight $w_t(\theta)$ in the objective function. This approach contrasts with adding the KL penalty as a separate term to the final objective, as seen in GRPO (Section 2.2) or the formal derivations in Section 3.

The hyperparameter $\epsilon$ (e.g., 0.2) defines the clipping range $[1-\epsilon,\,1+\epsilon]$ for the importance ratio $w_t(\theta)$. This clipping limits the influence of potentially noisy importance weights when the policy changes significantly, preventing destructive updates and further stabilizing the off-policy training. PPO optimizes the policy $\pi_\theta$ by maximizing $J^{\text{PPO-Clip}}(\theta)$.

B Equivalence of $k_3$ Estimator and Unnormalized KL Divergence

As mentioned in Section 3.4, the $k_3$ estimator for KL divergence (Schulman, 2020) is equivalent to the unnormalized KL (UKL) divergence. The $k_3$ function is defined as $k_3(y) = y - 1 - \log y$.

Forward KL-$k_3$ and $\mathrm{UKL}(\pi_{\text{old}}\|\pi_\theta)$: The forward KL-$k_3$ divergence is $\mathrm{KL}_{k_3}(\pi_{\text{old}}\|\pi_\theta) := \mathbb{E}_{x\sim\pi_{\text{old}}}[k_3(\pi_\theta(x)/\pi_{\text{old}}(x))]$. Then
$$\mathbb{E}_{x\sim\pi_{\text{old}}}\left[k_3\!\left(\frac{\pi_\theta(x)}{\pi_{\text{old}}(x)}\right)\right]
= \mathbb{E}_{x\sim\pi_{\text{old}}}\left[\frac{\pi_\theta(x)}{\pi_{\text{old}}(x)} - 1 - \log\frac{\pi_\theta(x)}{\pi_{\text{old}}(x)}\right]
= \int_x \pi_{\text{old}}(x)\left(\frac{\pi_\theta(x)}{\pi_{\text{old}}(x)} - 1\right)dx - \int_x \pi_{\text{old}}(x)\log\frac{\pi_\theta(x)}{\pi_{\text{old}}(x)}\,dx
= \int_x \big(\pi_\theta(x) - \pi_{\text{old}}(x)\big)\,dx + \int_x \pi_{\text{old}}(x)\log\frac{\pi_{\text{old}}(x)}{\pi_\theta(x)}\,dx
= \mathrm{UKL}(\pi_{\text{old}}\|\pi_\theta).$$

Reverse KL-$k_3$ and $\mathrm{UKL}(\pi_\theta\|\pi_{\text{old}})$: The reverse KL-$k_3$ divergence is $\mathrm{KL}_{k_3}(\pi_\theta\|\pi_{\text{old}}) := \mathbb{E}_{x\sim\pi_\theta}[k_3(\pi_{\text{old}}(x)/\pi_\theta(x))]$. Then
$$\mathbb{E}_{x\sim\pi_\theta}\left[k_3\!\left(\frac{\pi_{\text{old}}(x)}{\pi_\theta(x)}\right)\right]
= \mathbb{E}_{x\sim\pi_\theta}\left[\frac{\pi_{\text{old}}(x)}{\pi_\theta(x)} - 1 - \log\frac{\pi_{\text{old}}(x)}{\pi_\theta(x)}\right]
= \int_x \pi_\theta(x)\left(\frac{\pi_{\text{old}}(x)}{\pi_\theta(x)} - 1\right)dx - \int_x \pi_\theta(x)\log\frac{\pi_{\text{old}}(x)}{\pi_\theta(x)}\,dx
= \int_x \big(\pi_{\text{old}}(x) - \pi_\theta(x)\big)\,dx + \int_x \pi_\theta(x)\log\frac{\pi_\theta(x)}{\pi_{\text{old}}(x)}\,dx
= \mathrm{UKL}(\pi_\theta\|\pi_{\text{old}}).$$

C REINFORCE-Style Regularized Policy Gradients with Various KL Regularization Forms

C.1 Rationale for REINFORCE-Style Loss Formulation

As noted in Section 4 of the main text, the derived off-policy policy gradients (Theorems 3.1 through 3.9) share a structural similarity with the REINFORCE estimator:
$$\nabla_\theta J(\theta) = \mathbb{E}_{x\sim\pi_{\text{sampling}}}\big[\mathrm{Weight}(x,\theta)\,\nabla_\theta\log\pi_\theta(x)\big].$$
This structure
suggests an alternative way to implement the gradient update, analogous to the REINFORCE-style approach used in the on-policy setting. Specifically, one could define a surrogate loss of the form
$$\mathcal{L}^{\text{REINFORCE-style}}(\theta) = -\,\mathbb{E}_{x\sim\pi_{\text{sampling}}}\big[\mathrm{SG}\big(\mathrm{Weight}(x,\theta)\big)\,\log\pi_\theta(x)\big]. \quad (C.1)$$
The rationale is that applying automatic differentiation to this loss should yield
$$\nabla_\theta \mathcal{L}^{\text{REINFORCE-style}}(\theta) \overset{\text{Autodiff}}{=} -\,\mathbb{E}_{x\sim\pi_{\text{sampling}}}\big[\mathrm{SG}\big(\mathrm{Weight}(x,\theta)\big)\,\nabla_\theta\log\pi_\theta(x)\big].$$
When this gradient is used for optimization, the stop-gradient SG is conceptually removed, resulting in an update aligned with $-\nabla_\theta J(\theta)$. This relies on SG preventing gradients from flowing through the $\theta$-dependence within $\mathrm{Weight}(x,\theta)$ (specifically, the dependence via the importance weight $w(x)$). The following subsections detail these REINFORCE-style loss formulations for each KL regularization type.

C.2 REINFORCE-Style RPG with Unnormalized Forward KL Regularization

Similarly, we can transform the unnormalized forward KL regularization of RPG into REINFORCE style:

Theorem C.1 (REINFORCE-Style Loss for Unnormalized Forward KL). For the objective $J_{\text{UFKL}}(\theta) = \mathbb{E}_{\pi_\theta}[R(x)] - \beta\,\mathrm{UKL}(\pi_{\text{old}}\|\pi_\theta)$, whose gradient (sampling from $\tilde\pi_{\text{old}}$) is
$$\nabla_\theta J_{\text{UFKL}}(\theta) = \mathbb{E}_{x\sim\tilde\pi_{\text{old}}}\big[Z_{\text{old}}\big(w(x)R(x) - \beta(w(x)-1)\big)\,\nabla_\theta\log\pi_\theta(x)\big]$$
(Theorem 3.4), a corresponding REINFORCE-style surrogate loss is
$$\mathcal{L}^{\text{REINFORCE-style}}_{\text{UFKL}}(\theta) = -\,\mathbb{E}_{x\sim\tilde\pi_{\text{old}}}\big[\mathrm{SG}\big(Z_{\text{old}}(w(x)R(x) - \beta(w(x)-1))\big)\,\log\pi_\theta(x)\big],$$
where $\tilde\pi_{\text{old}} = \pi_{\text{old}}/Z_{\text{old}}$ and $w(x) = \pi_\theta(x)/\pi_{\text{old}}(x)$ (using the unnormalized $\pi_{\text{old}}$). This loss aims to produce the gradient $-\nabla_\theta J_{\text{UFKL}}(\theta)$ via automatic differentiation.

C.3 REINFORCE-Style RPG with Reverse KL Regularization

Theorem C.2 (REINFORCE-Style Loss for Reverse KL). For the objective $J_{\text{RKL}}(\theta) = \mathbb{E}_{\pi_\theta}[R(x)] - \beta\,\mathrm{KL}(\pi_\theta\|\pi_{\text{old}})$, whose gradient is
$$\nabla_\theta J_{\text{RKL}}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}\big[w(x)\big(R(x) - \beta(\log w(x) + 1)\big)\,\nabla_\theta\log\pi_\theta(x)\big]$$
(Theorem 3.6), a corresponding REINFORCE-style surrogate loss is
$$\mathcal{L}^{\text{REINFORCE-style}}_{\text{RKL}}(\theta) = -\,\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\mathrm{SG}\big(w(x)(R(x) - \beta\log w(x) - \beta)\big)\,\log\pi_\theta(x)\big], \quad (C.2)$$
where $w(x) = \pi_\theta(x)/\pi_{\text{old}}(x)$.
This loss aims to produce the gradient $-\nabla_\theta J_{\text{RKL}}(\theta)$ via automatic differentiation.

C.4 REINFORCE-Style RPG with Unnormalized Reverse KL Regularization

Theorem C.3 (REINFORCE-Style Loss for Unnormalized Reverse KL). For the objective $J_{\text{URKL}}(\theta) = \mathbb{E}_{\pi_\theta}[R(x)] - \beta\,\mathrm{UKL}(\pi_\theta\|\pi_{\text{old}})$, whose gradient (sampling from $\tilde\pi_{\text{old}}$) is
$$\nabla_\theta J_{\text{URKL}}(\theta) = \mathbb{E}_{x\sim\tilde\pi_{\text{old}}}\big[Z_{\text{old}}\,w(x)\big(R(x) - \beta\log w(x)\big)\,\nabla_\theta\log\pi_\theta(x)\big]$$
(Theorem 3.9), a corresponding REINFORCE-style surrogate loss is
$$\mathcal{L}^{\text{REINFORCE-style}}_{\text{URKL}}(\theta) = -\,\mathbb{E}_{x\sim\tilde\pi_{\text{old}}}\big[\mathrm{SG}\big(Z_{\text{old}}\,w(x)(R(x) - \beta\log w(x))\big)\,\log\pi_\theta(x)\big],$$
where $\tilde\pi_{\text{old}} = \pi_{\text{old}}/Z_{\text{old}}$ and $w(x) = \pi_\theta(x)/\pi_{\text{old}}(x)$ (using the unnormalized $\pi_{\text{old}}$). This loss aims to produce the gradient $-\nabla_\theta J_{\text{URKL}}(\theta)$ via automatic differentiation.

D More on Algorithmic Details

D.1 Stabilization Techniques for Regularized Policy Gradients

Practical implementations of off-policy policy gradient methods often require stabilization techniques to manage variance or prevent destructively large policy updates. Common techniques include:

• Dual-Clip Objective: This method adapts the clipping mechanism from PPO (Schulman et al., 2017), with a modification for negative advantages proposed by Ye et al. (2020), to stabilize updates. The Dual-Clip objective aims to maximize $J^{\text{DualClip}} = \mathbb{E}_{x\sim\pi_{\text{old}}}[L^{\text{DualClip}}(x,\theta)]$, where $\hat{A}(x)$ is an estimate of the advantage analogue (e.g., $R(x) - b$ or the full term derived from the regularized gradient), $w(x) = \pi_\theta(x)/\pi_{\text{old}}(x)$ is the importance ratio, and $L^{\text{DualClip}}(x,\theta)$ is defined as:
  – If $\hat{A}(x) \ge 0$: $L^{\text{DualClip}}(x,\theta) = \min\big(w(x)\hat{A}(x),\ \mathrm{clip}(w(x),\,1-\epsilon_1,\,1+\epsilon_2)\,\hat{A}(x)\big)$.
  – If $\hat{A}(x) < 0$: $L^{\text{DualClip}}(x,\theta) = \max\Big(\min\big(w(x)\hat{A}(x),\ \mathrm{clip}(w(x),\,1-\epsilon_1,\,1+\epsilon_2)\,\hat{A}(x)\big),\ c\,\hat{A}(x)\Big)$,
where $\epsilon_1, \epsilon_2 > 0$ are clipping parameters and $c > 1$ provides a lower bound for negative advantages. To use this with gradient descent (which minimizes a loss $\mathcal{L}$), we minimize the negative of the Dual-Clip objective term. Using $-\min(a,b) = \max(-a,-b)$ and $-\max(a,b) = \min(-a,-b)$, the corresponding loss term for a single sample $x$ is:
  – If $\hat{A}(x) \ge 0$: $\mathcal{L}^{\text{DualClip}}(x,\theta) = \max\big(-w(x)\hat{A}(x),\ -\mathrm{clip}(w(x),\,1-\epsilon_1,\,1+\epsilon_2)\,\hat{A}(x)\big)$.
  – If $\hat{A}(x) < 0$: Let $\mathcal{L}^{\text{clip}} = \max\big(-w(x)\hat{A}(x),\ -\mathrm{clip}(w(x),\,1-\epsilon_1,\,1+\epsilon_2)\,\hat{A}(x)\big)$. Then $\mathcal{L}^{\text{DualClip}}(x,\theta) = \min\big(\mathcal{L}^{\text{clip}},\ -c\,\hat{A}(x)\big)$.
Here, $\hat{A}(x)$ should represent the advantage or an analogous term
derived from the gradient of the original (non-negated) regularized objective (e.g., Theorem 3.6). The overall loss is $\mathcal{L}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}[\mathcal{L}^{\text{DualClip}}(x,\theta)]$. This loss function is differentiable with respect to $\theta$ (which appears in $w(x)$, and potentially in $\hat{A}(x)$ if it includes terms like $\log w(x)$). This loss formulation ensures that updates are conservative: for positive advantages, it acts like standard PPO-Clip; for negative advantages, it prevents the objective from becoming arbitrarily large (the loss from becoming arbitrarily small) by introducing the lower bound $c\,\hat{A}(x)$ on the objective (equivalently, the upper bound $-c\,\hat{A}(x)$ on the loss).

• Baseline Subtraction: Used to define the advantage $\hat{A}(x) = R(x) - b(x)$, reducing the variance of the gradient estimates. The baseline $b(x)$ should ideally not depend strongly on $\theta$. A common choice is a value function estimate $V(x)$ or simply the batch average reward $b = \frac{1}{N}\sum_i R(x_i)$. The definition of $\hat{A}(x)$ might also incorporate regularization terms, depending on the base objective chosen (see the RKL example below).

For instance, consider applying Dual-Clip to stabilize the reverse KL objective (Theorem 3.6). The gradient involves the term
$$w(x)\,\underbrace{\big[(R(x) - b) - \beta(\log w(x) + 1)\big]}_{\text{analogue to }\hat{A}_{\text{RKL}}(x,\,w;\,b)}\,\nabla_\theta\log\pi_\theta(x).$$
Using this $\hat{A}_{\text{RKL}}$ in the Dual-Clip loss structure gives $\mathcal{L}^{\text{DualClip}}_{\text{RKL}}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}[\mathcal{L}^{\text{DualClip}}_{\text{RKL}}(x,\theta)]$, where:
  – If $\hat{A}_{\text{RKL}}(x,w;b) \ge 0$: $\mathcal{L}^{\text{DualClip}}_{\text{RKL}}(x,\theta) = \max\big(-w(x)\hat{A}_{\text{RKL}},\ -\mathrm{clip}(w(x),\,1-\epsilon_1,\,1+\epsilon_2)\,\hat{A}_{\text{RKL}}\big)$.
  – If $\hat{A}_{\text{RKL}}(x,w;b) < 0$: Let $\mathcal{L}^{\text{clip}} = \max\big(-w(x)\hat{A}_{\text{RKL}},\ -\mathrm{clip}(w(x),\,1-\epsilon_1,\,1+\epsilon_2)\,\hat{A}_{\text{RKL}}\big)$; then $\mathcal{L}^{\text{DualClip}}_{\text{RKL}}(x,\theta) = \min\big(\mathcal{L}^{\text{clip}},\ -c\,\hat{A}_{\text{RKL}}\big)$,
where $\hat{A}_{\text{RKL}}(x,w;b) = (R(x) - b) - \beta(\log w(x) + 1)$. Simpler approximations might use $\hat{A}(x) = R(x) - b$. Using PPO-style clipping alters the optimization objective compared to the original KL-regularized objectives, trading strict adherence for enhanced stability. The choice of base objective structure, definition of $\hat{A}$, and stabilization techniques depends on the specific application.
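The single-sample Dual-Clip loss term above can be sketched as follows. This is a minimal sketch with hypothetical default parameters; the paper's Algorithm 1 additionally handles the importance-weight and advantage bookkeeping, and in an autodiff framework `w` would be a tensor through which gradients flow.

```python
def dual_clip_loss(w, A, eps1=0.2, eps2=0.2, c=3.0):
    """Negative Dual-Clip objective for one sample.

    w: importance ratio pi_theta(x) / pi_old(x)
    A: advantage estimate (or regularized advantage analogue)
    """
    clipped_w = min(max(w, 1.0 - eps1), 1.0 + eps2)
    # Standard PPO-Clip loss term: max(-w*A, -clip(w)*A) = -min(w*A, clip(w)*A).
    l_clip = max(-w * A, -clipped_w * A)
    if A >= 0:
        return l_clip
    # For negative advantages, cap the loss at -c*A so the objective
    # cannot grow without bound as w increases.
    return min(l_clip, -c * A)

# With A >= 0, large ratios are clipped at 1 + eps2:
assert dual_clip_loss(2.0, 1.0) == -1.2
# With A < 0 and a very large ratio, the lower bound c*A takes over:
assert dual_clip_loss(5.0, -1.0) == 3.0
```

The two asserts illustrate the two regimes: ordinary PPO-style clipping when the advantage is non-negative, and the extra lower bound when it is negative.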
D.2 Stabilization Techniques for REINFORCE-Style Regularized Policy Gradients

While the REINFORCE-style losses derived in this section (Table 2) provide theoretically grounded gradient estimators for the regularized objectives, practical implementations often benefit significantly from stabilization techniques common in policy gradient methods. These techniques aim to reduce variance and control the magnitude of policy updates, which is especially crucial in the off-policy setting, where importance weights $w(x)$ can exacerbate instability.

• Baseline Subtraction and Regularized Advantage Definition: This is a standard variance reduction technique. Critically, when combining it with stabilization such as PPO clipping in this REINFORCE-style context, the term playing the role of the advantage ($\hat{A}_t$) that gets clipped should ideally incorporate not just the baselined reward but also the regularization terms derived from the objective's gradient. Recall the REINFORCE-style gradient structure $\nabla_\theta J(\theta) = \mathbb{E}_{x\sim\pi_{\text{sampling}}}[\mathrm{Weight}(x,\theta)\,\nabla_\theta\log\pi_\theta(x)]$. The PPO objective involves terms like $w_t\hat{A}_t$. To align these, we define the regularized advantage $\hat{A}_t$ such that $w_t\hat{A}_t$ approximates the key part of $\mathrm{Weight}(x,\theta)$. For example:
  – For RKL (Theorem C.2), $\mathrm{Weight}_{\text{RKL}} = w(x)\big(R(x) - \beta(\log w(x) + 1)\big)$. We define the regularized advantage as $\hat{A}^{\text{RKL}}_t = (R(x) - b(x)) - \beta(\log w(x) + 1)$.
  – For URKL (Theorem C.3), $\mathrm{Weight}_{\text{URKL}} = Z_{\text{old}}\,w(x)\big(R(x) - \beta\log w(x)\big)$. Ignoring $Z_{\text{old}}$, we define $\hat{A}^{\text{URKL}}_t = (R(x) - b(x)) - \beta\log w(x)$.

Algorithm 1: RPG with Dual-Clip Stabilization
Require: Reference policy $\pi_{\text{old}}$, reward function $R(x)$, initial policy parameters $\theta_0$
Require: Base objective structure $J_{\text{chosen}}$ (implies regularization type), regularization strength $\beta \ge 0$
Require: Learning rate $\alpha > 0$, batch size $N > 0$, number of epochs $K \ge 1$ per iteration
Require: Dual-Clip parameters $\epsilon_1 > 0$, $\epsilon_2 > 0$, $c > 1$
Require:
Baseline method (e.g., batch/group average, value function $V_\phi$)
1: Initialize policy parameters $\theta \leftarrow \theta_0$
2: Initialize value function parameters $\phi$ (if baseline uses $V_\phi$)
3: for each training iteration do
4:   Sample batch $D = \{x_i\}_{i=1}^N \sim \pi_{\text{old}}$  ▷ Collect data using old policy
5:   Compute $R_i$ for $i = 1..N$
6:   Compute baselines $b_i$ for $i = 1..N$ (e.g., $b_i = \frac{1}{N}\sum_j R_j$ or $b_i = V_\phi(x_i)$)
7:   for $k = 1$ to $K$ do  ▷ Multiple optimization epochs on the same batch
8:     Initialize batch loss $L_{\text{batch}} = 0$
9:     for $i = 1$ to $N$ do
10:      $w_i = \pi_\theta(x_i)/\pi_{\text{old}}(x_i)$, $\log w_i = \log\pi_\theta(x_i) - \log\pi_{\text{old}}(x_i)$  ▷ Compute importance weight
11:      Define advantage analogue $\hat{A}_i$ based on $J_{\text{chosen}}$, $R_i$, $b_i$, $w_i$, $\beta$
12:        ▷ Ex: for RKL, $\hat{A}_i = (R_i - b_i) - \beta(\log w_i + 1)$. Note: $\hat{A}_i$ depends on the current $\theta$ via $w_i$
13:      if Dual-Clip enabled then
14:        $\text{loss\_term1}_i = -w_i \cdot \hat{A}_i$  ▷ Negative of unclipped term; gradient flows through $w_i$
15:        $w_{i,\text{clipped}} = \mathrm{clip}(w_i,\,1-\epsilon_1,\,1+\epsilon_2)$
16:        $\text{loss\_term2}_i = -w_{i,\text{clipped}} \cdot \hat{A}_i$  ▷ Negative of clipped term
17:        $L_{\text{clip}}(i) = \max(\text{loss\_term1}_i, \text{loss\_term2}_i)$
18:        if $\hat{A}_i \ge 0$ then
19:          $L_{\text{term}}(i) = L_{\text{clip}}(i)$
20:        else  ▷ $\hat{A}_i < 0$
21:          $\text{loss\_lower\_bound}_i = -c \cdot \hat{A}_i$  ▷ Lower bound term
22:          $L_{\text{term}}(i) = \min(L_{\text{clip}}(i), \text{loss\_lower\_bound}_i)$
23:        end if
24:      else
25:        ▷ Define base loss term (unclipped) based on the chosen objective's negative gradient structure
26:        ▷ Ex: for RKL loss (no clip): $L_{\text{term}}(i) = w_i\big({-(R_i - b_i)} + \beta\log w_i\big)$
27:        $L_{\text{term}}(i) = -w_i \cdot \hat{A}_i$
28:      end if
29:      $L_{\text{batch}} = L_{\text{batch}} + L_{\text{term}}(i)$
30:    end for
31:    $\hat{L}(\theta) = \frac{1}{N} L_{\text{batch}}$  ▷ Compute final batch loss for minimization
32:    $g \leftarrow \nabla_\theta \hat{L}(\theta)$  ▷ Compute gradient (flows through $w_i$ and $\hat{A}_i$)
33:    $\theta \leftarrow \mathrm{OptimizerUpdate}(\theta, g, \alpha)$  ▷ Update policy parameters
34:    if using a learned baseline $V_\phi$ then
35:      Update value function parameters $\phi$ (e.g., by minimizing $\mathbb{E}[(V_\phi(x_i) - R_i)^2]$ over the batch)
36:    end if
37:  end for
38: end for
39: return Optimized policy parameters $\theta$

  – For FKL or UFKL, the structure might not cleanly separate into $w(x)\times(\dots)$.
In such cases, a common simplification is to use $\hat{A}_t = R(x) - b(x)$ and accept that the clipping primarily stabilizes the reward term's contribution. This calculated $\hat{A}_t$ (incorporating reward, baseline, and KL terms) is then treated as constant using the stop-gradient operator, $\mathrm{SG}(\hat{A}_t)$, when plugged into the clipping loss function.

Figure 4: Visualization of the Dual-Clip loss term $\mathcal{L}^{\text{DualClip}}(x,\theta)$ vs. importance weight $w(x)$, as described in Section D.1 and Algorithm 1. This formulation is typically implemented as fully differentiable w.r.t. $\theta$ (via $w(x)$, and potentially $\hat{A}(x)$ if $\hat{A}$ depends on $\theta$, e.g., via $\log w(x)$), unlike REINFORCE-style implementations that use $\mathrm{SG}(\hat{A})$ or $\mathrm{SG}(\ell_i)$ within the loss. For visualization, $\hat{A}(x)$ is treated as constant ($\hat{A} = 1$ left, $\hat{A} = -1$ right) to isolate the effect of $w$. Solid blue: loss depends linearly on $w$; gradient $\nabla_\theta\mathcal{L}$ flows via $w(x)$. Dotted magenta: loss is constant w.r.t. $w$; gradient $\nabla_\theta\mathcal{L}$ does not flow via $w(x)$ in this segment (though it might flow via $\hat{A}$ if $\hat{A}$ depends on $\theta$). Left: case $\hat{A} \ge 0$. Right: case $\hat{A} < 0$.

• PPO-Style Objective Clipping (Dual-Clip Variant): PPO (Schulman et al., 2017) introduces objective clipping to limit the impact of large importance ratios $w(x)$. The Dual-Clip variant (Ye et al., 2020) refines this, particularly for negative advantages, using a lower-bound parameter $c > 1$. When applied in the REINFORCE-style setting, the PPO Dual-Clip objective aims to maximize (in simplified notation, with the expectation over $t \sim \pi_{\text{old}}$): $J^{\text{DualClip}}(\theta) = \mathbb{E}_t[L^{\text{DualClip}}$
$_t(\theta)]$, where $\hat{A}_t$ is the regularized advantage defined above (incorporating $R_t$, $b_t$, and KL terms), $w_t(\theta) = \pi_\theta(a_t\mid s_t)/\pi_{\text{old}}(a_t\mid s_t)$, and $L^{\text{DualClip}}_t(\theta)$ is defined based on the sign of $\mathrm{SG}(\hat{A}_t)$:
  – If $\mathrm{SG}(\hat{A}_t) \ge 0$: $L^{\text{DualClip}}_t(\theta) = \min\big(w_t(\theta)\,\mathrm{SG}(\hat{A}_t),\ \mathrm{clip}(w_t(\theta),\,1-\epsilon_1,\,1+\epsilon_2)\,\mathrm{SG}(\hat{A}_t)\big)$.
  – If $\mathrm{SG}(\hat{A}_t) < 0$: $L^{\text{DualClip}}_t(\theta) = \max\Big(\min\big(w_t(\theta)\,\mathrm{SG}(\hat{A}_t),\ \mathrm{clip}(w_t(\theta),\,1-\epsilon_1,\,1+\epsilon_2)\,\mathrm{SG}(\hat{A}_t)\big),\ c\,\mathrm{SG}(\hat{A}_t)\Big)$.
Here, $\epsilon_1, \epsilon_2$ are clipping hyperparameters, and $c$ is the lower-bound factor. Note that $\theta$ influences this objective only through $w_t(\theta)$, as $\hat{A}_t$ is detached via SG. To implement this using gradient descent (minimizing a loss), we minimize the negative of the PPO Dual-Clip objective. The loss function becomes $\mathcal{L}^{\text{DualClip}}(\theta) = \mathbb{E}_t[\mathcal{L}^{\text{DualClip}}_t(\theta)]$, where $\mathcal{L}^{\text{DualClip}}_t(\theta) = -L^{\text{DualClip}}_t(\theta)$. Explicitly:
  – If $\mathrm{SG}(\hat{A}_t) \ge 0$: $\mathcal{L}^{\text{DualClip}}_t(\theta) = \max\big(-w_t(\theta)\,\mathrm{SG}(\hat{A}_t),\ -\mathrm{clip}(w_t(\theta),\,1-\epsilon_1,\,1+\epsilon_2)\,\mathrm{SG}(\hat{A}_t)\big)$.
  – If $\mathrm{SG}(\hat{A}_t) < 0$: Let $\mathcal{L}^{\text{clip}} = \max\big(-w_t(\theta)\,\mathrm{SG}(\hat{A}_t),\ -\mathrm{clip}(w_t(\theta),\,1-\epsilon_1,\,1+\epsilon_2)\,\mathrm{SG}(\hat{A}_t)\big)$; then $\mathcal{L}^{\text{DualClip}}_t(\theta) = \min\big(\mathcal{L}^{\text{clip}},\ -c\,\mathrm{SG}(\hat{A}_t)\big)$.
This PPO Dual-Clip loss function $\mathcal{L}^{\text{DualClip}}(\theta)$ replaces the simpler REINFORCE-style losses derived earlier (such as $\mathcal{L}^{\text{REINFORCE-style}}_{\text{RKL}}$ in (C.2)). The gradient $\nabla_\theta\mathcal{L}^{\text{DualClip}}(\theta)$ is computed via automatic differentiation, where the gradient flows through $w_t(\theta)$ but is stopped at $\hat{A}_t$. This approach uses the PPO objective structure with the appropriately defined regularized advantage for stabilization in an off-policy REINFORCE-style update. Algorithm 2 details this implementation.

E Detailed Experimental Setup

Hyperparameters. Unless otherwise specified, all experiments use a learning rate of $1\times10^{-6}$ with a weight decay of 0.1 and gradient clipping at 1.0. Training proceeds for 400 steps, including an initial 10 warm-up steps, after which a constant learning rate is maintained. The global training batch size is 512. For each sample in the batch, we roll out 16 responses using a temperature of 1.0. The per-GPU mini-batch size is 32, and experiments are conducted on 8 NVIDIA H100 GPUs.
The maximum training and rollout length is set to 16,384 tokens, with dynamic batching enabled. The KL regularization coefficient $\beta$ is set to $1\times10^{-4}$.

Specific Clipping Parameters and Adopted Techniques. As mentioned in Section 5, for PPO-style clipping we set $(\epsilon_1, \epsilon_2) = (0.2, 0.28)$ for RPG, DAPO, REINFORCE++, and REINFORCE++-Baseline. For RPG-REINFORCE and GRPO, we use $(\epsilon_1, \epsilon_2) = (0.1, 0.1)$. This choice for RPG-REINFORCE is informed by ablation studies detailed in Appendix F.9. These studies show that while $(\epsilon_1, \epsilon_2) = (0.1, 0.1)$ provides stable performance, larger clip parameters like $(0.2, 0.28)$ can lead to instability for RPG-REINFORCE variants, particularly with extensively pre-trained models like Qwen-2.5-Math-7B (as observed in Figures 12 and 13 in Appendix F.9). This suggests that such models may benefit from tighter clipping to encourage exploitation.

Optimizer Details. As mentioned in Section 5, we investigated the impact of different optimizers, comparing the widely used AdamW (Loshchilov & Hutter, 2019) with the more recent Schedule-Free AdamW (Defazio et al., 2024). Schedule-Free optimizers aim to eliminate the need for explicit learning rate schedules after an initial warmup by maintaining a constant learning rate and relying on internal model parameter averaging. This continuous averaging contrasts with schedulers like Warmup-Stable-Decay (WSD) (Hu et al., 2024), which typically involve explicit learning rate annealing. Our empirical results, detailed in Appendix F (e.g., Figures 7, 9, 11, 13), confirm the benefits of Schedule-Free AdamW.
Figure 5: Visualization of the loss coefficient L_i vs. the importance weight w_i = πθ(x_i)/π_old(x_i), based on the specific implementation in Algorithm 2. This version swaps the main branching condition compared to previous versions (which branch on ψ_i > 0). The plot assumes ℓ_i = −log πθ(x_i) = 1 for visualizing the value of L_i. The line styles indicate the nature of the gradient ∇θ L_i. Solid blue: the gradient exists, flowing only via ℓ_i; the coefficient multiplying ∇θ ℓ_i depends on SG(w_i). Dotted magenta: the gradient is zero; this occurs when ℓ_i is detached via SG in the loss calculation. Left: case ψ_i ≥ 0. Right: case ψ_i < 0.

Algorithm 2 REINFORCE-Style RPG with Dual-Clip Stabilization
Require: Reference policy π_old, reward function R(x), initial policy parameters θ_0
Require: KL component function Compute_KL_Component(x, θ, π_old), KL component coefficient β
Require: Learning rate α > 0, batch size N > 0, number of epochs K ≥ 1 per iteration
Require: Dual-clip parameters: ε_1 > 0 (low), ε_2 > 0 (high), c > 1
Require: Baseline method (e.g., batch average, value function V_φ)
1:  Initialize policy parameters θ ← θ_0
2:  Initialize value function parameters φ (if the baseline uses V_φ)
3:  for each training iteration do
4:    Sample batch D = {x_i}_{i=1}^N ∼ π_old
5:    Compute rewards R_i for i = 1..N
6:    Compute baselines b_i for i = 1..N (e.g., b_i = (1/N) Σ_j R_j or b_i = V_φ(x_i))
7:    for k = 1 to K do                         ▷ multiple optimization epochs on the same batch
8:      Initialize batch loss L_batch = 0
9:      for i = 1 to N do
10:       w_i = πθ(x_i) / π_old(x_i)            ▷ importance weight
11:       ℓ_i = −log πθ(x_i)                    ▷ negative log-probability
12:       A_{R,i} = R_i − b_i                   ▷ baseline-subtracted reward
13:       C_{KL,i} = β · Compute_KL_Component(x_i, θ, π_old(x_i))   ▷ KL component
14:       A′_i = A_{R,i} + SG(C_{KL,i}) / SG(w_i)                   ▷ effective advantage
15:       ψ_i = A′_i × ℓ_i                      ▷ branching term
16:       if ψ_i ≥ 0 then
17:         w_high = 1 + ε_2
18:         if w_i < w_high then
19:           L_i = ψ_i × SG(w_i)               ▷ gradient exists
20:         else                                ▷ w_i ≥ w_high
21:           A′_high = A_{R,i} + SG(C_{KL,i}) / SG(w_high)
22:           ψ_high = A′_high × SG(ℓ_i)
23:           L_i = ψ_high × SG(w_high)
24:         end if
25:       else                                  ▷ ψ_i < 0
26:         w_low = 1 − ε_1
27:         if w_i ≤ w_low then
28:           A′_low = A_{R,i} + SG(C_{KL,i}) / SG(w_low)
29:           ψ_low = A′_low × SG(ℓ_i)
30:           L_i = ψ_low × SG(w_low)
31:         else if w_i < c then
32:           L_i = ψ_i × SG(w_i)               ▷ gradient exists
33:         else                                ▷ w_i ≥ c
34:           L_i = A_{R,i} × SG(ℓ_i) × c + SG(C_{KL,i}) × SG(ℓ_i)
35:         end if
36:       end if
37:       L_batch = L_batch + L_i
38:     end for
39:     L(θ) = (1/N) · L_batch                  ▷ compute average batch loss
40:     g ← ∇_θ L(θ)                            ▷ compute gradient
41:     θ ← OptimizerUpdate(θ, g, α)            ▷ update policy parameters
42:     if using a learned baseline V_φ then
43:       Update value function parameters φ
44:     end if
45:   end for
46: end for
47: return optimized policy parameters θ

F More Experimental Results

F.1 Regularized Policy Gradient using Qwen-2.5-7B-Instruct and AdamW Optimizer
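To make the branching in lines 14–36 of Algorithm 2 concrete, the following Python sketch computes a single scalar loss term L_i from w_i, ℓ_i, A_{R,i}, and C_{KL,i}. It is value-level only: the SG(·) stop-gradient operators change gradients, not values, so they are modeled as identity here (in a tensor framework they would be `.detach()` calls). The function name and default clip parameters are illustrative, not from the paper.

```python
def dual_clip_loss_term(w, ell, A_R, C_KL, eps1=0.2, eps2=0.2, c=3.0):
    """Value-level sketch of one L_i from Algorithm 2 (SG treated as identity)."""
    A_eff = A_R + C_KL / w             # line 14: effective advantage A'_i
    psi = A_eff * ell                  # line 15: branching term
    if psi >= 0:
        w_high = 1.0 + eps2
        if w < w_high:
            return psi * w             # lines 17-19: gradient would flow via ell
        A_high = A_R + C_KL / w_high   # lines 21-23: upper clip, fully detached
        return (A_high * ell) * w_high
    w_low = 1.0 - eps1
    if w <= w_low:
        A_low = A_R + C_KL / w_low     # lines 26-30: lower clip, fully detached
        return (A_low * ell) * w_low
    if w < c:
        return psi * w                 # lines 31-32: gradient would flow via ell
    return A_R * ell * c + C_KL * ell  # line 34: dual-clip ceiling at c
```

For example, with w = 2.0, ℓ = 1.0, A_R = 1.0, C_KL = 0 and ε_2 = 0.2, the upper-clip branch fires and the term is clamped to 1 × 1.2 = 1.2 rather than growing with w.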
[Figure 6 appears here: six panels, (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score), (e) Entropy, (f) Response Length.]
Figure 6: Performance of fully differentiable Regularized Policy Gradient (RPG) methods compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AMC23, AIME24, AIME25) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-7B-Instruct. Optimizer: AdamW.
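The benchmark accuracy panels in these figures are labeled mean@32, i.e., each problem is sampled 32 times and per-problem correctness rates are averaged across the benchmark. A minimal sketch of that metric, with an illustrative function name:

```python
def mean_at_k(per_problem_outcomes):
    """mean@k: average over problems of the fraction of k samples answered correctly.

    per_problem_outcomes: list of lists of booleans, one inner list
    (length k, e.g. k = 32) per benchmark problem.
    """
    rates = [sum(outcomes) / len(outcomes) for outcomes in per_problem_outcomes]
    return sum(rates) / len(rates)
```

For instance, two problems where the first is solved in 1 of 2 samples and the second in 2 of 2 give mean@2 = (0.5 + 1.0) / 2 = 0.75.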
F.2 Regularized Policy Gradient using Qwen-2.5-7B-Instruct and Schedule-Free Optimizer

[Figure 7 appears here: six panels, (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score), (e) Entropy, (f) Response Length.]
Figure 7: Performance of fully differentiable Regularized Policy Gradient (RPG) methods compared to baselines. Base model: Qwen-2.5-7B-Instruct. Optimizer: Schedule-Free AdamW.
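Panels (e) in Figures 6 and 7 track actor entropy over training steps, a standard diagnostic of how concentrated the policy's next-token distributions become. As an illustrative sketch (not the paper's logging code), mean per-token Shannon entropy can be computed from the per-step probability vectors:

```python
import math

def mean_token_entropy(step_distributions):
    """Average Shannon entropy (in nats) of per-step next-token distributions.

    step_distributions: list of probability vectors, one per decoding step.
    """
    entropies = [
        -sum(p * math.log(p) for p in dist if p > 0.0)
        for dist in step_distributions
    ]
    return sum(entropies) / len(entropies)
```

A uniform two-way distribution yields ln 2 ≈ 0.693 nats, while a fully deterministic step yields 0; a collapsing entropy curve signals a policy losing exploration.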
F.3 Regularized Policy Gradient using Qwen-2.5-Math-7B and AdamW Optimizer

[Figure 8 appears here: six panels, (a) AIME24, (b) AMC23, (c) MATH500, (d) Reward (Critic Score), (e) Entropy, (f) Response Length.]
Figure 8: Performance of fully differentiable Regularized Policy Gradient (RPG) methods compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AIME24, AMC23, MATH500) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-Math-7B. Optimizer: AdamW.
F.4 Regularized Policy Gradient using Qwen-2.5-Math-7B and Schedule-Free Optimizer

[Figure plot panels, image not recoverable: (a) AIME24, (b) AMC23, (c) MATH500, (d) Reward (Critic Score), (e) Entropy, (f) Response Length]
Figure 9: Performance of fully differentiable Regularized Policy Gradient (RPG) methods compared to baselines. Base model: Qwen-2.5-Math-7B. Optimizer: Schedule-Free AdamW.

F.5 REINFORCE-Style Regularized Policy Gradient with ϵ1 = 0.1, ϵ2 = 0.1 using Qwen-2.5-7B-Instruct and AdamW Optimizer

[Figure plot panels, image not recoverable: (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score), (e) Entropy, (f) Response Length]

Figure 10: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0.1, 0.1) compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AMC23, AIME24, AIME25) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-7B-Instruct. Optimizer: AdamW.

F.6 REINFORCE-Style Regularized Policy Gradient with ϵ1 = 0.1, ϵ2 = 0.1 using Qwen-2.5-7B-Instruct and Schedule-Free Optimizer
[Figure plot panels, image not recoverable: (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score)]
/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e/uni00000010/uni00000025/uni00000044/uni00000056/uni00000048/uni0000004f/uni0000004c/uni00000051/uni00000048 /uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e /uni0000002a/uni00000035/uni00000033/uni00000032 /uni00000027/uni00000024/uni00000033/uni00000032 /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000035/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000035/uni0000002e/uni0000002f (d) Reward (Critic Score) /uni00000013 /uni00000018/uni00000013 /uni00000014/uni00000013/uni00000013 /uni00000014/uni00000018/uni00000013 /uni00000015/uni00000013/uni00000013 /uni00000015/uni00000018/uni00000013 /uni00000016/uni00000013/uni00000013 /uni00000016/uni00000018/uni00000013 /uni00000017/uni00000013/uni00000013 
/uni00000036/uni00000057/uni00000048/uni00000053/uni00000013/uni00000011/uni00000013/uni00000013/uni00000011/uni00000014/uni00000013/uni00000011/uni00000015/uni00000013/uni00000011/uni00000016/uni00000013/uni00000011/uni00000017/uni00000013/uni00000011/uni00000018/uni00000013/uni00000011/uni00000019/uni00000013/uni00000011/uni0000001a/uni00000024/uni00000046/uni00000057/uni00000052/uni00000055/uni00000003/uni00000028/uni00000051/uni00000057/uni00000055/uni00000052/uni00000053/uni0000005c/uni00000024/uni00000046/uni00000057/uni00000052/uni00000055/uni00000003/uni00000028/uni00000051/uni00000057/uni00000055/uni00000052/uni00000053/uni0000005c /uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e/uni00000010/uni00000025/uni00000044/uni00000056/uni00000048/uni0000004f/uni0000004c/uni00000051/uni00000048 /uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e /uni0000002a/uni00000035/uni00000033/uni00000032 /uni00000027/uni00000024/uni00000033/uni00000032 /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000035/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000035/uni0000002e/uni0000002f (e) Entropy /uni00000013 
/uni00000018/uni00000013 /uni00000014/uni00000013/uni00000013 /uni00000014/uni00000018/uni00000013 /uni00000015/uni00000013/uni00000013 /uni00000015/uni00000018/uni00000013 /uni00000016/uni00000013/uni00000013 /uni00000016/uni00000018/uni00000013 /uni00000017/uni00000013/uni00000013 /uni00000036/uni00000057/uni00000048/uni00000053/uni00000019/uni00000018/uni00000013/uni0000001a/uni00000013/uni00000013/uni0000001a/uni00000018/uni00000013/uni0000001b/uni00000013/uni00000013/uni0000001b/uni00000018/uni00000013/uni0000001c/uni00000013/uni00000013/uni00000035/uni00000048/uni00000056/uni00000053/uni00000052/uni00000051/uni00000056/uni00000048/uni00000003/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000035/uni00000048/uni00000056/uni00000053/uni00000052/uni00000051/uni00000056/uni00000048/uni00000003/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b /uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e/uni00000010/uni00000025/uni00000044/uni00000056/uni00000048/uni0000004f/uni0000004c/uni00000051/uni00000048 /uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e /uni0000002a/uni00000035/uni00000033/uni00000032 /uni00000027/uni00000024/uni00000033/uni00000032 /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000035/uni0000002e/uni0000002f 
/uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000035/uni0000002e/uni0000002f (f) Response Length Figure 11: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0 .1,0.1)compared to baselines. Base model: Qwen-2.5- 7B-Instruct. Optimizer: Schedule-Free AdamW. 37 F.7 REINFORCE-Style Regularized Policy Gradient with ϵ1= 0.1, ϵ2= 0.1using Qwen- 2.5-Math-7B and AdamW Optimizer /uni00000013 /uni00000018/uni00000013 /uni00000014/uni00000013/uni00000013 /uni00000014/uni00000018/uni00000013 /uni00000015/uni00000013/uni00000013 /uni00000015/uni00000018/uni00000013 /uni00000016/uni00000013/uni00000013 /uni00000016/uni00000018/uni00000013 /uni00000017/uni00000013/uni00000013 
[Figure 12 plots: six panels, (a) AIME24, (b) AMC23, (c) MATH500, (d) Reward (Critic Score), (e) Entropy, (f) Response Length, each plotting the metric against training step for REINFORCE++-Baseline, REINFORCE++, GRPO, DAPO, RPG-REINFORCE-FKL, RPG-REINFORCE-RKL, RPG-REINFORCE-UFKL, and RPG-REINFORCE-URKL.]

Figure 12: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0.1, 0.1) compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AIME24, AMC23, MATH500) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-Math-7B. Optimizer: AdamW.

F.8 REINFORCE-Style Regularized Policy Gradient with ϵ1 = 0.1, ϵ2 = 0.1 using Qwen-2.5-Math-7B and Schedule-Free Optimizer
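The captions above describe REINFORCE-style regularized policy gradient methods with clip parameters (ϵ1, ϵ2) and KL-based regularizers (the FKL/RKL variants in the legends). As a rough illustration only, here is a minimal sketch of what such a clipped, KL-regularized objective can look like; the function name, the `kl_coef` weight, and the use of the k3-style KL estimator are assumptions for illustration, not the paper's actual implementation:

```python
import math

def rpg_reinforce_loss(logp_new, logp_old, advantages,
                       eps1=0.1, eps2=0.1, kl_coef=0.01):
    """Illustrative clipped REINFORCE-style loss with a KL regularizer.

    logp_new / logp_old: per-token log-probs under the current / behavior policy.
    advantages: per-token advantage estimates.
    eps1, eps2: lower/upper clip parameters, as in (eps1, eps2) = (0.1, 0.1).
    """
    total = 0.0
    for ln_new, ln_old, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(ln_new - ln_old)                      # importance ratio
        clipped = min(max(ratio, 1.0 - eps1), 1.0 + eps2)      # clip to [1-eps1, 1+eps2]
        pg = -min(ratio * adv, clipped * adv)                  # clipped surrogate (maximize)
        # k3-style estimator of KL(old || new) as a regularizer (an assumption here)
        kl = math.exp(ln_old - ln_new) - (ln_old - ln_new) - 1.0
        total += pg + kl_coef * kl
    return total / len(advantages)
```

With ϵ1 = ϵ2 = 0.1, a ratio of 1.25 is clipped to 1.1, so a positive-advantage token contributes at most a 1.1-weighted gradient signal; the KL term additionally penalizes drift from the behavior policy.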
[Figure plots for Appendix F.8: panels (a) AIME24, (b) AMC23, (c) MATH500, (d) Reward (Critic Score), (e) Entropy, each plotting the metric against training step for REINFORCE++-Baseline, REINFORCE++, GRPO, DAPO, RPG-REINFORCE-FKL, RPG-REINFORCE-RKL, RPG-REINFORCE-UFKL, and RPG-REINFORCE-URKL.]
/uni00000036/uni00000057/uni00000048/uni00000053/uni00000019/uni00000018/uni00000013/uni0000001a/uni00000013/uni00000013/uni0000001a/uni00000018/uni00000013/uni0000001b/uni00000013/uni00000013/uni0000001b/uni00000018/uni00000013/uni0000001c/uni00000013/uni00000013/uni0000001c/uni00000018/uni00000013/uni00000035/uni00000048/uni00000056/uni00000053/uni00000052/uni00000051/uni00000056/uni00000048/uni00000003/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b/uni00000035/uni00000048/uni00000056/uni00000053/uni00000052/uni00000051/uni00000056/uni00000048/uni00000003/uni0000002f/uni00000048/uni00000051/uni0000004a/uni00000057/uni0000004b /uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e/uni00000010/uni00000025/uni00000044/uni00000056/uni00000048/uni0000004f/uni0000004c/uni00000051/uni00000048 /uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni0000000e/uni0000000e /uni0000002a/uni00000035/uni00000033/uni00000032 /uni00000027/uni00000024/uni00000033/uni00000032 /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000035/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000029/uni0000002e/uni0000002f /uni00000035/uni00000033/uni0000002a/uni00000010/uni00000035/uni00000028/uni0000002c/uni00000031/uni00000029/uni00000032/uni00000035/uni00000026/uni00000028/uni00000010/uni00000038/uni00000035/uni0000002e/uni0000002f (f) Response Length 
Figure 13: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0.1, 0.1) compared to baselines. Base model: Qwen-2.5-Math-7B. Optimizer: Schedule-Free AdamW.

F.9 REINFORCE-Style Regularized Policy Gradient with ϵ1 = 0.2, ϵ2 = 0.28 using Qwen-2.5-7B-Instruct and AdamW Optimizer

Ablation Studies. We conduct an ablation study over the clip parameters of the RPG-REINFORCE algorithms, comparing (0.1, 0.1) and (0.2, 0.28); the results are displayed in Figures 10-17. From Figures 12 and 13, covering the experiments with (ϵ1, ϵ2) = (0.2, 0.28) using Qwen-2.5-Math-7B, we observe collapses in the RPG-REINFORCE variants. A possible explanation is that Qwen-2.5-Math-7B has already been extensively pre-trained and fine-tuned on mathematical data (Yang et al., 2024a), so training this model benefits from encouraging exploitation and suppressing exploration in parameter space. Accordingly, smaller clip parameters yield more stable training curves and better performance.
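To make the role of the two clip parameters concrete, the following is a minimal sketch of an asymmetrically clipped policy-gradient surrogate in the PPO style, where the importance ratio is clipped to [1 − ϵ1, 1 + ϵ2]. This is an illustrative assumption about the clipping mechanism, not the paper's exact RPG-REINFORCE objective; the function name `clipped_pg_loss` and its signature are hypothetical.

```python
import torch

def clipped_pg_loss(logp_new, logp_old, advantages, eps1=0.2, eps2=0.28):
    """Asymmetric clipped policy-gradient surrogate (illustrative sketch).

    The importance ratio pi_new/pi_old is clipped to [1 - eps1, 1 + eps2].
    Setting eps2 > eps1 (e.g., 0.2 vs. 0.28) leaves more headroom for
    upweighting positively rewarded actions, encouraging exploration;
    (0.1, 0.1) clips symmetrically and more tightly, favoring exploitation.
    """
    ratio = torch.exp(logp_new - logp_old)  # importance sampling ratio
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps1, 1.0 + eps2) * advantages
    # Pessimistic element-wise min, averaged; negated for gradient descent.
    return -torch.mean(torch.min(unclipped, clipped))
```

Under this sketch, shrinking (ϵ1, ϵ2) simply narrows the trusted ratio interval, which is one way to read the observation that tighter clipping stabilizes training for an already well-tuned base model.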
[Figure 14 plot panels: (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score), (e) Entropy, (f) Response Length. Curves compare REINFORCE++-Baseline, REINFORCE++, GRPO, DAPO, and the RPG-REINFORCE variants (FKL, RKL, UFKL, URKL) over training steps; plot content omitted.]
Figure 14: Ablation study: performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0.2, 0.28) compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AMC23, AIME24, AIME25) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-7B-Instruct. Optimizer: AdamW.
F.10 REINFORCE-Style Regularized Policy Gradient with ϵ1 = 0.2, ϵ2 = 0.28 using Qwen-2.5-7B-Instruct and Schedule-Free Optimizer
[Figure 15 plot panels: (a) AMC23, (b) AIME24, (c) AIME25, (d) Reward (Critic Score), (e) Entropy, (f) Response Length. Curves compare REINFORCE++-Baseline, REINFORCE++, GRPO, DAPO, and the RPG-REINFORCE variants (FKL, RKL, UFKL, URKL) over training steps; plot content omitted.]
Figure 15: Ablation study: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0.2, 0.28) compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AMC23, AIME24, AIME25) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-7B-Instruct. Optimizer: Schedule-Free AdamW.

F.11 REINFORCE-Style Regularized Policy Gradient with ϵ1 = 0.2, ϵ2 = 0.28 using Qwen-2.5-Math-7B and AdamW Optimizer
Figure 16: Ablation study: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0.2, 0.28) compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AIME24, AMC23, MATH500) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-Math-7B. Optimizer: AdamW.

F.12 REINFORCE-Style Regularized Policy Gradient with ϵ1 = 0.2, ϵ2 = 0.28 using Qwen-2.5-Math-7B and Schedule-Free Optimizer
Figure 17: Ablation study: Performance of REINFORCE-Style Regularized Policy Gradient (RPG-REINFORCE) methods with clip parameters (ϵ1, ϵ2) = (0.2, 0.28) compared to baselines. Plots display accuracy on mathematical reasoning benchmarks (AIME24, AMC23, MATH500) and key training dynamics (reward, policy entropy, response length). Base model: Qwen-2.5-Math-7B. Optimizer: Schedule-Free AdamW.

G Proofs of Theorem 2.1 (Generalized Policy Gradient Theorem)

Proof. The proof relies on the log-derivative trick, ∇_θ π_θ(x) = π_θ(x) ∇_θ log π_θ(x), and the product rule under the integral sign:

∇_θ E_{x∼π_θ}[f(x, θ)]
  = ∇_θ ∫ π_θ(x) f(x, θ) dx
  = ∫ ∇_θ (π_θ(x) f(x, θ)) dx                                        (swap ∇_θ and ∫)
  = ∫ ((∇_θ π_θ(x)) f(x, θ) + π_θ(x) ∇_θ f(x, θ)) dx                 (product rule)
  = ∫ (π_θ(x) (∇_θ log π_θ(x)) f(x, θ) + π_θ(x) ∇_θ f(x, θ)) dx     (log-derivative trick)
  = ∫ π_θ(x) (f(x, θ) ∇_θ log π_θ(x) + ∇_θ f(x, θ)) dx
  = E_{x∼π_θ}[f(x, θ) ∇_θ log π_θ(x) + ∇_θ f(x, θ)].

H Proofs for Regularized Policy Gradients

This section provides detailed proofs for the theorems presented in Section 3, demonstrating that the gradients of the proposed fully differentiable off-policy surrogate losses correspond to the negative gradients of the respective original objectives. The core tool used is the policy gradient theorem:

∇_θ E_{x∼π_θ}[f(x, θ)] = E_{x∼π_θ}[f(x, θ) ∇_θ log π_θ(x) + ∇_θ f(x, θ)].

We use the notation w(x) = π_θ(x)/π_old(x) for the importance weight.

H.1 Proof of Theorem 3.1 (Policy Gradient and Differentiable Loss for Forward KL)

Proof. We start by rewriting the objective function J_FKL(θ) using expectations with respect to the fixed reference policy π_old. The first term, the expected reward under π_θ, can be rewritten using importance sampling:

E_{x∼π_θ}[R(x)] = ∫ π_θ(x) R(x) dx = ∫ (π_θ(x)/π_old(x)) π_old(x) R(x) dx = E_{x∼π_old}[w(x) R(x)].

The second term is the forward KL divergence:

KL(π_old ∥ π_θ) = E_{x∼π_old}[log(π_old(x)/π_θ(x))]
  = E_{x∼π_old}[log π_old(x) − log π_θ(x)]
  = E_{x∼π_old}[−log π_θ(x)] + E_{x∼π_old}[log π_old(x)].

Substituting these into the objective function:

J_FKL(θ) = E_{x∼π_old}[w(x) R(x)] − β (E_{x∼π_old}[−log π_θ(x)] + E_{x∼π_old}[log π_old(x)])
  = E_{x∼π_old}[w(x) R(x) + β log π_θ(x)] − β E_{x∼π_old}[log π_old(x)].

Since π_old(x) does not depend on θ, the term β E_{x∼π_old}[log π_old(x)] is a constant with respect to θ. Now we compute the gradient ∇_θ J_FKL(θ).
Assuming we can swap gradient and expectation (a standard assumption in policy gradient methods):
\begin{align*}
\nabla_\theta J_{\mathrm{FKL}}(\theta)
&= \nabla_\theta \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) R(x) + \beta \log \pi_\theta(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[\nabla_\theta (w(x) R(x) + \beta \log \pi_\theta(x))] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[(\nabla_\theta w(x)) R(x) + \beta \nabla_\theta \log \pi_\theta(x)].
\end{align*}
We use the identity for the gradient of the importance weight:
\[
\nabla_\theta w(x) = \nabla_\theta \frac{\pi_\theta(x)}{\pi_{\text{old}}(x)} = \frac{1}{\pi_{\text{old}}(x)} \nabla_\theta \pi_\theta(x) = \frac{\pi_\theta(x)}{\pi_{\text{old}}(x)} \cdot \frac{\nabla_\theta \pi_\theta(x)}{\pi_\theta(x)} = w(x)\, \nabla_\theta \log \pi_\theta(x).
\]
Substituting this back into the gradient expression:
\begin{align*}
\nabla_\theta J_{\mathrm{FKL}}(\theta)
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) (\nabla_\theta \log \pi_\theta(x)) R(x) + \beta \nabla_\theta \log \pi_\theta(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}\big[\big(w(x) R(x) + \beta\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This proves the first part of the theorem. Now, consider the surrogate loss function:
\[
L_{\mathrm{FKL}}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}[-w(x) R(x) - \beta \log \pi_\theta(x)].
\]
We compute its gradient:
\begin{align*}
\nabla_\theta L_{\mathrm{FKL}}(\theta)
&= \nabla_\theta \mathbb{E}_{x\sim\pi_{\text{old}}}[-w(x) R(x) - \beta \log \pi_\theta(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[\nabla_\theta(-w(x) R(x) - \beta \log \pi_\theta(x))] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[-(\nabla_\theta w(x)) R(x) - \beta \nabla_\theta \log \pi_\theta(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[-w(x) (\nabla_\theta \log \pi_\theta(x)) R(x) - \beta \nabla_\theta \log \pi_\theta(x)] \\
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\big(w(x) R(x) + \beta\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
Comparing this with the gradient of the objective function, we see that $\nabla_\theta L_{\mathrm{FKL}}(\theta) = -\nabla_\theta J_{\mathrm{FKL}}(\theta)$. This confirms that minimizing $L_{\mathrm{FKL}}(\theta)$ corresponds to maximizing $J_{\mathrm{FKL}}(\theta)$ using gradient-based methods.

H.2 Proof of Theorem 3.4 (Policy Gradient and Differentiable Loss for Unnormalized Forward KL)

Proof. We start by expressing the components of $J_{\mathrm{UFKL}}(\theta)$ using expectations over the normalized reference distribution $\tilde{\pi}_{\text{old}}(x) = \pi_{\text{old}}(x)/Z_{\text{old}}$. The importance weight is $w(x) = \pi_\theta(x)/\pi_{\text{old}}(x)$, which implies $\pi_\theta(x) = w(x)\pi_{\text{old}}(x) = w(x) Z_{\text{old}}\, \tilde{\pi}_{\text{old}}(x)$.

The expected reward term:
\[
\mathbb{E}_{x\sim\pi_\theta}[R(x)] = \int \pi_\theta(x) R(x)\,dx = \int w(x) \pi_{\text{old}}(x) R(x)\,dx = \int w(x) Z_{\text{old}}\, \tilde{\pi}_{\text{old}}(x) R(x)\,dx = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x)].
\]
The unnormalized KL divergence term $\mathrm{UKL}(\pi_{\text{old}} \,\|\, \pi_\theta)$ has two parts. Part 1 (Generalized KL):
\[
\int \pi_{\text{old}}(x) \log \frac{\pi_{\text{old}}(x)}{\pi_\theta(x)}\,dx = \int Z_{\text{old}}\, \tilde{\pi}_{\text{old}}(x) \log \frac{\pi_{\text{old}}(x)}{\pi_\theta(x)}\,dx = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\Big[\log \frac{1}{w(x)}\Big] = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[-\log w(x)].
\]
Part 2 (Mass Correction):
\[
\int (\pi_\theta(x) - \pi_{\text{old}}(x))\,dx = \int (w(x) - 1) \pi_{\text{old}}(x)\,dx = \int (w(x) - 1) Z_{\text{old}}\, \tilde{\pi}_{\text{old}}(x)\,dx = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x)] - Z_{\text{old}}.
\]
Combining these parts for the UKL term:
\[
\mathrm{UKL}(\pi_{\text{old}} \,\|\, \pi_\theta) = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[-\log w(x)] + Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x)] - Z_{\text{old}}.
\]
Now, substitute everything into the objective $J_{\mathrm{UFKL}}(\theta)$:
\begin{align*}
J_{\mathrm{UFKL}}(\theta)
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x)] - \beta\big(Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[-\log w(x)] + Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x)] - Z_{\text{old}}\big) \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x) + \beta \log w(x) - \beta w(x) + \beta].
\end{align*}
To compute the gradient $\nabla_\theta J_{\mathrm{UFKL}}(\theta)$, we differentiate the terms inside the expectation. The constant term $\beta Z_{\text{old}}$ (arising from the $\beta$ inside the expectation) vanishes upon differentiation:
\begin{align*}
\nabla_\theta J_{\mathrm{UFKL}}(\theta)
&= \nabla_\theta\big(Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x) + \beta \log w(x) - \beta w(x)]\big) \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[\nabla_\theta(w(x) R(x)) + \beta \nabla_\theta \log w(x) - \beta \nabla_\theta w(x)].
\end{align*}
We need the gradients of $w(x)$ and $\log w(x)$:
\begin{align*}
\nabla_\theta w(x) &= w(x)\, \nabla_\theta \log \pi_\theta(x) && \text{(as derived in the proof of Theorem 3.1)} \\
\nabla_\theta \log w(x) &= \nabla_\theta(\log \pi_\theta(x) - \log \pi_{\text{old}}(x)) = \nabla_\theta \log \pi_\theta(x).
\end{align*}
Substituting
these into the gradient expression:
\begin{align*}
\nabla_\theta J_{\mathrm{UFKL}}(\theta)
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[(\nabla_\theta w(x)) R(x) + \beta \nabla_\theta \log \pi_\theta(x) - \beta (\nabla_\theta w(x))] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x)\, \nabla_\theta \log \pi_\theta(x) + \beta \nabla_\theta \log \pi_\theta(x) - \beta w(x)\, \nabla_\theta \log \pi_\theta(x)] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[(w(x) R(x) - \beta w(x) + \beta)\, \nabla_\theta \log \pi_\theta(x)] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\big(w(x) R(x) - \beta(w(x) - 1)\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This proves the first part of the theorem. Now, consider the surrogate loss function:
\[
L_{\mathrm{UFKL}}(\theta) = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[-w(x) R(x) + \beta\big(w(x) - \log w(x) - 1\big)\big].
\]
We compute its gradient:
\begin{align*}
\nabla_\theta L_{\mathrm{UFKL}}(\theta)
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[\nabla_\theta(-w(x) R(x)) + \beta \nabla_\theta(w(x) - \log w(x) - 1)] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[-(\nabla_\theta w(x)) R(x) + \beta(\nabla_\theta w(x) - \nabla_\theta \log w(x))] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[-w(x) R(x)\, \nabla_\theta \log \pi_\theta(x) + \beta(w(x)\, \nabla_\theta \log \pi_\theta(x) - \nabla_\theta \log \pi_\theta(x))] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\big({-w(x) R(x)} + \beta w(x) - \beta\big) \nabla_\theta \log \pi_\theta(x)\big] \\
&= -Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\big(w(x) R(x) - \beta(w(x) - 1)\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
Comparing this with the gradient of the objective function, we find $\nabla_\theta L_{\mathrm{UFKL}}(\theta) = -\nabla_\theta J_{\mathrm{UFKL}}(\theta)$. This confirms the surrogate loss function. Note that the constant $-1$ in the regularization term of the loss $L_{\mathrm{UFKL}}$ corresponds to the constant $\beta Z_{\text{old}}$ in the objective $J_{\mathrm{UFKL}}$ and does not affect the gradient.

H.3 Proof of Theorem 3.6 (Policy Gradient and Differentiable Loss for Reverse KL)

Proof. We rewrite the objective function $J_{\mathrm{RKL}}(\theta)$ using expectations with respect to $\pi_{\text{old}}$. The expected reward term is $\mathbb{E}_{x\sim\pi_\theta}[R(x)] = \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) R(x)]$, as shown previously. The reverse KL divergence term is:
\begin{align*}
\mathrm{KL}(\pi_\theta \,\|\, \pi_{\text{old}})
&= \mathbb{E}_{x\sim\pi_\theta}\Big[\log \frac{\pi_\theta(x)}{\pi_{\text{old}}(x)}\Big]
= \mathbb{E}_{x\sim\pi_\theta}[\log w(x)]
= \int \pi_\theta(x) \log w(x)\,dx \\
&= \int \frac{\pi_\theta(x)}{\pi_{\text{old}}(x)} \pi_{\text{old}}(x) \log w(x)\,dx
= \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) \log w(x)].
\end{align*}
Substituting these into the objective function:
\[
J_{\mathrm{RKL}}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) R(x)] - \beta\, \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) \log w(x)] = \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) R(x) - \beta w(x) \log w(x)].
\]
Now we compute the gradient $\nabla_\theta J_{\mathrm{RKL}}(\theta)$:
\begin{align*}
\nabla_\theta J_{\mathrm{RKL}}(\theta)
&= \nabla_\theta \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) R(x) - \beta w(x) \log w(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[\nabla_\theta(w(x) R(x)) - \beta \nabla_\theta(w(x) \log w(x))].
\end{align*}
We need the gradient of $w(x) \log w(x)$:
\begin{align*}
\nabla_\theta(w(x) \log w(x))
&= (\nabla_\theta w(x)) \log w(x) + w(x)\, \nabla_\theta \log w(x) \\
&= (w(x)\, \nabla_\theta \log \pi_\theta(x)) \log w(x) + w(x)\, \nabla_\theta \log \pi_\theta(x) \\
&= w(x)\, \nabla_\theta \log \pi_\theta(x) (\log w(x) + 1).
\end{align*}
Substituting this and $\nabla_\theta w(x) = w(x)\, \nabla_\theta \log \pi_\theta(x)$ into the gradient expression for $J_{\mathrm{RKL}}(\theta)$:
\begin{align*}
\nabla_\theta J_{\mathrm{RKL}}(\theta)
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[(\nabla_\theta w(x)) R(x) - \beta w(x)\, \nabla_\theta \log \pi_\theta(x) (\log w(x) + 1)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[w(x) (\nabla_\theta \log \pi_\theta(x)) R(x) - \beta w(x) (\log w(x) + 1)\, \nabla_\theta \log \pi_\theta(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}\big[w(x)\big(R(x) - \beta(\log w(x) + 1)\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This proves the first part of the theorem. Now, consider the surrogate loss function:
\[
L_{\mathrm{RKL}}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}\big[w(x)\big({-R(x)} + \beta \log w(x)\big)\big].
\]
We compute its gradient:
\begin{align*}
\nabla_\theta L_{\mathrm{RKL}}(\theta)
&= \nabla_\theta \mathbb{E}_{x\sim\pi_{\text{old}}}[-w(x) R(x) + \beta w(x) \log w(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[\nabla_\theta(-w(x) R(x)) + \beta \nabla_\theta(w(x) \log w(x))] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[-(\nabla_\theta w(x)) R(x) + \beta w(x)\, \nabla_\theta \log \pi_\theta(x) (\log w(x) + 1)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}[-w(x) (\nabla_\theta \log \pi_\theta(x)) R(x) + \beta w(x) (\log w(x) + 1)\, \nabla_\theta \log \pi_\theta(x)] \\
&= \mathbb{E}_{x\sim\pi_{\text{old}}}\big[w(x)\big({-R(x)} + \beta(\log w(x) + 1)\big) \nabla_\theta \log \pi_\theta(x)\big] \\
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[w(x)\big(R(x) - \beta(\log w(x) + 1)\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
Comparing this with the gradient of the objective function, we confirm that $\nabla_\theta L_{\mathrm{RKL}}(\theta) = -\nabla_\theta J_{\mathrm{RKL}}(\theta)$.

H.4 Proof of Theorem 3.9 (Policy Gradient and Differentiable Loss for Unnormalized Reverse KL)

Proof. We again express the objective components using expectations over the normalized reference distribution $\tilde{\pi}_{\text{old}}(x) = \pi_{\text{old}}(x)/Z_{\text{old}}$, with $w(x) = \pi_\theta(x)/\pi_{\text{old}}(x)$. The expected reward term:
\[
\mathbb{E}_{x\sim\pi_\theta}[R(x)] = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x)].
\]
The unnormalized reverse KL divergence $\mathrm{UKL}(\pi_\theta \,\|\, \pi_{\text{old}})$ has two parts. Part 1 (Generalized KL):
\[
\int \pi_\theta(x) \log \frac{\pi_\theta(x)}{\pi_{\text{old}}(x)}\,dx = \int \pi_\theta(x) \log w(x)\,dx = \int w(x) Z_{\text{old}}\, \tilde{\pi}_{\text{old}}(x) \log w(x)\,dx = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) \log w(x)].
\]
Part 2 (Mass Correction):
\[
\int (\pi_{\text{old}}(x) - \pi_\theta(x))\,dx = Z_{\text{old}} - \int w(x) Z_{\text{old}}\, \tilde{\pi}_{\text{old}}(x)\,dx = Z_{\text{old}} - Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x)].
\]
Combining these for the UKL term:
\[
\mathrm{UKL}(\pi_\theta \,\|\, \pi_{\text{old}}) = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) \log w(x)] + Z_{\text{old}} - Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x)].
\]
Now, substitute into the objective $J_{\mathrm{URKL}}(\theta)$:
\begin{align*}
J_{\mathrm{URKL}}(\theta)
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x)] - \beta\big(Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) \log w(x)] + Z_{\text{old}} - Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x)]\big) \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x) - \beta w(x) \log w(x) - \beta + \beta w(x)].
\end{align*}
We compute the gradient $\nabla_\theta J_{\mathrm{URKL}}(\theta)$. The constant term $-\beta Z_{\text{old}}$ vanishes upon differentiation:
\begin{align*}
\nabla_\theta J_{\mathrm{URKL}}(\theta)
&= \nabla_\theta\big(Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x) - \beta w(x) \log w(x) + \beta w(x)]\big) \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[\nabla_\theta(w(x) R(x)) - \beta \nabla_\theta(w(x) \log w(x)) + \beta \nabla_\theta w(x)].
\end{align*}
Using the previously derived gradients $\nabla_\theta w(x) = w(x)\, \nabla_\theta \log \pi_\theta(x)$ and $\nabla_\theta(w(x) \log w(x)) = w(x)\, \nabla_\theta \log \pi_\theta(x) (\log w(x) + 1)$:
\begin{align*}
\nabla_\theta J_{\mathrm{URKL}}(\theta)
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[(\nabla_\theta w(x)) R(x) - \beta w(x)\, \nabla_\theta \log \pi_\theta(x) (\log w(x) + 1) + \beta (\nabla_\theta w(x))] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[w(x) R(x)\, \nabla_\theta \log \pi_\theta(x) - \beta w(x) (\log w(x) + 1)\, \nabla_\theta \log \pi_\theta(x) + \beta w(x)\, \nabla_\theta \log \pi_\theta(x)] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[w(x)\, \nabla_\theta \log \pi_\theta(x)\big(R(x) - \beta(\log w(x) + 1) + \beta\big)\big] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[w(x)\big(R(x) - \beta \log w(x)\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This proves the first part of the theorem. Now, consider the surrogate loss function:
\[
L_{\mathrm{URKL}}(\theta) = Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[-w(x) R(x) + \beta\big(w(x) \log w(x) - w(x)\big)\big].
\]
We compute its gradient: $\nabla_\theta L_{\mathrm{URKL}}(\theta)$
\begin{align*}
\nabla_\theta L_{\mathrm{URKL}}(\theta)
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[\nabla_\theta(-w(x) R(x)) + \beta \nabla_\theta(w(x) \log w(x) - w(x))] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[-(\nabla_\theta w(x)) R(x) + \beta(\nabla_\theta(w(x) \log w(x)) - \nabla_\theta w(x))] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[-w(x) R(x)\, \nabla_\theta \log \pi_\theta(x) + \beta\big(w(x)(\log w(x) + 1)\, \nabla_\theta \log \pi_\theta(x) - w(x)\, \nabla_\theta \log \pi_\theta(x)\big)\big] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}[-w(x) R(x)\, \nabla_\theta \log \pi_\theta(x) + \beta w(x) \log w(x)\, \nabla_\theta \log \pi_\theta(x)] \\
&= Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[w(x)\big({-R(x)} + \beta \log w(x)\big) \nabla_\theta \log \pi_\theta(x)\big] \\
&= -Z_{\text{old}}\, \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[w(x)\big(R(x) - \beta \log w(x)\big) \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
Comparing this with the gradient of the objective function, we confirm that $\nabla_\theta L_{\mathrm{URKL}}(\theta) = -\nabla_\theta J_{\mathrm{URKL}}(\theta)$. The constant term $+1$ (corresponding to $-\beta Z_{\text{old}}$ in the objective) that appeared in the derivation in Section 3.4 does not affect the gradient and is often omitted from the final loss expression used in practice.

I Proofs for REINFORCE-Style Regularized Policy Gradients

This section provides justifications for the REINFORCE-style surrogate loss functions presented in Section 4 (Theorems 4.1 to C.3). These proofs demonstrate how automatic differentiation applied to the proposed losses, utilizing the stop-gradient operator $\operatorname{SG}$, yields the correct gradient direction (the negative of the objective gradient derived in Section 3). The core idea relies on the operational definition of the stop-gradient operator $\operatorname{SG}(\cdot)$ within automatic differentiation frameworks: $\nabla_\theta \operatorname{SG}(f(\theta)) = 0$, while the forward computation uses the value of $f(\theta)$. We use the notation $w(x) = \pi_\theta(x)/\pi_{\text{old}}(x)$.

I.1 Proof of Theorem 4.1 (REINFORCE-style Policy Gradient for Forward KL)

Proof. The objective is $J_{\mathrm{FKL}}(\theta) = \mathbb{E}_{\pi_\theta}[R(x)] - \beta\, \mathrm{KL}(\pi_{\text{old}} \,\|\, \pi_\theta)$. From Theorem 3.1, its gradient is:
\[
\nabla_\theta J_{\mathrm{FKL}}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}\Big[\underbrace{\big(w(x) R(x) + \beta\big)}_{\text{Weight}_{\mathrm{FKL}}(x,\theta)} \nabla_\theta \log \pi_\theta(x)\Big].
\]
The proposed REINFORCE-style surrogate loss is:
\[
L^{\text{REINFORCE-style}}_{\mathrm{FKL}}(\theta) = -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\operatorname{SG}\big(w(x) R(x) + \beta\big) \log \pi_\theta(x)\big].
\]
We compute the gradient of this loss as it would be computed by an automatic differentiation system.
Assuming the gradient can be swapped with the expectation:
\begin{align*}
\nabla_\theta L^{\text{REINFORCE-style}}_{\mathrm{FKL}}(\theta)
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\nabla_\theta\big(\operatorname{SG}(w(x) R(x) + \beta) \log \pi_\theta(x)\big)\big] \\
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\Big[\underbrace{\big(\nabla_\theta \operatorname{SG}(w(x) R(x) + \beta)\big)}_{=\,0 \text{ by definition of } \operatorname{SG}} \log \pi_\theta(x) + \operatorname{SG}(w(x) R(x) + \beta)\, \nabla_\theta \log \pi_\theta(x)\Big] \\
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\operatorname{SG}(w(x) R(x) + \beta)\, \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This gradient expression, when used in an optimization algorithm (where $\operatorname{SG}$ is conceptually removed), corresponds to applying updates proportional to:
\[
-\big(-\mathbb{E}_{x\sim\pi_{\text{old}}}[(w(x) R(x) + \beta)\, \nabla_\theta \log \pi_\theta(x)]\big) = \nabla_\theta J_{\mathrm{FKL}}(\theta).
\]
Thus, minimizing $L^{\text{REINFORCE-style}}_{\mathrm{FKL}}(\theta)$ using gradient descent with automatic differentiation effectively performs gradient ascent on the original objective $J_{\mathrm{FKL}}(\theta)$.

I.2 Proof of Theorem C.1 (REINFORCE-style Policy Gradient for Unnormalized Forward KL)

Proof. The objective is $J_{\mathrm{UFKL}}(\theta) = \mathbb{E}_{\pi_\theta}[R(x)] - \beta\, \mathrm{UKL}(\pi_{\text{old}} \,\|\, \pi_\theta)$. From Theorem 3.4, its gradient is:
\[
\nabla_\theta J_{\mathrm{UFKL}}(\theta) = \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\Big[\underbrace{Z_{\text{old}}\big(w(x) R(x) - \beta(w(x) - 1)\big)}_{\text{Weight}_{\mathrm{UFKL}}(x,\theta)} \nabla_\theta \log \pi_\theta(x)\Big].
\]
The proposed REINFORCE-style surrogate loss is:
\[
L^{\text{REINFORCE-style}}_{\mathrm{UFKL}}(\theta) = -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\operatorname{SG}\big(Z_{\text{old}}(w(x) R(x) - \beta(w(x) - 1))\big) \log \pi_\theta(x)\big].
\]
Computing the gradient via automatic differentiation:
\begin{align*}
\nabla_\theta L^{\text{REINFORCE-style}}_{\mathrm{UFKL}}(\theta)
&= -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\nabla_\theta\big(\operatorname{SG}(Z_{\text{old}}(\ldots)) \log \pi_\theta(x)\big)\big] \\
&= -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\Big[\underbrace{\big(\nabla_\theta \operatorname{SG}(Z_{\text{old}}(\ldots))\big)}_{=\,0} \log \pi_\theta(x) + \operatorname{SG}(Z_{\text{old}}(\ldots))\, \nabla_\theta \log \pi_\theta(x)\Big] \\
&= -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\operatorname{SG}\big(Z_{\text{old}}(w(x) R(x) - \beta(w(x) - 1))\big)\, \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This gradient corresponds to the update direction $-\nabla_\theta J_{\mathrm{UFKL}}(\theta)$ when the $\operatorname{SG}$ is dropped. Minimizing this loss achieves gradient ascent on $J_{\mathrm{UFKL}}(\theta)$. If $Z_{\text{old}}$ is omitted, the same argument applies to the proportionally scaled objective and loss.

I.3 Proof of Theorem C.2 (REINFORCE-Style Loss for Reverse KL)

Proof. The objective is $J_{\mathrm{RKL}}(\theta) = \mathbb{E}_{\pi_\theta}[R(x)] - \beta\, \mathrm{KL}(\pi_\theta \,\|\, \pi_{\text{old}})$. From Theorem 3.6, its gradient is:
\[
\nabla_\theta J_{\mathrm{RKL}}(\theta) = \mathbb{E}_{x\sim\pi_{\text{old}}}\Big[\underbrace{w(x)\big(R(x) - \beta(\log w(x) + 1)\big)}_{\text{Weight}_{\mathrm{RKL}}(x,\theta)} \nabla_\theta \log \pi_\theta(x)\Big].
\]
The proposed REINFORCE-style surrogate loss is:
\[
L^{\text{REINFORCE-style}}_{\mathrm{RKL}}(\theta) = -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\operatorname{SG}\big(w(x)(R(x) - \beta \log w(x) - \beta)\big) \log \pi_\theta(x)\big].
\]
Computing the gradient via automatic differentiation:
\begin{align*}
\nabla_\theta L^{\text{REINFORCE-style}}_{\mathrm{RKL}}(\theta)
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\nabla_\theta\big(\operatorname{SG}(w(x)(\ldots)) \log \pi_\theta(x)\big)\big] \\
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\Big[\underbrace{\big(\nabla_\theta \operatorname{SG}(w(x)(\ldots))\big)}_{=\,0} \log \pi_\theta(x) + \operatorname{SG}(w(x)(\ldots))\, \nabla_\theta \log \pi_\theta(x)\Big] \\
&= -\mathbb{E}_{x\sim\pi_{\text{old}}}\big[\operatorname{SG}\big(w(x)(R(x) - \beta \log w(x) - \beta)\big)\, \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This gradient corresponds to the update direction $-\nabla_\theta J_{\mathrm{RKL}}(\theta)$ when the $\operatorname{SG}$ is dropped. Minimizing this loss achieves gradient
ascent on $J_{\mathrm{RKL}}(\theta)$.

I.4 Proof of Theorem C.3 (REINFORCE-Style Loss for Unnormalized Reverse KL)

Proof. The objective is $J_{\mathrm{URKL}}(\theta) = \mathbb{E}_{\pi_\theta}[R(x)] - \beta\, \mathrm{UKL}(\pi_\theta \,\|\, \pi_{\text{old}})$. From Theorem 3.9, its gradient is:
\[
\nabla_\theta J_{\mathrm{URKL}}(\theta) = \mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\Big[\underbrace{Z_{\text{old}}\, w(x)\big(R(x) - \beta \log w(x)\big)}_{\text{Weight}_{\mathrm{URKL}}(x,\theta)} \nabla_\theta \log \pi_\theta(x)\Big].
\]
The proposed REINFORCE-style surrogate loss is:
\[
L^{\text{REINFORCE-style}}_{\mathrm{URKL}}(\theta) = -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\operatorname{SG}\big(Z_{\text{old}}\, w(x)(R(x) - \beta \log w(x))\big) \log \pi_\theta(x)\big].
\]
Computing the gradient via automatic differentiation:
\begin{align*}
\nabla_\theta L^{\text{REINFORCE-style}}_{\mathrm{URKL}}(\theta)
&= -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\nabla_\theta\big(\operatorname{SG}(Z_{\text{old}}\, w(x)(\ldots)) \log \pi_\theta(x)\big)\big] \\
&= -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\Big[\underbrace{\big(\nabla_\theta \operatorname{SG}(Z_{\text{old}}\, w(x)(\ldots))\big)}_{=\,0} \log \pi_\theta(x) + \operatorname{SG}(Z_{\text{old}}\, w(x)(\ldots))\, \nabla_\theta \log \pi_\theta(x)\Big] \\
&= -\mathbb{E}_{x\sim\tilde{\pi}_{\text{old}}}\big[\operatorname{SG}\big(Z_{\text{old}}\, w(x)(R(x) - \beta \log w(x))\big)\, \nabla_\theta \log \pi_\theta(x)\big].
\end{align*}
This gradient corresponds to the update direction $-\nabla_\theta J_{\mathrm{URKL}}(\theta)$ when the $\operatorname{SG}$ is dropped. Minimizing this loss achieves gradient ascent on $J_{\mathrm{URKL}}(\theta)$. If $Z_{\text{old}}$ is omitted, the same argument applies to the proportionally scaled objective and loss.

I.5 Summary of REINFORCE-style Algorithms

We have presented an alternative REINFORCE-style approach to formulating surrogate losses for off-policy regularized policy gradients. This approach leverages the structural similarity of the derived off-policy gradients to the REINFORCE estimator, explicitly using the stop-gradient operator $\operatorname{SG}$ within the loss function:
\[
L^{\text{REINFORCE-style}}(\theta) = -\mathbb{E}_{x\sim\pi_{\text{sampling}}}\big[\operatorname{SG}\big(\text{Weight}(x,\theta)\big) \log \pi_\theta(x)\big],
\]
where $\text{Weight}(x,\theta)$ encapsulates the reward and regularization terms specific to each objective (FKL, UFKL, RKL, URKL). While this formulation provides a conceptual link to on-policy REINFORCE methods and might be convenient in some frameworks, it differs significantly from the direct differentiable-style losses in Section 3 (Table 1). Those direct losses yield the correct gradients $\nabla_\theta L(\theta) = -\nabla_\theta J(\theta)$ by construction (often $L(\theta) = -J_{\mathrm{IS}}(\theta)$ up to constants), without requiring $\operatorname{SG}$.
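The $\nabla_\theta L(\theta) = -\nabla_\theta J(\theta)$ property of the direct losses can be verified numerically. The sketch below does so for the forward-KL case on a 3-outcome categorical policy; the reward vector, $\beta$, and $\pi_{\text{old}}$ are arbitrary toy choices, and both objective and loss are evaluated by exact enumeration with finite-difference gradients.

```python
import math

# Check that grad L_FKL = -grad J_FKL on a 3-outcome categorical policy
# with softmax parameterization. R, beta, pi_old are illustrative choices.

R = [1.0, -0.5, 2.0]
beta = 0.3
pi_old = [0.5, 0.3, 0.2]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def J(theta):
    # objective: E_{pi_theta}[R] - beta * KL(pi_old || pi_theta)
    p = softmax(theta)
    reward = sum(p[x] * R[x] for x in range(3))
    kl = sum(pi_old[x] * math.log(pi_old[x] / p[x]) for x in range(3))
    return reward - beta * kl

def L(theta):
    # direct surrogate: E_{pi_old}[-w(x) R(x) - beta * log pi_theta(x)]
    p = softmax(theta)
    return sum(pi_old[x] * (-(p[x] / pi_old[x]) * R[x] - beta * math.log(p[x]))
               for x in range(3))

def grad(fn, theta, h=1e-6):
    # central finite-difference gradient, coordinate by coordinate
    g = []
    for i in range(3):
        tp = list(theta); tp[i] += h
        tm = list(theta); tm[i] -= h
        g.append((fn(tp) - fn(tm)) / (2 * h))
    return g

theta = [0.2, -0.1, 0.4]
gJ, gL = grad(J, theta), grad(L, theta)
assert all(abs(gL[i] + gJ[i]) < 1e-5 for i in range(3))
```

Since $L_{\mathrm{FKL}}$ differs from $-J_{\mathrm{FKL}}$ only by the $\theta$-independent entropy term of $\pi_{\text{old}}$, the gradients are exactly opposite, which the finite-difference check confirms.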
The REINFORCE-style off-policy losses rely critically on the stop-gradient $\operatorname{SG}$ preventing gradient flow through the $\theta$-dependence within the $\text{Weight}(x,\theta)$ term (primarily through the importance weight $w(x)$). Although automatic differentiation libraries are designed for this, the direct loss formulations avoid this reliance and represent a more direct pathway from the objective $J(\theta)$ to a suitable loss $L(\theta)$ in the differentiable off-policy setting. Therefore, while the REINFORCE-style losses are presented here for completeness and to highlight the gradient structure, the direct losses from Section 3 are generally considered the standard and more straightforward approach for optimizing these KL-regularized objectives in the off-policy manner described. Regardless of the chosen loss formulation, practical implementations necessitate Monte Carlo estimation using samples from $\pi_{\text{old}}$ (or $\tilde{\pi}_{\text{old}}$) and benefit significantly from variance reduction techniques (e.g., baseline subtraction applied to $R(x)$) and stabilization methods (e.g., importance weight clipping, as discussed in Section D.2).
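The two stabilizers just mentioned can be sketched on a batch of Monte Carlo samples. The clip range and the mean baseline below are illustrative assumptions, not the paper's specific settings.

```python
import math

# Sketch of baseline subtraction on R(x) and importance-weight clipping,
# applied to a batch of samples from pi_old. Clip range and baseline
# choice are illustrative assumptions.

def stabilized_weights(rewards, logp_theta, logp_old, clip_lo=0.8, clip_hi=1.2):
    """Return clipped importance weights and baseline-subtracted rewards."""
    baseline = sum(rewards) / len(rewards)           # simple mean baseline
    advantages = [r - baseline for r in rewards]
    weights = []
    for lt, lo in zip(logp_theta, logp_old):
        w = math.exp(lt - lo)                        # w(x) = pi_theta / pi_old
        weights.append(min(max(w, clip_lo), clip_hi))  # clip to [lo, hi]
    return weights, advantages

ws, advs = stabilized_weights(
    rewards=[1.0, 0.0, 2.0],
    logp_theta=[-1.0, -2.0, -0.5],
    logp_old=[-1.1, -1.5, -0.6],
)
```

With a mean baseline, the advantages sum to zero over the batch, and clipping bounds the variance contributed by large importance weights.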
arXiv:2505.17510v1 [cs.CL] 23 May 2025

Large Language Models Do Multi-Label Classification Differently

Marcus Ma*, Georgios Chochlakis*, Niyantha Maruthu Pandiyan, Jesse Thomason, Shrikanth Narayanan
University of Southern California
Correspondence: {mjma, chochlak}@usc.edu

Abstract

Multi-label classification is prevalent in real-world settings, but the behavior of Large Language Models (LLMs) in this setting is understudied. We investigate how autoregressive LLMs perform multi-label classification, with a focus on subjective tasks, by analyzing the output distributions of the models in each generation step. We find that their predictive behavior reflects the multiple steps in the underlying language modeling required to generate all relevant labels, as they tend to suppress all but one label at each step. We further observe that as model scale increases, their token distributions exhibit lower entropy, yet the internal ranking of the labels improves. Finetuning methods such as supervised finetuning and reinforcement learning amplify this phenomenon. To further study this issue, we introduce the task of distribution alignment for multi-label settings: aligning LLM-derived label distributions with empirical distributions estimated from annotator responses in subjective tasks. We propose both zero-shot and supervised methods which improve both alignment and predictive performance over existing approaches. Code available at https://github.com/gchochla/LLM-multilabel-differently.

1 Introduction

Many natural language processing tasks assume each input has a single, unambiguous label, represented as a one-hot encoding (Srivastava et al. 2022; Wang et al. 2024; inter alia). However, in realistic settings, especially where categories are not mutually exclusive, this assumption fails.
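The contrast between the single-label and multi-label views can be made concrete with a toy example (labels and scores are illustrative): a softmax over label scores forces the probabilities to compete and sum to one, while independent per-label probabilities, as multi-label classification permits, need not.

```python
import math

# Toy contrast: softmax over label scores (single-label assumption, sums
# to 1) vs. independent per-label sigmoids (multi-label view). Labels and
# scores are illustrative only.

labels = ["joy", "anger", "fear"]
scores = [2.0, 1.5, -1.0]

def softmax(z):
    e = [math.exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

single_label = softmax(scores)               # probabilities forced to sum to 1
multi_label = [sigmoid(v) for v in scores]   # each label scored independently

assert abs(sum(single_label) - 1.0) < 1e-9
assert sum(multi_label) > 1.0  # independent confidences need not sum to 1
```

The same scores yield a normalized competition under softmax but independent confidences under sigmoids, which is the mismatch the paper examines.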
Multi-label classification, where instances can have none, one, or multiple labels, better captures the inherent ambiguity, richness of human categorization, and label correlations, notably in subjective tasks (Mohammad et al., 2018; Demszky et al., 2020). It also enables modeling degrees of belief, which is integral in subjective tasks to express confidence or intensity in each label (Paletz et al., 2023). Intensity is a tool not generally available in single-label settings. Despite their widespread applicability, multi-label tasks have received little attention in the context of Large Language Models (LLMs).

*Equal contribution

Figure 1: Autoregressive language modeling is incompatible and interferes with multi-label classification: LLMs generate one label at a time with unrepresentative distributions misaligned from reference distributions.

A key reason may be the incompatibility between the language modeling objective and the multi-label setting. LLMs are trained to generate probability distributions over vocabulary tokens via softmax normalization, naturally lending themselves to single-label settings by restricting the normalization to label tokens. In contrast, multi-label classification does not require label probabilities to sum to one. Instead, each label's confidence can, in principle, be modeled independently, which prevalent LLMs are not trained to do, as their logits are meaningful only in relation to each other. Relative probabilities might still encode relevant information, useful for threshold-based prediction (He and Xia, 2018), but such methods are ill-suited for tasks involving graded or subjective judgments, where ground truth can lie in [0,1], not just {0,1}. Alternatively, LLMs can be allowed to autoregressively generate a sequence of labels. However, the
https://arxiv.org/abs/2505.17510v1
resulting distributions at each step are conditioned on earlier outputs and remain constrained by the same joint normalization, making them difficult to interpret as genuine model confidence scores (Breen et al., 2018). For example, a model with 60% confidence in a label still needs to allocate the remaining 40% among competing options, regardless of its "true" confidence. In this work, we investigate how LLMs generate multi-label predictions by analyzing their output distributions in each generation step. We show that LLMs exhibit spiky distributions, where each consecutive step strongly favors a single label while suppressing others. This pattern produces a list of high-confidence individual predictions rather than a comprehensive probability distribution. Notably, these distributions lack consistency across steps: labels with high probability in earlier steps are rarely revisited in subsequent ones, even when the model continues generating labels, which suggests that LLMs are performing sequential single-label classification and not holistic multi-label reasoning. To evaluate this phenomenon, we frame distributional alignment as a core task: aligning LLM-derived distributions with ground-truth distributions. To evaluate confidence, not just predictions, we also compare with empirical distributions derived from human annotator responses. Rather than relying on hard label agreement (e.g., majority vote), we embrace the plurality of human interpretations (Kahneman and Tversky, 1972; Tenenbaum et al., 2006; Griffiths et al., 2010; Aroyo and Welty, 2015) and approximate the distribution for each document by the empirical proportion of annotators selecting each label, resulting in values ∈ [0,1]. We extend the distributional inference framework (Zhou et al., 2022) to the multi-label setting and evaluate both zero-shot and supervised approaches for aligning LLM outputs with the human-annotation-derived empirical distributions.
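The empirical reference distribution just described, the proportion of annotators selecting each label, can be sketched directly. The labels and annotations below are illustrative placeholders.

```python
# Sketch of the empirical reference distribution: for each label, the
# proportion of annotators who selected it. Labels and annotations are
# illustrative; each annotator contributes a set of chosen labels.

labels = ["joy", "anger", "fear", "sadness"]
annotations = [
    {"joy"},
    {"joy", "sadness"},
    {"sadness"},
]

def empirical_distribution(annotations, labels):
    n = len(annotations)
    return {lab: sum(lab in a for a in annotations) / n for lab in labels}

dist = empirical_distribution(annotations, labels)
```

Each value lies in [0,1] but the values need not sum to one, matching the multi-label view: here "joy" and "sadness" each get 2/3 while "anger" and "fear" get 0.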
Our contributions are the following:

• In §4, we provide the first formal analysis of how LLMs handle multi-label classification, showing that their prediction behavior mirrors the steps inherent in the language modeling that favor a single-label setting.

• In §5, we introduce and evaluate distribution alignment in the multi-label setting, using degrees of belief as a reference distribution. We show that our proposed zero-shot and supervised methods improve alignment and predictive quality over standard baselines on subjective multi-label tasks.

2 Related Work

2.1 LLM Usage for Multi-label Predictions

Single-label problems have dominated both early (e.g., ImageNet; Deng et al. 2009) and recent (BigBench; Srivastava et al. 2022) deep learning progress, despite the obvious limitations of single-label settings when the labels are not mutually exclusive. ImageNet (Deng et al., 2009) as a benchmark, for instance, used top-k accuracy to evaluate models in order to deal with the potential simultaneous existence of multiple categories within each image, which was not reflected in the annotations. Similarly, previous multi-label modeling attempts treated the task as single-label by using the general cross-entropy loss with a threshold to turn the prediction into a proper multi-label output (He and Xia, 2018). Subsequent works switched to the binary cross-entropy loss, and tried to leverage the relationship between labels for additional supervision (He and Xia, 2018; Alhuzali
and Ananiadou, 2021; Chochlakis et al., 2023). To the best of our knowledge, Niraula et al. (2024) is the only work to explicitly investigate LLM multi-label classification (Chen et al., 2022) in niche domains. Bețianu et al. (2024) explored a multi-label framework for finetuning BERT, and Jung et al. (2023) trained a classifier on top of T5 encodings directly for multi-label classification rather than relying on model text generation. The two well-studied forms of multi-label classification are extreme multi-label classification (XMLC; Zhu and Zamani 2024), where models must assign many labels to a document from a very large label set (1000+ labels), and hierarchical multi-label classification (Tabatabaei et al., 2025), where labels are subdivided into sub-labels recursively. Subjective multi-label classification is relatively unexplored (Chochlakis et al., 2024). We thoroughly investigate LLMs in these settings by analyzing their classification patterns across datasets.

2.2 Subjective Language Tasks

Many works have attempted to model individual annotator perspectives and intensities (Paletz et al., 2023) instead of the majority vote, e.g., with EM (Dawid and Skene, 1979; Hovy et al., 2013), word embeddings (Garten et al., 2019), and encoder-based approaches (Gordon et al., 2022; Mokhberian et al., 2022; Davani et al., 2022; Mokhberian et al., 2023). Modeling annotators with LLMs has shown limited success, and LLM biases have also been explored (Dutta et al., 2023; Abdurahman et al., 2024; Chochlakis et al., 2025).

2.3 Calibration for LLMs

Increasing the size of neural networks generally improves performance and generalization (Hoffmann et al., 2022; Brutzkus and Globerson, 2019; Kaplan et al., 2020). However, while smaller models essentially produce well-calibrated predictions "for free" (Niculescu-Mizil and Caruana, 2005), as neural networks become increasingly complex, they are also less calibrated (Guo et al., 2017).
Recent language models trained with Reinforcement Learning from Human Feedback (RLHF) have seen "spiky" probability distributions where models are overconfident in a select few output tokens while suppressing the probabilities of other options (Xie et al., 2024; Leng et al., 2025). Instruction tuning also appears to reduce calibration over base models (Zhu et al., 2023). Several methods have been proposed to improve LLM calibration, including temperature scaling (Xie et al., 2024; Huang et al., 2024), adding calibration metrics as a learnable feature (Chen et al., 2023), and in-context prompting (Zhao et al., 2024). Our proposed distribution alignment setting differs from calibration in that it compares the probabilities over the entire label set, whereas calibration only compares the predicted label probability to the ground truth.

3 Datasets

We present both objective and subjective multi-label datasets. We use 10-shot prompts with Llama3 (Dubey et al., 2024) (more details in §A). We apply softmax over initial label tokens to derive label probabilities at each step.

Boxes (Kim and Schuster, 2023) Entity tracking based on natural language descriptions of "box" contents and "move" operations. Each box can contain none, one, or multiple objects. The dataset contains thousands of synthetic examples.

SemEval 2018 Task 1 E-c (Mohammad et al., 2018) Multi-label emotion recognition of 11 emotions.
We use the English tweets. We refer to this as SemEval. Although it does not contain annotator labels, it has a frequent presence of multiple labels, allowing us to study the generation dynamics.

MFRC (Trager et al., 2022) Multi-label moral foundation corpus of six moral foundations. 3 annotators were assigned to each sample.

GoEmotions (Demszky et al., 2020) Multi-label emotion recognition benchmark of 27 emotions. For efficiency, we pool the emotions to seven emotions via hierarchical clustering (see §A). On average, 3.6 annotators were assigned to each sample.

4 Multi-Label Mechanisms of LLMs

We evaluate whether LLMs produce diverse, consistent, and informative probability distributions. Specifically, we investigate whether the predicted probabilities at each generation step reflect the relative confidence of the LLM and whether the relative ordering of labels provides insight into future predictions. To this end, we analyze the distribution of the top two predicted probabilities at each label generation step, along with the entropy of the distribution, allowing us to assess how spiky the distributions are, that is, how close the top probability is to 1 and how low the entropy is. We also compare the top probabilities to evaluate whether their relative values reflect the model's confidence. Crucially, we examine the second-highest probability and track how it evolves in the subsequent generation step. By distinguishing between steps where the model continues generating more labels (denoted as intermediate) and steps where it predicts the final label (denoted as last), we assess whether the second-highest probability provides a meaningful signal about the future behavior. Finally, we test whether the relative order of the probabilities is informative by comparing the second-highest probability in the current generation step to that of the label generated in the next.
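The per-step statistics used in this analysis reduce to two simple quantities, the top two probabilities and the entropy of a step's label distribution; a minimal sketch (the example distributions are illustrative):

```python
import math

# Minimal versions of the per-step spikiness statistics: the top two
# probabilities and the entropy of a step's label distribution. The two
# example distributions are illustrative.

def top_two(probs):
    ranked = sorted(probs, reverse=True)
    return ranked[0], ranked[1]

def entropy(probs):
    # natural-log entropy; zero-probability labels contribute nothing
    return -sum(p * math.log(p) for p in probs if p > 0)

spiky = [0.96, 0.02, 0.01, 0.01]
flat = [0.25, 0.25, 0.25, 0.25]

assert top_two(spiky)[0] > 0.9
assert entropy(spiky) < entropy(flat)  # spiky distributions have low entropy
```

A distribution is "spiky" in the paper's sense exactly when the top probability is near 1 and the entropy is low, as the two toy distributions illustrate.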
Figures 2 and 3 show the results based on the predicted probabilities for all datasets using Llama3 8B and 70B Base, Instruct, and with Supervised Finetuning (Ouyang et al., 2022) (SFT; details in §A.5). We show only up to the second step to avoid clutter. Instances with only one shown generation step predicted only up to two labels. Corresponding entropy measures can be found in §D.2.

Spikiness We see that as the models become larger or are finetuned, the distributions start to concentrate around 100%. For instance, in SemEval, we see that Llama3 70B Instruct and SFT noticeably spike for both generation steps. In contrast, Llama3 8B Base has mode ∼40%. For Boxes,
[Figure 2 panels: top probabilities per prediction step for SemEval 2018 Task 1, GoEmotions, MFRC, and Boxes across Llama3 8B/70B Base, Instruct, and SFT variants.]

Figure 2: Top probabilities at each generation step when the last or an intermediate label is generated. Patterns are identical between the two settings, and bigger or finetuned models have clusters closer to 100%.

[Figure 3 panels: second-highest probabilities per prediction step, annotated with the percentage of cases where the second-highest label matches the next prediction.]

Figure 3: Second-highest probabilities at each generation step when the last or an intermediate label is generated.
We also show the probability at the current step of the label that is actually predicted in the next step (r+1 pred), the probability at the next generation step of the second-highest probability of the current step (intermediate @ r+1), and the percentage of cases in which the second-highest probability label at step r and the prediction at r+1 are the same. LLM distributions show poor relative ranking, and little distinction between the last and intermediate settings.

the objective benchmark, we observe even more pronounced spikes, with probability mass clustered around ∼100% for all steps.

Sequential Spikiness We observe that after the first label is generated, each additional label produced by the LLM is accompanied by a similarly spiky distribution centered on the newly predicted label.

Figure 4: Sorted label probabilities when generating the first label for Llama3 70B Instruct. Most distributions are spiky, with the top label having near-1 probability.

Interestingly, some distributions become spikier at later generation steps, potentially stemming from previously generated labels being assigned near-zero probability.

Stopping Criterion We find that models rarely have different distributions when predicting their last label compared to when they are going to continue predicting more labels, providing little to no indication of when they will stop predicting. Indeed, we would expect the distributions to resemble those of MFRC with the Base models, where, when the probabilities for the second-highest labels are distinctly greater, the model continues to produce more labels. However, this distinction does not appear in most settings. For instance, SemEval has the same trends between both, and the second probabilities of some of the models are greater when the model stops generating (e.g., 70B Instruct
and SFT), a counter-intuitive finding, because one would expect lower weight on the rest of the labels when the model is about to stop generating.

Relative Ranking We demonstrate that LLMs do not reliably pick the second-highest label as their next prediction, even if they continue predicting. For instance, in SemEval, the label with the second-highest probability in the first step is not predicted next between 48.1% and 69.2% of the time across models. In GoEmotions, this behavior occurs between 22.2% and 49.8% of cases. In fact, if we take the label with the second-highest probability in the current step r, and look at its probability in the next step r+1 (shown as intermediate @ r+1), we see that it clusters at 0. Similarly, when we look at the probability of the label predicted in step r+1, and see how its probability looked in the previous step r (shown as r+1 pred), its probability also tends to be clustered around 0. Notably, we find that if the second-highest label at any step is not predicted as the next generated label, it will not be predicted at all most of the time (see §D.3). While this is in some sense expected, since each generated label is newly conditioned on the previously generated labels (we verify this in §D.6 by looking at the attention weights), it means that each generation step is only informative of the current label, since the relative ordering of predicted labels is not predictive of subsequent behavior.

Language Modeling From the previous two findings, we conclude that LLMs' distribution at the first (or any) generation step is not reflective of their confidence for each label, nor of their subsequent behavior, suggesting that language modeling is interfering with classification, causing the model to spike at every generation, an artifact of the autoregressive nature of LLMs, instead of generating a label distribution that is reflective of its confidence.
We present more corroborating evidence in §5.4 with linear probing (Hewitt and Liang, 2019).

Complete Distribution We find that most label probability distributions are spiky, with the top label having probability near 1 and the other labels sharply degenerating to near-0 probability, even if they are later predicted (Figure 4). We also find evidence that LLMs generate the most likely label first, as the relative accuracy of each label drops between the first and second prediction in Figure 5. Sequential spikiness explains these phenomena: LLMs generate the most likely label first with high confidence and do not consider what a less likely second label would be until the first label is fully generated. For the smaller models, we also observed a few instances where the model predicted the same label twice in a row.

Rate of multiple predictions Finally, we report that the label type of the in-context prompts greatly influences the rate of multi-label output. We show in Figure 6 how the percentage of multi-label (as opposed to single- or no-label) examples roughly corresponds to the percentage of multi-label output across models and datasets. Learning to predict multi-label outputs must
be highlighted very clearly in the in-context examples, suggesting that single-label formats have dominated the training of the model. Overall, these analyses demonstrate that LLMs do not create well-calibrated distributions when generating multiple labels; instead, they generate spiky distributions, classifying labels one at a time.

Figure 5: Average accuracy of the first and second label for multi-label generations based on the order in which it was generated, showing decreasing trends. Line color represents dataset and line pattern represents model size.

Figure 6: Percentage of outputs that are multi-label given the percentage of in-context examples that are multi-label in a 10-shot prompt. Line color represents dataset and line pattern represents model size.

5 Multi-Label Distribution Alignment

To test how interpretable and calibrated the LLM-derived distributions are, we propose multi-label distributional alignment as a core task. Our focus in this work is multi-label subjective tasks, because they allow degrees of belief, and thus allow us to evaluate model confidence, not just predictions, in multi-label settings.

5.1 Task Formulation for Multi-Label

In the single-label setup, a probability distribution is produced over a label set L. However, in the multi-label case, each example can have an arbitrary number of labels, each of which has its own binary probability of appearing (in practice, labels are additionally correlated).
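Concretely, such a distribution can be represented as one Bernoulli probability per label; a minimal sketch, with hypothetical label names and probabilities, and ignoring label correlations:

```python
import random

def sample_label_set(dist, rng=random):
    """Draw one label set from a multi-label distribution represented as
    |L| independent Bernoulli probabilities (label correlations ignored)."""
    return {label for label, p in dist.items() if rng.random() < p}

# Hypothetical per-label probabilities for one example.
dist = {"joy": 0.9, "anger": 0.4, "fear": 0.05}
```

Each draw yields a label set of arbitrary size, from the empty set up to all of L, which is exactly the output space of a multi-label classifier.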
Thus, multi-label distributions are |L| binary probabilities.

5.1.1 Human Distribution Estimation

Our underlying assumption is that, given a task with subjective labels and multiple interpretations, the "truth" of the label is better represented as a confidence distribution over a potential label set. In this interpretation, for a data point d, an annotation represents a single sample a ∼ H(d), where H is the underlying human distribution. Then, denoting $\mathbb{I}$ as the indicator function, for label l ∈ L, we approximate our empirical human-annotation distribution using annotator set A as:

$$\hat{H}_l(d; A) = \frac{1}{|A|} \sum_{a_i \in A} \mathbb{I}[l \in a_i(d)]. \quad (1)$$

5.1.2 Distribution Alignment Metrics

We compute the negative log likelihood (NLL), L1 distance, and example-F1 (Du et al., 2019) to evaluate how well the empirical distribution aligns with the LLM-derived distribution. Example-F1 is a variant of F1 that can be evaluated per example.

NLL Conceptually, NLL measures whether a distribution is confidently wrong about any answer. Given a discrete probability distribution $Q_d$ and a set of labels $G_d = \{g_i \mid i \in [m],\ g_i \in L\}$, we compute the likelihood of $G_d$ as $\prod_{g \in G_d} P_{Q_d}(g)$, where $P_{Q_d}(l_i)$ is the probability of $l_i$ under $Q_d$. Taking the negative logarithm gives NLL. The best distribution that explains a sample minimizes NLL.

L1 Distance One shortcoming of NLL is that it disproportionately penalizes small differences near 0, e.g., penalizing a likelihood of $10^{-7}$ much more than $10^{-2}$, despite their practical similarity. L1 distance solves this problem by comparing the absolute difference of each label probability to its frequency in the sample: $\sum_{l \in L} |P_{Q_d}(l) - \hat{H}_l(d; A)|$. L1 distance measures whether the general shape of the distributions matches.

5.2 LLM Distribution Methods

To investigate the task of distribution alignment
in the multi-label setting, we propose methods that fall into three groups: baseline methods, test-time methods, and supervised methods.

5.2.1 Baseline Methods

Compare-to-None We use the output distribution over the labels at the point at which the model generates its first label token (excluding, for example, formatting tokens). However, the individual values of raw logits hold little interpretability, as

                            Single-Label Datasets                    Multi-Label Datasets
                            HateXplain         MSP-Podcast           GoEmotions         MFRC
                            NLL↓  L1↓   F1↑    NLL↓   L1↓   F1↑      NLL↓   L1↓   F1↑   NLL↓   L1↓   F1↑
Baseline
  Compare-to-None           1.66  0.81  0.58   2.63   1.37  0.29     23.93  4.71  0.27  5.34   1.85  0.51
  Hard Predictions          9.86  0.90  0.58   13.65  1.47  0.30     24.11  1.31  0.39  19.70  1.07  0.59
Test-Time
  Unary Breakdown           0.91  0.94  0.47   1.55   1.45  0.30     3.60   1.32  0.43  2.49   1.27  0.51
  Binary Breakdown          1.12  1.06  0.29   1.65   1.44  0.24     7.62   2.64  0.41  3.55   2.11  0.41
  Max-Over-Generations      N/A   N/A   N/A    N/A    N/A   N/A      4.04   1.27  0.39  2.32   0.92  0.63
Supervised
  BERT                      2.69  0.73  0.66   4.29   1.27  0.38     2.72   0.63  0.64  3.00   0.43  0.82
  Linear Probing            N/S   N/S   N/S    N/S    N/S   N/S      2.42   0.71  0.56  2.81   0.44  0.81
  SFT Outputs               N/S   N/S   N/S    N/S    N/S   N/S      14.76  0.80  0.58  10.45  0.57  0.69
  SFT Max-Over-Generations  N/A   N/A   N/A    N/A    N/A   N/A      4.15   0.72  0.57  4.87   0.54  0.73

Table 1: Distribution alignment scores for Llama3 70B Instruct on single- and multi-label datasets between LLM and human distributions. F1↑ is the example-F1 score. N/S: Not supplied to avoid clutter.
                 GoEmotions                     MFRC                          SemEval
Model            Perf   Gold   Pred   Pred 2+  Perf   Gold   Pred   Pred 2+  Perf   Gold   Pred   Pred 2+
8B Base          0.282  0.463  0.795  0.197    0.115  0.246  0.637  0.211    0.522  0.590  0.744  0.647
8B Instruct      0.434  0.520  0.903  0.217    0.285  0.380  0.877  0.074    0.537  0.624  0.779  0.675
70B Base         0.357  0.519  0.876  0.0      0.228  0.368  0.802  0.043    0.603  0.616  0.812  0.706
70B Instruct     0.396  0.554  0.856  0.440    0.314  0.420  0.819  0.550    0.588  0.627  0.844  0.758

Table 2: Micro F1↑ of linear probes trained and evaluated on gold labels (Gold), trained and evaluated on model predictions (Pred), and evaluated on predictions beyond the first generated label (Pred 2+). For comparison, we also show the performance of the model (Perf). Embeddings are from the last layer for the first generated label.

their value is only meaningful in the context of the rest of the tokens. We propose to compare the logit score of each label to the logit score of the "none" label to get an estimate of how likely that label is to occur independent of the other logits, leveraging the null prediction to contextualize the value of the logits. Let $S(l_i)$ be the logit score for label $l_i$; we can therefore determine the logit score difference for each label, $d_i = S(l_i) - S(l_{\text{none}})$. We then apply the sigmoid function to $d_i$ for a valid probability: $P(l_i = 1 \mid d_i) = \sigma(d_i)$.

Hard (Actual) Predictions We take the labels that the model actually outputs autoregressively; we set these values to $1 - \epsilon$ and otherwise to $\epsilon$ to avoid arithmetic issues with NLL.

5.2.2 Test-Time Methods

Given models' good calibration on binary tasks (Niculescu-Mizil and Caruana, 2005), we propose:

Unary Breakdown: Label-wise Preference In this approach, we create a binary classification problem for each individual label,
similar to the approach taken by Li et al. (2020). Namely, for a given example, we create a prompt that includes the original document to be classified, but instead we present a single label and query the model as to whether the label is "reasonable". We directly extract the probabilities for the "reasonable" label, which conforms to the independence property of multi-label probabilities, because each label can be assigned a value in [0, 1] without constraints or normalization. |L| runs (one per label) are required per document.

Binary Breakdown: Pair-wise Preference We break down a single example into multiple binary comparisons between all label pairs ($\binom{|L|+1}{2}$ runs per example), and then leverage the outcomes of these comparisons to derive probabilities for the labels. Namely, for every pair of labels, we provide both labels to the model and ask the model to select the one that better represents the input. We derive the probabilities for the two labels by applying softmax to the two logits. We then use the Bradley-Terry model (Bradley and Terry, 1952) to rank the labels based on their pairwise performance. Specifically, to estimate logit scores $s$ from the pairwise probabilities that label $l_i$ is better than $l_j$, we have $P(l_i \text{ is better than } l_j) = p_{i>j} = \sigma(s_i - s_j)$, where $\sigma$ is the sigmoid function. This is calculated by minimizing the predictive loss $\mathcal{L}$:

$$\mathcal{L} = -\frac{1}{2} \sum_{i,j} \big( p_{i>j} \cdot \log(\sigma(s_i - s_j)) + (1 - p_{i>j}) \cdot \log(\sigma(s_j - s_i)) \big). \quad (2)$$

In order to calculate the multi-label probabilities, similar to compare-to-none, we introduce a "none" label into the label set and derive the final probabilities by comparing the Bradley-Terry logit score of a given label to the "none" logit score. We also consider using strict 1's and 0's instead of probabilities, similar to ELO ranking (Elo, 1978), in §C, but find using probabilities to be more performant.
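A minimal sketch of the two computations above: the compare-to-none probabilities and a plain gradient-descent fit of the Bradley-Terry scores in Eq. (2). The label names, logits, and optimizer settings are hypothetical; the paper does not specify how Eq. (2) is minimized.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def compare_to_none(logits, none_label="none"):
    """Compare-to-none baseline: per-label sigmoid of the logit
    difference d_i = S(l_i) - S(l_none)."""
    base = logits[none_label]
    return {l: sigmoid(s - base) for l, s in logits.items() if l != none_label}

def fit_bradley_terry(pairwise, n, steps=2000, lr=0.1):
    """Estimate Bradley-Terry scores s by gradient descent on Eq. (2).
    pairwise[(i, j)] is the model's probability that label i beats label j."""
    s = [0.0] * n
    for _ in range(steps):
        grad = [0.0] * n
        for (i, j), p in pairwise.items():
            g = sigmoid(s[i] - s[j]) - p  # d(loss)/d(s_i) for this pair
            grad[i] += g
            grad[j] -= g
        for k in range(n):
            s[k] -= lr * grad[k]
    return s
```

With a "none" label included among the compared items, the final per-label probabilities follow by applying `sigmoid` to the difference between each label's Bradley-Terry score and the "none" score, mirroring compare-to-none.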
Max-Over-Generations We take the probability distributions for every label generation step, and the final probability for each label is the maximum value achieved over all distributions. This approach is a soft version of the Hard Predictions baseline, and requires access to model scores.

5.2.3 Supervised Methods

We compare our approach with three supervised methods: Finetuned BERT, Linear probes (Hewitt and Liang, 2019) on the first label token of the last layer, and SFT, all described in §A.5. We also use linear probes for interpretability purposes (Li et al., 2021) to study the informational content of the models' embeddings.

5.3 Experimental Setup

We apply our methods to the same Llama models (see §A.5). We test our proposed approaches on the main test set (details in §A.4). We test on the multi-label datasets of GoEmotions and MFRC that contain individual annotator labels. We also include evaluation on two single-label subjective datasets (details in §A.3), HateXplain (Mathew et al., 2021) and MSP-Podcast (Lotfian and Busso, 2019), to contextualize our multi-label findings.

5.4 Results

Distribution Alignment We report distribution alignment results in Table 1 for Llama3 70B (results for 8B in §D.4). Overall, we find that test-time and supervised methods outperform both baseline methods. We draw particular attention to the max-over-generations method, which significantly outperforms both baselines with little additional computational
overhead other than storing model scores across multiple generation steps. We see that unary breakdown performs similarly well to max-over-generations, as isolating each label's validity independently disentangles the bias of language modeling from the classification task. As a downside, unary breakdown incurs |L| times the generations per example. Surprisingly, we find that BERT performs the best of the supervised methods, which we take as additional evidence that LLMs classify labels one at a time, not simultaneously.

Linear Probing The linear probing method ranks as the second best baseline, so the hidden states during first-label generation alone seem, at first glance, to contain enough information to perform well on the tasks. However, in Table 2, we present a more detailed analysis with linear probes. In addition to model and probing performance, we present the probes' capability of predicting the predictions of the model themselves (i.e., the probes are trained on the predictions). We show the performance on the predictions in the Pred column, which is, as expected, much higher. However, when we look at how well the probes can predict any label after the first (Pred 2+), we see a substantial degradation in performance. Note that the task in theory becomes easier as we remove a label from the problem. This degradation suggests that linear probing performs well mostly due to its high accuracy on the first label and has little predictive power for any future labels, which aligns with our finding that LLMs predict labels one at a time. Even after supervised training, embeddings from the first label generation do not contain enough information to predict any subsequent labels.

Effect of Instruction Tuning In §D.5, we demonstrate that finetuned models generally achieve higher performance, yet their NLL is worse.
This result supports previous findings that finetuned models are more confident, since NLL punishes confidently wrong predictions more.

6 Conclusion

We provide the first account of how LLMs perform multi-label classification and find that LLMs generate spiky probability distributions and appear to predict labels one at a time rather than jointly. We argue that language modeling interferes with multi-label classification, making it difficult to interpret model confidences for labels until they are predicted. We provide supporting experimental evidence, demonstrating that a full generation of output is required to analyze LLMs' label confidences, and highlight the inconsistencies in the label probabilities across generation steps. Finally, we formulate the task of distribution alignment in the multi-label setting and propose novel methods and baselines to estimate better multi-label distributions from language models. We conclude that much work is still required in order to create distributions from LLMs that match the human distribution in responses to subjective language tasks.

7 Limitations

There are several potential limitations in this work. First, our assumption of underlying empirical distributions derived from human annotator samples relies on the annotators being valid and representative samples of the underlying true distribution. This does not account for the possibility that different annotators may be
biased in the same way, and that combining their annotations does not remove this bias. Additionally, we limit our analysis to the Llama model family, which inherently constrains our findings to these models' specific training and finetuning regimens. We acknowledge the possibility that our insights into multi-label generation for LLMs may differ for other model families. Finally, our proposed methodologies of unary and binary breakdowns also increase the computational cost compared to a single label generation, and while these methods may show improvement over single generations, this increased cost is certainly a limitation to their adoption.

Acknowledgments

This project was supported in part by funds from NSF CIVIC and the USC-Capital One Center for Responsible AI Decision Making in Finance. The authors thank Thanathai Lertpetchpun, Kleanthis Avramidis, Emily Zhou, and Jihwan Lee for helpful comments.

References

Suhaib Abdurahman, Mohammad Atari, Farzan Karimi-Malekabadi, Mona J Xue, Jackson Trager, Peter S Park, Preni Golazizian, Ali Omrani, and Morteza Dehghani. 2024. Perils and opportunities in using large language models in psychological research. PNAS Nexus, 3(7):pgae245.

Hassan Alhuzali and Sophia Ananiadou. 2021. SpanEmo: Casting multi-label emotion classification as span-prediction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. Association for Computational Linguistics.

Lora Aroyo and Chris Welty. 2015. Truth is a lie: Crowd truth and the seven myths of human annotation. AI Magazine, 36(1):15–24.

Miruna Bețianu, Abele Mălan, Marco Aldinucci, Robert Birke, and Lydia Chen. 2024. DALLMi: Domain adaption for LLM-based multi-label classifier. In Advances in Knowledge Discovery and Data Mining, pages 277–289, Singapore. Springer Nature Singapore.

Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I.
the method of paired comparisons. Biometrika, 39(3/4):324–345.

Richard Breen, Kristian Bernt Karlson, and Anders Holm. 2018. Interpreting and understanding logits, probits, and other nonlinear probability models. Annual Review of Sociology, 44:39–54.

Alon Brutzkus and Amir Globerson. 2019. Why do larger models generalize better? A theoretical perspective via the XOR problem. Preprint, arXiv:1810.03037.

Xiaolong Chen, Jieren Cheng, Jingxin Liu, Wenghang Xu, Shuai Hua, Zhu Tang, and Victor S. Sheng. 2022. A survey of multi-label text classification based on deep learning. In Artificial Intelligence and Security: 8th International Conference, ICAIS 2022, Qinghai, China, July 15–20, 2022, Proceedings, Part I, pages 443–456, Berlin, Heidelberg. Springer-Verlag.

Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, and Heng Ji. 2023. A close look into the calibration of pre-trained language models. Preprint, arXiv:2211.00151.

Georgios Chochlakis, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, and Shrikanth Narayanan. 2023. Leveraging label correlations in a multi-label setting: A case study in emotion. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE.

Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, and Shrikanth Narayanan. 2024. The strong pull of prior knowledge in large language models and its impact on emotion recognition. arXiv preprint arXiv:2403.17125.

Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman,
and Shrikanth Narayanan. 2025. Aggregation artifacts in subjective tasks collapse large language models' posteriors. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics. ACL.

Aida Mostafazadeh Davani, Mark Díaz, and Vinodkumar Prabhakaran. 2022. Dealing with disagreements: Looking beyond the majority vote in subjective annotations. Transactions of the Association for Computational Linguistics, 10:92–110.

Alexander Philip Dawid and Allan M Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society: Series C (Applied Statistics), 28(1):20–28.

Dorottya Demszky, Dana Movshovitz-Attias, Jeongwoo Ko, Alan Cowen, Gaurav Nemade, and Sujith Ravi. 2020. GoEmotions: A dataset of fine-grained emotions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4040–4054.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE.

Jingcheng Du, Qingyu Chen, Yifan Peng, Yang Xiang, Cui Tao, and Zhiyong Lu. 2019. ML-Net: multi-label classification of biomedical texts with deep neural networks. Journal of the American Medical Informatics Association, 26(11):1279–1285.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Senjuti Dutta, Sid Mittal, Sherol Chen, Deepak Ramachandran, Ravi Rajakumar, Ian Kivlichan, Sunny Mak, Alena Butryna, and Praveen Paritosh. 2023. Modeling subjectivity (by mimicking annotator annotation) in toxic comment identification across diverse communities. Preprint, arXiv:2311.00203.

Arpad E. Elo. 1978.
The Rating of Chessplayers, Past and Present. Arco Publishing, New York.

Justin Garten, Brendan Kennedy, Joe Hoover, Kenji Sagae, and Morteza Dehghani. 2019. Incorporating demographic embeddings into language understanding. Cognitive Science, 43(1):e12701.

Mitchell L Gordon, Michelle S Lam, Joon Sung Park, Kayur Patel, Jeff Hancock, Tatsunori Hashimoto, and Michael S Bernstein. 2022. Jury learning: Integrating dissenting voices into machine learning models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, pages 1–19.

Thomas L. Griffiths, Nick Chater, Charles Kemp, Amy Perfors, and Joshua B. Tenenbaum. 2010. Probabilistic models of cognition: exploring representations and inductive biases. Trends in Cognitive Sciences, 14(8):357–364.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. Preprint, arXiv:1706.04599.

Huihui He and Rui Xia. 2018. Joint binary neural network for multi-label learning with applications to emotion classification. In Natural Language Processing and Chinese Computing: 7th CCF International Conference, NLPCC 2018, Hohhot, China, August 26–30, 2018, Proceedings, Part I 7, pages 250–259. Springer.

John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. arXiv preprint arXiv:1909.03368.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan
Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. Preprint, arXiv:2203.15556.

Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120–1130.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2022. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3.

Yukun Huang, Yixin Liu, Raghuveer Thirukovalluru, Arman Cohan, and Bhuwan Dhingra. 2024. Calibrating long-form generations from large language models. Preprint, arXiv:2402.06544.

Taehee Jung, Joo-kyung Kim, Sungjin Lee, and Dongyeop Kang. 2023. Cluster-guided label generation in extreme multi-label classification. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1670–1685, Dubrovnik, Croatia. Association for Computational Linguistics.

Daniel Kahneman and Amos Tversky. 1972. Subjective probability: A judgment of representativeness. Cognitive Psychology, 3(3):430–454.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Preprint, arXiv:2001.08361.

Najoung Kim and Sebastian Schuster. 2023. Entity tracking in language models. arXiv preprint arXiv:2305.02363.

Jixuan Leng, Chengsong Huang, Banghua Zhu, and Jiaxin Huang. 2025. Taming overconfidence in LLMs: Reward calibration in RLHF. Preprint, arXiv:2410.09724.

Belinda Z Li, Maxwell Nye, and Jacob Andreas. 2021. Implicit representations of meaning in neural language models.
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1813–1827.

Cheng Li, Virgil Pavlu, Javed Aslam, Bingyu Wang, and Kechen Qin. 2020. Learning to calibrate and rerank multi-label predictions. In Machine Learning and Knowledge Discovery in Databases, pages 220–236, Cham. Springer International Publishing.

R. Lotfian and C. Busso. 2019. Building naturalistic emotionally balanced speech corpus by retrieving emotional speech from existing podcast recordings. IEEE Transactions on Affective Computing, 10(4):471–483.

Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2021. HateXplain: A benchmark dataset for explainable hate speech detection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14867–14875.

Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 Task 1: Affect in tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 1–17.

Negar Mokhberian, Frederic R Hopp, Bahareh Harandizadeh, Fred Morstatter, and Kristina Lerman. 2022. Noise audits improve moral foundation classification. In 2022 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), pages 147–154. IEEE.

Negar Mokhberian, Myrl G Marmarelis, Frederic R Hopp, Valerio Basile, Fred Morstatter, and Kristina Lerman. 2023. Capturing perspectives of crowdsourced annotators in subjective learning tasks. arXiv preprint arXiv:2311.09743.

Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting
good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 625–632, New York, NY, USA. Association for Computing Machinery.

Nobal Niraula, Samet Ayhan, Balaguruna Chidambaram, and Daniel Whyatt. 2024. Multi-label classification with generative large language models. In 2024 AIAA DATC/IEEE 43rd Digital Avionics Systems Conference (DASC), pages 1–7.

Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.

Susannah BF Paletz, Ewa M Golonka, Nick B Pandža, Grace Stanton, David Ryan, Nikki Adams, C Anton Rytting, Egle E Murauskaite, Cody Buntain, Michael A Johns, et al. 2023. Social media emotions annotation guide (SMEmo): Development and initial validity. Behavior Research Methods, pages 1–51.

Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.

Seyed Amin Tabatabaei, Sarah Fancher, Michael Parsons, and Arian Askari. 2025. Can large language models serve as effective classifiers for hierarchical multi-label classification of scientific documents at industrial scale? In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 163–174, Abu Dhabi, UAE. Association for Computational Linguistics.

Joshua B. Tenenbaum, Thomas L. Griffiths, and Charles Kemp. 2006. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7):309–318.
Jackson Trager, Alireza S Ziabari, Aida Mostafazadeh Davani, Preni Golazizian, Farzan Karimi-Malekabadi, Ali Omrani, Zhihe Li, Brendan Kennedy, Nils Karl Reimer, Melissa Reyes, et al. 2022. The moral foundations Reddit corpus. arXiv preprint arXiv:2208.05545.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. 2024. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. Preprint, arXiv:2406.01574.

Johnathan Xie, Annie S. Chen, Yoonho Lee, Eric Mitchell, and Chelsea Finn. 2024. Calibrating language models with adaptive temperature scaling. Preprint, arXiv:2409.19817.

Xinran Zhao, Hongming Zhang, Xiaoman Pan, Wenlin Yao, Dong Yu, Tongshuang Wu, and Jianshu Chen. 2024. Fact-and-reflection (FaR) improves confidence calibration of large language models. Preprint, arXiv:2402.17124.

Xiang Zhou, Yixin Nie, and Mohit Bansal. 2022. Distributed NLI: Learning to predict human opinion distributions for language reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 972–987, Dublin, Ireland. Association for Computational Linguistics.

Chiwei Zhu, Benfeng Xu, Quan Wang, Yongdong Zhang, and Zhendong Mao. 2023. On the calibration of large language models and alignment. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9778–9795, Singapore. Association for Computational Linguistics.

Yaxin Zhu and Hamed Zamani. 2024. ICXML: An in-context learning framework for zero-shot extreme multi-label classification.
Preprint, arXiv:2311.09649.

A Additional Implementation Details

A.1 Label Probabilities

Throughout §5.2, we generate softmax probabilities over the label set by constraining the logit scores to just those of the initial tokens of the labels. This deviates slightly from the true label probabilities, as we ignore all non-label token values during the softmax; however, we note that, in practice, the softmax probabilities over just the label set do not deviate much from their probabilities over the entire vocabulary, as the majority of top logits are label tokens.

A.2 Multi-Label Datasets

GoEmotions The seven emotion "clusters" are: admiration, anger, fear, joy, optimism, sadness, and surprise.

MFRC The six moral foundations are: care, proportionality, equality, purity, authority, and loyalty.

SemEval The eleven emotion labels are: anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, and trust.

A.3 Single-Label Datasets

HateXplain (Mathew et al., 2021) Benchmark of hateful and offensive speech. Each document is labeled as offensive, hateful, or normal, and where necessary it also contains the target of that sentiment. Each sample was assigned to 3 annotators.

MSP-Podcast v1.11 (Lotfian and Busso, 2019) Utterances from podcasts that have been labeled for emotion. The dataset comes with ground-truth transcriptions, which we leverage to perform language modeling. 5.3 annotators on average were assigned to each sample.

A.4 Dataset Splits

For Figures 2 and 3, we perform inference with the Base and Instruct models on the entire training set to get the largest population of data points we can. However, for the SFT models, since we needed a large enough training set, we use the train split to finetune the model and perform inference on the dev and test sets. For the linear probes, we train on the train set and evaluate on the dev and test sets.
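The constrained softmax of §A.1 can be sketched as follows, assuming we already have a mapping from each label to the logit of its initial token (label names and values are hypothetical):

```python
import math

def label_softmax(label_logits):
    """Softmax restricted to the logits of the labels' initial tokens,
    ignoring all other vocabulary entries (cf. A.1)."""
    m = max(label_logits.values())  # subtract max for numerical stability
    exps = {l: math.exp(v - m) for l, v in label_logits.items()}
    z = sum(exps.values())
    return {l: e / z for l, e in exps.items()}
```

Because the renormalization is only over label tokens, the resulting probabilities are slightly inflated relative to a softmax over the full vocabulary, which is the small deviation noted above.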
For the rest of our experiments, and for each dataset, we create two testing sets: a "multi-label only" set, containing data that exclusively has multiple ground-truth labels, which we use in §4; and a main testing set, which contains a uniform number of data points across three label types (no label, single label, and multi-label) and annotator disagreement conditions (no disagreement and has disagreement) for our experiments in §5. For each test set, we select 200 data points per dataset due to the exploding number of runs required by the methods we propose (e.g., unary breakdown requires a run per label). In the prompt, half of the in-context examples contain multiple labels.

A.5 Models

We use the following models, all downloaded from HuggingFace and implemented in PyTorch:

• Llama3 1B Instruct (meta-llama/Llama-3.2-1B-Instruct)
• Llama3 8B Base (meta-llama/Llama-3.1-8B)
• Llama3 8B Instruct (meta-llama/Llama-3.1-8B-Instruct)
• Llama3 70B Base (meta-llama/Llama-3.1-70B)
• Llama3 70B Instruct (meta-llama/Llama-3.3-70B-Instruct)

We used NVIDIA A100 80GB GPUs for the 70B models, and NVIDIA A40 GPUs for the smaller models.

SFT Our supervised finetuning pipeline simply involves prompting an LLM with the same instructions and prompt template as the other models, but without the 10 demonstrations that we otherwise use. We used LoRA (Hu et al., 2022).
https://arxiv.org/abs/2505.17510v1
During inference, because we noticed a tendency for the model to respond with differing formats, we still used a 10-shot format to standardize the output.

Unary breakdown We specifically use the term "reasonable" given the subjective nature of the tasks, where multiple labels may be appropriate, as we found that using "yes" or "no" directly sometimes causes the model to assign a more appropriate label even if both labels are applicable.

BERT For the BERT results, we used Demux (Chochlakis et al., 2023). We use the same training regime as in the original paper, using the intra loss with a coefficient of 0.2 for the multi-label settings, but training only on the train set instead of integrating the dev set into training after early stopping. For the single-label settings, we simply switch to the cross-entropy loss instead of the binary cross-entropy.

Linear Probes We derive the hidden state at the last layer of the first label token that the model generates. We normalize and downsample by a factor of 4 using truncated SVD (to accommodate the smaller dataset size compared to the hidden state dimension, especially for the 70B models). We then train one logistic regression model per label using scikit-learn's LogisticRegression.

A.6 Caveat on NLL and L1

In the multi-label setting, since every possible label has the potential to be included in an example, each sample technically contains data on every label, with the majority of labels being set to 0 (i.e., not assigned to the example). In scenarios where the majority of labels are 0, a degenerate solution of a "fixed" distribution, where all values are set to a constant such as 0.1, often performs very well. Thus, it is important to evaluate pure alignment metrics such as NLL and L1 in conjunction with performance metrics such as accuracy or F1, as fixed distributions will perform very poorly on those metrics.
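The caveat above can be made concrete with a small numeric sketch. The label counts and probability values here are illustrative, not taken from our datasets:

```python
import math

def mean_nll(pred, gold):
    # Mean per-label binary negative log-likelihood.
    return -sum(g * math.log(p) + (1 - g) * math.log(1 - p)
                for p, g in zip(pred, gold)) / len(gold)

def l1_dist(pred, gold):
    # Total absolute difference between predicted and gold label vectors.
    return sum(abs(p - g) for p, g in zip(pred, gold))

gold = [1, 0, 0, 0, 0, 0, 0]     # one of seven labels applies
fixed = [0.1] * 7                # degenerate constant "distribution"
informed = [0.7, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]

# The fixed distribution scores acceptably on pure alignment metrics ...
assert mean_nll(fixed, gold) < 0.5
assert l1_dist(fixed, gold) < 2.0
# ... yet thresholding it at 0.5 predicts no labels at all, so F1 is 0.
assert [int(p >= 0.5) for p in fixed] == [0] * 7
```

An informed distribution still wins on NLL, but the gap can be small when most gold labels are 0, which is why we report NLL and L1 alongside accuracy or F1.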
B Prompt Examples

We present some prototypical prompts used throughout our study in Table 3.

C Binary Breakdown with Probabilities vs. Outcomes

We investigate two methods of binary breakdown in Figure 7: using the soft probabilities (i.e., p ∈ [0,1]) for preferring one label over the other, and using hard outcomes (i.e., p ∈ {0,1}). We find that for L1 distance and F1, the preferred approach varies between datasets, but for NLL, using probabilities is always preferred. We find that when a single label is dominant, meaning it is preferred to every other label, using probabilities calibrates the breakdown better than using hard outcomes, as even dominant labels never achieve 100% probability in their comparisons. We therefore conclude that using binary breakdown with probabilities rather than outcomes is the better approach.

Prompt examples

Regular prompt (GoEmotions)

Classify the following inputs into none, one, or multiple of the following emotions per input: joy, optimism, admiration, surprise, fear, sadness and anger. Output exactly these emotions and no others.

Input: "Can I speak to the Suns' manager?"
{"label": ["surprise"]}

Input: Problem is that picture
is there before you ever set foot in the cabin; [NAME] put it there. Just makes it creepy.

Unary Breakdown Prompt Example (Hatexplain)

Classify the following question-label pairs as either "reasonable" or "unreasonable". Output either "reasonable" or "unreasonable" and nothing else.

Question: that or only date asians either way the white race is dead
Label: offensive
Is the label reasonable: unreasonable

Question: i want to hate you but i can not
Label: normal
Is the label reasonable:

Binary Breakdown Prompt Example (MFRC)

Classify the following question into one of the two following labels. Output either "a" or "b" depending on which label is better and nothing else.

Question: It's just capitalism is a pure distillation of those bad traits
a. none
b. loyalty
Correct Label: b

Question: His abuse will get worse. He's literally already accusing you, what will he do if he doesn't believe you're being faithful?
a. proportionality
b. loyalty
Correct Label:

Table 3: Example prompts

D Additional Results on LLM Multilabel Capabilities

D.1 Probabilities: Alternative view

For completeness, in Figure 8 we also present the equivalent box plots of Figures 2 and 3.

D.2 Entropy of Predictions

We also present the entropies of the predictions in Figure 9.
Again, for all datasets but MFRC, we see that the trends are indistinguishable between when the model will generate more labels and when it predicts its last label, showing little evidence for properly calibrated probability distributions on multi-label tasks.

Figure 7: Comparison of binary breakdown when using the pairwise probabilities ("binary") versus using pairwise outcomes ("binary_outcome", i.e., rounding probabilities to 0 and 1). Panels show mean NLL, mean L1 distance, and mean example F1 for Llama-3.1-8B and Llama-3.3-70B-Instruct across Hatexplain, MSPPodcast, GoEmotions, and MFRC.

D.3 Inconsistencies in second highest label scores

In this section, we report the probability that the label associated with the second-highest probability at any given generation step is, in fact, never predicted by the model if not predicted in the immediate next step. We limit our evaluation to steps where the model does continue to predict more labels afterward, skipping the instances where the model stops predicting. In Table 4, we see that the label does not appear in the predictions at least 78.4% of the time in SemEval, 91.3% in GoEmotions, 89.9% in MFRC, and 56.8% in Boxes. Note that, as shown in Figure 3, the second-ranked label is not predicted immediately after a large percentage of the time, resulting overall in large inconsistencies between the probabilities and the predictions of LLMs.
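The consistency check described above can be sketched as a small post-processing routine over logged model outputs. The data structures here (per-step probability rankings and the model's own label sequence) are hypothetical stand-ins for what an actual generation log would contain:

```python
def rank2_followup_rate(ranked_per_step, predictions):
    """Fraction of generation steps at which the second-ranked label
    appears anywhere in the model's later predictions.

    `ranked_per_step[t]` holds the candidate labels sorted by probability
    at step t; `predictions` is the sequence the model actually emitted.
    Steps after which the model stops predicting are skipped, matching
    the evaluation protocol in D.3.
    """
    hits = []
    for step, ranked in enumerate(ranked_per_step):
        later = predictions[step + 1:]
        if not later:                 # model stopped predicting; skip
            continue
        hits.append(ranked[1] in later)
    return sum(hits) / len(hits) if hits else float("nan")

# Toy log: at step 0 the runner-up ("anger") is indeed predicted next;
# at step 1 generation stops, so that step is skipped.
rate = rank2_followup_rate(
    ranked_per_step=[["joy", "anger", "fear"], ["anger", "fear", "joy"]],
    predictions=["joy", "anger"],
)
assert rate == 1.0
```

One minus this rate corresponds to the percentages reported in Table 4.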
In Figure 10, we study in more detail the consistency of the second-highest probability label, excluding the instances where it was not predicted at all, and show the histograms for each generation step. We find that increasing the model size improves the rate at which that label is predicted right after
it is ranked second, as Llama3 70B Instruct predicts the label with the second-highest probability as the second label 65% of the time, compared to approximately 50% of the time with 8B Instruct. This indicates that with scale, the relative ordering of labels improves.

D.4 Alignment of Llama3 8B

We present results for the alignment of Llama3 8B in addition to the 70B results presented in the main text. Results can be seen in Table 5. Our takeaways are virtually identical to 70B, so we refrain from repeating the analysis.

Figure 8: Top two probabilities at each generation step r (up to two for brevity) when the last label is generated, or when a middle label is generated. Shown are four datasets, one per row. In each row, the bottom subfigure shows the top probability, and the top subfigure the second-highest probability, in addition to the probability of the label that was actually predicted next at the current step (r+1 pred), and the probability at the next generation step of the second-highest probability (mid @ r+1). Also shown is the percentage of cases where the second-highest probability label at r and the prediction at r+1 were the same. Instances with only one shown generation step predicted only up to two labels.

             8B Base  8B Instruct  8B SFT  70B Base  70B Instruct  70B SFT
SemEval        88.1      85.3       90.4     78.4       78.8        82.8
GoEmotions     99.3      95.4       91.3     92.9       93.4        96.7
MFRC          100        99.7       94.7     94.0       96.4        89.9
Boxes          86.1      70.8        -       72.4       56.8         -

Table 4: Percentage (%) of cases where the second-highest label in probability was not predicted at all at any subsequent step when it was not predicted immediately afterward, despite the model predicting at least one more label.
Figure 9: Entropies of prediction distributions at each generation step r when the last label is generated, or when a middle label is generated. Shown for SemEval 2018 Task 1, GoEmotions, MFRC, and Boxes, across the Base, Instruct, and SFT variants of Llama3 8B and 70B.

                           Single-Label Datasets                      Multi-Label Datasets
                        Hatexplain         MSPPodcast          GoEmotions           MFRC
                      NLL↓  L1↓  F1↑     NLL↓  L1↓  F1↑     NLL↓   L1↓  F1↑     NLL↓   L1↓  F1↑
Baseline
 Compare-to-None      0.97  0.97 0.42    1.59  1.34 0.31    33.58  5.42 0.21    20.23  4.82 0.23
 Hard Predictions    12.63  1.17 0.42   13.55  1.44 0.31    27.47  1.49 0.32    40.79  2.21 0.26
Test-Time
 Unary Breakdown      0.98  1.01 0.35    1.62  1.48 0.12     4.99  3.21 0.29     5.29  3.03 0.22
 Binary Breakdown     0.99  1.01 0.23    1.61  1.48 0.17     4.84  3.18 0.23     8.33  3.83 0.23
 Max-Over-Generations  N/A   N/A  N/A     N/A   N/A  N/A     3.00  1.44 0.34     2.87  1.58 0.39
Supervised
 BERT                 2.69  0.73 0.66    4.29  1.27 0.38     2.72  0.63 0.64     3.00  0.43 0.82
 Linear Probing        N/S   N/S  N/S     N/S   N/S  N/S     2.57  0.70 0.57     2.49  0.39 0.83
 SFT Outputs           N/S   N/S  N/S     N/S   N/S  N/S    14.76  0.80 0.58    10.45  0.57 0.69
 SFT Max-Over-Gen.     N/A   N/A  N/A     N/A   N/A  N/A     4.15  0.72 0.57     4.87  0.54 0.73

Table 5: Distribution alignment scores for Llama 3 8B on single- and multi-label datasets between LLM and human distributions. F1↑ is the example-F1 score. N/S: not supplied, to avoid clutter.

D.5 Effect of Finetuning on Distribution Alignment

Previous research into LLM calibration has found that RLHF (Ouyang et al., 2022) can make models more overconfident in their predictions (Leng et al., 2025; Xie et al., 2024; Zhu et al., 2023).
In Figure 11, we compare the F1 and NLL of Llama-2-70B (base model) and Llama-2-70B-chat (instruction-tuned) for several distribution methods. As expected, the finetuned model generally achieves higher F1 than the base model; however, the NLL for the compare-to-none and max methods (the two methods that directly examine the label probabilities) is lower for the base model. This corroborates the aforementioned findings that the model gets more confident when finetuned, since NLL punishes highly confident wrong answers more than it rewards higher confidence on correct answers. The similar NLL on unary and binary breakdowns demonstrates that these two methods are relatively robust to different levels of confidence.

D.6 Attention to Input vs Labels

We present the average attention to tokens in the prompt for models when they generate the second or a later label. We intend to examine how much the models attend to the previously generated labels, establishing empirically the intuition that, because of language modeling, the answers of the model deviate from whatever can be gauged from the first generated label token distribution.

             GoEmotions              MFRC                  SemEval               Boxes
Model        Input Label 1st Tok.   Input Label 1st Tok.  Input Label 1st Tok.  Input Label 1st Tok.
8B Base      0.242 2.04  3.62       0.132 2.01  3.29      0.162 1.76  3.00      0.095 3.11  3.06
8B Instruct  0.242 2.08  3.48       0.242 2.08  3.48      0.163 1.74  2.84      0.094 2.92  2.69

Table 6: Average percentage (%) attention to Input and Label tokens. We also show the average attention to the 1st Tokens of the labels only, avoiding formatting tokens and the rest of the generated tokens.

Figure 10: Comparing whether the label probability distribution created while generating the first label is indicative of what the model will actually predict for multi-label generations on MFRC, for Llama-3.1-8B (top) and Llama-3.3-70B (bottom). The first index value is not shown as this corresponds to the actual first label being generated.

Table 6 shows that, on average, an order of magnitude higher weights are found in the label part of the prompt compared to the input (which also includes other labels because of the demonstrations). Attending to the format of the response is a plausible confounder, so we also check the attention specifically to the first label tokens. This suggests that, indeed, subsequent labels are conditioned on the previous generations.
We note that even though average attention is lower on the input, cumulative attention is still greater, with approximately an 80%/20% split in favor of the input, which is usually an order of magnitude or more longer than the labels themselves, again suggesting that a lot of attention weight is accumulated on the generated labels.

Figure 11: Comparing the average example-F1 (top) and Negative Log Likelihood (bottom) between the base Llama-2-70B model and the instruction-finetuned Llama-2-70B-chat model, averaged over MFRC and GoEmotions.
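The cumulative versus per-token attention distinction above can be sketched as a small aggregation routine. The attention rows and span indices below are illustrative stand-ins for what a real forward pass with attention outputs enabled would produce:

```python
def attention_shares(attn_rows, spans):
    """Cumulative and per-token attention mass over prompt spans.

    `attn_rows`: one attention distribution over prompt positions per
    generated token; `spans` maps a region name ("input", "label") to
    its prompt position indices.
    """
    out = {}
    for name, idxs in spans.items():
        # Average total mass on this span across generated tokens.
        mass = sum(sum(row[i] for i in idxs) for row in attn_rows) / len(attn_rows)
        out[name] = {"cumulative": mass, "per_token": mass / len(idxs)}
    return out

# Toy prompt: positions 0-8 are input tokens, position 9 is a
# previously generated label token.
rows = [[0.08] * 9 + [0.28]]
shares = attention_shares(rows, {"input": list(range(9)), "label": [9]})
# Cumulative attention favors the much longer input span, while the
# per-token average is far higher on the label token.
assert shares["input"]["cumulative"] > shares["label"]["cumulative"]
assert shares["label"]["per_token"] > shares["input"]["per_token"]
```

This mirrors the observation above: the input wins on cumulative mass simply because it is longer, while per-token weight concentrates on the generated labels.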
arXiv:2505.17512v1 [cs.AI] 23 May 2025

Probe by Gaming: A Game-based Benchmark for Assessing Conceptual Knowledge in LLMs

Shuhang Xu♢, Weijian Deng♣, Yixuan Zhou♠, Fangwei Zhong♢
♢Beijing Normal University ♣Australian National University ♠Beijing 101 Education Group
Correspondence to fangweizhong@bnu.edu.cn

Abstract

Concepts represent generalized abstractions that enable humans to categorize and reason efficiently, yet it is unclear to what extent Large Language Models (LLMs) comprehend these semantic relationships. Existing benchmarks typically focus on factual recall and isolated tasks, failing to evaluate the ability of LLMs to understand conceptual boundaries. To address this gap, we introduce CK-Arena, a multi-agent interaction game built upon the Undercover game, designed to evaluate the capacity of LLMs to reason with concepts in interactive settings. CK-Arena challenges models to describe, differentiate, and infer conceptual boundaries based on partial information, encouraging models to explore commonalities and distinctions between closely related concepts. By simulating real-world interaction, CK-Arena provides a scalable and realistic benchmark for assessing conceptual reasoning in dynamic environments. Experimental results show that LLMs' understanding of conceptual knowledge varies significantly across different categories and is not strictly aligned with parameter size or general model capabilities. The data and code are available at the project homepage: https://ck-arena.site.

1 Introduction

As Large Language Models (LLMs) become integral to complex reasoning tasks, the demand is shifting from mere sequence prediction to a deeper grasp of conceptual structures and their related characteristics in the real world [1, 2, 3, 4]. A concept represents a generalized abstraction that encapsulates shared properties of entities, enabling humans to categorize and reason efficiently [5, 6, 7, 8, 9, 10].
For example, the concept Primates groups animals like monkeys and apes based on shared characteristics such as opposable thumbs, forward-facing eyes, and high cognitive abilities. While human cognition naturally leverages such conceptual structures for reasoning and adaptation, it remains unclear to what extent LLMs capture and utilize these abstractions. Current evaluations primarily focus on surface-level predictions, offering limited insight into whether LLMs truly understand concepts as structured semantic entities.

Preprint. Under review.

Traditional benchmarks for LLM evaluation have contributed to improvements in model performance [11, 12, 13, 14], but they exhibit significant limitations. These benchmarks primarily assess token-level accuracy and factual recall through static question-answer formats, often breaking knowledge down into isolated questions. This fragmented evaluation approach captures surface-level information retrieval but fails to probe the inherent connections and boundaries between concepts. For example, a model may correctly identify that monkeys and apes belong to Primates, yet this does not indicate any understanding of the structural relationships or distinctive features that separate these groups within the broader taxonomy. Furthermore, as LLMs evolve towards more autonomous and interactive roles, traditional methods such as multiple-choice and true/false questions struggle to reflect their capabilities in complex and dynamic environments. The reliance on fixed datasets
https://arxiv.org/abs/2505.17512v1
also limits scalability, as creating, maintaining, and updating these benchmarks is time-consuming and labor-intensive. This rigidity makes it difficult to adapt benchmarks to new concepts or evaluate models in evolving real-world scenarios.

Figure 1: Conceptual knowledge arena (CK-Arena). A benchmark designed to evaluate the ability of Large Language Models (LLMs) to understand and reason with conceptual knowledge boundaries. Built upon the interactive game Undercover, CK-Arena challenges LLMs to take on roles as players and judges, navigating concept pairs that share both commonalities and unique distinctions. Through multi-agent interaction, LLMs generate descriptive statements, reason about semantic similarities and differences, and make strategic decisions based on partial information. Judges evaluate these interactions based on metrics such as novelty, relevance, and reasonableness, providing insights into the LLMs' conceptual reasoning capabilities in realistic, dynamic environments.

In this context, recent work has explored concept-based processing in areas such as conceptual design generation [15], concept editing [9], and abstract concept understanding [16, 17]. However, despite these advances, there is still a lack of systematic benchmarks to evaluate conceptual processing capabilities. A well-designed benchmark is crucial to provide a standardized approach for evaluating LLMs on concept-based tasks, allowing effective measurement, comparison, and improvement of these models in concept comprehension and knowledge application.
Simultaneously, interactive game-based environments have gained traction as novel evaluation paradigms to overcome the static nature of traditional benchmarks [18, 19, 20]. Unlike static question-answer formats, game-based evaluations create richer contexts for multi-step reasoning and decision-making. However, most game simulations primarily assess strategic reasoning, offering limited insight into the internal knowledge of models and their ability to convey structured concepts in dynamic multi-agent interactions.

To address the limitations of traditional benchmarks in evaluating conceptual understanding, we propose the Conceptual Knowledge Arena (CK-Arena), a multi-agent interaction game benchmark inspired by Undercover [21]. Figure 1 illustrates the key aspects involved in this work. Unlike conventional methods that focus on isolated tasks, CK-Arena is designed to assess conceptual reasoning in interactive, multi-agent scenarios. In CK-Arena, participants (LLM-based agents) are assigned one of two similar concepts, representing different identities: civilian or undercover agent. Without knowing others' identities, agents engage in rounds of dialogue to describe their concepts, analyze others' statements, and attempt to identify undercover agents by discerning commonalities and distinctions. CK-Arena introduces structured evaluation mechanisms, including statement-level metrics for novelty, relevance, and reasonableness, as well as player-level metrics such as win rate and survival rate. To accommodate models with varying reasoning capabilities, CK-Arena also includes a game variant called Undercover-Audience, where players focus on shared attributes and audience agents vote based on perceived inconsistencies. This design allows for scalable, flexible evaluation of conceptual reasoning in interactive settings, reflecting LLMs' ability to navigate semantic boundaries and engage in strategic communication.
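The game dynamics described above can be sketched as a simple loop. The agent interface (`describe`, `vote`), the tie-breaking, and the win conditions below are our own illustrative assumptions for exposition, not CK-Arena's actual implementation:

```python
import random

def play_undercover(agents, concept_pair, n_undercover=1, seed=0):
    """Minimal sketch of one Undercover-style game: assign concepts,
    alternate rounds of statements and votes, eliminate the most-voted
    player, and stop when one side wins."""
    rng = random.Random(seed)
    civilian_concept, undercover_concept = concept_pair
    ids = list(range(len(agents)))
    undercover = set(rng.sample(ids, n_undercover))
    concepts = {i: (undercover_concept if i in undercover else civilian_concept)
                for i in ids}
    alive, history = set(ids), []
    while True:
        for i in sorted(alive):  # each surviving player makes one statement
            history.append((i, agents[i].describe(concepts[i], history)))
        # Each player votes for whoever they believe is undercover.
        votes = [agents[i].vote(sorted(alive - {i}), history) for i in sorted(alive)]
        alive.discard(max(set(votes), key=votes.count))  # eliminate most-voted
        if not (undercover & alive):
            return "civilians"    # all undercover agents eliminated
        if len(alive) <= 2 * len(undercover & alive):
            return "undercover"   # undercover agents reach parity

class Dummy:
    """Trivial agent used only to exercise the loop."""
    def describe(self, concept, history):
        return f"something about {concept}"
    def vote(self, candidates, history):
        return candidates[0]

result = play_undercover([Dummy() for _ in range(4)], ("soccer", "basketball"))
assert result in {"civilians", "undercover"}
```

In CK-Arena the `describe` and `vote` calls would be LLM prompts, and judge agents would additionally score each statement for novelty, relevance, and reasonableness.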
Overall, our contributions are as follows: 1) A Game-based Conceptual Reasoning Benchmark: We introduce CK-Arena, a benchmark built upon the Undercover