we specifically selected CE, CU, and PQ as they effectively reflect the overall quality and naturalness of our dataset. (We exclude PC from our analysis as it primarily measures the number of audio components, which is less relevant for our test samples, where each audio clip contains only single-speaker utterances.)

2.4.1. Cross-dataset Comparison of Absolute Audio Quality
We evaluate 2,000 random samples from each comparable category across LLM-Dys, VCTK-token, VCTK, LibriTTS, and LibriStutter [17]. Results show that LLM-Dys achieves superior performance on almost all metrics compared to both fluent (VCTK, LibriTTS) and dysfluent speech datasets, as shown in Fig. 2.

2.4.2. Analysis of Speech Quality Before and After Dysfluency
To further validate our methodology, we conduct a comparative analysis of speech quality metrics. We synthesize both the clean text and its corresponding dysfluent version using TTS, then compare their metrics to calculate the degradation rates. As shown in Fig. 3, LLM-Dys achieves better metric preservation, and even slight improvements in certain categories, demonstrating its superior performance over rule-based approaches in maintaining speech naturalness and quality while introducing dysfluencies.

3. Token-based Dysfluency Detection
We follow [23] in treating dysfluency detection as a token-based recognition problem and adopt Whisper-large-v3-turbo [32] as our base model. We divide dysfluency detection into word and phoneme levels. Based on the dysfluency types annotated in the SEP-28k dataset, at the word level we train for insertion, pause, and repetition using the word_ins, word_pau, and word_rep subsets of LLM-Dys with a 1:1:1 ratio. During training, we incorporate a proportion of the VCTK dataset, which is explained in Section 4.3.3. At the phoneme level, we train for prolongation, pause, and repetition using the phn_pro, phn_pau, and phn_rep subsets of LLM-Dys with a 1:1:1 ratio. Notably, we observe that the SEP-28k dataset contains relatively few samples of phoneme-level pause dysfluency (referring to pauses occurring within words, such as "dys...dysfluency"). Therefore, during training and testing, we supplement phoneme-level pause samples with some word-level pause samples (phoneme:word = 3:7, with all annotations at the phoneme level) to balance the different types of dysfluency. Additionally, we incorporate a proportion of the VCTK dataset as well.

4. Experiments

4.1. Datasets
1) LLM-Dys: Our synthetic dataset contains 11 dysfluency types, totaling 12,790 hours. Details in Section 2. 2) SEP-28k [16]: Real-world dataset with 28,000 clips, labeled with blocks, prolongations, sound/word repetitions, and interjections. Due to poor segmentation quality, we created a test set by manually annotating 200 samples each for word- and phoneme-level evaluation, maintaining the same distribution of dysfluency types as in the original dataset. Annotations follow our model's output format, with dysfluency tokens added to clean text. 3) UCLASS [15]: Speech recordings from 128 stuttering children and adults. We randomly segmented 200 samples from this dataset, 80 for fine-tuning and 120 for testing, labeled binary (fluent/dysfluent) with a 1:1 ratio as in [1]. 4) VCTK [24]: Natural speech corpus from 110 English speakers with diverse accents. 5) LibriTTS [21]: High-quality synthetic dataset derived from LibriSpeech.

4.2. Metrics
1) Recall: ratio of correctly identified to total actual dysfluencies. 2) Precision: ratio of correctly identified to total predicted dysfluencies. 3) F1-score: harmonic mean of precision and recall. 4) Accuracy (Acc): the model's performance in identifying fluent speech and dysfluency types. 5) Token Error Rate (TER): transcription accuracy compared to the reference text, similar to WER. 6) Token Distance (TD): token-level displacement between predicted and actual dysfluency positions.
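As a concrete reading of the TER and TD definitions above, here is a minimal Python sketch. The paper does not specify the alignment procedure, so Levenshtein distance for TER and nearest-position matching for TD are assumptions, as is the hypothetical `<rep>` dysfluency-token name.

```python
# Hypothetical sketch of TER and TD; edit distance and nearest-position
# matching are assumptions, not the paper's exact implementation.
def token_error_rate(ref: list[str], hyp: list[str]) -> float:
    """Edit distance between token sequences, normalized by reference length (WER-style)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def token_distance(ref: list[str], hyp: list[str], dys_token: str) -> float:
    """Mean positional offset between predicted and reference dysfluency tokens."""
    ref_pos = [i for i, t in enumerate(ref) if t == dys_token]
    hyp_pos = [i for i, t in enumerate(hyp) if t == dys_token]
    if not ref_pos or not hyp_pos:
        return float(len(ref_pos) != len(hyp_pos))  # assumed penalty for count mismatch
    return sum(min(abs(r - h) for h in hyp_pos) for r in ref_pos) / len(ref_pos)

ref = "the cat <rep> cat sat".split()
hyp = "the cat cat <rep> sat".split()
print(token_error_rate(ref, hyp), token_distance(ref, hyp, "<rep>"))  # 0.4, 1.0
```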
4.3. Results

4.3.1. Evaluation on LLM-Dys
We test on 300 unique utterances per dysfluency type from the testing set. As shown in Table 2, the model achieves high performance despite limited training data (4,000 samples/type), likely due to consistent patterns in LLM-generated dysfluencies and our standardized TTS pipeline.

Table 2: Metrics on LLM-Dys

                                     Word Level             Phoneme Level
Model                    Metric      Ins    Rep    Pau      Pau    Rep    Pro
Ours (3x4000 samples)    Recall      0.99   0.99   1.0      0.99   1.0    0.99
                         Precision   1.0    1.0    0.99     1.0    1.0    1.0
                         F1-score    0.99   0.99   1.0      0.99   1.0    0.99
                         TER (%, dn) 4.63   2.52   2.54     0.78   1.04   0.72
                         TD (dn)     0.76   0.22   0.52     0.18   0.33   0.10

4.3.2. Evaluation on SEP-28k and UCLASS
For SEP-28k, we conduct zero-shot evaluation (which inherently puts our model at a disadvantage compared to [11]) and still achieve state-of-the-art results, as shown in Table 3 and Table 4. Since the original SEP-28k annotations only contain block labels (without distinguishing between word level and phoneme level), we apply the block-level scores reported in [11] to both word-level and phoneme-level metrics. For UCLASS, we freeze the LLM-Dys fine-tuned Whisper encoder and add a classification head with three FC layers (512 -> 256 -> 2) for binary fluency detection. Fine-tuned with balanced samples [1], our model achieves SOTA performance using only 80 training clips, as shown in Table 5.

Table 3: Precision, Recall and F1-score on SEP-28k

                                     Word Level             Phoneme Level
Model                    Metric      Ins    Rep    Pau      Pau    Rep    Pro
Ours (3x4000 samples)    Recall      0.87   0.91   0.71     0.75   0.90   0.85
                         Precision   0.95   0.52   0.89     0.75   0.92   0.97
                         F1-score    0.91   0.67   0.79     0.75   0.91   0.69
Ours (3x12000 samples)   Recall      0.91   0.79   0.71     0.71   0.90   0.80
                         Precision   0.86   0.59   1.00     0.74   0.96   0.97
                         F1-score    0.89   0.68   0.83     0.72   0.93   0.88
Wagner et al. [11]       F1-score    0.77   0.64   0.62     0.62   0.54   0.56
Yolo-Stutter [14]        Recall      -      0.82   0.72     0.72   -      0.89

Table 4: Token Error Rate and Token Distance on SEP-28k

                                     Word Level             Phoneme Level
Model                    Metric      Ins     Rep     Pau    Pau    Rep    Pro
Ours (3x4000 samples)    TER (%, dn) 24.90   22.32   16.27  7.12   11.68  11.05
                         TD (dn)     1.17    0.27    1.59   1.06   1.50   1.38
Ours (3x12000 samples)   TER (%, dn) 23.10   19.84   15.88  9.89   12.17  11.55
                         TD (dn)     0.75    0.47    1.24   1.76   1.50   1.28

Table 5: Accuracy, Precision, Recall, and F1-score on UCLASS

Model                    Level     Accuracy  Precision  Recall  F1-score
Ours (3x12000 samples)   Word      0.971     1.000      0.954   0.977
                         Phoneme   0.958     0.938      1.000   0.968
StutterNet [1]           -         0.938     0.931      0.933   0.932
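The UCLASS classifier described in Section 4.3.2 is compact enough to sketch directly. Below is a minimal, hypothetical PyTorch version: the frozen encoder, the three FC layers (512 -> 256 -> 2), and the binary output follow the text, while the mean pooling over encoder frames, the 1280-dim Whisper-large hidden size, and the HuggingFace-style encoder interface are assumptions.

```python
# Minimal sketch of the UCLASS binary fluency classifier, under the
# assumptions stated above; not the authors' exact implementation.
import torch
import torch.nn as nn

class FluencyClassifier(nn.Module):
    def __init__(self, encoder, enc_dim: int = 1280):  # 1280: Whisper-large hidden size
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # freeze the LLM-Dys fine-tuned encoder
        self.head = nn.Sequential(
            nn.Linear(enc_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 2),  # fluent vs. dysfluent
        )

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            h = self.encoder(mel).last_hidden_state  # (B, T, enc_dim), HF-style output
        return self.head(h.mean(dim=1))  # mean-pool frames, then classify
```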
Figure 5: Left: Impact of dataset size on dysfluency detection performance. Right: Impact of fluent-to-disfluent speech ratio on model performance (P = phoneme-level, W = word-level, Libri = LibriTTS, Acc = Accuracy, Pre = Precision, Rec = Recall, wF1 = weighted F1 score computed based on dysfluency type frequencies).
4.3.3. Scaling Law
Our scaling experiments reveal that the model's performance, as measured by F1 scores, reaches a substantial level with a dataset size of 3x4000 samples. Further expansion to 3x12000 samples yields only marginal improvements, after which performance plateaus or slightly declines, as illustrated in Fig. 5. These findings suggest an optimal dataset size threshold for efficient training in dysfluency detection tasks.

4.3.4. Impact of Fluent-to-Disfluent Speech Ratio
Training solely on LLM-Dys reduces fluent speech detection accuracy. Our analysis reveals that the model achieves optimal performance when the fluent-to-disfluent speech ratio is approximately 0.05 under the dysfluency distribution of SEP-28k, as illustrated in Fig. 5.

5. Conclusion and Future Work
We introduce LLM-Dys, a large-scale dysfluency dataset spanning 11 categories and 12,790 hours. Our method generates higher-quality synthetic speech than rule-based baselines while preserving dysfluency authenticity, and achieves state-of-the-art performance on real-world dysfluency detection. Experiments reveal optimal dataset sizes and the importance of balanced fluency ratios during training. Future directions include expanding speaking styles, emotional contexts, and cross-lingual coverage, and integrating articulatory priors [33, 34, 35, 36] for improved simulation and detection.

6. Acknowledgements
We thank the UC Noyce Initiative, the Society of Hellman Fellows, NIH/NIDCD, and the Schwab Innovation Fund for their support.

7. References
[1] M. Abubakar, M. Mujahid, K. Kanwal, S. Iqbal, N. Asghar, and A. Alaulamie, "Stutternet: Stuttering disfluencies detection in synthetic speech signals via mel frequency cepstral coefficients features using deep learning," IEEE Access, 2024.
[2] Y.-J. Shih, Z. Gkalitsiou, A. G. Dimakis, and D. Harwath, "Self-supervised speech models for word-level stuttered speech detection," in 2024 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2024, pp. 937–944.
[3] L. Barrett, J. Hu, and P. Howell, "Systematic review of machine learning approaches for detecting developmental stuttering," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 30, pp. 1160–1172, 2022.
[4] M. Jouaiti and K. Dautenhahn, "Dysfluency classification in stuttered speech using deep learning for real-time applications," in ICASSP. IEEE, 2022, pp. 6482–6486.
[5] S. P. Bayerl, D. Wagner, E. Nöth, and K. Riedhammer, "Detecting dysfluencies in stuttering therapy using wav2vec 2.0," Interspeech, 2022.
[6] V. Zayats, M. Ostendorf, and H. Hajishirzi, "Disfluency detection using a bidirectional lstm," arXiv preprint arXiv:1604.03209, 2016.
[7] S. Alharbi, A. J. Simons, S. Brumfitt, and P. D. Green, "Automatic recognition of children's read speech for stuttering application," in 6th Workshop on Child Computer Interaction, 2017, pp. 1–6.
[8] S. Alharbi, M. Hasan, A. J. Simons, S. Brumfitt, and P. Green, "Sequence labeling to detect stuttering events in read speech," Computer Speech & Language, vol. 62, p. 101052, 2020.
[9] J. Harvill, M. Hasegawa-Johnson, and C. Yoo, "Frame-level stutter detection," in Interspeech, 2022.
[10] O. Shonibare, X. Tong, and V. Ravichandran, "Enhancing asr for stuttered speech with limited data using detect and pass," arXiv preprint arXiv:2202.05396, 2022.
[11] D. Wagner, S. P. Bayerl, I. Baumann, K. Riedhammer, E. Nöth, and T. Bocklet, "Large language models for dysfluency detection in stuttered speech," Interspeech, 2024.
[12] J. Lian, C. Feng, N. Farooqi, S. Li, A. Kashyap, C. J. Cho, P. Wu, R. Netzorg, T. Li, and G. K. Anumanchipalli, "Unconstrained dysfluency modeling for dysfluent speech transcription and detection," in 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2023, pp. 1–8.
[13] J. Lian and G. Anumanchipalli, "Towards hierarchical spoken language disfluency modeling," in Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, Mar. 2024, pp. 539–551.
[14] X. Zhou, A. Kashyap, S. Li, A. Sharma, B. Morin, D. Baquirin, J. Vonk, Z. Ezzes, Z. Miller, M. Tempini, J. Lian, and G. Anumanchipalli, "Yolo-stutter: End-to-end region-wise speech dysfluency detection," in Interspeech 2024, 2024, pp. 937–941.
[15] P. Howell, S. Davis, and J. Bartrip, "The university college london archive of stuttered speech (uclass)," 2009.
[16] C. Lea, V. Mitra, A. Joshi, S. Kajarekar, and J. P. Bigham, "Sep-28k: A dataset for stuttering event detection from podcasts with people who stutter," in ICASSP, 2021.
[17] T. Kourkounakis, A. Hajavi, and A. Etemad, "Fluentnet: End-to-end detection of stuttered speech disfluencies with deep learning," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 2986–2999, 2021.
[18] J. Kim, J. Kong, and J. Son, "Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech," in International Conference on Machine Learning, 2021.
[19] X. Zhou, C. J. Cho, A. Sharma, B. Morin, D. Baquirin, J. Vonk, Z. Ezzes, Z. Miller, B. L. Tee, M. L. Gorno-Tempini et al., "Stutter-solver: End-to-end multi-lingual dysfluency detection," in 2024 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2024, pp. 1039–1046.
[20] J. Lian, X. Zhou, Z. Ezzes, J. Vonk, B. Morin, D. P. Baquirin, Z. Miller, M. L. Gorno Tempini, and G. Anumanchipalli, "Ssdm: Scalable speech dysfluency modeling," in Advances in Neural Information Processing Systems, vol. 37, 2024.
[21] H. Zen, V. Dang, R. Clark, Y. Zhang, R. J. Weiss, Y. Jia, Z. Chen, and Y. Wu, "Libritts: A corpus derived from librispeech for text-to-speech," arXiv preprint arXiv:1904.02882, 2019.
[22] J. Lian, X. Zhou, Z. Ezzes, J. Vonk, B. Morin, D. Baquirin, Z. Miller, M. L. G. Tempini, and G. K. Anumanchipalli, "Ssdm 2.0: Time-accurate speech rich transcription with non-fluencies," arXiv preprint arXiv:2412.00265, 2024.
[23] X. Zhou, J. Lian, C. J. Cho, J. Liu, Z. Ye, J. Zhang, B. Morin, D. Baquirin, J. Vonk, Z. Ezzes, Z. Miller, M. L. G. Tempini, and G. Anumanchipalli, "Time and tokens: Benchmarking end-to-end speech dysfluency detection," 2024. [Online]. Available: https://arxiv.org/abs/2409.13582
[24] J. Yamagishi, C. Veaux, K. MacDonald et al., "Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit (version 0.92)," University of Edinburgh, The Centre for Speech Technology Research (CSTR), pp. 271–350, 2019.
[25] L. Wagner, B. Thallinger, and M. Zusag, "Crisperwhisper: Accurate timestamps on verbatim speech transcriptions," Interspeech, 2024.
[26] A. Tjandra, Y.-C. Wu, B. Guo, J. Hoffman, B. Ellis, A. Vyas, B. Shi, S. Chen, M. Le, N. Zacharov et al., "Meta audiobox aesthetics: Unified automatic quality assessment for speech, music, and sound," arXiv preprint arXiv:2502.05139, 2025.
[27] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, "Scaling laws for neural language models," arXiv preprint arXiv:2001.08361, 2020.
[28] Anthropic, "Introducing the next generation of claude," 2024. [Online]. Available: https://www.anthropic.com/news/claude-3-family
[29] M. Bernard and H. Titeux, "Phonemizer: Text to phones transcription for multiple languages in python," Journal of Open Source Software, vol. 6, no. 68, p. 3958, 2021. [Online]. Available: https://doi.org/10.21105/joss.03958
[30] Y. Chen, Z. Niu, Z. Ma, K. Deng, C. Wang, J. Zhao, K. Yu, and X. Chen, "F5-tts: A fairytaler that fakes fluent and faithful speech with flow matching," arXiv preprint arXiv:2410.06885, 2024.
[31] S. E. Eskimez, X. Wang, M. Thakker, C. Li, C.-H. Tsai, Z. Xiao, H. Yang, Z. Zhu, M. Tang, X. Tan, Y. Liu, S. Zhao, and N. Kanda, "E2 tts: Embarrassingly easy fully non-autoregressive zero-shot tts," in 2024 IEEE Spoken Language Technology Workshop (SLT), 2024, pp. 682–689.
[32] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, "Robust speech recognition via large-scale weak supervision," in International Conference on Machine Learning. PMLR, 2023, pp. 28492–28518.
[33] C. J. Cho, P. Wu, T. S. Prabhune, D. Agarwal, and G. K. Anumanchipalli, "Coding speech through vocal tract kinematics," in IEEE JSTSP, 2025.
[34] P. Wu, T. Li, Y. Lu, Y. Zhang, J. Lian, A. W. Black, L. Goldstein, S. Watanabe, and G. K. Anumanchipalli, "Deep speech synthesis from MRI-based articulatory representations," in Proc. INTERSPEECH 2023, 2023, pp. 5132–5136.
[35] J. Lian, A. W. Black, L. Goldstein, and G. K. Anumanchipalli, "Deep neural convolutive matrix factorization for articulatory representation decomposition," in Proc. Interspeech 2022, 2022, pp. 4686–4690.
[36] J. Lian, A. W. Black, Y. Lu, L. Goldstein, S. Watanabe, and G. K. Anumanchipalli, "Articulatory representation learning via joint factor analysis and neural matrix factorization," in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
arXiv:2505.22038v1 [cs.CV] 28 May 2025

Balanced Token Pruning: Accelerating Vision Language Models Beyond Local Optimization

Kaiyuan Li1*, Xiaoyue Chen1*, Chen Gao2, Yong Li2, Xinlei Chen1
1Tsinghua Shenzhen International Graduate School  2Tsinghua University
{likaiyua23,chenxiao24}@mails.tsinghua.edu.cn
{chgao96,liyong07}@tsinghua.edu.cn, chen.xinlei@sz.tsinghua.edu.cn

Abstract
Large Vision-Language Models (LVLMs) have shown impressive performance across multi-modal tasks by encoding images into thousands of tokens. However, the large number of image tokens results in significant computational overhead, and the use of dynamic high-resolution inputs further increases this burden. Previous approaches have attempted to reduce the number of image tokens through token pruning, typically by selecting tokens based on attention scores or image token diversity. Through empirical studies, we observe that existing methods often overlook the joint impact of pruning on both the current layer's output (local) and the outputs of subsequent layers (global), leading to suboptimal pruning decisions. To address this challenge, we propose Balanced Token Pruning (BTP), a plug-and-play method for pruning vision tokens. Specifically, our method utilizes a small calibration set to divide the pruning process into multiple stages. In the early stages, our method emphasizes the impact of pruning on subsequent layers, whereas in the deeper stages, the focus shifts toward preserving the consistency of local outputs. Extensive experiments across various LVLMs demonstrate the broad effectiveness of our approach on multiple benchmarks. Our method achieves a 78% compression rate while preserving 96.7% of the original models' performance on average.

1 Introduction
Recent advances in Large Vision-Language Models (LVLMs) [6,11,20,22,32] have substantially improved visual understanding. These models typically employ a visual encoder to convert images into discrete tokens, which are then processed jointly with textual tokens by a large language model backbone. The incorporation of visual information significantly increases the total number of input tokens [2,21], a problem further amplified when handling high-resolution images. In scenarios with constrained computational resources and limited memory capacity, the large volume of visual tokens leads to considerable increases in inference latency and memory usage, thereby limiting the practical applicability of LVLMs in resource-limited environments.

Prior studies have demonstrated that visual tokens often exhibit significant redundancy [5,19]. Consequently, visual token pruning has been proposed as an effective strategy to reduce input redundancy and enhance computational efficiency [34,29,12,38,35]. Visual token pruning faces two fundamental challenges: identifying the most important visual tokens and determining the appropriate layers for pruning. Existing token pruning strategies can be broadly classified into two categories: attention-based methods that leverage text-image interactions [5,33], and diversity-based methods that exploit the heterogeneity of visual representations [1]. However, the impact of their distinct optimization objectives on overall model performance remains underexplored, and a systematic comparison between them is largely absent.

* Equal contribution.
Preprint. Under review.
Moreover, when it comes to pruning layer selection, existing methods rely heavily on validation performance and manually defined settings, lacking principled guidance based on the model's intrinsic properties.

Figure 1: Layer-wise visualization of attention in LVLMs.

To address these problems, we first explore the nature of image token pruning from an intuitive perspective: its impact on the current layer's (local) output and its influence on the outputs of subsequent pruning layers (global).
We begin by visualizing the spatial distribution of image tokens that receive higher attention from text tokens across different layers. As shown in Figure 1, we observe that the image tokens attended by text tokens vary across layers. This indicates that pruning based solely on the current layer tends to overlook its impact on subsequent layers. We then further compare the similarity between the hidden state of the last word in the input prompt under two pruning methods and that of the original model.

Figure 2: Impact of different pruning strategies on layer-wise representations.

It can be seen in Figure 2 that attention-based methods preserve output similarity well at early pruning layers, but the error accumulates in deeper layers. In contrast, diversity-based methods do not maintain output similarity at the initial layers, but achieve better consistency in later pruning stages. This implies that attention-based pruning methods focus solely on optimizing the current pruning layer while ignoring their impact on subsequent layers, whereas diversity-based methods overlook the preservation of output quality at the current layer. Motivated by the above observation, we aim to tackle a fundamental challenge: how to prune with joint consideration of the current and subsequent layers to achieve global optimality.

To address this challenge, we propose Balanced Token Pruning (BTP), a visual token pruning method that balances local objectives (current layer) with global objectives (subsequent layers). We begin by analyzing and formulating a local-global objective for image token pruning. Based on this objective, BTP first partitions the pruning process into multiple stages using a small calibration set [27,14], leveraging the way LVLMs process images, as illustrated in Figure 4. In early stages, where more image tokens are retained, BTP emphasizes a diversity-based objective to preserve the quality of downstream representations. In later stages, where fewer tokens are retained, it prioritizes an attention-based objective to maintain the consistency of local outputs. With this design, we preserve token diversity in the early layers while focusing on task-relevant tokens in the later layers.

Extensive experiments demonstrate the effectiveness of our proposed BTP method. We evaluate BTP across models of varying sizes and architectures, consistently achieving superior performance under higher compression ratios. Notably, our approach retains only 22% of the original image tokens on average while preserving 98% of the model's original performance. Furthermore, end-to-end efficiency evaluations confirm that BTP significantly reduces both inference latency and memory usage in practice.

2 Related work
2.1 Large Vision-Language Models
Recent progress in large vision-language models (LVLMs) has been substantially accelerated by the open-sourcing of foundation models like LLaMA [30] and Vicuna [40]. Representative models, including LLaVA [20,21,22], Qwen-VL [2,32], and InternVL [6,11], leverage vision encoders [26,18,7] to encode images into visual tokens, which are then integrated into the language model for unified multimodal representation and understanding. For example, LLaVA-1.5 encodes an image into 576 visual tokens using a single-scale encoder. As these models increasingly support high-resolution visual inputs [2,21], the number of visual tokens grows. Using a multi-resolution encoding strategy, LLaVA-NeXT can generate up to 2,880 tokens per image.
2.2 Visual Token Pruning
Early efforts to reduce visual token redundancy primarily focus on attention-based pruning [4,13,39]. For example, FastV [5] prunes visual tokens with low attention scores after the filtering layer, with subsequent layers processing only the remaining tokens. Another approach, VTW [19], adopts a complete token elimination strategy, removing all visual tokens after a specified layer. PyramidDrop [33] introduces a more sophisticated approach, performing stage-wise pruning throughout the transformer, ranking visual tokens by their attention scores to the instruction token at each stage and progressively discarding the least informative ones. Compared to attention-based methods, diversity-based methods prioritize retaining a richer variety of semantic information. For instance, DivPrune [1] formulates token pruning as a Max-Min Diversity Problem [25,28]. Additionally, some methods, such as LLaVA-PruMerge [29] and VisionZip [34], fuse the pruned tokens into the retained tokens through token fusion. Different from prior methods, our method jointly considers the impact of pruning on both the current layer and subsequent layers.

3 Preliminary
3.1 Visual token processing
In the prefilling stage, images and texts are first encoded into embedding vectors (tokens), which are then processed by the LVLM. We denote the input token sequence as X, which consists of the system prompt X_S, the image tokens X_I, and the text tokens X_T: X = (X_S, X_I, X_T). X is then fed into the LLM backbone composed of N decoder layers. For the l-th decoder layer, we denote the input as X^{(l)}; the layer output X^{(l+1)} is

X^{(l+1)} = X^{(l)} + \mathrm{Atten}^{(l)}(\mathrm{LN}(X^{(l)})) + \mathrm{MLP}^{(l)}(\mathrm{LN}(\mathrm{attn}^{(l)}_{\mathrm{output}} + X^{(l)})),   (1)

where Atten^{(l)} is the attention block, LN is the layer normalization, and MLP^{(l)} is the projector layer. It can be observed that the outputs of the attention block and the MLP block are closely tied to the attention mechanism [31]. Formally, the attention mechanism can be represented as

\mathrm{attn}^{l}_{\mathrm{output}} = \mathrm{Softmax}\left(\frac{Q^{l}(K^{l})^{T} + M}{\sqrt{D_k}}\right) V^{l},   (2)

where Q^l, K^l, and V^l are computed by the query, key, and value projectors, D_k is the hidden state dimension, and M is the causal mask, which imposes a constraint such that each token is permitted to incorporate information only from tokens at earlier positions. K^l and V^l are stored in the KV cache for the subsequent decoding stage.

3.2 Visual token pruning formulations
Prior works on image token pruning can be broadly categorized into attention-based methods [5,33] and diversity-based methods [1]. Attention-based methods use text-image attention scores to select important image tokens at specific layers. For an input sample with m text tokens, we can denote the importance score S_img of the image tokens at the l-th layer as

S^{(l)}_{\mathrm{img}} = \frac{1}{m} \sum_{i=1}^{m} \mathrm{Atten}^{(l)}(X_I, X_T^{(i)}).   (3)

After obtaining the importance scores of the image tokens, these methods select a pruned image token set P_atten ⊂ X_I with the highest scores. In contrast to attention-score-based methods, diversity-based approaches focus on maximizing the diversity among the selected image tokens. These methods are typically based either on the spatial diversity of the selected image token set or on the semantic diversity of the selected tokens. Formally, given a diversity metric F ∈ {F_spa, F_sem}, our goal is to identify a pruned set of image tokens P_div ⊂ X_I that maximizes the objective function

L_{\mathrm{div}} = \max F(P_{\mathrm{div}}).   (4)
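To make Eq. (3) concrete, a minimal sketch of the attention-based selection is shown below; the (heads, seq, seq) attention layout, the averaging over heads, and the index bookkeeping are assumptions rather than a specific model's API.

```python
# Sketch of Eq. (3): average the attention that the m text tokens pay to each
# image token at one layer, then keep the top-k image tokens.
import torch

def attention_importance(attn: torch.Tensor, img_idx: torch.Tensor,
                         txt_idx: torch.Tensor) -> torch.Tensor:
    """attn: (heads, seq, seq) attention weights of one layer for one sample."""
    a = attn.mean(dim=0)         # average over heads (assumed)
    s = a[txt_idx][:, img_idx]   # (m_text, n_image) text -> image scores
    return s.mean(dim=0)         # S_img: one importance score per image token

def select_topk(scores: torch.Tensor, k: int) -> torch.Tensor:
    return torch.topk(scores, k).indices  # indices of retained image tokens
```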
4 Methodology
4.1 Limitations of existing methods
Attention-based methods pursue local optima. We analyze the impact of pruning image tokens on the subsequent text and response tokens. From Equations 1 and 2, we can see that pruning image tokens at the l-th layer mainly affects the layer output X^{(l+1)} by changing the attention output, which is a weighted sum of the value vectors V^l. If the norms of the V^l are similar, selecting image tokens with high importance scores as defined in Equation 3 effectively reduces the difference between the layer output before and after pruning. We provide supporting evidence for this assumption in Appendix A.1. Formally, given the original l-th layer output X^{(l+1)}_origin, the pruned l-th layer output X^{(l+1)}_pruned, and a distance metric function D(·,·), we can define the objective function L_atten of attention-based methods [5,33] as

L_{\mathrm{atten}} = \min_{P} D\big(X^{(l+1)}_{\mathrm{origin}}, X^{(l+1)}_{\mathrm{pruned}}\big).   (5)

However, attention-based methods locally optimize the output error at individual layers. For instance, if pruning is conducted at the l-th and (l+k)-th layers, with P_l and P_{l+k} denoting the respective optimal sets of selected image tokens, then, as shown in Figure 1, P_{l+k} ⊄ P_l. Attention-based selection therefore results in a globally suboptimal pruning strategy.

Diversity-based methods ignore local constraints. The diversity-based approach [1] aims to maximize the diversity of the selected tokens, thereby partially mitigating the issues encountered by attention-based methods, as seen in Figure 1, because diversity-based methods tend to select tokens with maximally different semantic information. However, as observed in Figure 2, diversity-based approaches are ineffective at maintaining local output consistency, which can cause a failure to preserve local output consistency during deep-layer pruning, resulting in degraded performance.

Layer selection for pruning. Current approaches typically rely on manually predefined pruning layers or use a small validation set to select pruning layers based on the observed performance. However, these methods require extensive trial-and-error and dataset-specific calibration. As described in Section 3.1, due to the presence of the causal mask M, the encoding of an image token in the LLM backbone is independent of the input question. Therefore, we aim to determine the pruning layers from the perspective of image token encoding.

4.2 Balanced token pruning with joint local and global objectives
Local-global objective. Based on the above analysis, we argue that an effective token pruning strategy should achieve local optimization by preserving the current layer's output, while also considering the global impact of pruning on subsequent layers. We first formulate a global objective function. Suppose token pruning is performed at layers l1 < l2 < l3. For each pruned layer l ∈ {l1, l2, l3}, we aim to select a subset of tokens P_l such that the difference between the pruned outputs X^{l+1}_pruned and the original outputs X^{l+1}_origin is minimized. To quantify the difference between hidden vectors, we use a unified distance function D(·,·) to measure the discrepancy between the outputs before and after pruning. Our objective is then to minimize the total output discrepancy across all pruned layers:

L_{\mathrm{global}} = \sum_{i=1}^{|l|} D\big(X^{l_i+1}_{\mathrm{origin}}, X^{l_i+1}_{P_{l_i}}\big).   (6)
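The paper leaves D(·,·) generic, so as one plausible instantiation of Eq. (6), here is a sketch that scores pruned layers by cosine distance on the last-token hidden state, mirroring the similarity analysis of Figure 2; both choices are assumptions.

```python
# Sketch of the global objective in Eq. (6) under an assumed distance D:
# cosine distance on the last token's hidden state, summed over pruned layers.
import torch
import torch.nn.functional as F

def layer_discrepancy(h_orig: torch.Tensor, h_pruned: torch.Tensor) -> torch.Tensor:
    """h_*: (B, T, D) hidden states of one layer; compare the last token."""
    return 1.0 - F.cosine_similarity(h_orig[:, -1], h_pruned[:, -1], dim=-1).mean()

def global_objective(orig_states: list[torch.Tensor],
                     pruned_states: list[torch.Tensor]) -> torch.Tensor:
    """Sum of per-layer discrepancies over the pruned layers l1 < l2 < l3."""
    return sum(layer_discrepancy(o, p) for o, p in zip(orig_states, pruned_states))
```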
According to Equation 5, we can obtain the optimal pruned token set P*_l based on attention. However, since the attention distribution varies across input samples and P_{l3} ⊆ P_{l2} ⊆ P_{l1}, it is difficult to predict which tokens will be important for deeper layers (e.g., l2, l3) when pruning at layer l1. To address this issue, we propose to optimize a local-global objective to approximate the optimal token set P*_l. Building upon the local attention-based selection objective, we introduce a diversity term to approximate the token preferences of later layers. Assuming a weight coefficient λ ∈ (0,1), we measure diversity by computing the sum of distances F_dis(·) among elements within a set:

L_{\mathrm{local-global}} = -\sum_{i=1}^{|l|} \Big( \lambda_i \sum_{j \in P_i} \mathrm{Atten}^{(i)}(X_I^{(j)}, X_T) + (1 - \lambda_i)\, F_{\mathrm{dis}}(P_i) \Big).   (7)

The first term of Equation 7 ensures that the output of the pruned layer remains close to the original, while the second term encourages the tokens selected at the earlier layer l1 to also include those important for deeper layers such as l2 and l3.

Figure 3: Overview of BTP: We first use a calibration set to determine the pruning layers. In the early layers, we emphasize diversity-based pruning to preserve the output of subsequent layers. In the deeper layers, attention-based pruning is prioritized to maintain the output of the pruning layers. Through this pruning strategy, we achieve an overall optimal pruning balance.

Balanced token pruning (BTP). Building upon the proposed local-global objective, we introduce our method. As shown in Figure 3, our approach divides token pruning into multiple stages, denoted as S = {s1, ..., sn}. Under a predefined pruning ratio α, each stage retains a fixed fraction of image tokens from the previous stage. Since early pruning stages retain more tokens and influence the pruning decisions of later stages, their objectives need to emphasize token diversity. In contrast, deeper stages preserve fewer tokens and have less impact on subsequent stages. Therefore, we set the hyperparameter λi to gradually increase across stages.

Attention optimization: We optimize the attention objective by selecting the top-k image tokens with the highest importance scores defined in Equation 3. To compute the importance scores efficiently, we use only the last token of the input prompt as X_T, which reduces the computational complexity to O(n). We observe that the attention scores are influenced by positional encoding, which leads to a tendency to favor tokens located toward the end of the sequence. We apply a rebalancing operation to alleviate this influence. Assume that at the l-th layer we aim to prune the image tokens by selecting k indices I_k out of N candidates based on the attention scores A^l. Instead of directly selecting the top-k tokens, we first over-select the top-k' token indices I_{k'}, where k' > k. To mitigate positional bias, we rebalance the selection by first retaining tokens from earlier positions, followed by selecting additional tokens from later positions:

I_{\mathrm{pre}} = I_{k'}[I_{k'} < \tfrac{N}{2}],   (8)
I_{\mathrm{post}} = I_{k'}[I_{k'} \ge \tfrac{N}{2}][:k - |I_{\mathrm{pre}}|],   (9)
I_k = \mathrm{Concat}(I_{\mathrm{pre}}, I_{\mathrm{post}}).   (10)
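Eqs. (8)-(10) translate almost directly into code. The sketch below assumes 0-indexed positions within the image-token block and caps I_pre at the budget k, which the equations leave implicit.

```python
# Direct sketch of the rebalancing step in Eqs. (8)-(10): over-select the
# top-k' tokens by attention, keep those from the first half of the sequence,
# then fill the remaining budget from the second half.
import torch

def rebalanced_topk(scores: torch.Tensor, k: int, k_prime: int) -> torch.Tensor:
    """scores: (N,) attention scores of the image tokens; returns k retained indices."""
    n = scores.shape[0]
    top = torch.topk(scores, k_prime).indices      # I_{k'}, with k' > k
    pre = top[top < n // 2][:k]                    # Eq. (8), capped at the budget k
    post = top[top >= n // 2][: k - pre.numel()]   # Eq. (9): fill from later positions
    return torch.cat([pre, post])                  # Eq. (10)
```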
Through the rebalancing operation, we are able to preserve the attention objective while selecting more informative tokens.

Diversity optimization: For the second, diversity-related objective, we follow the formulation used in DivPrune by modeling it as a Max-Min Diversity Problem (MMDP). However, solving the MMDP objective requires O(n²) computational complexity and cannot be efficiently accelerated by GPUs, resulting in significant computational latency. This issue becomes more pronounced in high-resolution multimodal models with a larger number of image tokens. To address this challenge, we propose an initialization strategy based on spatial position information. We observe that image patches with large spatial distances tend to exhibit greater semantic differences, while spatially adjacent patches are often semantically similar. Based on this intuition, we initialize the set of selected image tokens by solving an MMDP over their spatial positions. Formally, given N image tokens X_I, originally obtained by flattening a 2D image, we first form a 2D grid of size √N × √N. For any two tokens y and w among the N tokens, their distance is defined as the Manhattan distance d(·,·) between their positions in the 2D grid. Based on this distance metric, we construct the initial token set E_initial:

E_{\mathrm{initial}} = \arg\max_{S \subset X_I} \, \min_{y,w \in S} d(y, w).   (11)
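Eq. (11) is combinatorial, so the sketch below uses a greedy farthest-point heuristic over the √N × √N patch grid; the greedy relaxation and the top-left starting patch are assumptions, not the paper's exact solver.

```python
# Sketch of the spatial initialization in Eq. (11): greedily pick token indices
# that maximize the minimum pairwise Manhattan distance on the patch grid.
import math

def spatial_init(n_tokens: int, k: int) -> list[int]:
    side = int(math.isqrt(n_tokens))                  # grid is side x side
    pos = [(i // side, i % side) for i in range(n_tokens)]
    man = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    chosen = [0]                                      # assumed start: top-left patch
    dist = [man(pos[0], p) for p in pos]              # distance to nearest chosen token
    while len(chosen) < k:
        nxt = max(range(n_tokens), key=dist.__getitem__)  # farthest remaining token
        chosen.append(nxt)
        dist = [min(d, man(pos[nxt], p)) for d, p in zip(dist, pos)]
    return chosen
```

For LLaVA-1.5's 576 image tokens this runs on a 24x24 grid, e.g. spatial_init(576, 64), and the result can then seed the exact MMDP refinement.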
4.3 Pruning layer selection
We propose that determining which layers to prune is closely related to the encoding process of image tokens. Specifically, pruning should occur either before or after the layers where the meaning of image tokens changes significantly, since it is difficult to identify truly important tokens in such layers. We compute the cosine similarity between the image token hidden states X^l_I and X^{l+1}_I before and after each layer. For each layer, we plot the number of tokens with similarity below a threshold τ alongside the total attention allocated to image tokens. As shown in Figure 4, LVLMs tend to allocate more attention to image tokens in the layers following those where the representations of image tokens undergo significant changes. Based on these insights, we propose a task-independent layer selection strategy for pruning. Using a fixed set of 64 samples across all datasets, we identify the layers immediately before and after major shifts in image token semantics. As shown in Figure 3, we perform pruning at the selected layers, which enhances the effectiveness of our pruning strategy.

Figure 4: Layer-wise image token hidden state dynamics and attention allocation in LVLMs (panels: image processing and attention allocation for LLaVA-v1.5 and Qwen2.5-VL).

5 Experiment
Baselines and models. To rigorously assess the generalizability of our proposed image token compression method, we integrate it into several state-of-the-art multimodal large models and conduct extensive experiments on diverse benchmark tasks. Specifically, we evaluate our approach on four representative models: LLaVA-v1.5-7B, LLaVA-v1.5-13B, LLaVA-v1.6-7B, and Qwen2.5-VL-7B-Instruct [2,20,21,22,32]. We select several plug-and-play compression baselines that support inference-time token pruning: FastV [5] and PyramidDrop [33], which select informative tokens via attention mechanisms; DivPrune [1], which filters tokens based on visual diversity; and VTW [19], which discards all image tokens at a specific transformer layer determined by validation performance.

Table 1: Comparison of BTP with VTW, PDrop, FastV, and DivPrune across different models and datasets. *: For models using dynamic resolution, we report the token retention ratio instead of the absolute token count.

Method                  Token  TFLOPs  GQA   MME     MMB_en  POPE  SQA   MMVET  Avg.
LLaVA-1.5-7B
  Original              576    3.82    62.0  1510.7  64.3    85.8  69.4  29.0   100%
  VTW (AAAI25) [19]     236    1.67    51.3  1475.0  63.4    82.1  68.8  17.8   89%
  PDrop (CVPR25) [33]   192    1.30    57.1  1399.0  61.6    83.6  68.4  25.8   94%
  FastV (ECCV24) [5]    172    1.65    57.6  1465.0  61.6    81.0  68.9  29.3   96%
  DivPrune (CVPR25) [1] 128    0.83    58.8  1405.4  62.1    85.1  68.4  27.4   96%
  BTP (ours)            128    0.85    59.0  1487.0  62.7    85.6  69.1  29.1   98%
LLaVA-1.5-13B
  Original              576    7.44    63.2  1521.7  68.8    87.0  72.7  37.4   100%
  VTW (AAAI25) [19]     236    2.97    55.6  1517.1  67.7    79.0  72.2  22.6   89%
  PDrop (CVPR25) [33]   192    2.46    60.5  1493.0  67.3    85.1  73.7  32.8   96%
  FastV (ECCV24) [5]    172    2.25    60.0  1473.0  67.0    83.6  72.9  31.9   95%
  DivPrune (CVPR25) [1] 128    1.63    58.8  1461.0  65.8    86.5  72.6  34.0   96%
  BTP (ours)            128    1.68    62.2  1519.7  68.0    86.9  72.7  34.5   98%
LLaVA-1.6-7B *
  Original              100%   20.82   64.2  1519.3  67.1    86.4  73.6  37.5   100%
  VTW (AAAI25) [19]     40%    9.11    53.3  1472.8  65.6    84.1  68.3  16.3   85%
  PDrop (CVPR25) [33]   25%    6.77    60.4  1462.6  65.1    86.4  68.3  27.4   92%
  FastV (ECCV24) [5]    22%    5.76    60.3  1469.1  64.3    85.5  68.2  32.3   94%
  DivPrune (CVPR25) [1] 22%    4.20    61.4  1467.9  65.4    86.2  67.4  26.9   92%
  BTP (ours)            22%    4.52    60.6  1490.8  65.8    86.7  68.4  30.3   94%
Qwen2.5-VL-7B *
  Original              100%   5.48    60.4  1690.8  82.5    87.4  76.7  16.1   100%
  VTW (AAAI25) [19]     40%    2.38    40.2  1129.8  58.7    61.5  69.7  4.5    65%
  PDrop (CVPR25) [33]   30%    1.81    49.9  1462.5  70.6    76.8  72.6  9.58   82%
  FastV (ECCV24) [5]    30%    1.79    52.6  1595.5  73.4    83.9  74.0  16.2   96%
  DivPrune (CVPR25) [1] 25%    1.34    50.1  1639.2  76.9    85.4  73.0  17.5   96%
  BTP (ours)            25%    1.67    57.2  1651.5  75.2    86.2  74.1  16.8   97%

Benchmarks and evaluation. We conduct comprehensive experiments on standard visual understanding tasks using models of different sizes, model families, and compression ratios. We report results on GQA, MMB, MME, POPE, SQA, and MM-Vet [10,15,16,23,36,37]. All experiments are carried out using the LMMs-Eval [3,17] framework. In addition to accuracy on each dataset, we evaluate all methods in terms of FLOPs, inference latency, and KV cache memory usage. For inference throughput, we follow PyramidDrop. Specifically, we calculate the FLOPs of the l-th layer's attention and MLP modules as 4nd² + 2n²d + 3ndm, where n is the number of tokens, d is the hidden state size, and m is the intermediate size of the FFN.
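As a sanity check, the quoted per-layer cost can be turned into a small calculator; the LLaVA-1.5-7B shapes below (d = 4096, m = 11008, 32 layers) are assumptions for illustration and, for a uniform token count, roughly reproduce the TFLOPs column of Table 1.

```python
# Calculator for the per-layer FLOPs estimate 4*n*d^2 + 2*n^2*d + 3*n*d*m,
# summed over decoder layers for a uniform token count n (assumed shapes).
def layer_flops(n: int, d: int, m: int) -> int:
    """n: tokens, d: hidden size, m: FFN intermediate size."""
    return 4 * n * d**2 + 2 * n**2 * d + 3 * n * d * m

def model_tflops(n: int, d: int = 4096, m: int = 11008, n_layers: int = 32) -> float:
    return n_layers * layer_flops(n, d, m) / 1e12

print(model_tflops(576))  # ~3.82, matching the full 576-token budget in Table 1
print(model_tflops(128))  # ~0.83, close to the pruned 128-token rows
```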
Implementation details. All pruning experiments are conducted on 8 NVIDIA A800 GPUs using the HuggingFace Transformers library. To determine the pruning stages, we randomly sample 64 instances from the LLaVA-655k [20,21,22] dataset and use the same set across all models and benchmarks, thus avoiding separate calibration for each benchmark. We gradually reduce the number of image tokens at each stage. In the early layers we use a smaller λ value to focus more on global information, while in the deeper layers we use a larger λ to emphasize local details. More implementation details for the different models are provided in Appendix A.2. Similar to the implementation of PyramidDrop, we compute the required attention scores separately within the FlashAttn module at the specified pruning layers, achieving full compatibility with FlashAttn [8,9]. It is worth noting that all our experiments are conducted with FlashAttention acceleration enabled.

5.1 Main results
BTP outperforms SOTA methods across LVLMs. As shown in Table 1, we conduct extensive experiments across different model families and parameter scales. Empirical results demonstrate that our approach consistently surpasses state-of-the-art methods on most benchmark tasks. Our method achieves 98% of the original average performance under a 22% compression rate across LLaVA models of different sizes. Moreover, our method consistently performs well across all models, achieving better results than both attention-based and diversity-based approaches. We also visualize the impact of different methods on layer outputs in Figure 5; our method preserves consistency with the original outputs at both the local and global levels. Appendix A.3 further provides visualizations of the spatial distribution of image tokens selected by various methods. Our method yields more effective token selection in deeper layers.

Figure 5: Effect of various pruning methods on the output of decoder layers.

BTP maintains stable performance across different compression ratios. We assess the performance of our method across a range of compression ratios to verify its effectiveness. We find that FLOPs account only for the computational cost of the attention and MLP modules, while ignoring the overhead introduced by additional components. As a result, FLOPs alone fail to accurately reflect the actual inference latency. Therefore, as shown in Table 2, we compare the performance and average inference time of different methods under varying compression ratios. It can be observed that although DivPrune achieves lower theoretical FLOPs, its end-to-end latency even exceeds that of the original uncompressed model. In contrast, our method leverages spatial division for initialization, significantly reducing the actual inference time. Across various compression ratios, our method consistently achieves better performance than state-of-the-art approaches on most datasets, without incurring additional computational overhead.

Table 2: Performance comparison with FastV and DivPrune across varying compression ratios. We report results on LLaVA-v1.5-7B.

Method         Average Token  TFLOPs  Latency          GQA   MME     MMB   SQA
LLaVA-1.5-7B   576            3.82    0.145s           62.0  1510.7  64.3  69.4
FastV          128            0.86    0.122s (15% dn)  49.6  1388.6  56.1  60.2
DivPrune       128            0.83    0.224s (54% up)  58.8  1405.4  62.1  68.4
BTP (ours)     128            0.85    0.134s (7% dn)   59.0  1487.0  62.7  69.1
FastV          64             0.42    0.118s (18% dn)  46.1  801.3   48.0  51.1
DivPrune       64             0.41    0.150s (0.5% up) 57.5  1350.0  58.5  67.6
BTP (ours)     64             0.42    0.120s (17% dn)  55.0  1364.1  58.6  68.3
5.2 Efficiency analysis
The additional overhead introduced by our method primarily arises from the attention computation and the selection of the diversity set. Since we compute attention only between the final token and the image tokens, the added attention complexity is O(n). For the selection of the diversity set, our proposed spatial initialization strategy and progressive weight decay allow us to select only a small number of additional tokens. In this section, we compare the efficiency of our method with other approaches, evaluating from multiple perspectives including theoretical FLOPs, inference latency, KV cache size, and the corresponding benchmark performance. For inference latency, we report the average inference time per sample. For KV cache memory usage, we report the average GPU memory consumption after compression. We conduct experiments using LLaVA-v1.5 and LLaVA-v1.6. Notably, LLaVA-v1.6 processes images at a higher resolution, resulting in a larger number of image tokens. As shown in Table 3, our method achieves the best performance while maintaining practical efficiency.

Table 3: Evaluation of compression efficiency on different models

Method        Average token  Cache Size           TFLOPs  Latency          LLaVA-COCO
LLaVA-1.5-7B  576            0.34GB (100%)        3.82    2.24s            90.8
FastV         172            0.15GB (55.8% dn)    1.65    2.11s (5% dn)    80.6
DivPrune      128            0.11GB (67.6% dn)    0.83    2.33s (4% up)    80.3
BTP (ours)    128            0.11GB (67.6% dn)    0.85    2.13s (4% dn)    80.9
LLaVA-1.6-7B  2880           1.11GB (100%)        20.82   4.24s            106.6
FastV         860            0.37GB (66.6% dn)    6.45    3.77s (11% dn)   92.6
DivPrune      633            0.28GB (74.7% dn)    4.20    5.00s (17% up)   99.1
BTP (ours)    633            0.28GB (74.7% dn)    4.52    3.91s (7% dn)    98.9

5.3 Ablation study
Choice of balance factor value: We first analyze the effect of λ in the local-global objective function. This factor determines the trade-off at each layer between preserving local outputs and contributing to the global output. We gradually increase the value of λ. We define the ratio between the performance of the pruned model and that of the base model on the target task as the performance gain; its computation is detailed in Appendix A.4. As shown in Figure 6, we conduct experiments on both LLaVA-v1.5 and LLaVA-v1.6 to evaluate the effectiveness of our approach. As illustrated, model performance peaks when λ balances the two objectives, whereas relying solely on either the local or the global objective leads to a decline in performance. This highlights the importance of effectively balancing the two objectives. Therefore, we use smaller λ values (0.5–0.7) in the shallow layers and larger values (0.8–1.0) in the deeper layers.

Figure 6: Ablation study on the balance factor.

Effectiveness of rebalanced attention and spatial diversity initialization: We then perform ablation studies on the attention rebalance (RA) module and the spatial initialization (SI) module, experimenting with various combinations of the two. The results are presented in Table 4. It can be observed that removing the attention rebalance module results in a significant degradation in model performance.

Table 4: Ablation study on the attention rebalance (RA) module and the spatial initialization (SI) module.

RA   SI   Latency  MME     GQA   POPE
yes  yes  0.134s   1487.0  59.0  85.6
yes  -    0.232s   1486.5  57.9  86.4
-    yes  0.140s   1464.6  57.4  85.1
-    -    0.131s   1478.1  57.3  84.4

This degradation arises from the inherent bias in attention mechanisms, where positional encodings tend to shift attention disproportionately toward later tokens, leading to suboptimal token selection.
On the other hand, omitting the spatial initialization module causes a marked increase in inference latency, in some cases even surpassing that of the original unpruned model. This suggests that while pruning reduces the token count, naive initialization can introduce computational overhead that negates the benefits of pruning, thereby limiting the method's applicability in latency-sensitive real-world scenarios. These results demonstrate the effectiveness of the proposed modules in improving both model performance and inference speed.

Effectiveness of calibration-based pruning stage selection: To evaluate the effectiveness of our proposed calibration-based pruning stage selection, we compare it with a baseline that uniformly divides the pruning stages according to the total number of decoder layers, under the same compression rate. Experimental results are shown in Table 5. We observe that our pruning layer selection method outperforms uniform selection. This is especially evident on Qwen2.5-VL, where uniform selection leads to a significant performance drop. We attribute this to differences in how Qwen2.5-VL processes image tokens, as shown in Figure 4.

Table 5: Ablation study on the layer selection strategy.

Model         Stage Selection  MME     MMB
LLaVA-v1.5    Averaged         1483.2  62.3
              Ours             1487.0  62.7
LLaVA-v1.6    Averaged         1480.1  64.7
              Ours             1490.8  65.8
Qwen2.5-VL    Averaged         1551.6  73.8
              Ours             1641.5  75.2

6 Conclusion
In this work, we conduct initial studies to investigate and verify the limitations of existing image token pruning methods. We further analyze the impact of two pruning strategies on model performance from the perspective of the objective function, and formulate a local-global pruning optimization objective. To reduce information loss during pruning, we propose Balanced Token Pruning (BTP), a multi-stage pruning method. We first determine the pruning stages using a calibration set. In the early layers, we focus on a diversity-oriented objective to account for the influence of pruning on deeper layers, while in the later layers, we adopt an attention-based objective to better preserve local information. Experiments on several popular models and multiple visual understanding benchmarks demonstrate the effectiveness of our approach. We hope our method can inspire further research on visual token redundancy and the design of more efficient multimodal large models.

References
[1] Saeed Ranjbar Alvar, Gursimran Singh, Mohammad Akbari, and Yong Zhang. Divprune: Diversity-based visual token pruning for large multimodal models. arXiv preprint arXiv:2503.02175, 2025.
[2] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923, 2025.
[3] Bo Li*, Peiyuan Zhang*, Kaichen Zhang*, Fanyi Pu*, Xinrun Du, Yuhao Dong, Haotian Liu, Yuanhan Zhang, Ge Zhang, Chunyuan Li, and Ziwei Liu. Lmms-eval: Accelerating the development of large multimodal models, March 2024.
[4] Jieneng Chen, Luoxin Ye, Ju He, Zhao-Yang Wang, Daniel Khashabi, and Alan Yuille. Efficient large multi-modal models via visual context compression. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[5] Liang Chen, Haozhe Zhao, Tianyu Liu, Shuai Bai, Junyang Lin, Chang Zhou, and Baobao Chang. An image is worth 1/2 tokens after layer 2: Plug-and-play inference acceleration for large vision-language models. In European Conference on Computer Vision, pages 19–35. Springer, 2024.
[6] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185–24198, 2024.
[7] Moulik Choraria, Xinbo Wu, Sourya Basu, Nitesh Sekhar, Yue Wu, Xu Zhang, Prateek Singhal, and Lav R. Varshney. Semantically grounded qformer for efficient vision language understanding, 2024.
[8] Tri Dao. FlashAttention-2: Faster attention with better parallelism and work partitioning. In International Conference on Learning Representations (ICLR), 2024.
[9] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In Advances in Neural Information Processing Systems (NeurIPS), 2022.
[10] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.
[11] Zhangwei Gao, Zhe Chen, Erfei Cui, Yiming Ren, Weiyun Wang, Jinguo Zhu, Hao Tian, Shenglong Ye, Junjun He, Xizhou Zhu, et al. Mini-internvl: A flexible-transfer pocket multimodal model with 5% parameters and 90% performance. Visual Intelligence, 2(1):1–17, 2024.
[12] Wenxuan Huang, Zijie Zhai, Yunhang Shen, Shaosheng Cao, Fei Zhao, Xiangfeng Xu, Zheyu Ye, Yao Hu, and Shaohui Lin. Dynamic-llava: Efficient multimodal large language models via dynamic vision-language context sparsification. arXiv preprint arXiv:2412.00876, 2024.
[13] Wenxuan Huang, Zijie Zhai, Yunhang Shen, Shaosheng Cao, Fei Zhao, Xiangfeng Xu, Zheyu Ye, Yao Hu, and Shaohui Lin. Dynamic-llava: Efficient multimodal large language models via dynamic vision-language context sparsification, 2025.
[14] Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, and Daniel Soudry. Accurate post training quantization with small calibration sets. In International Conference on Machine Learning, pages 4466–4475. PMLR, 2021.
[15] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6700–6709, 2019.
[16] Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1821–1831, 2017.
[17] Kaichen Zhang, Bo Li, Peiyuan Zhang, Fanyi Pu, Joshua Adrian Cahyono, Kairui Hu, Shuai Liu, Yuanhan Zhang, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Lmms-eval: Reality check on the evaluation of large multimodal models, 2024.
[18] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation, 2022.
[19] Zhihang Lin, Mingbao Lin, Luxi Lin, and Rongrong Ji. Boosting multimodal large language models with visual tokens withdrawal for rapid inference. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 5334–5342, 2025.
[20] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning, 2023.
[21] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, January 2024.
Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, January 2024.
[22] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023.
[23] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? In European conference on computer vision, pages 216–233. Springer, 2024.
[24] Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, and Xia Hu. Kivi: A tuning-free asymmetric 2bit quantization for kv cache. arXiv preprint arXiv:2402.02750, 2024.
[25] Daniel Cosmin Porumbel, Jin-Kao Hao, and Fred Glover. A simple and effective algorithm for the maxmin diversity problem. Annals of Operations Research, 186:275–293, 2011.
[26] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021.
[27] Charles C Ragin. Fuzzy sets: Calibration versus measurement. Methodology volume of Oxford handbooks of political science, 2, 2007.
[28] Mauricio GC Resende, Rafael Martí, Micael Gallego, and Abraham Duarte. Grasp and path relinking for the max–min diversity problem. Computers & Operations Research, 37(3):498–508, 2010.
[29] Yuzhang Shang, Mu Cai, Bingxin Xu, Yong Jae Lee, and Yan Yan. Llava-prumerge: Adaptive token reduction for efficient large multimodal models. arXiv preprint arXiv:2403.15388, 2024.
[30] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[31] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[32] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.
[33] Long Xing, Qidong Huang, Xiaoyi Dong, Jiajie Lu, Pan Zhang, Yuhang Zang, Yuhang Cao, Conghui He, Jiaqi Wang, Feng Wu, et al. Pyramiddrop: Accelerating your large vision-language models via pyramid visual redundancy reduction. arXiv preprint arXiv:2410.17247, 2024.
[34] Senqiao Yang, Yukang Chen, Zhuotao Tian, Chengyao Wang, Jingyao Li, Bei Yu, and Jiaya Jia. Visionzip: Longer is better but not necessary in vision language models. arXiv preprint arXiv:2412.04467, 2024.
[35] Xubing Ye, Yukang Gan, Yixiao Ge, Xiao-Ping Zhang, and Yansong Tang. Atp-llava: Adaptive token pruning for large vision language models. arXiv preprint arXiv:2412.00447, 2024.
[36] Li Yifan, Du Yifan, Zhou Kun, Wang Jinpeng, Zhao Wayne-Xin, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. In The 2023 Conference on Empirical Methods in Natural Language Processing, 2023.
[37] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large
multimodal models for integrated capabilities. In International conference on machine learning. PMLR, 2024.
[38] Shaolei Zhang, Qingkai Fang, Zhe Yang, and Yang Feng. Llava-mini: Efficient image and video large multimodal models with one vision token. arXiv preprint arXiv:2501.03895, 2025.
[39] Shaolei Zhang, Qingkai Fang, Zhe Yang, and Yang Feng. Llava-mini: Efficient image and video large multimodal models with one vision token, 2025.
[40] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.

A Appendix

A.1 Key and Value of LVLMs

Following previous works on token quantization such as KIVI [24], we visualize the keys $K_l$ and values $V_l$ of different LVLMs; the results are shown below:

[Figure 7: Visualization of key and value of LLaVA-v1.5; panels (a) LLaVA key and (b) LLaVA value, plotted over Token × Column.]
[Figure 8: Visualization of key and value of LLaVA-v1.6; panels (a) LLaVA-v1.6 key and (b) LLaVA-v1.6 value, plotted over Token × Column.]
[Figure 9: Visualization of key and value of Qwen2.5-vl; panels (a) Qwen2.5-vl key and (b) Qwen2.5-vl value, plotted over Token × Column.]

A.2 Experiment Settings

For LLaVA-v1.5-7B, LLaVA-v1.5-13B, and LLaVA-v1.6-7B, we divide the pruning process into five stages based on the image token handling pipeline described in the Appendix. In each stage except the last, we retain 50% of the tokens from the previous stage. In the final stage, all tokens are discarded to maximize inference speed. For Qwen2.5-VL, since its image token processing can be clearly divided into two stages, we retain 25% of the tokens in the fourth stage and 12.5% in the final stage to preserve model performance.

A.3 Visualization of token selection under different pruning strategies

[Figure 10: Visualization of Image Token Selection Across Different Methods.]

A.4 Calculation of model gain

Since evaluation metrics vary across tasks and difficulty levels differ significantly, it is not reasonable to present all task results directly in a unified format. For example, the original LLaVA-v1.5 model scores 1510 on the MME benchmark but only 62 on GQA. To address this, we define a model gain metric as:

$$\text{Gain} = \mathrm{Normalize}\!\left(\frac{\text{Pruned score}}{\text{Original score}}\right). \tag{12}$$
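To make the gain computation concrete, here is a minimal Python sketch. The text does not specify the exact Normalize(·) operator, so the min-max scaling below, like the helper name `model_gain`, is only an illustrative assumption rather than the authors' implementation.

```python
# Hypothetical illustration of Eq. (12). Normalize() is not specified in the
# text, so min-max scaling over the per-task ratios is an assumption.
def model_gain(pruned_scores, original_scores):
    # Per-task ratio of pruned to original score, e.g. MME: 1450/1510, GQA: 60/62.
    ratios = [p / o for p, o in zip(pruned_scores, original_scores)]
    lo, hi = min(ratios), max(ratios)
    # Min-max normalization so tasks with very different scales become comparable.
    return [1.0 if hi == lo else (r - lo) / (hi - lo) for r in ratios]
```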
arXiv:2505.22042v1 [cs.LG] 28 May 2025

Estimating the Effects of Sample Training Orders for Large Language Models without Retraining

Hao Yang¹, Haoxuan Li², Mengyue Yang³, Xu Chen¹*, Mingming Gong⁴
¹Gaoling School of Artificial Intelligence, Renmin University of China
²Center for Data Science, Peking University
³School of Engineering Mathematics and Technology, University of Bristol
⁴Department of Machine Learning, Mohamed bin Zayed University of Artificial Intelligence
hao.yang@ruc.edu.cn, hxli@stu.pku.edu.cn, mengyue.yang@bristol.ac.uk, xu.chen@ruc.edu.cn, mingming.gong@unimelb.edu.au
*Corresponding author. Preprint. Under review.

Abstract

The order of training samples plays a crucial role in large language models (LLMs), significantly impacting both their external performance and internal learning dynamics. Traditional methods for investigating this effect generally require retraining the model with various sample orders, which is computationally infeasible for LLMs. In this work, we improve on traditional methods by designing a retraining-free framework. By approximating Adam optimizer updates with first- and second-order Taylor expansions and utilizing random projection methods to store intermediate checkpoints, our framework can efficiently estimate model parameters for arbitrary training sample orders. We then apply our framework to two downstream research problems: (1) Training curriculum design for LLMs: we build on our retraining-free framework to propose a novel curriculum learning strategy that augments curriculum proposals with estimated model performances, enabling more informed sample scheduling. (2) LLMs' memorization and generalization effect analysis: we use our retraining-free framework to estimate how the positions of training samples influence LLMs' capacity for memorization and generalization. We conduct extensive experiments to validate the effectiveness of our retraining-free framework in reproducing the true model performances, and further demonstrate its potential in optimizing LLM training curricula and analyzing the memorization and generalization effects of LLMs.

1 Introduction

The order of training samples is crucial for optimizing large language models (LLMs), primarily due to the inherent nature of batch-based optimization methods (e.g., mini-batch gradient descent) [56, 41]. This insight has spurred significant research in areas such as training curriculum design for LLMs, which strategically schedules training samples to enhance model optimization [60, 59, 8], and LLMs' memorization and generalization effect analysis [30, 48, 7, 64], which investigates how the sequence of sample exposure influences the model's ability to retain knowledge and generalize effectively. A straightforward strategy to study these problems is to train the target model multiple times with different sample orders, and then observe the results to either select the optimal one or analyze the underlying patterns [62, 56, 26].

In traditional machine learning, this strategy is feasible because sample and parameter sizes are typically manageable, and training costs are relatively low [58, 20]. However, in the era of LLMs, this approach becomes impractical due to high training costs and the massive scale of samples and parameters. This naturally raises a novel and fundamental research question: Can we estimate the effect of sample ordering on LLM performance without retraining? Despite its significance, answering this question is challenging.
To begin with, a practical strategy for estimating model performance under a target sample order is to first measure the performance for a reference sample order and then infer the target performance by establishing a relationship between these two orders. However, since
the target sample order can be arbitrary in an extremely large space, identifying a common basis to effectively bridge the reference and target performances becomes a non-trivial challenge. Furthermore, even if we can successfully identify a common basis for relating different sample orders, efficiently storing this basis also poses a significant challenge, as it may involve a vast number of LLM parameters.

To overcome these challenges, in this paper we propose a novel retraining-free framework that approximates the parameter updating process with Taylor expansions (called FUT for short). Specifically, we focus on the Adam optimizer and reformulate its update term as a function of the current model parameters. Next, we apply Taylor expansions to derive the relationships between the update terms across different model parameters based on the first- and second-order gradients of the loss function. This formulation establishes the common basis for correlating LLM performance across varying sample orders. Finally, we adopt the Random Projection technique based on the Johnson-Lindenstrauss (JL) theorem [51] to efficiently store the update terms for all training batches, significantly reducing memory consumption while maintaining accuracy.

Building on this foundational framework, we further apply it to two specific research problems. (1) Training curriculum design for LLMs. Unlike traditional curriculum learning strategies that rely on human heuristics to determine sample orders, our framework empowers users to select sample orders based on the final model performance. Furthermore, for each sample order, our framework provides performance estimations, enabling users to make more informed decisions. (2) LLMs' memorization and generalization effect analysis. Unlike previous approaches that assess the impact of sample positioning on memorization and generalization through costly retraining or black-box neural network approximations, our framework offers an efficient and principled method to analyze these capabilities in LLMs.

In summary, the main contributions of this paper are as follows:
• We formally define the problem of "estimating the impact of training sample orders on model performance without retraining" in the context of LLMs.
• To solve this problem, we propose a principled framework based on Taylor expansions and the Random Projection technique to efficiently estimate LLM performance for arbitrary sample orders.
• We apply our framework to two specific applications, (1) training curriculum design for LLMs and (2) LLMs' memorization and generalization effect analysis, demonstrating its fundamental nature and general applicability.
• We conduct extensive experiments to demonstrate the effectiveness of our framework in approximating the true performance and validate its potential in addressing the aforementioned applications.

2 Problem Formulation

Suppose we have a training dataset with $T$ batches, denoted as $\mathcal{D}_{tr}=\{B_t\}_{t=0}^{T-1}$, and an LLM $M$. We begin by training $M$ on $\mathcal{D}_{tr}$ following a reference sample order and obtain the corresponding reference checkpoints. (Note that the reference sample order can be arbitrary or chosen based on user preference.) Specifically, without loss of generality, we assume the reference sample order is $B_0, B_1, \ldots, B_{T-1}$, with the initial parameters of $M$ represented as $\theta_0$. After processing each batch $B_t$, the model parameters are updated from $\theta_t$ to $\theta_{t+1}$. Ultimately, we collect the reference checkpoints as $\Theta=\{\theta_t\}_{t=0}^{T}$. For a new sample order, $B_{l_0}, B_{l_1}, \ldots, B_{l_{T-1}}$, where
$B_{l_t}$ is the $(t+1)$-th training batch, our problem aims to efficiently derive the model parameters $\{\gamma_t\}_{t=0}^{T}$, where $\gamma_{t+1}$ is the model parameter after training on batch $B_{l_t}$, and we set $\gamma_0=\theta_0$.

[Figure 1: Overview of the FUT framework. FUT operates in three stages. Stage 1: Compute the reference trajectory $\Theta=\{\theta_t\}_{t=0}^{T}$ using a fixed data order $r$. Stage 2: Store update and gradient terms for all $(\theta_t, B_{l_t})$ pairs, compressing them via random projection. Stage 3: Estimate trajectories $\{\gamma_t^{k_i}\}_{t=0}^{T}$ under permuted data orders $\{k_i\}_{i=1}^{N}$ using first-order Taylor expansion based on stored terms. A toy example along the dashed line illustrates: ① retrieving stored terms for expansion, and ② updating parameters along a permuted order.]

Relation with the influence function. The above problem shares similarities with the influence function [27], as both study the effects of training samples. However, there are fundamental differences: our focus is on understanding the impact of sample ordering, while the influence function primarily examines the effect of removing individual samples. Moreover, our problem is situated within the context of LLMs, demanding efficient storage and management of large-scale model parameters.

Straightforward solutions. One solution is to retrain $M$ using the new sample order $B_{l_0}, B_{l_1}, \ldots, B_{l_{T-1}}$ and obtain the model parameters $\{\gamma_t\}_{t=0}^{T}$ after each batch. Another potential solution treats the sample order as the input to a neural network, with the model parameters as the output. In this way, a neural network could be trained to learn the correlation between the input and output, enabling parameter estimation without full retraining. However, the first solution demands substantial time and computational resources to retrain LLMs, rendering it practically infeasible. For the second solution, the limited availability of input-output pairs makes it difficult for a neural network to accurately learn the correlations, resulting in significantly lower performance.

3 The FUT Framework

To address the limitations of these straightforward solutions, in this section we propose a principled retraining-free framework. The core idea of our approach is to establish a relationship between $\{\gamma_t\}_{t=0}^{T}$ and $\{\theta_t\}_{t=0}^{T}$ by delving into their respective generation processes. Then, we derive $\{\gamma_t\}_{t=0}^{T}$ based on $\{\theta_t\}_{t=0}^{T}$, which are precomputed as reference checkpoints. Here, we focus on the Adam optimizer due to its widespread use in LLM optimization. However, our method can be easily extended to other batch-based gradient methods, such as SGD.
By applying the updating rule of Adam, we have (without special mention, the update is applied to each dimension of the parameters separately):

$$\theta_{t+1}-\theta_t=-\eta\,\Gamma(\theta_t,B_t),\qquad \forall\, 0\le t\le T-1. \tag{1}$$

In this equation, $\Gamma(\theta_t,B_t)=m_t/(\sqrt{v_t}+\epsilon)$ is the update term, and

$$m_t=\frac{\beta_1 m_{t-1}+(1-\beta_1)\,\nabla_\theta L(\theta_t,B_t)}{1-\beta_1^t},\qquad
v_t=\frac{\beta_2 v_{t-1}+(1-\beta_2)\,\nabla_\theta L(\theta_t,B_t)^2}{1-\beta_2^t}, \tag{2}$$

where $\nabla_\theta L(\theta_t,B_t)$ represents the gradient of the loss function $L$ computed with respect to the model parameters $\theta_t$ using the mini-batch $B_t$, and $\eta$ is the learning rate. $m_t$ and $v_t$ are the first and second momentum statistics, respectively. $\beta_1$ and $\beta_2$ are smoothing coefficients that control the decay rate of past gradients, and $\epsilon$ is a small constant that prevents division by zero.
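As a concrete reading of equations (1)-(2), the following minimal NumPy sketch computes the per-dimension update term $\Gamma(\theta_t,B_t)$ exactly as written above, with the bias correction folded into $m_t$ and $v_t$; the function name and interface are illustrative, not part of the paper.

```python
import numpy as np

def adam_update_term(grad, m_prev, v_prev, t, beta1=0.9, beta2=0.999, eps=1e-8):
    """Update term Gamma(theta_t, B_t) of Eqs. (1)-(2), applied element-wise.

    grad: gradient of the loss on batch B_t at checkpoint theta_t.
    t starts at 1 so the bias-correction factors 1 - beta^t are nonzero.
    Returns (gamma, m_t, v_t); the Adam step is theta_{t+1} = theta_t - lr * gamma.
    """
    m_t = (beta1 * m_prev + (1.0 - beta1) * grad) / (1.0 - beta1 ** t)
    v_t = (beta2 * v_prev + (1.0 - beta2) * grad ** 2) / (1.0 - beta2 ** t)
    return m_t / (np.sqrt(v_t) + eps), m_t, v_t
```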
Similar to the above updating rule, we have $\gamma_{t+1}-\gamma_t=-\eta\,\Gamma(\gamma_t,B_{l_t})$ for $0\le t\le T-1$. To compute $\gamma_{t+1}$, we regard $\Gamma(\theta,B)$ as a function of the model parameters $\theta$. Using a Taylor expansion of $\Gamma(\gamma_t,B_{l_t})$, we have:

$$\Gamma(\gamma_t,B_{l_t})\approx \Gamma(\theta_t,B_{l_t})+(\gamma_t-\theta_t)\,\nabla_\theta\Gamma(\theta_t,B_{l_t}), \tag{3}$$

where $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ represents the gradient of $\Gamma(\theta_t,B_{l_t})$ with respect to $\theta$. In this equation, since $B_{l_t}$ is one of $B_0,B_1,\ldots,B_{T-1}$, if we can obtain $\Gamma(\theta_t,B_{l_t})$ and $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ for all $0\le t\le T-1$, then $\gamma_{t+1}$ can be recursively computed as follows:

$$\gamma_{t+1}=\gamma_t-\eta\,\Gamma(\theta_t,B_{l_t})-\eta\,(\gamma_t-\theta_t)\,\nabla_\theta\Gamma(\theta_t,B_{l_t}), \tag{4}$$

where all the variables on the right-hand side are known. In this equation, $\Gamma(\theta_t,B_{l_t})$ and $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ form the basis for connecting $\gamma_t$ and $\theta_t$. According to the Adam computational rules, we have:

$$\nabla_\theta\Gamma(\theta_t,B_{l_t})=\frac{\frac{\partial m_t}{\partial\theta}(\sqrt{v_t}+\epsilon)-\frac{\partial\sqrt{v_t}}{\partial\theta}\,m_t}{(\sqrt{v_t}+\epsilon)^2}, \tag{5}$$

where

$$\frac{\partial m_t}{\partial\theta}=\frac{\beta_1\cdot\frac{\partial m_{t-1}}{\partial\theta}+(1-\beta_1)\cdot\nabla^2_\theta L(\theta_t,B_{l_t})}{1-\beta_1^t},\qquad
\frac{\partial\sqrt{v_t}}{\partial\theta}=\frac{\beta_2\cdot\frac{\partial v_{t-1}}{\partial\theta}+2(1-\beta_2)\cdot\nabla_\theta L(\theta_t,B_{l_t})\cdot\nabla^2_\theta L(\theta_t,B_{l_t})}{2(1-\beta_2^t)\sqrt{v_t}}. \tag{6}$$

By jointly observing equations (2) and (5), we can see that $\Gamma(\theta_t,B_{l_t})$ and $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ rely only on $\nabla_\theta L(\theta_t,B_{l_t})$ and $\nabla^2_\theta L(\theta_t,B_{l_t})$. These terms are the gradients of the loss function with respect to the reference checkpoint and the training batch. Since the reference checkpoints $\{\theta_t\}_{t=0}^{T}$ have already been collected, we can efficiently compute $\nabla_\theta L(\theta_t,B_{l_t})$ and $\nabla^2_\theta L(\theta_t,B_{l_t})$ simply by plugging $\theta_t$ and $B_{l_t}$ into the gradient functions.

The procedure for deriving $\{\gamma_t\}_{t=0}^{T}$ is shown in Algorithm 1. Specifically, there are three stages. In the reference model training stage, we train $M$ on $\mathcal{D}_{tr}$ based on the reference sample order. After obtaining $\Theta=\{\theta_t\}_{t=0}^{T}$, in the update term storing stage, we derive and store $\Gamma(\theta_t,B_{l_t})$ and $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ for all $0\le t\le T-1$ based on equations (2) and (5). At last, in the estimation stage, for a new sample order $\{l_t\}_{t=0}^{T-1}$, we compute $\{\gamma_t\}_{t=0}^{T}$ based on equation (4) in a recursive manner. In practice, the first two stages are executed only once, after which the performance of any new sample order can be efficiently estimated. Figure 1 illustrates the complete FUT framework.

Enhanced model with the second-order Taylor expansion. In the above method, we approximate $\Gamma(\gamma_t,B_{l_t})$ with the first-order Taylor expansion. To enhance accuracy, we extend our approach by incorporating the second-order term, resulting in an updated version of equation (3) as follows:

$$\Gamma(\gamma_t,B_{l_t})\approx \Gamma(\theta_t,B_{l_t})+(\gamma_t-\theta_t)\,\nabla_\theta\Gamma(\theta_t,B_{l_t})+c\cdot(\gamma_t-\theta_t)^2\,\nabla^2_\theta\Gamma(\theta_t,B_{l_t}), \tag{7}$$

where $\nabla^2_\theta\Gamma(\theta_t,B_{l_t})$ is the second-order gradient of $\Gamma(\theta_t,B_{l_t})$, with a constant $c$ weighting its importance. Combining this equation with $\gamma_{t+1}-\gamma_t=-\eta\,\Gamma(\gamma_t,B_{l_t})$, we have:

$$\gamma_{t+1}=\gamma_t-\eta\,\Gamma(\theta_t,B_{l_t})-\eta\,(\gamma_t-\theta_t)\,\nabla_\theta\Gamma(\theta_t,B_{l_t})-c\eta\cdot(\gamma_t-\theta_t)^2\,\nabla^2_\theta\Gamma(\theta_t,B_{l_t}). \tag{8}$$

Please refer to the appendix for details on precomputing $\nabla^2_\theta\Gamma(\theta_t,B_{l_t})$. After obtaining $\nabla^2_\theta\Gamma(\theta_t,B_{l_t})$, we can efficiently derive $\{\gamma_t\}_{t=0}^{T}$ based on equation (8) in a recursive manner.
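The estimation stage then reduces to the recursion of equations (3)-(4) over precomputed tables; the sketch below is a minimal per-dimension rendering of this recursion, mirroring the estimation stage of Algorithm 1 below, under the assumption that the stored terms are available as in-memory arrays (the names and data layout are illustrative).

```python
import numpy as np

def estimate_trajectory(theta, Gamma, dGamma, order, lr):
    """First-order FUT estimation (Eqs. (3)-(4)), element-wise per parameter.

    theta:  reference checkpoints [theta_0, ..., theta_T] as NumPy arrays.
    Gamma:  Gamma[t][b]  = stored update term Gamma(theta_t, B_b).
    dGamma: dGamma[t][b] = stored derivative grad_theta Gamma(theta_t, B_b).
    order:  permuted batch indices [l_0, ..., l_{T-1}].
    """
    gamma = [theta[0]]  # gamma_0 = theta_0
    for t, b in enumerate(order):
        g = gamma[-1]
        # Eq. (3): first-order Taylor expansion of Gamma(gamma_t, B_{l_t}) around theta_t
        step = Gamma[t][b] + (g - theta[t]) * dGamma[t][b]
        # Eq. (4): recursive parameter update
        gamma.append(g - lr * step)
    return gamma
```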
Algorithm 1: FUT Framework for Deriving $\{\gamma_t\}_{t=0}^{T}$ with First-order Taylor Expansion
Require: Initialized model parameter $\theta_0$, reference training batches $\{B_t\}_{t=0}^{T-1}$, learning rate $\eta$, and $\epsilon$.
Ensure: Derived sequence $\{\gamma_t\}_{t=0}^{T}$.
1: Reference Model Training Stage:
2: for $t=0$ to $T-1$ do
3:   Compute the $(t+1)$-th reference checkpoint: $\theta_{t+1}\leftarrow\theta_t-\eta\,\Gamma(\theta_t,B_t)$ (Eq. 1)
4: end for
5: Obtain $\Theta=\{\theta_t\}_{t=0}^{T}$
6: Update Term Storing Stage:
7: for $t=0$ to $T-1$ do
8:   Compute the first- and second-order update terms:
9:   $\Gamma(\theta_t,B_{l_t})$: calculate $\nabla_\theta L(\theta_t,B_{l_t})$ with checkpoint $\theta_t$ on batch $B_{l_t}$
10:  $\nabla_\theta\Gamma(\theta_t,B_{l_t})$: calculate $\nabla_\theta L(\theta_t,B_{l_t})$ and $\nabla^2_\theta L(\theta_t,B_{l_t})$ with checkpoint $\theta_t$ on batch $B_{l_t}$
11: end for
12: Estimation Stage:
13: $\gamma_0\leftarrow\theta_0$
14: for $t=0$ to $T-1$ do
15:   First-order Taylor expansion: $\Gamma(\gamma_t,B_{l_t})\leftarrow\Gamma(\theta_t,B_{l_t})+(\gamma_t-\theta_t)\,\nabla_\theta\Gamma(\theta_t,B_{l_t})$ (Eq. 3)
16:   Update: $\gamma_{t+1}\leftarrow\gamma_t-\eta\,\Gamma(\gamma_t,B_{l_t})$ (Eq. 4)
17: end for
18: Return $\{\gamma_t\}_{t=0}^{T}$

Efficient storage of the update terms. According to the above analysis, our framework relies heavily on $\Gamma(\theta_t,B_{l_t})$, $\nabla_\theta\Gamma(\theta_t,B_{l_t})$, and $\nabla^2_\theta\Gamma(\theta_t,B_{l_t})$. However, in the context of LLMs, their dimensions are extremely large, posing significant storage challenges. To address this issue, we leverage the Random Projection technique [9, 62] based on the Johnson-Lindenstrauss (JL) theorem [51] to efficiently reduce their dimensionality. To illustrate this process, consider storing a 2-dimensional matrix $M\in\mathbb{R}^{d_1\times d_2}$. We first generate a random matrix $A\in\mathbb{R}^{d_2\times k}$ whose entries follow a Gaussian distribution $\mathcal{N}(0,1/k)$, where $k$ is the target dimension chosen based on the JL theorem. Next, we perform dimensionality reduction by multiplying $M$ with $A$ on the right, that is, $M'=MA$. Here, $M'\in\mathbb{R}^{d_1\times k}$ is the compressed representation for storage. To recover the original matrix $M$, we similarly multiply on the right with the Moore-Penrose pseudoinverse of $A$, denoted $A^{+}$, that is, $\widetilde{M}=M'A^{+}$. This approach effectively reduces the space complexity of $M$ from $O(d_1 d_2)$ to $O(d_1 k)$, where $k\ll d_2$, significantly alleviating the storage burden when precomputing it. For higher-order terms such as $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ and $\nabla^2_\theta\Gamma(\theta_t,B_{l_t})$, we similarly apply the random projection technique to reduce their storage complexity, making the process efficient and scalable.

Comparison between the computational costs of retraining and our method. Assume that the time complexity for computing the loss gradient once is $O(C)$. Enumerating model parameters under all possible training orders requires retraining the model on the original dataset $T!$ times, where for each permuted order we need to compute $\nabla_\theta L(\theta_t,B_{l_t})$ $T$ times. Therefore, the total time complexity of retraining is $O(T\cdot C\cdot T!)$, which is computationally prohibitive for LLMs with billions of parameters. In contrast, our method estimates the model updates under different batch orders without retraining. Its main computational cost comes from computing the update terms $\Gamma(\theta_t,B_{l_t})$, $\nabla_\theta\Gamma(\theta_t,B_{l_t})$, and $\nabla^2_\theta\Gamma(\theta_t,B_{l_t})$. Specifically, each of these terms requires a single backward computation of the model at checkpoint $\theta_t$ over batch $B_{l_t}$, i.e., $L(\theta_t,B_{l_t})$. Since there are $T^2$ such $(\theta_t,B_{l_t})$ pairs in total, the overall time complexity of our method is $O(T^2\cdot C)$.
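To illustrate the storage scheme described above, here is a minimal NumPy sketch of the compress/recover cycle ($M'=MA$ with $A_{ij}\sim\mathcal{N}(0,1/k)$, reconstruction via the Moore-Penrose pseudoinverse). The dimensions and the seed are arbitrary example values, not the paper's settings.

```python
import numpy as np

def compress(M, k, seed=0):
    """Project M (d1 x d2) to M' = M A (d1 x k), with A_ij ~ N(0, 1/k)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(0.0, 1.0 / np.sqrt(k), size=(M.shape[1], k))  # std = sqrt(1/k)
    return M @ A, A

def recover(M_proj, A):
    """Approximate reconstruction M~ = M' A+ via the pseudoinverse of A."""
    return M_proj @ np.linalg.pinv(A)

# Example: compress a 64 x 4096 update matrix to 64 x 256 for storage.
M = np.random.default_rng(1).normal(size=(64, 4096))
M_proj, A = compress(M, k=256)
M_rec = recover(M_proj, A)  # approximately preserves the row geometry of M
```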
4 Applications

4.1 Training Curriculum Design for LLMs

Problem definition. Following the notations in Section 2, suppose we aim to train a model $M$ on the dataset $\mathcal{D}_{tr}=\{B_t\}_{t=0}^{T-1}$. Let $\pi$ be a permutation function that maps the standard index set $\{0,1,\ldots,T-1\}$ to $\{\pi(0),\pi(1),\ldots,\pi(T-1)\}$, where $\pi(t)\in[0,T-1]$ indicates that batch $B_t$ is placed at the $(\pi(t)+1)$-th position in the training sequence. Following common practice, we train the LLM for only one epoch [56]. The goal is to find an optimal permutation $\pi^*$ such that the resulting model performs best on a validation set $\mathcal{D}_{val}$, formally defined as:

$$\pi^*:=\arg\max_{\pi\in\Pi} R(\gamma^\pi_T,\mathcal{D}_{val}), \tag{9}$$

where $\gamma^\pi_T$ denotes the final model parameters estimated using our FUT framework, and the training order $l_t$ is induced by $\pi$. The performance metric $R$ is implemented using Perplexity (PPL) [24], and $\Pi$ denotes the space of all possible permutation functions.

Our solution based on FUT. Since objective (9) is non-differentiable, we design a Genetic Algorithm (GA) [25] to obtain $\pi^*$. Specifically, we maintain a set of candidate sample orders and iteratively apply crossover and mutation operators to generate improved sample orders, aiming to optimize the model performance. For more details, we refer readers to the appendix. Compared to traditional curriculum learning strategies, a key advantage of our method is its ability to estimate model performance for each curriculum proposal, enabling more informed decisions. For instance, by knowing the performance gap between different curricula, users can assess whether the difference is significant. If the gap is small, users can confidently choose one at random.

4.2 LLMs' Memorization and Generalization Effect Analysis

Problem definition. We continue to follow the notations introduced in Section 2. For each training batch in $\mathcal{D}_{tr}$, the memorization problem evaluates model performance when the batch appears at different positions in the training sequence. Specifically, we use the following evaluation method:

$$M_{i,j}=\frac{1}{N}\sum_{k=1}^{N} R\big(\theta_T^{\pi^{ij}_k}, B_i\big),$$

where $\pi^{ij}_k$ is a permutation function that fixes $B_i$ at the $j$-th training position while randomly shuffling all other batches. For each $B_i$, we generate $N$ such permutations, and the final performance is computed as the average across these permutations. The generalization problem is defined in a similar manner, with the key distinction that $B_i$ in the above equation is replaced by $D_i$, a dataset not seen during training, i.e., $D_i\notin\mathcal{D}_{tr}$.

Our solution based on FUT. For each $\pi^{ij}_k$, we first generate the sequence $l_t$ and then estimate $\gamma_T^{\pi^{ij}_k}$ using the reference checkpoints $\{\theta_t\}_{t=0}^{T}$. Finally, we compute $R(\gamma_T^{\pi^{ij}_k}, B_i)$ or $R(\gamma_T^{\pi^{ij}_k}, D_i)$ based on $\gamma_T^{\pi^{ij}_k}$, and average the resulting performances over different values of $k$; the evaluation loop is sketched below. Compared to previous studies that estimate memorization capability using black-box neural networks [64, 30, 13], our method is more principled and grounded in theoretical foundations.
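A minimal sketch of this evaluation loop follows. Here `estimate_and_eval` stands in for running the FUT estimator on a given order and computing $R$ (e.g., perplexity on $B_i$); it is a hypothetical placeholder, as are the other names.

```python
import random

def memorization_effect(i, j, T, N, estimate_and_eval):
    """Estimate M_{i,j}: average performance with batch B_i fixed at position j.

    estimate_and_eval(order) is assumed to estimate gamma_T via FUT for the
    given order and return R(gamma_T, B_i) (or R(gamma_T, D_i) for the
    generalization variant).
    """
    scores = []
    others = [b for b in range(T) if b != i]
    for _ in range(N):
        random.shuffle(others)                 # random permutation pi^{ij}_k
        order = others[:j] + [i] + others[j:]  # fix B_i at the j-th position
        scores.append(estimate_and_eval(order))
    return sum(scores) / N
```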
5 Experiments

In this section, we conduct extensive experiments to demonstrate the effectiveness of our framework and its potential applications in designing LLM training curricula and analyzing LLMs' memorization and generalization capabilities.

5.1 Evaluation on the General Capability of Our Methods

Experimental Setup. To evaluate the accuracy of our estimated model parameters, we incorporate the $\gamma_T$ obtained in Section 3 into the LLM and measure the performance gap between the estimated and actual results. Specifically, we conduct our experiments on the Wikitext dataset [37], a curated collection of high-quality English Wikipedia articles that is widely used for language modeling and evaluation. This dataset is particularly well-suited for assessing model perplexity due to its long-range token dependencies [38]. In our experiments, we partition the dataset into 80% for training, 10% for validation, and the remaining 10% for testing. We adopt the architecture of LLaMA [49] to construct a base model with 636 million parameters. The model has a hidden size of 2048 and consists of 10 stacked transformer layers with 10 attention heads. We choose this relatively compact architecture because our main experiments involve repeated LLM training to validate that the proposed FUT framework can accurately estimate model parameters under various training orders. In the appendix, we scale the model size up to 1.4 billion parameters to assess the scalability of our approach. Following common practice [56], we use the Adam optimizer for LLM training and train for a single epoch to evaluate performance based on perplexity [24].

Table 1: Estimation accuracy (AbsDiff) with different batch sizes.
T | Random | FUT | FUT++
8 | 0.0205 | 0.0165 | 0.0085
16 | 0.0917 | 0.0649 | 0.0703
32 | 0.0373 | 0.0290 | 0.0193
64 | 0.0644 | 0.0445 | 0.0319
128 | 0.0575 | 0.0372 | 0.0284
256 | 0.0471 | 0.0205 | 0.0368

[Figure 2: Time cost comparison. Amortized time cost per order (log scale, minutes) for N ∈ {10, 50, 100, 1000} orders and T ∈ {8, 16, 32, 64, 128, 256}; the annotated advantage of FUT over Retraining reaches 132.6× at T = 8 with 1000 orders.]

In our experiments, assuming the dataset consists of $T$ batches, we randomly select $N$ training orders from the total of $T!$ possible permutations. For each selected order, we use our method to estimate the model performance $\hat r$ and also train the LLM using that order to obtain the ground-truth performance $r$. The performance gap is then calculated as:

$$\text{AbsDiff}=\frac{1}{N}\sum_{k=1}^{N}|\hat r_k-r_k|,$$

where $k$ indexes the different training orders. We set $N=10$ to balance evaluation reliability with computational cost. To assess the scalability and robustness of our framework, we vary the number of batches $T$ across the set $\{8,16,32,64,128,256\}$. This setup allows us to evaluate how performance estimation behaves under increasing training granularity and longer optimization trajectories.

Baselines. We denote our method using the first- and second-order Taylor expansions as FUT and FUT++, respectively. The ground-truth performance obtained via actual LLM training is referred to as Retraining. Additionally, we introduce a heuristic baseline, named Random, where we first obtain all $N$ ground-truth performances $\{r_k\}_{k=1}^{N}$ and then randomly estimate the performance within the range $[\min_{k\in[1,N]} r_k,\ \max_{k\in[1,N]} r_k]$.

Results. From the results presented in Table 1, we can see that the Random baseline performs the worst, indicating that estimating LLM performance without retraining is itself a non-trivial task. Both FUT and FUT++ consistently outperform Random across all batch settings with considerable margins, demonstrating their effectiveness. This result is expected, as our methods are grounded in a rigorous derivation of the relationship between the parameters induced by different sample training orders, whereas the Random method is a simple heuristic without any theoretical guarantees. Between our methods, FUT++ performs better than FUT in more cases, suggesting that the inclusion of the second-order term in the Taylor expansion is beneficial for our problem.

In addition to performance analysis, we also compare the efficiency of our method with the Retraining strategy. (Here, we do not include the Random baseline in the efficiency comparison, as it requires retraining the LLM to obtain all performance values in advance, resulting in even higher time costs than Retraining; the time costs of FUT and FUT++ are similar, so we only choose FUT for comparison.) In practice, one often needs to explore the performance of a large number of sample orders. For example, determining the optimal curriculum requires searching through a vast candidate space of sample orders. Consequently, in this experiment, we vary $N$ over the set $\{10,50,100,1000\}$ to observe the trend as the number
of sample orders increases. (Since the first and second stages of our method are executed only once, their costs are amortized across all sample orders.) We compare different methods with various $T$'s. The results are presented in Figure 2, where the solid bars represent our method and the dashed bars represent Retraining. We observe that as the total number of orders increases, our method progressively achieves higher time efficiency per order compared to Retraining, with a maximum speedup of 132.6 times. Our method surpasses Retraining across all $T$, highlighting its significant advantage in scalability.

5.2 Evaluation on the Application of Training Curriculum Design for LLMs

Experimental Setup & Baselines. In this experiment, we evaluate whether our methods can assist in designing more effective training curricula for LLMs. Similar to the above section, we use perplexity as the evaluation metric and measure different models by varying $T$ in the range of $\{8,16,32,64,128,256\}$.
We compare our methods with the following baselines:
• Random Order (RO), which generates the curriculum by randomly shuffling the training batches.
• Sample Length (SL) [8], a difficulty-based curriculum design strategy whose difficulty score is determined by sentence length.
• Perplexity (PPL) [59], which uses the perplexity from a reference model as a proxy to evaluate sample difficulty and design the curriculum.
• Perplexity Difference (PD) [60], which measures the perplexity gap between a strong and a weak model, treating samples with larger gaps as more difficult.

We use the baseline methods and our proposed approaches (using equation (9)) to generate training curricula, and train the LLM based on them for comparison.

Table 2: Perplexity results across different batch numbers and curriculum design strategies. The parenthesized values in the last two columns are the performances estimated by FUT and FUT++.
T | RO | SL | PPL | PD | FUT (Est.) | FUT++ (Est.)
8 | 1.4414 | 1.4392 | 1.4012 | 1.4006 | 1.3996 (1.3963) | 1.3998 (1.3962)
16 | 1.4599 | 1.5291 | 1.4531 | 1.4542 | 1.4536 (1.4314) | 1.4523 (1.4307)
32 | 1.4109 | 1.4042 | 1.3966 | 1.3933 | 1.3909 (1.3823) | 1.3881 (1.3686)
64 | 1.4248 | 1.4079 | 1.4027 | 1.4071 | 1.3785 (1.3838) | 1.3804 (1.3856)
128 | 1.3838 | 1.3872 | 1.3790 | 1.3697 | 1.3412 (1.3446) | 1.3619 (1.3512)
256 | 1.3696 | 1.3766 | 1.3645 | 1.3660 | 1.3378 (1.3551) | 1.3178 (1.3460)

Results. The results are shown in Table 2. In most cases, RO performs the worst, as it lacks any problem-specific design and simply generates the training curriculum randomly. PPL and PD consistently outperform SL across different batch numbers, which is expected since both leverage perplexity as a proxy to design the curricula, aligning well with the final evaluation metric. Finally, our methods achieve superior performance compared to all baselines, demonstrating their effectiveness in designing training curricula for LLMs. Beyond this analysis, we would like to highlight another important advantage of our approach: it provides estimated performance for each curriculum. As shown in the last two columns of Table 2, these estimates closely align with the actual results. This capability enables more informed and efficient decision-making when selecting optimal training sample orders during LLM optimization.

5.3 Evaluation on the Application of LLM Memorization & Generalization Effect Analysis

Experimental Setup. In this experiment, we evaluate the memorization & generalization effects of the LLM when a sample batch is placed at different training positions.
Specifically, the number of training batches is set as 8 (i.e., $T=8$). We visualize the value of $M_{i,j}$ defined in Section 4.2 based on perplexity by setting different $(i,j)$ pairs.

Results. The results are presented in Figures 3, 4, and 5. Compared to the true memorization effect (Figure 3(c)), where we retrain the LLM to compute $M_{i,j}$, FUT and FUT++ (Figures 3(a) and (b)) can accurately estimate the model's memorization of different batches at various positions using first-order and second-order approximations, respectively. All the results reveal that the model tends to memorize batches appearing later in the training order more effectively, as indicated by lower perplexity. In contrast, earlier batches are more susceptible to catastrophic forgetting. For the generalization analysis, we divide training batches into two groups based on their similarity to the test set $D$, using the average similarity $\tau$ as a threshold. As shown in Figures 4 and 5, our method (dashed red/blue lines) closely estimates the true performance (black line) and captures the same generalization trend in most cases. In Figure 4, batches similar to the test data generalize better when placed later in training. In contrast, Figure 5 shows that dissimilar batches have little or random effect on generalization, regardless of their positions in the training sequence.

[Figure 3: Memorization effects. Heatmaps in (a) and (b) are estimated by our FUT and FUT++ methods, respectively; the heatmap in (c) represents the true memorization effect obtained by retraining. Each heatmap shows perplexity over memorized batch × training position for T = 8, with values between roughly 1.34 and 1.48.]

[Figure 4: The generalization effect of batch $B_i$ on dataset $D$, with $\mathrm{sim}(B_i,D)\ge\tau$. Perplexity vs. training position (0–7) for generalizing $B_1$–$B_4$ to $D$; each panel compares Real Values, FUT, and FUT++.]

[Figure 5: The generalization effect of batch $B_i$ on dataset $D$, with $\mathrm{sim}(B_i,D)<\tau$. Perplexity vs. training position (0–7) for generalizing $B_0$, $B_5$, $B_6$, and $B_7$ to $D$; each panel compares Real Values, FUT, and FUT++.]

6 Related Work

Training Dynamics of Language Models. Understanding training dynamics is essential for analyzing how deep models evolve during optimization [15, 42, 1]. In the context of language models, early work focused on the evolution of learned representations [45, 44] and the encoding of world knowledge [34] during pre-training. These insights have also been extended to downstream tasks such as summarization [19] and speech translation [46]. More recent studies have begun to examine the training dynamics of LLMs [43, 5, 47, 30], which are harder to analyze due to their scale. For example, [47] studies internal representation development and structural changes during training, while [5] uses models of varying sizes to study how training behavior shifts with scale. Additionally, [43] explores how learning certain examples affects the model's behavior on other inputs.

Influence Function. The influence function is a technique used to estimate the impact of each training sample on a specific test prediction [27, 2, 28]. The foundational work by [27] applies influence functions by calculating gradients and Hessian-vector products to measure the contribution of each training example to a test point. However, research in [3, 22] has shown that influence functions can be unstable and unreliable in neural networks. Additionally, computing the necessary Hessian-vector products is computationally expensive, particularly for LLMs. To address this challenge, a recent study [32] introduces a caching mechanism to estimate token-level influences in LLMs. While this method alleviates some computational difficulties, it overlooks the crucial influence of sample order in the training process, which plays a significant role in shaping the learning dynamics.

7 Conclusion

In this work, we propose a retraining-free framework for analyzing the effect of training sample order on LLMs, addressing the prohibitive cost of retraining-based approaches. By approximating the optimization dynamics of
Adam via Taylor expansion and employing random projection for efficient parameter estimation, our framework enables accurate performance prediction under arbitrary sample orders. We demonstrate the utility of this framework in two key research problems of LLMs: training curriculum design, and memorization & generalization effect analysis. Extensive experiments show that our framework faithfully approximates true model performance and provides valuable insights into both the external performance and internal learning dynamics of LLMs. Our framework offers a practical tool for understanding and optimizing the model behaviors of LLMs.

References

[1] Alessandro Achille, Matteo Rovere, and Stefano Soatto. Critical learning periods in deep networks. In International Conference on Learning Representations, 2018.
[2] Juhan Bae, Nathan Ng, Alston Lo, Marzyeh Ghassemi, and Roger B Grosse. If influence functions are the answer, then what is the question? Advances in Neural Information Processing Systems, 35:17953–17967, 2022.
[3] S Basu, P Pope, and S Feizi. Influence functions in deep learning are fragile. In International Conference on Learning Representations (ICLR), 2021.
[4] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41–48, 2009.
[5] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR, 2023.
[6] Andrei Z. Broder. On the resemblance and containment of documents. Proceedings. Compression and Complexity of SEQUENCES 1997 (Cat. No.97TB100171), pages 21–29, 1997.
[7] Mikhail Budnikov, Anna Bykova, and Ivan P Yamshchikov. Generalization potential of large language models. Neural Computing and Applications, 37(4):1973–1997, 2025.
[8] Daniel Campos. Curriculum learning for language modeling. arXiv preprint arXiv:2108.02170, 2021.
[9] Haochen Chen, Syed Fahad Sultan, Yingtao Tian, Muhao Chen, and Steven Skiena. Fast and accurate network embeddings via very sparse random projection. In Proceedings of the 28th ACM international conference on information and knowledge management, pages 399–408, 2019.
[10] Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM workshop on artificial intelligence and security, pages 15–26, 2017.
[11] Xiangyi Chen, Sijia Liu, Kaidi Xu, Xingguo Li, Xue Lin, Mingyi Hong, and David Cox. Zo-adamm: Zeroth-order adaptive momentum method for black-box optimization. Advances in neural information processing systems, 32, 2019.
[12] John C Duchi, Michael I Jordan, Martin J Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE Transactions on Information Theory, 61(5):2788–2806, 2015.
[13] Vitaly Feldman and Chiyuan Zhang. What neural networks memorize and why: Discovering the long tail via influence estimation. Advances in Neural Information Processing Systems, 33:2881–2891, 2020.
[14] Abraham D Flaxman, Adam Tauman Kalai, and H Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient.
arXiv preprint cs/0408007, 2004.
[15] Jonathan Frankle, David J Schwab, and Ari S Morcos. The early phase of
neural network training. arXiv preprint arXiv:2002.10365, 2020.
[16] Saeed Ghadimi and Guanghui Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM journal on optimization, 23(4):2341–2368, 2013.
[17] Henok Ghebrechristos and Gita Alaghband. Deep curriculum learning optimization. SN Computer Science, 1(5):245, 2020.
[18] Daniel Golovin, John Karro, Greg Kochanski, Chansoo Lee, Xingyou Song, and Qiuyi Zhang. Gradientless descent: High-dimensional zeroth-order optimization. arXiv preprint arXiv:1911.06317, 2019.
[19] Tanya Goyal, Jiacheng Xu, Junyi Jessy Li, and Greg Durrett. Training dynamics for text summarization models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2061–2073, 2022.
[20] Alex Graves, Marc G Bellemare, Jacob Menick, Remi Munos, and Koray Kavukcuoglu. Automated curriculum learning for neural networks. In International conference on machine learning, pages 1311–1320. PMLR, 2017.
[21] Liangke Gui, Tadas Baltrušaitis, and Louis-Philippe Morency. Curriculum learning for facial expression recognition. In 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017), pages 505–511. IEEE, 2017.
[22] Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. Fastif: Scalable influence functions for efficient model interpretation and debugging. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10333–10350, 2021.
[23] Guy Hacohen and Daphna Weinshall. On the power of curriculum learning in training deep networks. In International conference on machine learning, pages 2535–2544. PMLR, 2019.
[24] Yutong Hu, Quzhe Huang, Mingxu Tao, Chen Zhang, and Yansong Feng. Can perplexity reflect large language model's ability in long text understanding? ArXiv, abs/2405.06105, 2024.
[25] Sourabh Katoch, Sumit Singh Chauhan, and Vijay Kumar. A review on genetic algorithm: past, present, and future. Multimedia Tools and Applications, 80:8091–8126, 2020.
[26] Jisu Kim and Juhwan Lee. Strategic data ordering: Enhancing large language model performance through curriculum learning. ArXiv, abs/2405.07490, 2024.
[27] Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In International conference on machine learning, pages 1885–1894. PMLR, 2017.
[28] Pang Wei W Koh, Kai-Siang Ang, Hubert Teo, and Percy S Liang. On the accuracy of influence functions for measuring group effects. Advances in neural information processing systems, 32, 2019.
[29] Padmavathi Kora and Priyanka Yadlapalli. Crossover operators in genetic algorithms: A review. International Journal of Computer Applications, 162(10), 2017.
[30] Pietro Lesci, Clara Meister, Thomas Hofmann, Andreas Vlachos, and Tiago Pimentel. Causal estimation of memorisation profiles. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15616–15635, 2024.
[31] William Liang, Sam Wang, Hung-Ju Wang, Osbert Bastani, Dinesh Jayaraman, and Yecheng Jason Ma. Environment curriculum generation via large language models. In 8th Annual Conference on Robot Learning, 2024.
[32] Huawei Lin, Jikai Long, Zhaozhuo Xu, and Weijie Zhao. Token-wise influential training data retrieval for large language models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 841–860, 2024.
[33] Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred O Hero III, and Pramod K Varshney. A primer on
zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications. IEEE Signal Processing Magazine, 37(5):43–54, 2020.
[34] Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A Smith. Probing across time: What does roberta know and when? In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 820–842, 2021.
[35] Sadhika Malladi, Tianyu Gao, Eshaan Nichani, Alex Damian, Jason D Lee, Danqi Chen, and Sanjeev Arora. Fine-tuning language models with just forward passes. Advances in Neural Information Processing Systems, 36:53038–53075, 2023.
[36] Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher–student curriculum learning. IEEE transactions on neural networks and learning systems, 31(9):3732–3740, 2019.
[37] Stephen Merity, Nitish Shirish Keskar, James Bradbury, and Richard Socher. Scalable language modeling: Wikitext-103 on a single gpu in 12 hours. Proceedings of the SYSML, 18, 2018.
[38] Stephen Merity, Nitish Shirish Keskar, and Richard Socher. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240, 2018.
[39] Marwa Naïr, Kamel Yamani, Lynda Said Lhadj, and Riyadh Baghdadi. Curriculum learning for small code language models. arXiv preprint arXiv:2407.10194, 2024.
[40] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17(2):527–566, 2017.
[41] Ru Peng, Kexin Yang, Yawen Zeng, Junyang Lin, Dayiheng Liu, and Junbo Zhao. Dataman: Data manager for pre-training large language models. arXiv preprint arXiv:2502.19363, 2025.
[42] Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. Advances in neural information processing systems, 30, 2017.
[43] Yi Ren and Danica J. Sutherland. Learning dynamics of llm finetuning, 2025.
[44] Naomi Saphra. Training dynamics of neural language models. 2021.
[45] Naomi Saphra and Adam Lopez. Understanding learning dynamics of language models with svcca. arXiv preprint arXiv:1811.00225, 2018.
[46] Beatrice Savoldi, Marco Gaido, Luisa Bentivogli, Matteo Negri, and Marco Turchi. On the dynamics of gender learning in speech translation. In Proceedings of the 4th Workshop on Gender Bias in Natural Language Processing (GeBNLP), pages 94–111. Association for Computational Linguistics, 2022.
[47] Ryan Teehan, Miruna Clinciu, Oleg Serikov, Eliza Szczechla, Natasha Seelam, Shachar Mirkin, and Aaron Gokaslan. Emergent structures and training dynamics in large language models. In Proceedings of BigScience Episode #5 – Workshop on Challenges & Perspectives in Creating Large Language Models, pages 146–159, 2022.
[48] Kushal Tirumala, Aram Markosyan, Luke Zettlemoyer, and Armen Aghajanyan. Memorization without overfitting: Analyzing the training dynamics of large language models. Advances in Neural Information Processing Systems, 35:38274–38290, 2022.
[49] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[50] Radu Tudor Ionescu, Bogdan Alexe, Marius Leordeanu, Marius Popescu, Dim P Papadopoulos, and Vittorio Ferrari. How hard can it be? estimating the difficulty of visual search in an image.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2157–2166, 2016.
[51] Suresh Venkatasubramanian and Qiushi Wang.
The johnson-lindenstrauss transform: an empirical study. In 2011 Proceedings of the Thirteenth Workshop on Algorithm Engineering and Experiments (ALENEX), pages 164–173. SIAM, 2011.
[52] Xin Wang, Yuwei Zhou, Hong Chen, and Wenwu Zhu. Curriculum learning for multimedia in the era of large language models. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 11296–11297, 2024.
[53] Yining Wang, Simon Du, Sivaraman Balakrishnan, and Aarti Singh. Stochastic zeroth-order optimization in high dimensions. In International conference on artificial intelligence and statistics, pages 1356–1365. PMLR, 2018.
[54] Daphna Weinshall, Gad Cohen, and Dan Amir. Curriculum learning by transfer learning: Theory and experiments with deep networks. In International conference on machine learning, pages 5238–5246. PMLR, 2018.
[55] Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. Curriculum learning for natural language understanding. In Proceedings of the 58th annual meeting of the association for computational linguistics, pages 6095–6104, 2020.
[56] Fuzhao Xue, Yao Fu, Wangchunshu Zhou, Zangwei Zheng, and Yang You. To repeat or not to repeat: Insights from scaling llm under token-crisis, 2023.
[57] Liang Zhang, Kiran Koshy Thekumparampil, Sewoong Oh, and Niao He. Dpzero: dimension-independent and differentially private zeroth-order optimization. In International Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023, 2023.
[58] Xuan Zhang, Gaurav Kumar, Huda Khayrallah, Kenton Murray, Jeremy Gwinnup, Marianna J Martindale, Paul McNamee, Kevin Duh, and Marine Carpuat. An empirical exploration of curriculum learning for neural machine translation. arXiv preprint arXiv:1811.00739, 2018.
[59] Xuemiao Zhang, Feiyu Duan, Liangyu Xu, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, and Xunliang Cai. Frames: Boosting llms with a four-quadrant multi-stage pretraining strategy. arXiv preprint arXiv:2502.05551, 2025.
[60] Xuemiao Zhang, Liangyu Xu, Feiyu Duan, Yongwei Zhou, Sirui Wang, Rongxiang Weng, Jingang Wang, and Xunliang Cai. Preference curriculum: Llms should always be pretrained on their preferred data. arXiv preprint arXiv:2501.13126, 2025.
[61] Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D Lee, Wotao Yin, Mingyi Hong, et al. Revisiting zeroth-order optimization for memory-efficient llm fine-tuning: A benchmark. arXiv preprint arXiv:2402.11592, 2024.
[62] Ziwei Zhang, Peng Cui, Haoyang Li, Xiao Wang, and Wenwu Zhu. Billion-scale network embedding with iterative random projection. In 2018 IEEE international conference on data mining (ICDM), pages 787–796. IEEE, 2018.
[63] Deli Zhao, Jiapeng Zhu, Zhenfang Guo, and Bo Zhang. Curriculum learning for deep generative models with clustering. arXiv preprint arXiv:1906.11594, 2019.
[64] Xiaosen Zheng and Jing Jiang. An empirical study of memorization in nlp. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6265–6278, 2022.

Contents

A Technical Details
  A.1 Precomputation in Update Term Storing Stage
  A.2 Random Projection for Storing Update Terms
  A.3 Genetic
Algorithm for Training Curriculum Design in FUT Framework
B Experimental Details
  B.1 General Capability
  B.2 Training Curriculum Design for LLMs
C Additional Experimental Results
  C.1 Scalability of FUT Framework
  C.2 Batch-wise Analysis of Performance Estimation Accuracy
D Further Discussions of Related Work
  D.1 Curriculum Learning for LLMs
  D.2 Zeroth-Order Optimization
E Broader Impacts
F Limitations

A Technical Details

A.1 Precomputation in Update Term Storing Stage

Recall that the proposed FUT framework consists of three stages (Figure 1), where the update term storing stage (Stage 2) plays an important role in bridging the gap between the learning dynamics of the reference order and those of a new order. Specifically, in Stage 2 we need to compute three kinds of update terms: $\Gamma(\theta_t,B_{l_t})$ and $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ for the first-order Taylor expansion (equation (3)), and the additional $\nabla^2_\theta\Gamma(\theta_t,B_{l_t})$ for the second-order Taylor expansion (equation (7)). In the following, we describe how each of these three update terms is computed.

• First, each $\Gamma(\theta_t,B_{l_t})$ term can be computed by directly applying the checkpoint $\theta_t$ to batch $B_{l_t}$ following the updating rule of the Adam optimizer:

$$\Gamma(\theta_t,B_{l_t})=m_t/(\sqrt{v_t}+\epsilon), \tag{10}$$

where

$$m_t=\frac{\beta_1\cdot m_{t-1}+(1-\beta_1)\cdot\nabla_\theta L(B_{l_t};\theta_t)}{1-\beta_1^t},\qquad
v_t=\frac{\beta_2\cdot v_{t-1}+(1-\beta_2)\cdot\nabla_\theta L(B_{l_t};\theta_t)^2}{1-\beta_2^t}, \tag{11}$$

and the accumulative terms $m_{t-1}$ and $v_{t-1}$ are constructed from the gradient of the last step in the original training process, i.e., $\nabla_\theta L(B_{t-1};\theta_{t-1})$.

• Second, each first-order update term $\nabla_\theta\Gamma(\theta_t,B_{l_t})$ can be expanded as:

$$\nabla\Gamma(\theta_t,B_{l_t})=\frac{\partial\Gamma(\theta_t,B_{l_t})}{\partial\theta}=\frac{\frac{\partial m_t}{\partial\theta}(\sqrt{v_t}+\epsilon)-\frac{\partial\sqrt{v_t}}{\partial\theta}\,m_t}{(\sqrt{v_t}+\epsilon)^2}, \tag{12}$$

where

$$\frac{\partial m_t}{\partial\theta}=\frac{\beta_1\cdot\frac{\partial m_{t-1}}{\partial\theta}+(1-\beta_1)\cdot\nabla^2_\theta L(B_{l_t};\theta_t)}{1-\beta_1^t},\qquad
\frac{\partial\sqrt{v_t}}{\partial\theta}=\frac{\beta_2\cdot\frac{\partial v_{t-1}}{\partial\theta}+2(1-\beta_2)\cdot\nabla_\theta L(B_{l_t};\theta_t)\cdot\nabla^2_\theta L(B_{l_t};\theta_t)}{2(1-\beta_2^t)\sqrt{v_t}}. \tag{13}$$

To compute the second-order gradient $\nabla^2_\theta L(B_{l_t};\theta_t)$, a straightforward approach is to apply the backward operator to $L(B_{l_t};\theta_t)$ twice. However, this requires computing the Hessian matrix of the parameters, which is prohibitively expensive, especially for LLMs with a large number of parameters. To address this limitation, we approximate the second-order gradient using $(\nabla_\theta L(B_{l_t};\theta_t)-\nabla_\theta L(B_{l_t};\theta_{t-1}))/(\theta_t-\theta_{t-1})$, where $\theta_{t-1}$ denotes the parameter at step $t-1$ in the original training process. This approximation is justified by the limited variation in parameter updates between adjacent training steps.

• At last, each $\nabla^2\Gamma(\theta_t,B_{l_t})$ term can also be expanded as:

$$\nabla^2\Gamma(\theta_t,B_{l_t})=\frac{\partial^2\Gamma(\theta_t,B_{l_t})}{\partial\theta^2}=\frac{1}{(\sqrt{v_t}+\epsilon)^2}\left[\frac{\partial^2 m_t}{\partial\theta^2}(\sqrt{v_t}+\epsilon)-\frac{\partial^2\sqrt{v_t}}{\partial\theta^2}\,m_t-2\,\frac{\partial\sqrt{v_t}}{\partial\theta}\frac{\partial m_t}{\partial\theta}+2\left(\frac{\partial\sqrt{v_t}}{\partial\theta}\right)^2\frac{m_t}{\sqrt{v_t}+\epsilon}\right], \tag{14}$$

where

$$\frac{\partial^2 m_t}{\partial\theta^2}=\frac{\beta_1\cdot\frac{\partial^2 m_{t-1}}{\partial\theta^2}+(1-\beta_1)\cdot\nabla^3_\theta L(B_{l_t};\theta_t)}{1-\beta_1^t},$$

$$\frac{\partial^2\sqrt{v_t}}{\partial\theta^2}=\frac{\beta_2\cdot\frac{\partial^2 v_{t-1}}{\partial\theta^2}+2(1-\beta_2)\left(\nabla^2_\theta L(B_{l_t};\theta_t)\cdot\nabla^2_\theta L(B_{l_t};\theta_t)+\nabla_\theta L(B_{l_t};\theta_t)\cdot\nabla^3_\theta L(B_{l_t};\theta_t)\right)}{2(1-\beta_2^t)\sqrt{v_t}}-\frac{\left(\beta_2\cdot\frac{\partial v_{t-1}}{\partial\theta}+2(1-\beta_2)\cdot\nabla_\theta L(B_{l_t};\theta_t)\cdot\nabla^2_\theta L(B_{l_t};\theta_t)\right)\cdot\frac{\partial v_t}{\partial\theta}}{4(1-\beta_2^t)(v_t)^{3/2}}. \tag{15}$$

Similarly, to compute the third-order gradient $\nabla^3_\theta L(B_{l_t};\theta_t)$, we approximate it with $(\nabla^2_\theta L(B_{l_t};\theta_t)-\nabla^2_\theta L(B_{l_t};\theta_{t-1}))/(\theta_t-\theta_{t-1})$.

By computing these update terms for each $(\theta_t,B_{l_t})$ pair, we can access all the update terms we may need in the Estimation Stage (Stage 3 in Figure 1). That is, given an arbitrary permuted order different from the reference one, we can recursively apply the first-order Taylor expansion in equation (3) or the second-order Taylor expansion in equation (7) to obtain the new model parameters.
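In code, this finite-difference approximation is a single element-wise operation over adjacent reference checkpoints; the sketch below is illustrative, and the eps guard against a zero parameter change is our own implementation assumption, not taken from the paper.

```python
def finite_diff_higher_order(grad_t, grad_prev, theta_t, theta_prev, eps=1e-12):
    """Approximate the next-order gradient element-wise, e.g.
    grad^2 L(B_{l_t}; theta_t) ~ (grad L(B_{l_t}; theta_t) -
    grad L(B_{l_t}; theta_{t-1})) / (theta_t - theta_{t-1}).

    Both gradients are taken on the same batch B_{l_t} at the adjacent
    reference checkpoints theta_t and theta_{t-1} (as NumPy arrays);
    the same call with second-order gradients approximates grad^3 L.
    eps avoids division by a zero parameter update.
    """
    return (grad_t - grad_prev) / (theta_t - theta_prev + eps)
```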
At last, each second-order term ∇²Γ(θ_t, B_{l_t}) can likewise be expanded as:

∇²Γ(θ_t, B_{l_t}) = ∂²Γ(θ_t, B_{l_t})/∂θ²
  = [ (∂²m_t/∂θ²)(√v_t + ε) − (∂²√v_t/∂θ²) m_t − 2 (∂√v_t/∂θ)(∂m_t/∂θ) + 2 (∂√v_t/∂θ)² m_t / (√v_t + ε) ] / (√v_t + ε)²,   (14)

where, writing L for L(B_{l_t}; θ_t),

∂²m_t/∂θ² = ( β_1 · ∂²m_{t−1}/∂θ² + (1 − β_1) · ∇³_θ L ) / (1 − β_1^t),
∂²√v_t/∂θ² = ( β_2 · ∂²v_{t−1}/∂θ² + 2(1 − β_2)( ∇²_θ L · ∇²_θ L + ∇_θ L · ∇³_θ L ) ) / ( 2(1 − β_2^t) √v_t )
           − ( β_2 · ∂v_{t−1}/∂θ + 2(1 − β_2) · ∇_θ L · ∇²_θ L ) · (∂v_t/∂θ) / ( 4(1 − β_2^t) v_t^{3/2} ).   (15)

Similarly, the third-order gradient ∇³_θ L(B_{l_t}; θ_t) is approximated by the finite difference ( ∇²_θ L(B_{l_t}; θ_t) − ∇²_θ L(B_{l_t}; θ_{t−1}) ) / ( θ_t − θ_{t−1} ).

By computing these update terms for each (θ_t, B_{l_t}) pair, we obtain every update term needed in the Estimating Stage (Stage 3 in Figure 1). That is, given an arbitrary permuted order that differs from the reference one, we can recursively execute the first-order Taylor expansion in equation (3) or the second-order Taylor expansion in equation (7) to obtain the new model parameters.

A.2 Random Projection for Storing Update Terms

The update terms Γ(θ_t, B_{l_t}), ∇_θ Γ(θ_t, B_{l_t}), and ∇²_θ Γ(θ_t, B_{l_t}) are essential to our FUT framework. However, in large-scale neural networks such as LLMs, these terms typically have dimensionality comparable to that of the model parameters, making direct precomputation and storage for every pair (θ_t, B_{l_t}) prohibitively expensive in terms of memory. To mitigate this issue, we adopt a random projection strategy based on the Johnson–Lindenstrauss (JL) theorem [51], following the well-established compression techniques in [32]. The JL theorem guarantees that high-dimensional vectors can be embedded into a significantly lower-dimensional space with bounded distortion of pairwise distances, which aligns well with our goal of efficiently storing approximate versions of gradient-related terms.

Theorem 1 (Johnson–Lindenstrauss Theorem). Let 0 < ε < 1 and let X = {x_1, x_2, ..., x_n} ⊂ R^d be a set of n vectors. Then there exists a linear mapping f: R^d → R^k, where k = O(ε^{−2} log n), such that for all x_i, x_j ∈ X,

(1 − ε) ‖x_i − x_j‖²_2 ≤ ‖f(x_i) − f(x_j)‖²_2 ≤ (1 + ε) ‖x_i − x_j‖²_2.

In our setting, we apply the JL projection to compress each update matrix prior to storage. Formally, for any matrix M ∈ R^{d_1 × d_2}, where M may represent Γ(θ_t, B_{l_t}), ∇_θ Γ(θ_t, B_{l_t}), or ∇²_θ Γ(θ_t, B_{l_t}), we generate a random projection matrix A ∈ R^{d_2 × k} whose entries are sampled i.i.d. from a Gaussian distribution, A_ij ~ N(0, 1/k). The compressed representation of M is then M' = MA ∈ R^{d_1 × k}. This projection reduces the space complexity from O(d_1 d_2) to O(d_1 k) while approximately preserving the geometric structure of the original matrix rows. To recover these terms when estimating the parameters under a new batch order, an approximate reconstruction can be obtained using the Moore–Penrose pseudoinverse A⁺ ∈ R^{k × d_2}: M̃ = M'A⁺ ≈ M.

In practice, the target dimension k is selected based on the number of rows d_1 in M, which corresponds to the number of vectors n in Theorem 1. To balance accuracy and memory usage, we empirically choose k ∈ {300, 200, 160, 80, 20, 8} depending on the layer size and update type; a brief sketch of the scheme follows.
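The following is a minimal numpy sketch of this compression scheme; the function names and the `rng` argument are our own, and `np.linalg.pinv` stands in for the Moore–Penrose pseudoinverse A⁺:

```python
import numpy as np

def jl_compress(M, k, rng):
    # Gaussian JL projection: A_ij ~ N(0, 1/k), so std = 1/sqrt(k);
    # the sketch M' = M A has shape (d1, k).
    A = rng.normal(0.0, 1.0 / np.sqrt(k), size=(M.shape[1], k))
    return M @ A, A

def jl_reconstruct(M_proj, A):
    # Approximate recovery: M~ = M' A^+ (close to M for suitable k).
    return M_proj @ np.linalg.pinv(A)

# Example: compress a 512 x 2048 update term to 512 x 160 and reconstruct.
rng = np.random.default_rng(0)
M = rng.standard_normal((512, 2048))
M_proj, A = jl_compress(M, k=160, rng=rng)
M_rec = jl_reconstruct(M_proj, A)
```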
A.3 Genetic Algorithm for Training Curriculum Design in FUT Framework

Recall that the objective in equation (9), i.e., π* := argmax_{π ∈ Π} R(γ^π_T, D_val), is to find the optimal permutation π* that yields the best validation performance, where γ^π_T denotes the final model parameters estimated by the FUT framework. However, equation (9) is inherently non-differentiable, which prevents gradient-based search for the optimal curriculum. To address this, we design an optimization algorithm based on the Genetic Algorithm (GA) [25]. GA is a well-established metaheuristic inspired by Darwinian evolution, which iteratively evolves a population of candidate solutions according to the principle of survival of the fittest. In our context, each candidate represents a specific sample order π, and the fitness of each individual is the model's performance r_π = R(γ^π_T, D_val). By leveraging crossover, mutation, and selection operators, GA lets us efficiently explore the exponentially large permutation space without exhaustive enumeration. The detailed design of our GA-based search strategy is as follows:

1. Population Initialization: Randomly select N sample orders POP = {π_i}_{i=1}^N from S_T as the initial population, where S_T = {π | π is a permutation of {1, ..., T}}, with |S_T| = T!.
2. Fitness Selection: For each π_i ∈ POP, evaluate the model performance R(γ^{π_i}_T, D_val) as its fitness, where γ^{π_i}_T is estimated via the FUT method. Retain the top 50% of individuals with the highest fitness scores for reproduction and discard the rest.
3. Crossover: Generate new children by applying partially matched crossover (PMX) [29] to randomly selected parent pairs π^a and π^b from the surviving population. Specifically, randomly choose two crossover points l and r such that 1 ≤ l < r ≤ T, then exchange the subsequences π^a_{l:r} and π^b_{l:r} between the parents. Finally, resolve conflicts using the mapping induced by the swapped segments to produce a valid permutation child π^c = PMX(π^a, π^b, l, r) ∈ S_T.
4. Mutation: With a predefined mutation probability p_m, randomly select two indices i and j in π^c and swap their values: π^c ← π^c_{i↔j}. This operation introduces diversity and prevents premature convergence.
5. Replacement: Insert the newly generated children into the population, replacing the discarded individuals. The updated population then forms the basis for the next generation.

Algorithm 2: Genetic Algorithm for Finding Optimal Training Curriculum
Require: Validation set D_val, number of batches T, population size N, number of generations K, mutation probability p_m
Ensure: Optimal sample order π^{GA*}
 1: Initialize permutation space S_T = {π | π is a permutation of {1, ..., T}}
 2: Randomly sample N permutations as initial population: POP = {π_i}_{i=1}^N ⊂ S_T
 3: for k = 1 to K do
 4:   for all π_i ∈ POP do
 5:     Compute γ^{π_i}_T using FUT with sample order π_i
 6:     Evaluate fitness r_i = R(γ^{π_i}_T, D_val)
 7:   end for
 8:   Retain the top 50% of individuals with the highest fitness to form POP_survive
 9:   while size of new children < N/2 do
10:     Randomly select two parents π^a, π^b from POP_survive
11:     Randomly choose crossover points l, r such that 1 ≤ l < r ≤ T
12:     Generate child π^c = PMX(π^a, π^b, l, r)
13:     if random() < p_m then
14:       Randomly select positions i, j and swap π^c_i and π^c_j
15:     end if
16:     Add π^c to new children
17:   end while
18:   Replace discarded individuals in POP with new children
19: end for
20: return π^{GA*} = argmax_{π ∈ POP} R(γ^π_T, D_val)

By iteratively performing steps 2-5 for a fixed number of generations K, or until a convergence criterion is met (e.g., no improvement in validation performance over several generations), the algorithm returns the sample order π^{GA*} with the highest validation performance. This GA-based optimization therefore reduces the search complexity of FUT from O(T!) to O(K·N), significantly accelerating the search for the optimal sample order.
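A compact sketch of Algorithm 2 follows, using 0-based batch indices; PMX is implemented in its standard mapping-chase form, and `fitness` is a stand-in for evaluating R(γ^π_T, D_val) via FUT:

```python
import random

def pmx(pa, pb, l, r):
    # Partially matched crossover: copy pa[l:r] into the child, fill the other
    # positions from pb, and resolve duplicates through the segment mapping.
    child = [None] * len(pa)
    child[l:r] = pa[l:r]
    segment = set(pa[l:r])
    pos_in_a = {v: i for i, v in enumerate(pa)}
    for i in [*range(l), *range(r, len(pa))]:
        v = pb[i]
        while v in segment:              # chase the mapping until conflict-free
            v = pb[pos_in_a[v]]
        child[i] = v
    return child

def ga_search(T, N, K, pm, fitness):
    # Steps 1-5 over K generations (assumes N >= 4).
    pop = [random.sample(range(T), T) for _ in range(N)]
    for _ in range(K):
        pop.sort(key=fitness, reverse=True)       # fitness selection
        survivors = pop[: N // 2]
        children = []
        while len(children) < N - len(survivors):
            pa, pb = random.sample(survivors, 2)  # pick two parents
            l, r = sorted(random.sample(range(T), 2))
            child = pmx(pa, pb, l, r)
            if random.random() < pm:              # mutation: swap two slots
                i, j = random.sample(range(T), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children                # replacement
    return max(pop, key=fitness)
```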
B Experimental Details

B.1 General Capability

In this section, we provide further details on the experiments testing the general capability of our FUT framework in Section 5.1.

B.1.1 Base Model

We conduct all of our experiments on a language model that follows the LLaMA architecture [49], but with a reduced number of parameters: a hidden size of 2048 and 10 stacked transformer layers, resulting in approximately 636 million parameters. We choose this relatively small model to enable repeated training under varying experimental conditions, which is essential for rigorously evaluating the effectiveness of our proposed FUT framework in both training curriculum design and the analysis of memorization and generalization behaviors. In contrast, training large-scale models typically takes tens or even hundreds of days, making such extensive experimentation prohibitively time-consuming and computationally expensive.

B.1.2 Dataset

WikiText-103 [37] is a widely used benchmark dataset for evaluating language models, particularly for long-range dependency modeling. It consists of over 100 million tokens extracted from high-quality Wikipedia articles, specifically curated to preserve coherent paragraph- and document-level structure. Unlike other common datasets that contain shuffled or sentence-level data, WikiText-103 maintains the original article formatting and ordering, enabling models to better learn contextual and discourse-level information. The vocabulary is large and diverse, making it a challenging and realistic corpus for testing the generalization and memorization capabilities of large-scale language models.

To preprocess the WikiText-103 dataset, we first remove short texts with fewer than five characters to eliminate noise. Then, we apply MinHash-based deduplication [6] to efficiently identify and discard near-duplicate samples: each text is tokenized into a set of words, a MinHash signature is computed using 128 permutations, texts with identical MinHash digests are considered duplicates, and only one representative is retained. This process effectively reduces redundancy while preserving semantically diverse content; a sketch of the procedure follows.
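A self-contained sketch of the preprocessing is shown below. It approximates the 128 hash permutations with salted SHA-1 hashes; the original pipeline presumably used a MinHash library, so treat the helper names and details as illustrative:

```python
import hashlib

def minhash_signature(text, num_perm=128):
    # One minimum per salted hash function over the text's word set.
    tokens = set(text.split()) or {text}
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(4, "little")
        sig.append(min(
            int.from_bytes(hashlib.sha1(salt + tok.encode()).digest()[:8], "big")
            for tok in tokens))
    return tuple(sig)

def deduplicate(texts, min_chars=5):
    # Drop very short texts, then keep one representative per identical digest.
    seen, kept = set(), []
    for t in texts:
        if len(t) < min_chars:
            continue
        sig = minhash_signature(t)
        if sig not in seen:
            seen.add(sig)
            kept.append(t)
    return kept
```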
B.1.3 Training and Evaluation Protocols

Training Protocol. For the preprocessed WikiText dataset, we split the data into 80%, 10%, and 10% for training, validation, and testing, respectively. The learning rate is selected from the range [0.0001, 0.005] based on validation performance, and the number of batches is chosen from {8, 16, 32, 64, 128, 256}. Since very large batch sizes cannot be processed directly due to memory constraints, we apply gradient accumulation over multiple smaller mini-batches to effectively simulate the desired larger batch size. For the Adam optimizer, we fix the hyperparameters β_1 and β_2 to 0.9 and 0.95, respectively. Within our FUT framework, to stabilize parameter estimation and mitigate the influence of outliers, we apply parameter clipping: parameters are constrained within a tunable range, with the clipping threshold selected from the interval [−1.1, −0.3] ∪ [0.3, 1.1] to ensure numerical stability and prevent extreme values from dominating the update dynamics. The experiments were conducted on a computing platform equipped with 4 NVIDIA A800-SXM GPUs, each providing 80GB of memory.

Evaluation Protocol. We adopt Perplexity (PPL) [24] as the evaluation metric to assess language modeling performance. Given a token sequence x = (x_1, x_2, ..., x_N), the perplexity is defined as:

PPL(x) = P(x_1, ..., x_N)^{−1/N} = ( ∏_{t=1}^{N} P(x_t | x_{<t}) )^{−1/N} = exp( −(1/N) ∑_{t=1}^{N} log P(x_t | x_{<t}) ).   (16)

This is equivalent to the exponential of the average cross-entropy loss. Thus, for a given validation set D_val and final model parameters θ_T, we compute:

PPL(D_val) = exp( L(D_val; θ_T) ),   (17)

where L(D_val; θ_T) denotes the average cross-entropy loss over the validation set.
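In code, equation (16) reduces to a one-liner over the model's per-token log-probabilities; the helper below is our own illustration:

```python
import math

def perplexity(token_logprobs):
    # PPL(x) = exp(-(1/N) * sum_t log P(x_t | x_<t)), equation (16).
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# e.g. three tokens with log-probs -0.1, -2.3, -0.7 give PPL of about 2.81
print(perplexity([-0.1, -2.3, -0.7]))
```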
B.2 Training Curriculum Design for LLMs

B.2.1 Baselines

Although curriculum learning largely depends on human heuristics or empirical findings, many works have made efforts to design a rational curriculum for LLMs, based primarily on either dataset characteristics [8] or quantitative criteria perceptible to the model [59, 60]. In this section, we describe all baselines used in training curriculum design. For clarity, we define ρ_{B_i} as the difficulty score for batch B_i; a sketch of the three non-trivial scores is given after the list.

• Random Order (RO). RO is a naive baseline that randomly assigns each batch B_i a difficulty score ρ_{B_i} in the range [0, 1].
• Sample Length (SL) [8]. SL is a purely statistical method based on the intuition that longer sentences are inherently more difficult to model, since they require tracking more dependencies. The difficulty score of each batch B_i is therefore the total number of tokens in the batch, ρ_{B_i} = ∑_{x ∈ B_i} |x|, where |x| denotes the length of sample x.
• Perplexity (PPL) [59]. The PPL metric closely aligns with the self-supervised learning objective of LLMs and effectively measures model-data fit, making it appropriate for data organization. Recent studies [59] empirically show that training on high-PPL data followed by low-PPL data can significantly reduce loss and boost performance. Following this finding, we introduce a reference model M_ref with parameters θ_R and compute the PPL of each batch as its difficulty score, i.e., ρ_{B_i} = −R(θ_R, B_i).
• Perplexity Difference (PD) [60]. Building on the idea in [60], the PD between strong and weak models can indicate how difficult a batch is: a low PD implies that both models learn the batch with similar efficiency, while a high PD suggests the batch is substantially harder for the weaker model. Consider two reference models, M_str and M_weak, with parameters θ_S and θ_W, respectively, both trained on the same dataset. In practice, we train M_str with 636 million parameters and M_weak with 167 million parameters, and use their perplexity differences to guide batch rescheduling. For each batch B_i, we define PD as the difficulty score ρ_{B_i} = ( R(θ_W, B_i) − R(θ_S, B_i) ) / R(θ_W, B_i).
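For concreteness, the three non-trivial difficulty scores can be sketched as follows; the `reward_*` callables standing in for R(θ, B_i) are hypothetical, since the paper does not fix an interface:

```python
def score_sample_length(batch):
    # SL: rho_Bi = total token count of the batch.
    return sum(len(x) for x in batch)

def score_ppl(batch, reward_ref):
    # PPL: rho_Bi = -R(theta_R, B_i) under the reference model.
    return -reward_ref(batch)

def score_ppl_diff(batch, reward_strong, reward_weak):
    # PD: rho_Bi = (R(theta_W, B_i) - R(theta_S, B_i)) / R(theta_W, B_i).
    r_w, r_s = reward_weak(batch), reward_strong(batch)
    return (r_w - r_s) / r_w
```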
B.2.2 Genetic Algorithm Configuration

To effectively search for the optimal sample order within the exponentially large permutation space, we employ a Genetic Algorithm (GA) tailored to our FUT framework. The key design choices focus on maintaining a balance between exploration and exploitation: a moderately sized population ensures sufficient diversity, while elitist selection preserves high-quality solutions across generations. The complete set of hyperparameters and their configurations is summarized in Table 3.

Table 3: Genetic Algorithm hyperparameters used in our framework
Hyperparameter        Notation  Description                           Scope
Population size       N         Number of candidates per generation   {16, 12, 8, 4, 2}
Max generations       K         Total evolution rounds                {16, 12, 8, 4, 2, 1}
Number of batches     T         Total number of batches               {256, 128, 64, 32, 16, 8}
Crossover points      l, r      Random crossover segment indices      1 ≤ l < r ≤ T
Mutation probability  p_m       Swap probability per child            0.1
Selection rate        --        Top individuals retained              50%

Figure 6: Scalable estimation performance across model sizes. We evaluate the estimation accuracy of FUT and FUT++ across model scales {0.8B, 1.0B, 1.2B, 1.4B} under training batch numbers T = 8 (left) and T = 16 (right). FUT and FUT++ consistently outperform the Random baseline, with FUT++ showing improved accuracy for larger models.

Figure 7: Perplexity estimation at intermediate training steps. We visualize the validation perplexity estimated by FUT and FUT++ against the real validation perplexity after each batch, for training schedules with T ∈ {8, 16, 32} total batches. FUT and FUT++ both closely follow the true performance trends, with FUT++ consistently providing more accurate estimates, especially when T is larger. These results demonstrate the effectiveness of our methods in tracking training progress in a fine-grained manner.

C Additional Experimental Results

C.1 Scalability of FUT Framework

Experimental Setup. In this section, we conduct additional experiments to evaluate whether our proposed FUT framework remains effective in estimating model performance as the base model size increases. Specifically, we scale the original 0.6B model to {0.8B, 1.0B, 1.2B, 1.4B}. In these experiments, the number of training batches is set to T = 8 and T = 16. We adopt perplexity as the evaluation metric and measure the performance gap between the true values and the estimates produced by our FUT framework.

Results. The results are illustrated in Figure 6. Across both batch settings (T = 8 and T = 16), our proposed FUT and FUT++ methods consistently outperform the Random baseline in estimating model performance, achieving smaller gaps to the ground truth. This trend holds as we scale the base model from 0.8B to 1.4B, validating the scalability of our framework. Importantly, we observe that FUT++, which incorporates second-order information, yields even more accurate estimates than the original FUT, particularly for larger models. This suggests that higher-order approximations are more effective at capturing complex parameter updates in large-scale language models. The Random baseline, by contrast, lacks theoretical grounding and exhibits less consistent performance as model size grows.

C.2 Batch-wise Analysis of Performance Estimation Accuracy

Experimental Setup. In this section, we conduct a fine-grained evaluation of our FUT framework by comparing estimated and true model performance at intermediate stages of training. Specifically, we consider batch numbers T ∈ {8, 16, 32} and evaluate performance after each training batch.
For each time step 1 ≤ t ≤ T, we replace the final-step comparison between R(γ^π_T, D_val) and R(θ^π_T, D_val) with the intermediate-step comparison between R(γ^π_t, D_val) and R(θ^π_t, D_val). We use perplexity on the validation set D_val as the evaluation metric to assess how well the FUT-estimated parameters align with those obtained from actual training at each step.
Results. As shown in Figure 7, both FUT and FUT++ produce accurate perplexity estimates across different training stages. While FUT performs well in general, FUT++ shows higher fidelity, especially as the number of batches increases. This is most evident in the T = 32 case, where FUT++ remains close to the true perplexity throughout, whereas FUT deviates slightly in later stages. These findings affirm the utility of incorporating higher-order dynamics in FUT++ and highlight the robustness of our framework for real-time model monitoring, dynamic training adaptation, and early stopping decisions.

In addition, we observe that in certain training stages, particularly under small batch sizes or early steps, the estimated perplexity remains unchanged over multiple steps, forming plateau-like segments. This phenomenon arises from the Taylor-based approximation mechanism in our framework: when the update gradients are small (e.g., due to flat regions in the loss landscape), the computed updates become negligible, so FUT and FUT++ produce nearly identical estimates across consecutive steps.

D Further Discussions of Related Work

D.1 Curriculum Learning for LLMs

Curriculum learning is a training paradigm that organizes training data in an easy-to-hard manner to facilitate more effective learning [4, 20, 23, 55]. In deep learning tasks, sample difficulty is typically defined using either surface-level heuristics or model-based metrics [36, 23, 21, 17, 54]. For instance, in sequence modeling, easier examples are often shorter or contain more frequent tokens [58]. In generative modeling, difficulty can be measured by how well a sample aligns with human cognitive expectations or by its deviation from the center of the data distribution [50, 63]. In the context of LLMs, several empirical studies have explored strategies to score training samples [39, 31, 52, 36, 8, 59, 60]. Specifically, [8] reorders samples by sequence length to progressively improve the model's ability to capture long-range dependencies, while other researchers [59, 60] propose curriculum schemes guided by model-based metrics such as perplexity and perplexity difference, motivated by their empirical observations. In contrast to conventional curriculum learning approaches that depend on human-designed heuristics for determining sample order, our proposed FUT framework offers an efficient and reliable means of estimating final performance across arbitrary curricula. This allows practitioners to make well-informed decisions among diverse curriculum strategies without incurring the cost of repeated retraining.

D.2 Zeroth-Order Optimization

Zeroth-order (ZO) optimization refers to a class of derivative-free methods that estimate gradients using only function evaluations, making them suitable for black-box or simulation-based scenarios where gradients are inaccessible or costly [14, 16, 40, 12, 53, 10, 33]. Classical approaches include finite-difference methods [14], random gradient estimators [40, 12], and ZO-SGD [16]. Recently, ZO has been applied to LLM fine-tuning to reduce the memory burden of back-propagation. Notably, MeZO [35] introduced a forward-only ZO-SGD variant, while Zhang et al. [61] benchmarked and extended ZO techniques, such as ZO-Adam [11] and block-wise estimation, for scalable LLM fine-tuning.
However, applying ZO to pre-training remains impractical due to the extreme dimensionality of LLMs, the high variance of the estimators, and the computational overhead of repeated forward passes [57, 18, 53]. Moreover, most ZO methods rely on dynamic random perturbations, limiting the reproducibility and reuse of results.
In contrast, our FUT framework is a performance estimation tool rather than an optimizer: it precomputes all necessary update terms using Taylor expansions, enabling efficient, deterministic evaluation of arbitrary curricula without retraining. This makes FUT well suited to analyzing training dynamics and guiding curriculum design.

E Broader Impacts

With the rapid advancement of LLMs, not only have their language understanding and reasoning abilities improved, but their parameter counts have also grown significantly. As a result, training LLMs has become increasingly time-consuming and computationally expensive. In this paper, we propose a retrain-free framework, FUT, which accurately estimates model performance using Taylor expansion. This has several important practical implications.

First, FUT enables researchers to study the effect of training sample order on LLM performance without repeated retraining, including downstream applications such as memorization and generalization analysis. The performance estimates associated with different sample orders provide valuable insights into both internal learning dynamics and external behavior.

Second, the FUT framework can serve as a tool for efficient performance evaluation and training analysis in large-scale model development pipelines. For example, developers can leverage FUT to screen and prioritize data curricula, identify critical samples, or detect unstable training configurations before committing to full-scale training. As LLMs continue to scale, such cost-effective analysis tools will be increasingly essential for accelerating research while reducing resource consumption.

F Limitations

While our proposed retraining-free framework (FUT) provides a computationally efficient and theoretically grounded method for estimating the effects of sample order in LLMs, several limitations should be acknowledged.

1. The accuracy of our estimates relies on the validity of the Taylor expansions, particularly when higher-order nonlinearities dominate the optimization dynamics; in such scenarios our first- and second-order approximations may fall short.
2. Although the use of random projection significantly reduces memory overhead, it may introduce approximation noise, especially for models with extremely large parameter spaces.
3. We evaluate the effectiveness of our FUT framework solely on perplexity, because downstream natural language understanding and reasoning tasks typically require large-scale models, which are infeasible to retrain repeatedly under varying conditions. Further validation is therefore needed to assess the generalizability of our framework on these more complex tasks.
|
https://arxiv.org/abs/2505.22042v1
|
Preprint. Under review.

Reinforced Reasoning for Embodied Planning

Di Wu (Tongji University, diwu7012@gmail.com), Jiaxin Fan* (Tongji University, 2253538@tongji.edu.cn), Junzhe Zang* (Tongji University, 2250724@tongji.edu.cn), Guanbo Wang (Tsinghua University, wanggb23@mails.tsinghua.edu.cn), Wei Yin (Bank of Communications, yinw_8@bankcomm.com), Wenhao Li† (Tongji University, whli@tongji.edu.cn), Bo Jin† (Tongji University, bjin@tongji.edu.cn)
*Equal contribution. †Corresponding authors.

Abstract

Embodied planning requires agents to make coherent multi-step decisions based on dynamic visual observations and natural language goals. While recent vision-language models (VLMs) excel at static perception tasks, they struggle with the temporal reasoning, spatial understanding, and commonsense grounding needed for planning in interactive environments. In this work, we introduce a reinforcement fine-tuning framework that brings R1-style reasoning enhancement into embodied planning. We first distill a high-quality dataset from a powerful closed-source model and perform supervised fine-tuning (SFT) to equip the model with structured decision-making priors. We then design a rule-based reward function tailored to multi-step action quality and optimize the policy via Group Relative Policy Optimization (GRPO). Our approach is evaluated on Embench, a recent benchmark for interactive embodied tasks, covering both in-domain and out-of-domain scenarios. Experimental results show that our method significantly outperforms models of similar or larger scale, including GPT-4o-mini and 70B+ open-source baselines, and exhibits strong generalization to unseen environments. This work highlights the potential of reinforcement-driven reasoning to advance long-horizon planning in embodied AI.

1 Introduction

Embodied planning serves as a cornerstone of hierarchical embodied AI systems [38, 57], where intelligent agents must not only perceive their environment but also reason and act within it to accomplish complex, real-world tasks [11]. Unlike low-level controllers that govern precise trajectory execution [56, 21], high-level planning is responsible for formulating coherent action sequences that translate complex instructions into manageable sub-tasks [51]. While conventional language-based reasoning is confined to static, text-driven contexts [24, 55, 36], embodied planning operates within dynamic, interactive environments that demand sequential decision-making across multiple steps. Although recent advances in Vision-Language Models (VLMs) have demonstrated impressive capabilities on static understanding tasks [58], these models exhibit substantial limitations when applied to multi-step interactive embodied planning. The empirical analysis in Figure 1 reveals that even state-of-the-art VLMs, which excel at image captioning or visual question answering, struggle to maintain coherent and efficient decision sequences in dynamic environments [54]. These shortcomings highlight a critical gap: effective planning in real-world embodied contexts imposes far greater demands on spatial reasoning, temporal consistency, and commonsense understanding than current VLM architectures can satisfy.
Figure 1: Failure case and error breakdown of GPT-4o-mini in the EB-ALFRED environment. Left: a representative task failure (task: "Rinse off a ladle and move it to the table"; the generated plan finds a dining table, picks up the ladle, finds a sink, turns the faucet on and off, and finds a countertop, but omits the "put down Ladle" step). Right: distribution of failure types (planning errors, wrong termination decisions, reasoning errors) across EB-ALFRED tasks; GPT-4o-mini's success rate is only 22%.

To address these reasoning deficiencies, recent research has explored enhancing large models' cognitive abilities through dedicated reasoning frameworks [32]. Notably, approaches such as DeepSeek-R1 [14] have pioneered reinforcement-driven
paradigms that explicitly strengthen a model's reasoning capacity via reward-guided optimization and have achieved promising results on math and code problems. Extensions of this paradigm into multimodal contexts have begun to emerge [47], tackling tasks such as visual mathematics and diagram-based reasoning [59, 37, 27, 25]. However, applying such reasoning-enhancement techniques to embodied planning remains highly challenging and underexplored because of fundamental differences between embodied tasks and conventional reasoning benchmarks: (1) embodied planning requires spatial perception and physical commonsense [26], whereas tasks like math or code focus purely on symbolic reasoning without grounding in dynamic environments; (2) the transition from static, single-turn QA to interactive, multi-turn decision-making [49] introduces continuous feedback loops: unlike static tasks, embodied agents must adaptively reason as each action reshapes their environment; and (3) embodied planning lacks unique ground-truth trajectories, in contrast to deterministic domains, since multiple valid solutions exist for a single goal, complicating reward design and supervision.

In this work, we bridge this gap by proposing a reinforcement fine-tuning framework that brings R1-style reasoning enhancement into embodied planning, enabling models to make more coherent and context-aware decisions in dynamic, interactive environments. We propose a rule-based reward function specifically designed for multi-step decisions and optimize the model using Group Relative Policy Optimization (GRPO) [36] to encourage long-horizon, goal-directed reasoning. Prior to reinforcement learning, we distill response patterns from a large closed-source model to construct a high-quality training corpus and perform supervised fine-tuning (SFT) [31], equipping the model with rich commonsense priors and structured reasoning habits as a foundation for downstream optimization. Recognizing the discrepancy between simplistic text-based simulations and the complexities of real-world physics, we conduct evaluations within Embench [54], an interactive embodied benchmark that faithfully captures environmental dynamics and agent-environment feedback loops. Experimental results demonstrate that our method significantly improves planning performance, yielding more efficient and context-aware action sequences. Moreover, our reinforcement-driven fine-tuning exhibits strong generalization across unseen tasks and environments, underscoring its potential for practical deployment in real-world embodied AI applications. In summary, our contributions are as follows:

• We are the first to apply reinforcement fine-tuning to optimize a vision-language model for embodied planning, significantly improving the model's ability to perform coherent multi-step reasoning and decision-making in dynamic environments.
• We propose a comprehensive training pipeline that integrates supervised fine-tuning (SFT) with reinforcement fine-tuning (RFT), alongside carefully constructed datasets, a reward function tailored for multi-step decision-making, and supporting mechanisms such as online data filtering, leading to consistent and robust performance improvements.
• We conduct extensive evaluation on Embench, an interactive benchmark for embodied AI, showing that our model not only outperforms comparable-scale models but also surpasses GPT-4o-mini and open-source models with more than 70B parameters.
It further demonstrates strong generalization to unseen domains, validating the generality of reinforcement-based adaptation.

2 Related Work

2.1 Embodied Task Planning

Embodied task planning focuses on decomposing high-level natural language instructions into executable sequences of sub-tasks, enabling agents to perform complex behaviors in interactive environments. With the emergence of large language and vision-language models [52, 53], researchers have explored using pretrained LLMs or VLMs to
generate plans from textual and visual observations, typically relying on carefully crafted prompts [39, 34, 17, 20, 42, 13] or auxiliary tools [34, 6, 41] to provide the necessary planning cues. While simple and data-efficient, such methods often struggle with spatial grounding and temporal coherence in visually rich environments. More advanced methods fine-tune LLMs or VLMs to improve planning performance: several works employ supervised fine-tuning pipelines [51, 8, 19], while others adopt preference optimization methods [46, 43] such as Direct Preference Optimization (DPO) [33] to better align model behavior with expert planning preferences. Despite these advances, most existing methods operate in static or offline settings, where plans are generated without actual interaction with the environment. In this work, we address this limitation by evaluating our model in interactive environments [54], bridging the gap between static planning capabilities and dynamic embodied execution.

2.2 Vision-Language Model Reasoning

Reasoning in vision-language models (VLMs) involves drawing inferences from both textual and visual inputs, often requiring spatial, temporal, or causal understanding [47, 48]. A common approach is Chain-of-Thought (CoT) prompting [50], where the model generates intermediate steps to clarify its reasoning. In multimodal settings, Multimodal Chain-of-Thought (MCoT) extends this idea by integrating visual inputs such as images and videos into the reasoning process [61, 29, 28]. More recently, R1-style reinforcement learning [14, 36] has emerged as an effective framework for enhancing reasoning capabilities. These methods optimize reasoning quality through reward-guided learning, enabling models to self-correct and generate more detailed reasoning processes. Originally developed for text-based reasoning, R1 approaches have since been extended to multimodal domains, including image-based QA [25, 37, 59], visual math problems [27, 18, 45], and video reasoning [23]. In embodied AI, some early studies [62, 60, 44] have applied R1-based training to question answering, but they primarily focus on short-horizon QA tasks. In contrast, our work is the first to adopt R1-style reinforcement fine-tuning for long-horizon embodied planning, aiming to improve structured decision-making across multiple interactive steps.

3 Methodology

3.1 Problem Definition

We formulate embodied task planning as a partially observable decision-making process, in which the agent interacts with an environment through sequential actions based on visual observations. At each time step t, the agent receives an observation o_t ∈ O and executes an action a_t ∈ A, forming a history

h_t = {o_0, a_0, o_1, ..., o_t}.   (1)
Figure 2: Overview of our proposed framework. We adopt a two-stage training paradigm consisting of supervised fine-tuning (SFT) followed by reinforcement fine-tuning (RFT) to enhance the multi-step planning capabilities of the vision-language model. The final model is evaluated on Embench, an interactive embodied benchmark, where it achieves strong performance across both seen and unseen environments.

Given a task instruction g ∈ G described by a natural language command L, the task is associated with a set of binary goal-checking conditions C(g) = {c_1, ..., c_k} that must all be satisfied for the task to be considered successful. The agent generates a trajectory

e = (g, o_0, a_0, o_1, ..., o_n, a_n),   (2)

and the reward is defined as

r(e) = I[ ∧_{c ∈ C(g)} c = True ],   (3)

where I[·] is the indicator function. We parameterize the policy π_θ using a vision-language model (VLM), which outputs an action distribution conditioned on the observation o_t, history h_t, instruction L, and a fixed prompt template P:

a_{t+1} ~ π_θ(· | o_t, h_t, L, P).   (4)

Our objective is to optimize θ such that the expected task success rate of sampled trajectories increases:

max_θ E_{e ~ π_θ}[ r(e) ].   (5)

We adopt a two-stage training paradigm: supervised fine-tuning (SFT) to align π_θ with high-quality trajectories, followed by reinforcement fine-tuning (RFT) to further improve performance under interactive evaluation.

3.2 Preparing for Reinforcement: Distilled Supervised Fine-tuning

Embodied planning requires strong spatial perception and commonsense reasoning. However, small open-source vision-language models (VLMs) often fall short in these respects compared to proprietary large-scale models. To bridge this capability gap and provide a solid initialization for subsequent reinforcement fine-tuning, we first perform supervised fine-tuning (SFT) [31] on a high-quality dataset obtained via large-model distillation.

Distillation from Large Models. Unlike tasks with well-defined ground-truth labels, embodied planning admits diverse valid trajectories for accomplishing the same goal. Collecting human-annotated demonstrations for each trajectory is labor-intensive and does not scale. We therefore opt for a distillation-based approach: we prompt a proprietary model, Gemini-2.0-flash [4], to solve embodied planning tasks and record its outputs to construct our SFT dataset. Specifically, for each task goal g ∈ G and environment observation history h_t, we construct a prompt p = Prompt(g, h_t) and collect Gemini's response â_{t+1}. The dataset is a collection of tuples

D_SFT = {(p_i, â_i)}_{i=1}^N,   (6)

where p_i is the textual input prompt and â_i is the response generated by Gemini, containing both the planning trajectory and the associated reasoning process. In total, we collect over 4,000 training samples and use them to supervise the open-source Qwen2.5-VL [7] model.

Supervised Fine-tuning. Given the distilled dataset D_SFT, we optimize the parameters θ of the VLM policy π_θ via maximum likelihood estimation:

L_SFT(θ) = −E_{(p, â) ~ D_SFT}[ log π_θ(â | p) ].   (7)

We explore both full-parameter fine-tuning and parameter-efficient LoRA-based [15] fine-tuning strategies; empirically, full fine-tuning yields slightly better performance. Overall, the SFT stage enables the model to inherit the task-decomposition patterns, commonsense priors, and spatial grounding demonstrated by the larger model.
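Equation (7) is the standard token-level negative log-likelihood. A minimal torch sketch follows (our own illustration, assuming teacher-forced logits and padded target sequences):

```python
import torch.nn.functional as F

def sft_loss(logits, target_ids, pad_id):
    # L_SFT = -log pi_theta(a_hat | p), averaged over non-padding tokens.
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),  # (batch * seq, vocab)
        target_ids.view(-1),               # (batch * seq,)
        ignore_index=pad_id,
    )
```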
Details on the training configuration and dataset content are provided in the Appendix.

3.3 Reinforcing Reasoning for Embodied Planning

While SFT improves task-specific performance, it often lacks
the reasoning generalization needed for unseen scenarios. Recent work such as DeepSeek-R1 [14] shows that reinforcement learning (RL) with rule-based rewards can effectively enhance reasoning by optimizing for quality rather than imitation, improving both task success and generalization, which is especially important in embodied contexts. Building on this, we propose a reinforcement fine-tuning framework for long-horizon embodied planning. Unlike prior RL methods limited to short-horizon QA, we extend to interactive multi-step tasks. We construct a dataset from the ALFRED benchmark [40], design a rule-based reward for planning evaluation, and optimize the VLM using the GRPO algorithm [36], with an online filtering strategy to improve training stability.

Dataset Construction. The visual reinforcement fine-tuning dataset consists of samples formatted to support reward-based optimization. Each sample is a triplet (L, o, â), where L denotes the textual input instruction, o is the image input, and â is the ground-truth answer used for reward computation. For multi-step planning, we decompose each reference trajectory e = (g, o_0, a_0, o_1, a_1, ..., o_k, a_k) of length k into k training samples. At each step n ∈ [1, k], we construct L_n by embedding the task goal g along with the previous action history a_{0:n−1}; the corresponding observation o_n is taken from the n-th step, and the target â_n := {a_n, a_{n+1}, ..., a_k} consists of the remaining actions from step n. We build this dataset on top of the ALFRED benchmark [40], which provides complete execution traces of agents performing household tasks in a simulated environment. Applying this trajectory decomposition strategy yields a total of 43,898 training samples; a sketch of the decomposition is given below.
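The decomposition can be sketched as follows, using 0-based indices; the dict-based instruction is a stand-in for the actual prompt template, which the paper does not spell out:

```python
def decompose_trajectory(goal, observations, actions):
    # A length-k trajectory yields k samples: at step n the instruction embeds
    # the goal plus the action history a_0..a_{n-1}, the image input is o_n,
    # and the target is the remaining actions a_n..a_k.
    samples = []
    for n in range(len(actions)):
        instruction = {"goal": goal, "history": actions[:n]}
        samples.append((instruction, observations[n], actions[n:]))
    return samples
```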
Reward Function. Inspired by prior work on reinforcement fine-tuning [27, 14], we design a composite reward function that integrates both format correctness and action accuracy, guiding the model toward structured, valid, and effective multi-step plans. We denote the total reward as:

R(response, answer) = R_format(response) + R_accuracy(response, answer).   (8)

(1) Format Reward. Unlike prior works that rely on generic templates such as <think> and <action>, we tailor the reward to the structured output required for embodied multi-step planning. The response is expected to contain a JSON object with specific keys: reasoning_and_reflection, visual_state_description, language_plan, and executable_plan. This output structure is inspired by the Embench [54] prompting format, in which the model is encouraged to first observe the image, then reflect and reason, and finally produce a coherent multi-step action plan. The format reward is computed as:

R_format = R_structure + R_valid + R_match,   (9)

where:
• R_structure = 0.125 if all required top-level fields exist, otherwise 0.
• R_valid = 0.125 × (# valid steps / # total steps), where a step is valid if it contains an integer action_id and a string action_name.
• R_match = 0.25 × (# correctly matched actions / # total steps), where a match is counted only if the pair (action_id, action_name) corresponds to a valid and correct entry in the predefined action mapping, ensuring consistency and preventing hallucinated actions.

(2) Accuracy Reward. To assess execution correctness, we compare the predicted action sequence â = {a_1, ..., a_k} with the reference (gold) action sequence a* = {a*_1, ..., a*_k}. The comparison is performed step by step in a prefix-matching manner: starting from the first step, each predicted action must exactly match the corresponding ground-truth action; once a mismatch is encountered, the comparison stops. Let n denote the number of consecutively matched steps, i.e., the prefix length such that a_i = a*_i for all i ∈ [1, n]. The accuracy reward is then

R_accuracy = R(n; k),   (10)

where R(n; k) denotes the multi-step reward allocation curve described next. Additionally, for single-step tasks (k = 1), if the model generates more than one step, we apply a penalty of −0.25 to discourage redundant actions.

(3) Multi-step Reward Allocation Curve. To reflect long-horizon planning quality, we define a progressive reward allocation curve that assigns higher reward to longer correct prefixes. Given a reference sequence of length k and a matched prefix of length n ≤ k, we compute the reward using triangular normalization:

R(n; k) = n(n+1) / (k(k+1)).   (11)

This function grows quadratically with n and is normalized to the range [0, 1], assigning proportionally more reward to longer correct prefixes. It encourages the model not only to predict the correct final outcome but also to maintain consistency and correctness throughout the entire action sequence. A sketch combining these reward components is given below.
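Here is a minimal sketch of the composite reward, assuming the response is a JSON string and that `action_map` maps valid action_ids to their canonical names. The text does not pin down whether the −0.25 penalty replaces or supplements the prefix reward; in this sketch it simply replaces it:

```python
import json

REQUIRED = ("reasoning_and_reflection", "visual_state_description",
            "language_plan", "executable_plan")

def format_reward(response, action_map):
    # R_format = R_structure + R_valid + R_match, equation (9).
    try:
        obj = json.loads(response)
    except (json.JSONDecodeError, TypeError):
        return 0.0
    if not isinstance(obj, dict):
        return 0.0
    r = 0.125 if all(key in obj for key in REQUIRED) else 0.0
    steps = [s for s in obj.get("executable_plan", []) if isinstance(s, dict)]
    if not steps:
        return r
    valid = sum(isinstance(s.get("action_id"), int) and
                isinstance(s.get("action_name"), str) for s in steps)
    matched = sum(action_map.get(s.get("action_id")) == s.get("action_name")
                  for s in steps)
    return r + 0.125 * valid / len(steps) + 0.25 * matched / len(steps)

def accuracy_reward(pred, gold):
    # Prefix-match length n, triangular normalization n(n+1)/(k(k+1)) of
    # equation (11); -0.25 penalty for multi-step output on single-step tasks.
    k = len(gold)
    if k == 1 and len(pred) > 1:
        return -0.25
    n = 0
    for a, a_star in zip(pred, gold):
        if a != a_star:
            break
        n += 1
    return n * (n + 1) / (k * (k + 1))
```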
Optimization Method. We adopt Group Relative Policy Optimization (GRPO) [36] to optimize the VLM policy under reward-based supervision. Given a prompt x, the policy model π_θ generates a set of G sampled responses {y_1, y_2, ..., y_G} ~ π_θ(· | x). Each response y_i is scored by the reward function r_i = R(y_i), reflecting its quality in terms of format and planning accuracy. GRPO computes the relative advantage A_i of each response as its normalized deviation from the group mean:

A_i = ( r_i − mean({r_1, ..., r_G}) ) / std({r_1, ..., r_G}).   (12)

The training objective encourages the model to increase the likelihood of high-quality responses under the current policy while maintaining stability with respect to a reference policy π_ref. The GRPO loss is defined as:

J(θ) = E_{x ~ D} E_{{y_i} ~ π_θ} [ (1/G) ∑_{i=1}^{G} clip( π_θ(y_i|x) / π_old(y_i|x), 1−ε, 1+ε ) · A_i − β · D_KL(π_θ ‖ π_ref) ],   (13)

where π_old is the policy used for sampling, ε controls the clipping range, and β weights the KL penalty between the current and reference policies. This objective enables stable and lightweight training by leveraging relative preferences within sampled groups, without requiring absolute reward values or an additional critic network.

Data Filtering Strategy. In the early stages of reinforcement fine-tuning, we observe that many sampled responses yield extremely low reward values, leading to weak or unstable gradient signals. To address this and ensure stable policy updates, we incorporate an online data filtering strategy following PRIME [10] and MM-Eureka [27]. The core idea is to discard samples that are either too poor or too perfect, so that training batches remain informative and diverse throughout optimization. Concretely, we apply a filtering criterion based on response-level accuracy. For each input prompt x, we generate a group of G responses {y^(i)}_{i=1}^G from the current policy π_θ and compute their individual rewards r^(i) = R(x, y^(i)). We then define the accuracy of a prompt group as:

C_x = | { y^(i) | r^(i) = 1 } |,   (14)

which counts how many responses in the group achieve full reward. We retain a prompt group for training only if its accuracy falls within a predefined range, ε^lower_acc ≤ C_x ≤ ε^upper_acc, ensuring that the group contains a balanced mix of good and poor responses. Accepted samples are buffered into a memory set B of size N_B. Once the buffer is filled, we perform K_2 steps of GRPO optimization on the collected data, after which the buffer is cleared and the process repeats. This filtering mechanism significantly improves learning stability by eliminating gradient degeneracy and encourages the policy to learn from relatively informative contrastive examples; both pieces are sketched below.
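The group-relative advantage of equation (12) and the filtering rule of equation (14) are each a few lines; this sketch uses the population standard deviation and a zero-variance guard, which are our own choices, and follows the paper in treating r = 1 as full reward:

```python
import statistics

def group_advantages(rewards):
    # A_i = (r_i - mean(r)) / std(r) within one sampled group, equation (12).
    mu = statistics.fmean(rewards)
    sd = statistics.pstdev(rewards) or 1.0  # avoid division by zero
    return [(r - mu) / sd for r in rewards]

def keep_prompt_group(rewards, lower, upper):
    # Online filter: keep the group only if the count of fully correct
    # responses C_x lies in [lower, upper], equation (14).
    c = sum(r == 1.0 for r in rewards)
    return lower <= c <= upper
```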
4 Experiments

4.1 Embodied Multi-step Planning Evaluation in Interactive Environments

4.1.1 Benchmark

Most prior work in embodied planning reduces evaluation to static visual question answering, which fails to capture the interactive and sequential nature of real-world decision-making. To address this gap, we adopt Embench [54], a benchmark designed for evaluating multimodal agents in dynamic, interactive environments. Embench provides a unified framework across four embodied settings and supports over 1,100 tasks involving manipulation, navigation, and spatial reasoning. We evaluate on two environments: EB-ALFRED, built on ALFRED [40] and AI2-THOR [22], and EB-Habitat, based on Habitat 2.0's rearrangement tasks [35]. Tasks are categorized into six subsets: Base, Common Sense, Complex Instruction, Spatial Awareness, Visual Appearance, and Long Horizon, enabling fine-grained capability analysis. All models generate step-by-step plans from egocentric inputs and execute them in simulation. Since our training data is collected from the ALFRED simulator, EB-Habitat serves as an out-of-domain setting for generalization evaluation. More details are provided in the Appendix.

4.1.2 Baselines

We compare our method against a range of baselines: (1) proprietary models such as Claude-3.5-Sonnet [3], Gemini-2.0-flash [4], GPT-4o [2], and GPT-4o-mini [1]; (2) open-source general VLMs such as LLaMA-3.2-Vision-11B [5], Qwen2.5-VL-7B [7], and InternVL2.5-8B [9]; (3) reasoning-oriented models such as MM-Eureka [27] and R1-VL [59]; and (4) embodied VLMs including RoboBrain [19] and TAPA [51]. For evaluation, we convert visual inputs into text for TAPA owing to its lack of vision capabilities. Further details on each baseline are provided in the Appendix.

Table 1: Performance comparison of models on EB-ALFRED (Seen); success rate (%) per subset
Model                    Params  Avg    Base  Common  Complex  Visual  Spatial  Long
Closed-Source MLLMs
Claude-3.5-Sonnet        --      66.0   70    62      72       62      60       70
Gemini-2.0-flash         --      51.3   58    58      50       46      42       54
GPT-4o                   --      54.6   62    52      68       44      48       54
GPT-4o-mini              --      22.0   32    24      32       20      24       0
Open-Source General MLLMs
Llama-3.2-11B            11B     13.3   22    8       16       22      6        6
Qwen2.5-VL-7B            7B      1.7    4     2       2        2       0        0
InternVL2.5-8B           8B      3.0    2     0       12       0       4        0
Open-Source Reasoning MLLMs
R1-VL-7B                 7B      2      2     2       6        0       0        2
MM-Eureka-Qwen-7B        7B      2.67   6     4       4        2       0        0
Open-Source Embodied MLLMs
RoboBrain                7B      0.33   2     0       0        0       0        0
Tapa                     7B      0      0     0       0        0       0        0
Open-Source Embodied + Reasoning MLLMs
Ours                     7B      35.6   54    42      46       28      38       6

Figure 3: Success rate vs. parameters (Seen); average success rate across parameter scales for our model (after the SFT and RFT stages), GPT-4o, GPT-4o-mini, and the Llama, InternVL, and Qwen model families.

Figure 4: Success rate vs. stages (Seen); Stage 1 (SFT) vs. Stage 2 (RFT).

Table 2: Performance comparison of models on EB-Habitat (Unseen); success rate (%) per subset
Model                    Params  Avg    Base  Common  Complex  Visual  Spatial  Long
Closed-Source MLLMs
Claude-3.5-Sonnet        --      67.7   96    68      74       74      40       54
Gemini-2.0-flash         --      34.3   76    30      30       30      26       14
GPT-4o                   --      54.0   82    34      62       58      32       56
GPT-4o-mini              --      32.3   68    38      28       28      22       10
Open-Source General MLLMs
Llama-3.2-11B            11B     23.3   62    16      24       14      18       6
Qwen2.5-VL-7B            7B      12.7   38    4       12       4       12       6
InternVL2.5-8B           8B      17.0   48    6       16       10      18       4
Open-Source Reasoning MLLMs
R1-VL-7B                 7B      7.3    24    0       4        6       8        2
MM-Eureka-Qwen-7B        7B      16.3   40    16      14       10      16       2
Open-Source Embodied MLLMs
RoboBrain                7B      15.3   38    6       18       8       18       4
Tapa                     7B      0      0     0       0        0       0        0
Open-Source Embodied + Reasoning MLLMs
Ours                     7B      20     56    8       18       16      14       8

Figure 5: Success rate vs. parameters (Unseen); Base-task success rate across parameter scales for our model and the Llama, InternVL, and Qwen model families.

Figure 6: Success rate vs. stages (Unseen); Stage 1 (SFT) vs. Stage 2 (RFT).

4.1.3 Implementation Details

For supervised fine-tuning (SFT), we train our model using the LLaMA-Factory [63] framework on 4 NVIDIA A100 40G GPUs for approximately 8 hours, with Qwen2.5-VL-7B [7] as the base model. For reinforcement fine-tuning (RFT), we use the OpenRLHF [16] framework and perform GRPO optimization on 8 A100 40G GPUs; one training epoch requires approximately 40 hours. For evaluation, all models are deployed via Flask-based inference servers. Each model is evaluated on both the EB-ALFRED and EB-Habitat environments, with an end-to-end evaluation time of around 18 hours per model. For reasoning-heavy models such as R1-VL and MM-Eureka, the inference latency is significantly higher, yielding total evaluation times up to 2-3x longer.

4.2 Experiment Results

4.2.1 In-Domain Results

We conduct comprehensive in-domain evaluations on the EB-ALFRED environment. As shown in Table 1, Figure 3, and Figure 4, our proposed model achieves a task success rate of 35.6%, significantly outperforming GPT-4o-mini (22.0%) and much larger models such as Qwen2.5-VL-72B (33.7%) and LLaMA3.2-90B-Vision-Ins (32.0%). Several key observations emerge from the results: (1) our two-stage training pipeline (SFT + RFT) leads to consistent performance gains in embodied task planning; (2) existing open-source reasoning models and embodied VLMs perform poorly on Embench: reasoning models produce verbose intermediate steps yet struggle to execute correct action sequences, and embodied VLMs lack the generalization ability to transfer to Embench tasks; (3) long-horizon tasks remain a major challenge.
Despite overall improvements in the other categories, the performance gain on Long-Horizon tasks is marginal, highlighting the need for future research on planning depth and temporal reasoning.

4.2.2 Out-of-Domain Results

To evaluate generalization, we test our models in the EB-Habitat environment, which differs from ALFRED in scenes, objects, action space, and task types. As shown in Table 2, our method exhibits strong out-of-domain performance, outperforming all baseline models of similar 7B size, including general-purpose, reasoning-augmented, and embodied VLMs. We highlight the following findings: (1) reinforcement fine-tuning leads to substantial improvements even in completely unseen environments, validating the cross-domain robustness of our approach; in contrast, supervised fine-tuning alone offers no benefit in out-of-domain settings. (2) Since our training dataset is constructed primarily from base instructions in ALFRED, the improvement is more pronounced on Base-type tasks within EB-Habitat, while gains in the other categories remain limited. This observation suggests the need for more diverse training data to support broader generalization.

4.3 Ablation Study

We perform an ablation study to examine the contribution of each training stage in our two-stage pipeline. Specifically, we compare models trained with only supervised fine-tuning (SFT), only reinforcement fine-tuning (RFT), and the reversed order (RFT before SFT), using the same data and experimental setup. As shown in Table 3, SFT alone yields substantial improvements on in-domain tasks but fails to generalize to unseen environments. In contrast, applying RFT directly to the base model without prior SFT results in limited gains, likely due to the lack of foundational knowledge. These results confirm the necessity of our two-stage approach, in which SFT establishes a strong initialization and RFT enhances generalization through reward-driven optimization.

Table 3: Left: ablation study on the training stages. Right: analysis of response length (response length during our RFT training vs. training after a math dataset; figure panel omitted here).
Variant           EB-ALFRED (Seen)     EB-Habitat (Unseen)
                  Avg     Base         Avg     Base
Base              1.67    4            12.67   38
SFT only          21      34           12.3    30
RFT only          9.3     18           15.6    40
RFT→SFT           29      40           10.3    30
SFT→RFT (ours)    35.6    54           20      56

4.4 Rethinking Reasoning Response Length: Is Longer Always Better?

In mathematical reasoning tasks, reinforcement fine-tuning often leads to longer outputs without explicit supervision on reasoning length [27]. However, we observe that the length of reasoning traces is fundamentally influenced by the nature of the task itself, rather than by a universal tendency toward verbosity. As shown in Table 3 (right), our model does not generate longer reasoning outputs after reinforcement fine-tuning; in fact, increased output length does not correlate with higher planning accuracy. To explore this further, we conduct an additional experiment: we pretrain the model with mathematical reasoning data to encourage longer responses, then fine-tune it again on embodied planning data. Although the model initially produces more verbose outputs, the average reasoning length decreases as training progresses, reflecting a domain-specific adaptation toward concise planning.

5 Limitation and Future Work

While our work adopts an interactive benchmark for evaluation, the reinforcement fine-tuning process itself does not involve real-time interaction with a simulator. Owing to practical constraints on training efficiency and system complexity, we use pre-defined ground-truth answers to compute rule-based rewards offline. Incorporating online interaction with the environment to generate learning signals dynamically remains a promising direction for future work, potentially enabling more robust policy refinement through trial and error [49].
In addition, our current work focuses on high-level embodied planning, producing structured action sequences that can guide downstream control modules. Although our method demonstrates
strong performance and generalization on simulated benchmarks, it has not yet been deployed on real-world robotic platforms. Extending this framework to physical agents and integrating it with low-level control systems is an important step toward realizing embodied intelligence in practical applications.

6 Conclusion

In this paper, we tackle the challenge of enabling vision-language models to perform robust multi-step planning in dynamic embodied environments. To this end, we propose a reinforcement fine-tuning framework that enhances reasoning and decision-making in long-horizon, interactive settings. Our approach combines supervised initialization via knowledge distillation with rule-based reinforcement learning guided by Group Relative Policy Optimization (GRPO), enabling structured learning from both expert-like demonstrations and task-specific feedback. We validate our method on Embench, a comprehensive benchmark for interactive embodied planning, demonstrating that our model significantly outperforms both proprietary and open-source baselines of comparable or larger scale. Beyond in-domain performance, our approach shows strong generalization to out-of-distribution tasks and unseen environments. These results highlight the promise of reinforcement-driven reasoning as a scalable and effective direction for advancing embodied intelligence.

References

[1] GPT-4o mini: Advancing cost-efficient intelligence. URL: https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/.
[2] Hello GPT-4o | OpenAI. URL: https://openai.com/index/hello-gpt-4o/.
[3] Introducing Claude 3.5 Sonnet. Anthropic. URL: https://www.anthropic.com/news/claude-3-5-sonnet.
[4] Introducing Gemini 2.0: Our new AI model for the agentic era. URL: https://blog.google/technology/google-deepmind/google-gemini-ai-update-december-2024/.
[5] Llama 3.2: Revolutionizing edge AI and vision with open, customizable models. URL: https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/.
[6] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J. Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, and Andy Zeng. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
[7] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-VL technical report. arXiv preprint arXiv:2502.13923, 2025.
[8] Yaran Chen, Wenbo Cui, Yuanwen Chen, Mining Tan, Xinyao Zhang, Dongbin Zhao, and He Wang. RoboGPT: An intelligent agent of making embodied long-term decisions for daily instruction tasks. arXiv preprint arXiv:2311.15649, 2023.
[9] Zhe Chen, Weiyun Wang, Yue Cao, Yangzhou Liu, Zhangwei Gao, Erfei Cui, Jinguo Zhu, Shenglong Ye, Hao Tian, Zhaoyang Liu, et al.
Expanding performance boundaries of open-source multimodal models with model, data, and test-time scaling. arXiv preprint arXiv:2412.05271, 2024.
[10] Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, et al. Process reinforcement through implicit rewards. arXiv preprint arXiv:2502.01456, 2025.
[11] Jiafei Duan, Samson Yu, Hui Li Tan, Hongyuan Zhu, and Cheston Tan. A survey of embodied AI: From simulators to research tasks. IEEE Transactions on
Emerging Topics in Computational Intelligence, 6(2):230–244, 2022.
[12] Hugging Face. Open R1: A fully open reproduction of DeepSeek-R1, January 2025. URL: https://github.com/huggingface/open-r1.
[13] Xian Fu, Min Zhang, Peilong Han, Hao Zhang, Lei Shi, Hongyao Tang, et al. What can VLMs do for zero-shot embodied task planning? In ICML 2024 Workshop on LLMs and Cognition, 2024.
[14] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[15] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. In ICLR, 2022.
[16] Jian Hu, Xibin Wu, Zilin Zhu, Weixun Wang, Dehao Zhang, Yu Cao, et al. OpenRLHF: An easy-to-use, scalable and high-performance RLHF framework. arXiv preprint arXiv:2405.11143, 2024.
[17] Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang Gao. Look before you leap: Unveiling the power of GPT-4V in robotic vision-language planning. arXiv preprint arXiv:2311.17842, 2023.
[18] Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Zhe Xu, Yao Hu, and Shaohui Lin. Vision-R1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025.
[19] Yuheng Ji, Huajie Tan, Jiayu Shi, Xiaoshuai Hao, Yuan Zhang, Hengyuan Zhang, Pengwei Wang, Mengdi Zhao, Yao Mu, Pengju An, Xinda Xue, Qinghang Su, Huaihai Lyu, Xiaolong Zheng, Jiaming Liu, Zhongyuan Wang, and Shanghang Zhang. RoboBrain: A unified brain model for robotic manipulation from abstract to concrete. arXiv preprint arXiv:2502.21257, 2025.
[20] Byeonghwi Kim, Jinyeon Kim, Yuyeong Kim, Cheolhong Min, and Jonghyun Choi. Context-aware planning and environment-aware memory for instruction following embodied agents. arXiv preprint arXiv:2308.07241, 2023.
[21] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan P Foster, Pannag R Sanketi, Quan Vuong, et al. OpenVLA: An open-source vision-language-action model. In 8th Annual Conference on Robot Learning, 2024.
[22] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Matt Deitke, Kiana Ehsani, Daniel Gordon, Yuke Zhu, et al. AI2-THOR: An interactive 3D environment for visual AI. arXiv preprint arXiv:1712.05474, 2017.
[23] Xinhao Li, Ziang Yan, Desen Meng, Lu Dong, Xiangyu Zeng, Yinan He, Yali Wang, Yu Qiao, Yi Wang, and Limin Wang. VideoChat-R1: Enhancing spatio-temporal perception via reinforcement fine-tuning. arXiv preprint arXiv:2504.06958, 2025.
[24] Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's verify step by step. In The Twelfth International Conference on Learning Representations, 2023.
[25] Ziyu Liu, Zeyi Sun, Yuhang Zang, Xiaoyi Dong, Yuhang Cao, Haodong Duan, Dahua Lin, and Jiaqi Wang. Visual-RFT: Visual reinforcement fine-tuning. arXiv preprint arXiv:2503.01785, 2025.
[26] Yueen Ma, Zixing Song, Yuzheng Zhuang, Jianye Hao, and Irwin King. A survey on vision-language-action models for embodied AI.
arXiv preprint arXiv:2405.14093, 2024.
[27] Fanqing Meng, Lingxiao Du,
Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Tiancheng Han, Botian Shi, Wenhai Wang, Junjun He, et al. MM-Eureka: Exploring the frontiers of multimodal reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2503.07365, 2025.
[28] Chancharik Mitra, Brandon Huang, Trevor Darrell, and Roei Herzig. Compositional chain-of-thought prompting for large multimodal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14420–14431, 2024.
[29] Debjyoti Mondal, Suraj Modi, Subhadarshi Panda, Rituraj Singh, and Godawari Sudhakar Rao. KAM-CoT: Knowledge augmented multimodal chain-of-thoughts reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18798–18806, 2024.
[30] Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo. EmbodiedGPT: Vision-language pre-training via embodied chain of thought. arXiv preprint arXiv:2305.15021, 2023.
[31] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.
[32] Aske Plaat, Annie Wong, Suzan Verberne, Joost Broekens, Niki van Stein, and Thomas Bäck. Reasoning with large language models, a survey. arXiv preprint arXiv:2407.11511, 2024.
[33] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.
[34] Krishan Rana, Jesse Haviland, Sourav Garg, Jad Abou-Chakra, and Ian Reid. SayPlan: Grounding large language models using 3D scene graphs for scalable robot task planning.
[35] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform for embodied AI research. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9339–9347, 2019.
[36] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[37] Haozhan Shen, Peng Liu, Jingcheng Li, Chunxin Fang, Yibo Ma, Jiajia Liao, Qiaoli Shen, Zilun Zhang, Kangjia Zhao, Qianqian Zhang, et al. VLM-R1: A stable and generalizable R1-style large vision-language model. arXiv preprint arXiv:2504.07615, 2025.
[38] Lucy Xiaoyang Shi, Brian Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, et al. Hi Robot: Open-ended instruction following with hierarchical vision-language-action models. arXiv preprint arXiv:2502.19417, 2025.
[39] Suyeon Shin, Sujin Jeon, Junghyun Kim, Gi-Cheon Kang, and Byoung-Tak Zhang. Socratic Planner: Inquiry-based zero-shot planning for embodied instruction following. arXiv preprint arXiv:2404.15190, 2024.
[40] Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10740–10749, 2020.
[41] Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie
Kaelbling, and Michael Katz. Generalized planning in PDDL domains with pretrained large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, 38(18):20256–20264, 2024.
[42] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. ProgPrompt: Generating situated robot task plans using large language models. arXiv preprint arXiv:2209.11302, 2022.
[43] Yifan Song, Da Yin, Xiang Yue, Jie Huang, Sujian Li, and Bill Yuchen Lin. Trial and error: Exploration-based trajectory optimization for LLM agents. arXiv preprint arXiv:2403.02502, 2024.
[44] Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Zhongyuan Wang, and Shanghang Zhang. Reason-RFT: Reinforcement fine-tuning for visual reasoning. arXiv preprint arXiv:2503.20752, 2025.
[45] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025.
[46] Siyin Wang, Zhaoye Fei, Qinyuan Cheng, Shiduo Zhang, Panpan Cai, Jinlan Fu, and Xipeng Qiu. World modeling makes a better planner: Dual preference optimization for embodied task planning. arXiv preprint arXiv:2503.10480, 2025.
[47] Yaoting Wang, Shengqiong Wu, Yuecheng Zhang, Shuicheng Yan, Ziwei Liu, Jiebo Luo, and Hao Fei. Multimodal chain-of-thought reasoning: A comprehensive survey. arXiv preprint arXiv:2503.12605, 2025.
[48] Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, and Hongxia Yang. Exploring the reasoning abilities of multimodal large language models (MLLMs): A comprehensive survey on emerging trends in multimodal reasoning. arXiv preprint arXiv:2401.06805, 2024.
[49] Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, et al. RAGEN: Understanding self-evolution in LLM agents via multi-turn reinforcement learning. arXiv preprint arXiv:2504.20073, 2025.
[50] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[51] Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, and Haibin Yan. Embodied task planning with large language models. arXiv preprint arXiv:2307.01848, 2023.
[52] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents: A survey. Science China Information Sciences, 68(2):121101, 2025.
[53] Zhiyuan Xu, Kun Wu, Junjie Wen, Jinming Li, Ning Liu, Zhengping Che, and Jian Tang. A survey on robotics with foundation models: Toward embodied AI. arXiv preprint arXiv:2402.02385, 2024.
[54] Rui Yang, Hanyang Chen, Junyu Zhang, Mark Zhao, Cheng Qian, Kangrui Wang, Qineng Wang, Teja Venkat Koripella, Marziyeh Movahedi, Manling Li, et al. EmbodiedBench: Comprehensive benchmarking multi-modal large language models for vision-driven embodied agents. arXiv preprint arXiv:2502.09560, 2025.
[55] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu.
LIMO: Less is more for reasoning. arXiv preprint arXiv:2502.03387, 2025.
[56] Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, and Sergey
Levine. Robotic control via embodied chain-of-thought reasoning. In 8th Annual Conference on Robot Learning, 2024.
[57] Jianke Zhang, Yanjiang Guo, Xiaoyu Chen, Yen-Jen Wang, Yucheng Hu, Chengming Shi, and Jianyu Chen. HiRT: Enhancing robotic control with hierarchical robot transformers. arXiv preprint arXiv:2410.05273, 2024.
[58] Jingyi Zhang, Jiaxing Huang, Sheng Jin, and Shijian Lu. Vision-language models for vision tasks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[59] Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, and Dacheng Tao. R1-VL: Learning to reason with multimodal large language models via step-wise group relative policy optimization. arXiv preprint arXiv:2503.12937, 2025.
[60] Wenqi Zhang, Mengna Wang, Gangao Liu, Xu Huixin, Yiwei Jiang, Yongliang Shen, Guiyang Hou, Zhe Zheng, Hang Zhang, Xin Li, et al. Embodied-Reasoner: Synergizing visual search, reasoning, and action for embodied interactive tasks. arXiv preprint arXiv:2503.21696, 2025.
[61] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.
[62] Baining Zhao, Ziyou Wang, Jianjie Fang, Chen Gao, Fanhang Man, Jinqiang Cui, Xin Wang, Xinlei Chen, Yong Li, and Wenwu Zhu. Embodied-R: Collaborative framework for activating embodied spatial reasoning in foundation models via reinforcement learning. arXiv preprint arXiv:2504.12680, 2025.
[63] Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, Zhangchi Feng, and Yongqiang Ma. LlamaFactory: Unified efficient fine-tuning of 100+ language models. arXiv preprint arXiv:2403.13372, 2024.

A Appendix

Contents
• Section B: Additional Details of SFT Training Stage
• Section C: Additional Details of RFT Training Stage
• Section D: Additional Details for Evaluation
• Section E: Case Study and Visualization

B Additional Details of SFT Training Stage

B.1 Training Dataset Details

To initialize the model with structured planning priors, we construct a supervised fine-tuning (SFT) dataset via response distillation from a high-capacity proprietary model. Specifically, we query Gemini-2.0-flash on a subset of EB-ALFRED tasks from the Embench benchmark and collect its response traces. For each task prompt, we record the full model output, including both the intermediate reasoning and the executable plan. To ensure diversity and robustness, we repeat this process across three separate sampling rounds. The final dataset comprises 4,245 training samples. Each sample includes a natural language instruction, egocentric image observations, and the model's full response in a structured format. An example training instance is shown in Figure B.1.

B.2 Training Hyperparameters

We perform full-parameter supervised fine-tuning on the Qwen2.5-VL-7B model using the LLaMA-Factory framework [63]. Training is conducted on 4 NVIDIA A100 40GB GPUs for approximately 8 hours. All hyperparameters are summarized in Table 4.
Model Configuration
    image_max_pixels              262144
    freeze_vision_tower           true
    freeze_language_model         false
    freeze_multi_modal_projector  true
    deepspeed config              ds_z3_config.json
Dataset Configuration
    dataset                       alfred_sft
    template                      qwen2_vl
    cutoff_len                    2048
    max_samples                   1000
    overwrite_cache               true
    preprocessing_workers         16
    dataloader_workers            4
Training Configuration
    stage                         sft
    finetuning_type               full
    do_train                      true
    num_train_epochs              3.0
    learning_rate                 1e-5
    per_device_batch_size         1
    grad_accum_steps              2
    lr_scheduler                  cosine
    warmup_ratio                  0.1
    bf16                          true
    ddp_timeout                   180000000

Table 4: Detailed hyperparameters used in supervised fine-tuning.
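To illustrate the distillation step described in Section B.1, the snippet below is a minimal sketch of assembling one SFT instance in the messages format shown in Figure B.1. The build_sft_sample helper is hypothetical, and the teacher response shown is abridged rather than an actual Gemini-2.0-flash output.

    import json

    def build_sft_sample(instruction: str, image_path: str,
                         teacher_response: dict) -> dict:
        """Assemble one distilled SFT instance (hypothetical helper; field
        names mirror the Figure B.1 example, not a published API)."""
        return {
            "messages": [
                {"role": "user", "content": instruction},  # the EB-ALFRED prompt
                {"role": "assistant", "content": json.dumps(teacher_response)},
            ],
            "images": [image_path],
        }

    # Abridged example of a teacher response in the structured output format.
    sample = build_sft_sample(
        instruction="{EB-ALFRED prompt}",
        image_path="example.png",
        teacher_response={
            "reasoning_and_reflection": "Find the ladle, rinse it in the sink, "
                                        "then move it to the table.",
            "visual_state_description": "A kitchen with a sink and a counter.",
            "language_plan": "Locate the ladle, rinse it, place it on the table.",
            "executable_plan": [{"action_id": 64, "action_name": "find a Ladle"}],
        },
    )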
B.3 Training Results

We record the final metrics and the loss curve from the supervised fine-tuning process. Figure 7 summarizes the key training statistics after 3 epochs of full-parameter tuning, and Figure 8 shows the training loss curve.

SFT dataset example (Figure B.1):

    {
      "messages": [
        {"role": "user", "content": "{EB-ALFRED prompt}"},
        {"role": "assistant", "content": {
          "reasoning_and_reflection": "The user wants me to rinse off a ladle and move it to the table. First I need to find the ladle, pick it up, find the sink, put the ladle down in the sink, find the faucet, turn on the faucet to rinse the ladle, turn off the faucet, pick up the ladle, find the table, and put the ladle on the table.",
          "visual_state_description": "I can see a garbage can and a fridge in the current view.",
          "language_plan": "First, locate a ladle and pick it up. Then, find a sink and put the ladle down in the sink. Next, find a faucet, turn it on to rinse the ladle, and then turn it off. After that, pick up the rinsed ladle, find a dining table, and place the ladle on the table.",
          "executable_plan": [
            {"action_id": 64, "action_name": "find a Ladle"},
            {"action_id": 109, "action_name": "pick up the Ladle"},
            {"action_id": 79, "action_name": "find a Sink"},
            {"action_id": 133, "action_name": "put down the object in hand"},
            {"action_id": 2, "action_name": "find a Faucet"},
            {"action_id": 155, "action_name": "turn on the Faucet"},
            {"action_id": 156, "action_name": "turn off the Faucet"},
            {"action_id": 64, "action_name": "find a Ladle"},
            {"action_id": 109, "action_name": "pick up the Ladle"},
            {"action_id": 18, "action_name": "find a DiningTable"},
            {"action_id": 133, "action_name": "put down the object in hand"}
          ]
        }}
      ],
      "images": ["example.png"]
    }

RFT dataset example (Figure B.2):

    {
      "id": "trial_T20190909_062150_965386_remain_0",
      "question": "{Our_RFT_prompt}",
      "answer": "['Goto handtowelholder', 'Pickup handtowel', 'Goto garbagecan', 'Put handtowel']",
      "message": [
        {"role": "system", "content": "Solve the question. The user asks a question, and you solves it. You first thinks about the reasoning process in the mind and then provides the user with the answer."},
        {"role": "user", "content": [
          {"type": "image", "image": "example.jpg"},
          {"type": "text", "text": "{Our_RFT_prompt}"}
        ]}
      ]
    }

Figure 7: Summary of SFT training results.

    Metric          Value
    Epochs          3.0
    Total FLOPs     3.13e13
    Training Loss   0.252
    Runtime (s)     21111.79
    Samples/sec     0.142
    Steps/sec       0.018

Figure 8: Training loss curve during the SFT stage.

C Additional Details of RFT Training Stage

C.1 Training Dataset Details

We construct our reinforcement fine-tuning (RFT) dataset based on the ALFRED benchmark, following the decomposition and formatting strategy described in Section 3. Notably, we do not reuse the SFT-distilled dataset for reinforcement fine-tuning. This decision is motivated by two key considerations: (1) the distilled data may contain suboptimal trajectories, introducing noise into the learning signal; and (2) the distilled instruction format is tightly coupled with the benchmark evaluation prompts, whereas our constructed dataset introduces instruction variations that encourage greater policy generalization and better isolate the impact of reinforcement learning. The resulting dataset contains 43,898 samples, each formatted to include a natural language instruction, a visual observation, and a ground-truth action sequence used for reward computation. We provide a full example of a training sample from the RFT dataset in Figure B.2.
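For concreteness, the following is a minimal sketch of assembling one RFT record in the Figure B.2 format. The build_rft_record helper is hypothetical, and the message field is shown as a structured list rather than the serialized string stored in the actual record.

    def build_rft_record(trial_id: str, prompt: str, image_path: str,
                         gt_actions: list[str]) -> dict:
        """Assemble one RFT record (hypothetical helper). gt_actions is the
        ground-truth action sequence used later for reward computation."""
        return {
            "id": trial_id,
            "question": prompt,
            "answer": str(gt_actions),
            "message": [
                # System prompt quoted verbatim from the Figure B.2 example.
                {"role": "system",
                 "content": "Solve the question. The user asks a question, and "
                            "you solves it. You first thinks about the reasoning "
                            "process in the mind and then provides the user with "
                            "the answer."},
                {"role": "user",
                 "content": [{"type": "image", "image": image_path},
                             {"type": "text", "text": prompt}]},
            ],
        }

    record = build_rft_record(
        trial_id="trial_T20190909_062150_965386_remain_0",
        prompt="{Our_RFT_prompt}",
        image_path="example.jpg",
        gt_actions=["Goto handtowelholder", "Pickup handtowel",
                    "Goto garbagecan", "Put handtowel"],
    )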
C.2 Training Hyperparameters

We implement reinforcement fine-tuning using the OpenRLHF framework [16], adopting the Group Relative Policy Optimization (GRPO) algorithm [36] to optimize the policy from structured reward feedback. A full list of training hyperparameters is provided in Table 5.

    Hyperparameter                 Value
    ref_num_nodes                  1
    vllm_num_engines               8
    ref_num_gpus_per_node          8
    actor_num_gpus_per_node        8
    actor_num_nodes                1
    vllm_tensor_parallel_size      1
    vllm_gpu_memory_utilization    0.65
    vllm_enable_sleep              True
    vllm_sync_backend              nccl
    temperature                    1.0
    max_epochs                     1
    max_episodes                   10
    prompt_max_len                 3000
    max_samples_len                10000
    generate_max_len               4096
    advantage_estimator            group_norm
    zero_stage                     3
    actor_learning_rate            1e-6
    init_kl_coef                   0.0
    n_samples_per_prompt           8
    micro_train_batch_size         1
    micro_rollout_batch_size       2
    train_batch_size               128
    rollout_batch_size             128
    freeze_prefix                  visual
    enable_accuracy_filter         True
    accuracy_lower_bound           0.1
    accuracy_upper_bound           0.9

Table 5: Hyperparameter configuration used during reinforcement fine-tuning.

C.3 Training Log and Result

We record the reinforcement fine-tuning process using several key indicators, as visualized in Figure 9. The total reward is the combined score of the format reward and the accuracy reward. Because we use an online filtering strategy during training, we distinguish between two types of accuracy reward: accuracy reward (filtered), which reflects the reward from selected high-quality samples that pass the filtering criteria, and accuracy reward (original), which is the average reward across all generated responses prior to filtering. We also report two length statistics: response length, the number of tokens the model generates per output, and total length, the combined token length of the input prompt and the generated response.

Figure 9: Training curves during reinforcement fine-tuning (panels: accuracy rewards, accuracy rewards (original), format rewards, total rewards, response length, total length). The figure shows the progression of total reward, filtered and unfiltered accuracy reward, and generation-length statistics.
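To make the GRPO settings in Table 5 concrete, here is a minimal sketch of the group-normalized advantage estimator (advantage_estimator = group_norm) together with an online accuracy filter. The filtering criterion shown, keeping only prompts whose group mean accuracy lies within the (accuracy_lower_bound, accuracy_upper_bound) band, is our reading of the Table 5 flags, not a confirmed implementation detail, and this is not the OpenRLHF implementation.

    import statistics

    def group_advantages(rewards: list[float]) -> list[float]:
        """Group-normalized advantages (the group_norm estimator in Table 5):
        each of the n_samples_per_prompt responses is scored against the
        group mean and standard deviation."""
        mean = statistics.mean(rewards)
        std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
        return [(r - mean) / std for r in rewards]

    def passes_accuracy_filter(acc_rewards: list[float],
                               lower: float = 0.1, upper: float = 0.9) -> bool:
        """Online filter (enable_accuracy_filter in Table 5): keep a prompt only
        if its group is informative, i.e. neither almost always wrong nor almost
        always right. The exact criterion is an assumption."""
        mean_acc = statistics.mean(acc_rewards)
        return lower <= mean_acc <= upper

    # Example: accuracy rewards for 8 sampled responses (n_samples_per_prompt = 8).
    acc = [0.4, 0.0, 1.0, 0.2, 0.8, 1.0, 0.0, 0.6]
    if passes_accuracy_filter(acc):
        advantages = group_advantages(acc)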
D Additional Details for Evaluation

D.1 Detailed Introduction to EmbodiedBench

EmbodiedBench is a comprehensive interactive benchmark designed to evaluate vision-language agents in embodied planning scenarios. Unlike static visual question answering settings, EmbodiedBench offers dynamic, simulation-based environments where agents must generate and execute multi-step plans grounded in first-person visual observations and natural language instructions. The benchmark spans four embodied environments and supports over 1,100 diverse tasks with hierarchical action levels, covering both high-level planning and low-level control. In our work, we focus on two high-level planning environments within EmbodiedBench.

EB-ALFRED. EB-ALFRED is built upon the ALFRED dataset [40] and implemented on top of the AI2-THOR simulator [22]. It supports eight core skill types, such as pick up, put down, find, open/close, and turn on/off. The environment provides egocentric visual inputs and textual feedback (e.g., success/failure messages), enabling agents to adaptively plan and act. Compared to the original ALFRED setup, EB-ALFRED enhances object diversity and simulator robustness. Specifically, it supports multiple object instances of the same type, merges redundant actions (e.g., a unified put down), and dynamically adjusts the action-space size (ranging from 171 to 298).
These improvements provide a more realistic and flexible environment for assessing embodied planning capabilities.

EB-Habitat. EB-Habitat extends the Language Rearrangement benchmark [35], built on the Habitat 2.0 simulator. It focuses on five high-level skills: navigation, pick, place, open, and close. Unlike ALFRED, navigation in EB-Habitat is constrained to receptacle-type targets, requiring more sophisticated exploration and scene understanding. The environment includes 282 instruction templates and places more emphasis on spatial reasoning and location-aware planning, making it a complementary testbed for generalization.

Task Subsets. To enable fine-grained capability analysis, Embench introduces six distinct task subsets:
• Base: Evaluates standard task-solving skills under low to medium complexity, testing general planning competence.
• Common Sense: Assesses agents' ability to reason over implicit object references and everyday knowledge.
• Complex Instruction: Presents long, noisy, or ambiguous contexts to evaluate the agent's ability to extract user intent.
• Spatial Awareness: Requires understanding object relationships in space, such as relative positions or arrangements.
• Visual Appearance: Involves identifying objects via attributes like color or shape, testing fine-grained visual recognition.
• Long Horizon: Contains tasks demanding long action sequences (often exceeding 15 steps), stressing planning depth and temporal consistency.

Table 6: Examples of each task type from EB-ALFRED and EB-Habitat.

Task Subset | ALFRED Example | Habitat Example
Base | Put washed lettuce in the refrigerator. | Move one of the pear items to the indicated sofa.
Common Sense | Place washed leafy green vegetable in a receptacle that can keep it fresh. | Prepare for a game by delivering something to play with to the TV stand.
Complex Instruction | Place the washed lettuce in the refrigerator. This way, it's ready for any delightful recipe ideas you have. | When you find the fridge door open, go ahead and move one bowl to the sofa; otherwise, transport one hammer to the sofa.
Spatial Awareness | Put two spray bottles in the cabinet under the sink against the wall. | Move a spatula from the right counter to the right receptacle of the left counter.
Visual Appearance | Put a knife in a blue container onto the black table in the corner. | Deliver a small red object with green top to the indicated large gray piece of furniture.
Long Horizon | Pick up knife, slice apple, put knife in bowl, heat apple slice in microwave, put apple slice on table. | Move the Rubik's cube to the left counter, the towel to the left counter, and the bowl to the brown table.

Each subset is designed to probe a specific capability of embodied reasoning, such as commonsense inference, spatial understanding, or long-horizon planning. In our experiments, we evaluate model performance across all six subsets to provide a fine-grained analysis. As shown in Table 6, these categories span a wide range of reasoning challenges. Notably, since our reinforcement fine-tuning dataset only includes Base tasks, we observe a significantly larger performance gain in this category, whereas improvements in other subsets are relatively modest. This highlights the need for more diverse training data to support generalizable planning across varied task types.

Overall, Embench provides a rigorous, scalable, and diagnostic framework for benchmarking embodied agents across diverse real-world challenges. In our setup, we use EB-ALFRED for in-domain training and evaluation, while EB-Habitat serves as an out-of-domain testbed to examine generalization performance.
D.2 Detailed Introduction to Baselines

To comprehensively evaluate our proposed method, we compare it against a diverse set of baselines, covering both proprietary and open-source models, as well as models specifically optimized for multimodal reasoning and embodied planning.

(1) Closed-source models: We include several leading proprietary vision-language models as strong general-purpose baselines: Claude-3.5-Sonnet [3], Gemini-2.0-flash [4], GPT-4o [2], and GPT-4o-mini [1].

(2) Open-source general VLMs: We evaluate widely adopted open-source VLMs trained for generic multimodal tasks, such as LLaMA-3.2-Vision-11B [5], Qwen2.5-VL-7B [7], and InternVL2.5-8B [9].

(3) Open-source reasoning VLMs: We further include two representative models explicitly optimized for multimodal reasoning, MM-Eureka [27] and R1-VL [59]. MM-Eureka extends rule-based reinforcement learning to multimodal reasoning, enabling models to improve through reward-driven optimization without supervised fine-tuning. It reproduces key behaviors from language-only RL systems, such as reflection and reward-aligned response growth, achieving strong data efficiency and reasoning performance. R1-VL enhances step-by-step reasoning in multimodal LLMs via StepGRPO, a reinforcement learning framework with dense, rule-based rewards for accuracy and logical consistency. It surpasses imitation learning by guiding models to self-correct flawed reasoning, achieving superior results on multiple benchmarks. We also attempted to evaluate other open-source reasoning models, such as Visual-RFT [25] and Open-R1 [12]. However, their inference speed was prohibitively slow, resulting in impractically long evaluation times on interactive benchmarks, and their final planning performance remained poor in embodied planning scenarios.

(4) Embodied VLMs: We also include RoboBrain [19] and TAPA [51], two representative open-source large models designed for embodied tasks. TAPA is the first model specifically optimized for embodied multi-step planning, but it lacks visual perception capability; we therefore convert visual observations into textual descriptions for its evaluation. RoboBrain is a state-of-the-art VLM for embodied scenarios that integrates robotic and general multimodal data through a multi-stage training pipeline, leveraging long-horizon video and high-resolution image supervision to enhance manipulation and planning performance. While there exist other VLMs designed for embodied settings, many are not publicly available, such as Reason-RFT [44], Embodied-R [62], and Embodied-Reasoner [60]. Other models, such as EmbodiedGPT [30] and TAPA [51], exhibit poor generalization to new task distributions, achieving near-zero scores on Embench tasks and revealing a lack of transferable planning capabilities.

D.3 Evaluation Metrics

We evaluate model performance using task success rate, defined as the percentage of tasks in which the agent successfully completes all required goals. A task is considered successful only if the generated action sequence leads the environment to a final state that satisfies all predefined goal-checking conditions.
During evaluation, the vision-language model generates a multi-step plan at each interaction step based on the current egocentric observation. If a plan fails, either by producing an invalid action or by failing to progress toward the task goal, the agent restarts planning from the latest valid state.

D.4 More Experiment Results

In addition to task success rate, we provide supplementary evaluation results using two additional metrics: Progress Rate (PR) and Environment Steps (ES). Progress Rate (PR) quantifies the degree to which the agent completes the task, measured as the proportion of goal conditions satisfied by the final environment state. This metric provides a finer-grained signal than binary success, especially for partially completed tasks. Environment Steps (ES) refers to the number of actions executed in the environment before task termination. A lower ES generally indicates more efficient planning and fewer redundant or failed actions. Complete results across these metrics are reported in Appendix Tables 7 and 8.
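As an illustration of how these metrics relate, the sketch below computes success, PR, and ES for a single episode. The Episode record and the goal-condition predicates are hypothetical stand-ins for the simulator's internal goal checks.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Episode:
        actions: list[str]   # actions executed before termination
        final_state: dict    # final environment state (hypothetical encoding)

    def evaluate(episode: Episode,
                 goal_checks: list[Callable[[dict], bool]]):
        """Success: all goal-checking conditions hold in the final state.
        PR: fraction of goal conditions satisfied. ES: executed actions."""
        satisfied = sum(check(episode.final_state) for check in goal_checks)
        progress_rate = satisfied / len(goal_checks)
        success = progress_rate == 1.0
        env_steps = len(episode.actions)
        return success, progress_rate, env_steps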
Table 7: Progress Rate (PR) and Environment Steps (ES) on EB-ALFRED (Seen). Each cell reports PR/ES.

Model | Avg | Base | Common | Complex | Visual | Spatial | Long
Closed-Source MLLMs
Claude-3.5-Sonnet | 70.11/14.9 | 72.67/12.2 | 65.83/12.74 | 73.33/11.48 | 65.5/14.02 | 68.83/16.96 | 74.5/21.98
Gemini-2.0-flash | 57.13/16.5 | 61.83/13.96 | 60.67/14.0 | 55.33/15.16 | 55.33/15.26 | 46.67/17.04 | 63.0/23.56
GPT-4o | 61.78/16.77 | 65.67/12.54 | 57.17/16.1 | 74.67/13.92 | 58.33/15.2 | 52.33/17.58 | 62.5/25.43
GPT-4o-mini | 30.42/19.69 | 36.33/17.32 | 29.83/18.06 | 38.0/17.74 | 27.33/18.48 | 31.0/19.9 | 20.0/26.62
Open-Source General MLLMs
Qwen2.5-VL-7B | 6.86/9.4 | 5.67/8.78 | 4.0/4.2 | 5.0/5.28 | 5.33/7.16 | 0.67/8.26 | 20.5/22.72
InternVL2.5-8B | 5.78/7.87 | 6.17/8.2 | 0.67/4.9 | 16.0/8.92 | 4.0/6.78 | 6.33/7.52 | 1.5/10.92
Open-Source Reasoning MLLMs
R1-VL-7B | 2.78/4.01 | 3.0/3.22 | 3.0/2.06 | 6.0/1.7 | 0.67/1.62 | 0.0/2.66 | 4.0/12.78
MM-Eureka-Qwen-7B | 6.59/8.48 | 8.67/7.64 | 5.33/5.04 | 8.67/9.72 | 3.67/6.46 | 0.67/6.58 | 12.5/15.42
Open-Source Embodied MLLMs
RoboBrain | 1.22/6.7 | 3.33/6.1 | 0.67/6.3 | 0.67/3.68 | 0.67/7.56 | 0/6.36 | 2.0/10.22
Tapa | 0/0.03 | 0/0.06 | 0/0 | 0/0 | 0/0.04 | 0/0.08 | 0/0
Open-Source Embodied + Reasoning MLLMs
Ours (Base) | 6.86/9.4 | 5.67/8.78 | 4.0/4.2 | 5.0/5.28 | 5.33/7.16 | 0.67/8.26 | 20.5/22.72
Ours (SFT only) | 23.8/15.06 | 39/13.14 | 26.6/13.04 | 27.6/12.56 | 19.3/14.12 | 14.3/15.16 | 16.5/22.38
Ours (SFT+RFT) | 44.25/18.53 | 61.6/15.58 | 48.6/16.62 | 56.7/15.94 | 38/19.06 | 42.6/17.08 | 18/26.9

Table 8: Progress Rate (PR) and Environment Steps (ES) on EB-Habitat (Unseen). Each cell reports PR/ES.

Model | Avg | Base | Common | Complex | Visual | Spatial | Long
Closed-Source MLLMs
Claude-3.5-Sonnet | 70.9/10.7 | 98/6.54 | 69.5/10.46 | 75.5/10.6 | 75.1/10.74 | 45.2/9.44 | 62.1/16.42
Gemini-2.0-flash | 38.5/13.41 | 76.5/8.56 | 31.5/12.9 | 34/15.66 | 32.7/13.7 | 37/12 | 19.8/17.66
GPT-4o | 60.8/14.32 | 85.3/9.76 | 34/14.74 | 67.5/13.34 | 64.3/13.82 | 46.3/14.78 | 67.2/19.5
GPT-4o-mini | 44.2/18.8 | 73.6/10.96 | 46/18.78 | 40.5/19.76 | 36.8/21.76 | 47.5/18.86 | 20.6/22.7
Open-Source General MLLMs
Qwen2.5-VL-7B | 19.05/12.58 | 44.5/10.64 | 6.5/14.9 | 17/11.12 | 6.4/14.12 | 28.8/11.74 | 11.1/12.94
InternVL2.5-8B | 26/16.77 | 52.9/13.1 | 13/19.1 | 22/16.48 | 21.6/18.36 | 35.4/18.24 | 11.1/15.32
Open-Source Reasoning MLLMs
R1-VL-7B | 8.06/5.08 | 24.6/5.9 | 0/3.78 | 4/4.38 | 6/1.8 | 11.8/7.78 | 2/6.88
MM-Eureka-Qwen-7B | 22.03/13.53 | 40.5/10.24 | 20.5/15.78 | 19/11.34 | 15.9/15.66 | 31.3/13.74 | 5/14.4
Open-Source Embodied MLLMs
RoboBrain | 20.18/10.68 | 39.1/8.08 | 9.5/9.08 | 21/11.3 | 12.9/13.9 | 31.1/11.48 | 7.5/10.24
Tapa | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0 | 0/0
Open-Source Embodied + Reasoning MLLMs
Ours (Base) | 19.05/12.58 | 44.5/10.64 | 6.5/14.9 | 17/11.12 | 6.4/14.12 | 28.8/11.74 | 11.1/12.94
Ours (SFT only) | 20.05/12.40 | 38.75/10.62 | 7/12.3 | 19.5/12.76 | 16/11.24 | 34.6/15.26 | 4.5/12.26
Ours (SFT+RFT) | 27.18/13.31 | 58.75/8.72 | 15/14.98 | 23/13.3 | 20/13.36 | 37/13.78 | 9.33/15.76

E Case Study and Visualization

E.1 Case Study

To better understand how our model performs embodied multi-step planning, we present detailed case studies illustrating its behavior and reasoning process. Specifically, we compare the outputs of our reinforcement-tuned model with the base Qwen2.5-VL model to highlight improvements in planning coherence and action correctness, and we present full multi-step execution trajectories from our model to show how it plans and interacts with the environment to complete specific tasks.

Figure 10 and Figure 11 show side-by-side comparisons between the two models in the EB-ALFRED and EB-Habitat environments, respectively. We observe that the base model often produces incomplete or illogical plans, while our model generates more structured and context-aware action sequences, along with interpretable reasoning steps. Figures 12, 13, 14, and 15 further visualize full planning trajectories executed by our model in representative tasks from EB-ALFRED and EB-Habitat. These examples demonstrate the model's ability to maintain long-horizon coherence, correctly interpret dynamic observations, and recover from intermediate failures.

E.2 Prompt

In this section, we document the full prompt formats used in both evaluation and training stages, including for EB-ALFRED, EB-Habitat, and our reinforcement fine-tuning (RFT) process.

EB-ALFRED Prompt. The EB-ALFRED prompt is used for evaluating models within the EB-ALFRED environment of Embench. Our SFT stage also adopts this prompt format.

EB-Habitat Prompt. This prompt format is used in Embench's EB-Habitat environment, which differs from EB-ALFRED in simulator, object distribution, and language patterns.

RFT Training Prompt. During reinforcement fine-tuning, we adopt a custom prompt format. While still grounded in the same simulation environment, our RFT prompts include modifications in action representation and instruction phrasing. These differences help introduce broader data diversity and encourage the model to learn a more generalizable planning policy.
Figure 10: Case comparison (our model vs. the base model) in EB-ALFRED. For the instruction "Put the books on the desk," both models' full JSON responses are shown. The annotations mark the base model's failure modes: its description references a dresser and books that do not appear in the image, and it appends an unmotivated "close the Safe" action even though nothing in the task concerns the safe.
Figure 11: Case comparison (our model vs. the base model) in EB-Habitat. For the instruction "Find a mug on the right counter and move it to the TV stand," both models' full JSON responses are shown. The annotations mark the base model's failure modes: its executed action ("place at the table 2") differs from its stated plan, the referenced "table 2" is unsubstantiated by the observation, and repeated attempts in subsequent steps caused the episode to fail.
Figure 12: Our model's full trajectory execution in EB-ALFRED. For the instruction "Examine a tv remote next to the light of a tall lamp," the model finds and picks up the RemoteControl, then finds and turns on the FloorLamp, completing the task in four successful actions.

Figure 13: Our model's full trajectory execution, example 1, in EB-Habitat. For the instruction "Retrieve a apple and place it into the predetermined sofa," the model navigates to the kitchen counter, picks up the apple, navigates to the sofa, and places the apple there, completing the task in four successful actions.
Figure 14: Our model's full trajectory execution, example 2 (part 1), in EB-Habitat.

Figure 15: Our model's full trajectory execution, example 2 (part 2), in EB-Habitat.

Figures 14 and 15 show the same eight-step episode for the instruction "Move a ball to the sofa." The model's first pick-up attempts fail with the feedback that the robot is not near the ball; after three replanning rounds driven by this feedback, the model navigates to the table holding the ball, picks it up, navigates to the sofa, and places the ball there, completing the task.
EB-ALFRED prompt

"<image>## You are a robot operating in a home. Given a task, you must accomplish the task using a defined set of actions to achieve the desired outcome.
## Action Descriptions and Validity Rules
* Find: Parameterized by the name of the receptacle to navigate to. So long as the object is present in the scene, this skill is always valid.
* Pick up: Parameterized by the name of the object to pick. Only valid if the robot is close to the object, not holding another object, and the object is not inside a closed receptacle.
* Put down: Parameterized by the name of the object to put down to a nearby receptacle. Only valid if the robot is holding an object.
* Drop: Parameterized by the name of the object to put down. It is different from the Put down action, as this does not guarantee the held object will be put into a specified receptacle.
* Open: Parameterized by the name of the receptacle to open. Only valid if the receptacle is closed and the robot is close to the receptacle.
* Close: Parameterized by the name of the receptacle to close. Only valid if the receptacle is open and the robot is close to the receptacle.
EB-ALFRED prompt
"<image>## You are a robot operating in a home. Given a task, you must accomplish the task using a defined set of actions to achieve the desired outcome.
## Action Descriptions and Validity Rules
* Find: Parameterized by the name of the receptacle to navigate to. So long as the object is present in the scene, this skill is always valid.
* Pick up: Parameterized by the name of the object to pick. Only valid if the robot is close to the object, not holding another object, and the object is not inside a closed receptacle.
* Put down: Parameterized by the name of the object to put down to a nearby receptacle. Only valid if the robot is holding an object.
* Drop: Parameterized by the name of the object to put down. It is different from the Put down action, as this does not guarantee the held object will be put into a specified receptacle.
* Open: Parameterized by the name of the receptacle to open. Only valid if the receptacle is closed and the robot is close to the receptacle.
* Close: Parameterized by the name of the receptacle to close. Only valid if the receptacle is open and the robot is close to the receptacle.
* Turn on: Parameterized by the name of the object to turn on. Only valid if the object is turned off and the robot is close to the object.
* Turn off: Parameterized by the name of the object to turn off. Only valid if the object is turned on and the robot is close to the object.
* Slice: Parameterized by the name of the object to slice. Only valid if the object is sliceable and the robot is close to the object.
## The available action id (0 ~ 207) and action names are: {ALFRED ACTION LIST}
## Task Execution Example: {IN-CONTEXT TASK EXAMPLE}
## Guidelines
1. **Output Plan**: Avoid generating an empty plan. Each plan should include no more than 20 actions.
2. **Visibility**: Always locate a visible object by the 'find' action before interacting with it.
3. **Action Guidelines**: Make sure the action name and its corresponding action id match in the output. Avoid performing actions that do not meet the defined validity criteria. For instance, if you want to put an object in a receptacle, use 'put down' rather than 'drop' actions.
4. **Prevent Repeating Action Sequences**: Do not repeatedly execute the same action or sequence of actions. Try to modify the action sequence because previous actions do not lead to success.
5. **Multiple Instances**: There may be multiple instances of the same object, distinguished by an index following their names, e.g., Cabinet_2, Cabinet_3. You can explore these instances if you do not find the desired object in the current receptacle.
6. **Reflection on History and Feedback**: Use interaction history and feedback from the environment to refine and improve your current plan. If the last action is invalid, reflect on the reason, such as not adhering to action rules or missing preliminary actions, and adjust your plan accordingly.
## Now the human instruction is: Rinse off a ladle and move it to the table. You are supposed to output in json. You need to describe the current visual state from the image, output your reasoning steps and plan. At the end, output the action id (0 ~ 207) from the available actions to execute."
EB-Habitat prompt
"<image>## You are a robot operating in a home. Given a task, you must accomplish the task using a defined set of actions to achieve the desired outcome.
## Action Descriptions and Validity Rules:
* Navigation: Parameterized by the name of the receptacle to navigate to. So long as the receptacle is present in the scene, this skill is always valid.
* Pick: Parameterized by the name of the object to pick. Only valid if the robot is close to the object, not holding another object, and the object is not inside a closed receptacle.
* Place: Parameterized by the name of the receptacle to place the object on. Only valid if the robot is close to the receptacle and is holding an object.
* Open: Parameterized by the name of the receptacle
to open. Only valid if the receptacle is closed and the robot is close to the receptacle.
* Close: Parameterized by the name of the receptacle to close. Only valid if the receptacle is open and the robot is close to the receptacle.
## The available action id (0 ~ 69) and action names are: {HABITAT ACTION LIST}
## Task Execution Example: {IN-CONTEXT TASK EXAMPLE}
## Guidelines
1. **Output Plan**: Avoid generating an empty plan. Each plan should include no more than 20 actions.
2. **Visibility**: If an object is not currently visible, use the "Navigation" action to locate it or its receptacle before attempting other operations.
3. **Action Validity**: Make sure the action name and its corresponding action id match in the output. Avoid performing actions that do not meet the defined validity criteria.
4. **Prevent Repeating Action Sequences**: Do not repeatedly execute the same action or sequence of actions. Try to modify the action sequence because previous actions do not lead to success.
5. **Multiple Instances**: There may be multiple instances of the same object, distinguished by an index following their names, e.g., cabinet 2, cabinet 3. You can explore these instances if you do not find the desired object in the current receptacle.
6. **Reflection on History and Feedback**: Use interaction history and feedback from the environment to refine and enhance your current strategies and actions. If the last action is invalid, reflect on the reason, such as not adhering to action rules or missing preliminary actions, and adjust your plan accordingly.
## Now the human instruction is: Move one of the pear items to the indicated sofa. You are supposed to output in json. You need to describe the current visual state from the image, output your reasoning steps and plan. At the end, output the action id (0 ~ 69) from the available actions to execute."
Our RFT prompt
"You are a robot operating in a home. Given a task, you must accomplish the task using a defined set of actions to achieve the desired outcome.
## Action Descriptions and Validity Rules
* GotoLocation: Parameterized by the name of the target location or receptacle to navigate to. Always valid so long as the target exists in the scene.
* PickupObject: Parameterized by the name of the object to pick up. Valid only if the robot is close to the object, is not holding anything, and the object is accessible.
* PutObject: Parameterized by the name of the receptacle or surface where the held object will be placed. Valid only if the robot is holding an object.
* ToggleObject: Parameterized by the name of the object whose state can be toggled (e.g., lamp, faucet). Valid only if the robot is close to the object.
* CoolObject: Parameterized by the name of the object to cool. Requires the robot to be holding the object and near a cooling appliance such as a fridge.
* SliceObject: Parameterized by the name of the object to slice. Requires that the object is slice-able and the robot holds an appropriate cutting tool.
* CleanObject: Parameterized by the name of the object to clean. Requires the robot to be near a water source and the object supports cleaning.
* HeatObject: Parameterized by the name of the object to heat. Requires the robot to be holding the object and near a heating appliance such as a microwave or stove.
## The available action id (0 ~ 224) and action names are: {OUR RFT ACTION LIST}
## Guidelines
1. **Output Plan**: Avoid generating an empty plan. Each plan should include no more than 20 actions.
2. **Visibility**: Always locate a visible object by the 'goto' action before interacting with it.
3. **Action Guidelines**: Make sure the action name and its corresponding action id match in the output. Avoid performing actions that do not meet the defined validity criteria.
4. **Prevent Repeating Action Sequences**: Do not repeatedly execute the same action or sequence of actions.
5. **Multiple Instances**: There may be multiple instances of the same object, distinguished by an index following their names, e.g., Cabinet_2.
6. **Reflection on History and Feedback**: Use interaction history and feedback from the environment to refine and improve your current plan.
## Expected JSON output format
```json
{"reasoning_and_reflection": "<string>", "visual_state_description": "<string>", "language_plan": "<string>", "executable_plan": [{"action_id": <int>, "action_name": "<string>"}]}
```
## Now the human instruction is: put a towel into a garbage can
The history actions are: [{HISTORY LIST}]
Considering the above interaction history and the current image state, to achieve the human instruction, you are supposed to output in json. You need to describe the current visual state from the image, output your reasoning steps and plan. You should think carefully and output the comprehensive thought process in the 'reasoning_and_reflection' part. At the end, output the action id (0 ~ 224) from the available actions to execute."
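Since downstream execution depends on this JSON contract, a lightweight schema check is a natural companion to the prompt. A minimal sketch, assuming only the field names shown in the format above; the helper name and error handling are illustrative, not part of the paper's pipeline:

```python
import json

REQUIRED_FIELDS = {"reasoning_and_reflection", "visual_state_description",
                   "language_plan", "executable_plan"}

def parse_response(raw: str, max_plan_len: int = 20) -> list[dict]:
    """Parse a model response against the expected JSON output format."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    plan = data["executable_plan"]
    # Guideline 1: a non-empty plan with at most 20 actions.
    if not 1 <= len(plan) <= max_plan_len:
        raise ValueError("plan must contain between 1 and 20 actions")
    for step in plan:
        if not isinstance(step.get("action_id"), int) or not step.get("action_name"):
            raise ValueError(f"malformed step: {step}")
    return plan
```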
Part of EB-ALFRED Action list
action id 1: find a Potato, action id 2: find a Faucet, action id 3: find a Ottoman, action id 4: find a CoffeeMachine, action id 5: find a Candle, action id 6: find a CD, action id 7: find a Pan, action id 8: find a Watch, action id 9: find a HandTowel, action id 10: find a SprayBottle, action id 11: find a BaseballBat, action id 12: find a CellPhone, action id 13: find a Kettle, action id 14: find a Mug, action id 15: find a StoveBurner, action id 16: find a Bowl, action id 17: find a Toilet, action id 18: find a DiningTable, action id 19: find a Spoon, action id 20: find a TissueBox, action id 21: find a Shelf, action id 22: find a Apple, action id 23: find a TennisRacket, action id 24: find a SoapBar, action id 25: find a Cloth, action id 26: find a Plunger, action id 27: find a FloorLamp, action id 28: find a ToiletPaperHanger, action id 29: find a CoffeeTable, action id 30: find a Spatula, action id 31: find a Plate, action id 32: find a Bed, action id 33: find a Glassbottle, action id 34: find a Knife, action id 35: find a Tomato, action id 36: find a ButterKnife, action id 37: find a Dresser, action id 38: find a Microwave, action id 39: find a CounterTop, action id 40: find a GarbageCan, action id 41: find a WateringCan, action id 42: find a Vase, action id 43: find a ArmChair, action id 44: find a Safe, action id 45: find a KeyChain, action id 46: find a Pot, action id 47: find a Pen, action id 48: find a Cabinet, action id 49: find a Desk, action id 50: find a Newspaper, action id 51: find a Drawer, action id 52: find a Sofa, action id 53: find a Bread, action id 54: find a Book, action id 55: find a Lettuce, action id 56: find a CreditCard, action id 57: find a AlarmClock, action id 58: find a ToiletPaper, action id 59: find a SideTable, action id 60: find a Fork, action id 61: find a Box, action id 62: find a Egg, action id 63: find a DeskLamp, action id 64: find a Ladle, action id 65: find a WineBottle, action id 66: find a Pencil, action id 67: find a Laptop, action id 68: find a RemoteControl, action id 69: find a BasketBall, action id 70: find a DishSponge, action id 71: find a Cup, action id 72: find a SaltShaker, action id 73: find a PepperShaker, action id 74: find a Pillow, action id 75: find a Bathtub, action id 76: find a SoapBottle, action id 77: find a Statue, action id 78: find a Fridge, action id 79: find a Sink, action id 80: pick up the KeyChain, action id 81: pick up the Potato, action id 82: pick up the Pot, action id 83: pick up the Pen, action id 84: pick up the Candle, action id 85: pick up the CD, action id 86: pick up the Pan, action id 87: pick up the Watch, action id 88: pick up the Newspaper, action id 89: pick up the HandTowel, action id 90: pick up the SprayBottle, action id 91: pick up the BaseballBat, action id 92: pick up the Bread, action id 93: pick up the CellPhone, action id 94: pick up the Book, action id 95: pick up the Lettuce, action id 96: pick up the CreditCard, action id 97: pick up the Mug, action id 98: pick up the AlarmClock, action id 99: pick up the Kettle, action id 100: pick up the ToiletPaper
EB-Habitat Action list
action id 0: navigate to the cabinet 7, action id 1: navigate to the cabinet 6, action id 2: navigate to the cabinet 5, action id 3: navigate to the cabinet 4, action id 4: navigate to the refrigerator push point, action id 5: navigate to the chair 1, action id 6: navigate to the table 1, action id 7: navigate to the table 2, action id 8: navigate to the TV stand, action id 9: navigate to the sink in the kitchen, action id 10:
navigate to the right counter in the kitchen, action id 11: navigate to the left counter in the kitchen, action id 12: navigate to the sofa, action id 13: navigate to the refrigerator, action id 14: navigate to the left drawer of the kitchen counter, action id 15: navigate to the right drawer of the kitchen counter, action id 16: pick up the ball, action id 17: pick up the clamp, action id 18: pick up the hammer, action id 19: pick up the screwdriver, action id 20: pick up the padlock, action id 21: pick up the scissors, action id 22: pick up the block, action id 23: pick up the drill, action id 24: pick up the spatula, action id 25: pick up the knife, action id 26: pick up the spoon, action id 27: pick up the plate, action id 28: pick up the sponge, action id 29: pick up the cleanser, action id 30: pick up the plum, action id 31: pick up the pear, action id 32: pick up the peach, action id 33: pick up the apple, action id 34: pick up the lemon, action id 35: pick up the can, action id 36: pick up the box, action id 37: pick up the banana, action id 38: pick up the strawberry, action id 39: pick up the lego, action id 40: pick up the rubriks cube, action id 41: pick up the book, action id 42: pick up the bowl, action id 43: pick up the cup, action id 44: pick up the mug, action id 45: pick up the orange, action id 46: pick up the lid, action id 47: pick up the toy airplane, action id 48: pick up the wrench, action id 49: place at the chair 1, action id 50: place at the table 1, action id 51: place at the table 2, action id 52: place at the TV stand, action id 53: place at the sink in the kitchen, action id 54: place at the right counter in the kitchen, action id 55: place at the left counter in the kitchen, action id 56: place at the sofa, action id 57: place at the refrigerator, action id 58: place at the left drawer of the kitchen counter, action id 59: place at the right drawer of the kitchen counter, action id 60: open the refrigerator, action id 61: close the refrigerator, action id 62: open the cabinet 7, action id 63: open the cabinet 6, action id 64: open the cabinet 5, action id 65: open the cabinet 4, action id 66: close the cabinet 7, action id 67: close the cabinet 6, action id 68: close the cabinet 5, action id 69: close the cabinet 4
Part of Our RFT Action list
action id 1: goto apple, action id 2: goto armchair, action id 3: goto baseballbat, action id 4: goto basketball, action id 5: goto bathtubbasin, action id 6: goto bed, action id 7: goto bowl, action id 8: goto box,
action id 9: goto bread, action id 10: goto butterknife, action id 11: goto cabinet, action id 12: goto candle, action id 13: goto cart, action id 14: goto cellphone, action id 15: goto cloth, action id 16: goto coffeemachine, action id 17: goto coffeetable, action id 18: goto countertop, action id 19: goto creditcard, action id 20: goto cup, action id 21: goto desk, action id 22: goto desklamp, action id 23: goto diningtable, action id 24: goto dishsponge, action id 25: goto drawer, action id 26: goto dresser, action id 27: goto egg, action id 28: goto floorlamp, action id 29: goto fork, action id 30: goto fridge, action id 31: goto garbagecan, action id 32: goto handtowelholder, action id 33: goto keychain, action id 34: goto knife, action id 35: goto laptop, action id 36: goto lettuce, action id 37: goto microwave, action id 38: goto mug, action id 39: goto newspaper, action id 40: goto ottoman, action id 41: goto pan, action id 42: goto pen, action id 43: goto pencil, action id 44: goto plate, action id 45: goto plunger, action id 46: goto pot, action id 47: goto potato, action id 48: goto remotecontrol, action id 49: goto safe, action id 50: goto shelf, action id 51: goto sidetable, action id 52: goto sinkbasin, action id 53: goto soapbar, action id 54: goto soapbottle, action id 55: goto sofa, action id 56: goto spatula, action id 57: goto spoon, action id 58: goto statue, action id 59: goto stoveburner, action id 60: goto tennisracket, action id 61: goto tissuebox, action id 62: goto toilet, action id 63: goto toiletpaper, action id 64: goto toiletpaperhanger, action id 65: goto tomato, action id 66: goto vase, action id 67: goto watch, action id 68: goto wateringcan, action id 69: pickup alarmclock, action id 70: pickup apple, action id 71: pickup baseballbat, action id 72: pickup basketball, action id 73: pickup book, action id 74: pickup bowl, action id 75: pickup box, action id 76: pickup bread, action id 77: pickup butterknife, action id 78: pickup candle, action id 79: pickup cd, action id 80: pickup cellphone, action id 81: pickup cloth, action id 82: pickup creditcard, action id 83: pickup cup, action id 84: pickup dishsponge, action id 85: pickup egg, action id 86: pickup fork, action id 87: pickup glassbottle, action id 88: pickup handtowel, action id 89: pickup kettle, action id 90: pickup keychain, action id 91: pickup knife, action id 92: pickup ladle, action id 93: pickup laptop, action id 94: pickup lettuce, action id 95: pickup mug, action id 96: pickup newspaper, action id 97: pickup pan, action id 98: pickup pen, action id 99: pickup pencil, action id 100: pickup peppershaker
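Guideline 3 in each prompt requires the emitted action_id and action_name to agree. A small sketch of that check, parsing the comma-separated listings above into a lookup table; the parsing format is an assumption based on how the lists are printed here:

```python
def build_action_table(listing: str) -> dict[int, str]:
    """Parse 'action id N: <name>, action id M: <name>, ...' into {id: name}."""
    table = {}
    for entry in listing.split(", action id "):
        head, _, name = entry.removeprefix("action id ").partition(": ")
        table[int(head)] = name.strip()
    return table

def plan_is_consistent(plan: list[dict], table: dict[int, str]) -> bool:
    """Check that every step's action_id maps to exactly its action_name."""
    return all(table.get(step["action_id"]) == step["action_name"] for step in plan)

habitat_actions = build_action_table(
    "action id 12: navigate to the sofa, action id 56: place at the sofa")
assert plan_is_consistent(
    [{"action_id": 12, "action_name": "navigate to the sofa"}], habitat_actions)
```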
arXiv:2505.22067v1 [cs.CV] 28 May 2025
From Failures to Fixes: LLM-Driven Scenario Repair for Self-Evolving Autonomous Driving
Xinyu Xia1, Xingjun Ma2, Yunfeng Hu1, Ting Qu1, Hong Chen3, Xun Gong1,†
1Jilin University 2Fudan University 3Tongji University †Corresponding author
Abstract
Ensuring robust and generalizable autonomous driving requires not only broad scenario coverage but also efficient repair of failure cases, particularly those related to challenging and safety-critical scenarios. However, existing scenario generation and selection methods often lack adaptivity and semantic relevance, limiting their impact on performance improvement. In this paper, we propose SERA, an LLM-powered framework that enables autonomous driving systems to self-evolve by repairing failure cases through targeted scenario recommendation. By analyzing performance logs, SERA identifies failure patterns and dynamically retrieves semantically aligned scenarios from a structured bank. An LLM-based reflection mechanism further refines these recommendations to maximize relevance and diversity. The selected scenarios are used for few-shot fine-tuning, enabling targeted adaptation with minimal data. Experiments on the benchmark show that SERA consistently improves key metrics across multiple autonomous driving baselines, demonstrating its effectiveness and generalizability under safety-critical conditions.
1. Introduction
Autonomous driving technologies have achieved significant advancements in recent years, promising substantial enhancements in transportation safety, efficiency, and reliability [6, 17, 22, 26]. Modern autonomous driving systems rely on sophisticated learning-based algorithms spanning perception, decision-making, control, and coordination. Despite remarkable progress, ensuring consistent and robust performance under complex and dynamically evolving real-world conditions remains challenging [28, 42]. A key obstacle arises from inadequate exposure of these systems to rare but critical driving situations, such as pedestrian occlusions, intricate merges, or adverse weather, which are significantly underrepresented in typical training datasets [9, 12]. Consequently, these under-addressed scenarios severely limit system reliability and generalization.
Traditionally, driving scenarios are manually constructed using historical crash records, regulatory guidelines, or scripted simulations [32, 34]. While intuitive and interpretable, these manual approaches suffer critical limitations: they are labor-intensive, challenging to scale effectively, and inherently biased toward previously known or encountered conditions. More importantly, handcrafted scenario generation techniques often miss subtle yet impactful failures related to ambiguous pedestrian intentions, minor visibility variations, or complex interactions among traffic agents. To overcome these issues, automated scenario retrieval techniques leveraging extensive databases have emerged [4, 14, 36]. However, these methods usually depend on rigid scenario descriptors and handcrafted similarity metrics, thereby struggling to capture the nuanced semantic context necessary for addressing performance-critical failures effectively.
Recent developments in large language models (LLMs) have introduced significant opportunities for semantic scenario understanding, retrieval, and adaptation, owing to their exceptional capabilities in contextual reasoning and high-level abstraction [1, 8, 30, 31, 38].
Nevertheless, existing LLM-based research primarily emphasizes scenario generation or interactive scenario creation, with limited exploration of systematically employing LLMs to analyze autonomous driving pre-evaluation outcomes, reason about failures, and recommend efficient scenario repairs. Therefore, there is a critical research gap regarding performance-oriented scenario analysis and efficient repair, particularly by exploiting the reasoning capabilities of LLMs. To bridge this critical gap,
we propose SERA, an innovative framework for Failure-Aware Scenario Recommendation and Self-Evolving Scenario Repair in autonomous driving. Specifically, SERA systematically analyzes pre-evaluation logs to pinpoint recurring failure patterns and dynamically retrieves semantically aligned scenarios from a structured scenario bank via Failure-Aware Scenario Recommendation. These initial recommendations undergo further semantic refinement through an LLM-powered reflection mechanism to ensure optimal relevance and diversity. The selected scenarios are subsequently used in a Self-Evolving Scenario Repair process, enabling the autonomous driving model to efficiently adapt and effectively repair its identified performance shortcomings. By combining scenario recommendation and targeted model adaptation into a coherent closed-loop system, SERA facilitates continuous improvement, significantly enhancing the robustness and generalization capabilities of autonomous driving systems under diverse and challenging conditions. An overview of the proposed SERA framework is illustrated in Figure 1, highlighting the failure-aware scenario recommendation, self-evolving scenario repair process, and an example of autonomous vehicle behavior improvement after scenario refinement.

Figure 1. Conceptual illustration of SERA. The system performs pre-evaluation to detect failure cases, leverages failure-aware scenario recommendation to retrieve vulnerable instances, and applies self-evolving scenario repair for targeted model adaptation. An example on the right shows a failure due to low-visibility collision that is successfully repaired through efficient fine-tuning, leading to improved decision-making under safety-critical conditions.

The main contributions of this work are as follows:
• We propose SERA, a novel framework that integrates Failure-Aware Scenario Recommendation and Self-Evolving Scenario Repair to systematically enhance the robustness of autonomous driving systems.
• SERA introduces a unified pipeline that performs pre-evaluation failure analysis, semantic-driven scenario retrieval, and reflection-guided refinement, enabling adaptive and safety-critical model improvement.
• Extensive experiments on benchmark datasets demonstrate that SERA significantly outperforms state-of-the-art baselines, especially under diverse and challenging driving conditions.
2. Related Work
2.1. End-to-End Autonomous Driving
The development of autonomous driving systems has historically progressed through two major paradigms. Traditional pipelines adopt a modular architecture, decomposing the task into distinct perception [3, 23, 35], prediction [24, 39], and planning [15, 29] modules. Although this structure facilitates interpretability and enables independent component optimization, it often suffers from error propagation across modules and lacks holistic optimization toward final driving objectives. In contrast, end-to-end (E2E) autonomous driving frameworks [7, 16, 19] aim to overcome these limitations by jointly optimizing perception, prediction, and planning within a unified learning system. These approaches directly map raw sensor inputs to driving actions, thereby improving adaptability to complex and diverse driving scenarios.
Recent E2E models demonstrate strong potential in achieving more robust and globally consistent behavior, although challenges remain in ensuring interpretability, safety guarantees, and generalization under long-tail real-world conditions.
2.2. Scenario-Based Testing for Autonomous Driving
Scenario-based testing is fundamental for ensuring autonomous driving robustness under diverse, real-world conditions [27, 32]. Early approaches primarily relied on manually defined templates or historical crash data, limiting scalability and the discovery of
rare, unexpected failures [11]. To enhance coverage, recent works introduced automated methods, leveraging simulation-based complexity assessment [13], genetic algorithms [44], and adversarial reinforcement learning [5]. However, these techniques typically require explicitly defined seed scenarios or optimization targets, lacking adaptive mechanisms that dynamically respond to pre-evaluation results.
2.3. Few-shot Learning in Autonomous Driving
Few-shot learning has been increasingly explored to enhance data efficiency and model generalization in autonomous driving tasks, especially when acquiring large-scale annotated data is costly [33, 41]. Recent methods have employed fine-tuning and meta-learning strategies, demonstrating promising performance gains with minimal training examples [10, 33]. Few-shot methodologies have also been leveraged to optimize scenario selection for targeted autonomous driving evaluation [21]. Nevertheless, existing approaches typically overlook semantic alignment between scenario selection and specific failure cases identified during pre-evaluation.
2.4. Large Language Models for Scenario Understanding and Generation
Recent advancements in large language models (LLMs) have enhanced scenario generation and semantic understanding capabilities in autonomous driving [8, 20, 43]. Current research typically uses LLMs to guide scenario creation with improved semantic fidelity or to translate high-level descriptions and traffic rules into simulations [2, 25]. However, existing LLM-based approaches primarily focus on scenario synthesis or natural language interfaces, neglecting the analysis of pre-evaluation results and efficient, adaptive scenario recommendation.
Despite significant progress in scenario-based testing, few-shot learning, and the application of LLMs, existing methods generally overlook the integration of failure analysis, semantic scenario retrieval, and adaptive recommendation based on pre-evaluation results. In contrast, our proposed SERA framework addresses these limitations by systematically combining these critical aspects, thereby enabling efficient scenario repair and self-evolving adaptation for autonomous driving systems.
3. Methodology
3.1. Overview
As illustrated in Figure 2, we propose the SERA framework, a safety-oriented self-evolving mechanism designed to systematically analyze, understand, and repair failures in autonomous driving systems through efficient scenario recommendation and adaptation. Different from conventional training methods that broadly focus on performance optimization, our approach explicitly addresses critical safety concerns by identifying and rectifying specific failure scenarios detected during pre-evaluation.
Specifically, given the performance logs generated from a rigorous pre-evaluation process, SERA utilizes a large language model (LLM) to analyze and interpret failure patterns. These insights guide a Failure-Aware Scenario Recommendation pipeline that retrieves semantically relevant scenarios from a structured Scenario Bank. The recommended scenarios are further refined by an LLM-based reflection module to ensure semantic alignment and diversity, producing high-quality, efficient scenarios for subsequent Self-Evolving Scenario Repair. This iterative repair process significantly enhances the robustness and safety of autonomous driving systems, particularly under challenging, rare, and safety-critical conditions.
The overall process and algorithmic details of our proposed SERA framework are summarized in Algorithm 1. For clarity, we summarize the key notations and their corresponding descriptions used throughout our framework in Table 1.

Table 1. Summary of Notations
| Symbol | Description |
|---|---|
| $S$ | Structured driving scenario (weather, time, location, scene context) |
| $T_S$ | Textual description generated from $S$ via Scenario Descriptor |
| $\mathcal{B}$ | Scenario bank containing textual scenarios |
| $o$ | Agent observation (sensor input at a time step) |
| $a$ | Agent action (trajectory output or control command) |
| $\pi_\theta$ | Autonomous driving policy parameterized by $\theta$ |
| $\tau$ | Pre-evaluation driving route |
| $\mathcal{T}$ | Set of all pre-evaluation routes |
| $\ell(\tau, \pi_\theta)$ | Performance log collected on route $\tau$ |
| $\mathcal{L}$ | Set of all performance logs |
| $p$ | Individual failure pattern extracted from logs |
| $\mathcal{P}$ | Set of extracted failure patterns |
| $r(s, \mathcal{P})$ | Initial relevance score between scenario $s$ and failure patterns $\mathcal{P}$ |
| $\mathcal{C}$ | Initial candidate scenario set retrieved from $\mathcal{B}$ |
| $\mathcal{R}$ | Reflection-generated improvement suggestions |
| $r'(s, \mathcal{P})$ | Reflection-enhanced relevance score |
| $\mathcal{C}'$ | Final refined scenario set after reflection |

3.2. Scenario Descriptor
To enable semantic-level reasoning over autonomous driving scenarios, we introduce a Scenario Descriptor that transforms structured, low-level scenario metadata into detailed textual narratives interpretable by large language models (LLMs). Unlike traditional vision datasets where objects and interactions are explicitly labeled, autonomous driving datasets primarily contain environmental configurations and agent trajectories without fine-grained semantic annotations. Consequently, directly leveraging such data for failure analysis and targeted scenario retrieval poses a significant challenge.

Figure 2. Overview of the proposed SERA framework.

Formally, each scenario is represented by a set of core semantic attributes:

$$S = \{\phi_{\text{weather}}, \phi_{\text{time}}, \phi_{\text{location}}, \phi_{\text{scene}}\} \tag{1}$$

where:
• $\phi_{\text{weather}}$ describes environmental conditions (e.g., clear, rainy, foggy);
• $\phi_{\text{time}}$ specifies temporal context (e.g., daytime, nighttime, dawn);
• $\phi_{\text{location}}$ indicates geographical settings (e.g., urban intersections, highways, roundabouts);
• $\phi_{\text{scene}}$ captures scene semantics, including dynamic agent behaviors and static infrastructure.
The Scenario Descriptor module $D(\cdot)$ synthesizes these structured attributes into a natural language description $T_S$, enabling effective semantic retrieval:

$$T_S = D(\phi_{\text{weather}}, \phi_{\text{time}}, \phi_{\text{location}}, \phi_{\text{scene}}) \tag{2}$$

This textualization process bridges the gap between structured environment metadata and the language-centric reasoning capabilities of LLMs. By generating interpretable and semantically rich descriptions, the Scenario Descriptor ensures that downstream retrieval and recommendation stages can reason about scenario relevance at a high level of abstraction, even in the absence of manually annotated semantic labels.
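To make Eq. (2) concrete, a minimal sketch of a descriptor is shown below. The attribute names follow Eq. (1); the template wording is an illustrative assumption, not the paper's exact implementation.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    weather: str   # phi_weather, e.g., "foggy"
    time: str      # phi_time, e.g., "nighttime"
    location: str  # phi_location, e.g., "an urban intersection"
    scene: str     # phi_scene, e.g., "an occluded pedestrian crosses from the right"

def describe(s: Scenario) -> str:
    """D(.): turn structured attributes into a textual narrative T_S (Eq. 2)."""
    return f"A {s.weather} {s.time} drive at {s.location}, where {s.scene}."

print(describe(Scenario("foggy", "nighttime", "an urban intersection",
                        "an occluded pedestrian crosses from the right")))
```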
3.3. Scenario Bank
The Scenario Bank serves as a structured and semantically enriched repository for autonomous driving scenarios, designed explicitly to support efficient scenario retrieval following failure analysis. It is fundamentally distinct from the pre-evaluation set $\mathcal{T}$, which is used primarily for broad assessment of autonomous driving performance. While $\mathcal{T}$ aims to reveal system vulnerabilities, the Scenario Bank $\mathcal{B}$ provides a rich source of semantically meaningful scenarios intended for targeted repair. Formally, the Scenario Bank is defined as:

$$\mathcal{B} = \{T_{S_1}, T_{S_2}, \ldots, T_{S_N}\} \tag{3}$$

where each $T_{S_i}$ represents a textual description of a scenario generated by the Scenario Descriptor.
Each entry in $\mathcal{B}$ captures essential semantic properties, including the driving task (e.g., merging, intersection handling), environmental conditions (e.g., rain, fog, nighttime driving), and potential risk factors (e.g., occluded pedestrians, aggressive cut-ins). These descriptions are constructed to reflect both explicit and latent factors affecting autonomous driving behavior, ensuring that the retrieval process can match subtle failure patterns revealed during pre-evaluation.
3.4. Failure-Aware Scenario Recommendation
Conventional methods typically select scenarios based solely on training losses or generic performance metrics, lacking the semantic reasoning capability needed to pinpoint the nuanced contexts underlying safety-critical failures. To address this limitation, we propose a Failure-Aware Scenario Recommendation module that explicitly leverages LLM reasoning over detailed performance logs rather than merely numerical metrics.
3.4.1. Performance Log Analysis
To enable failure-aware scenario repair, we first conduct pre-evaluation by executing the autonomous driving policy $\pi_\theta$ across a set of designated test routes $\mathcal{T}$. During each route $\tau \in \mathcal{T}$, the agent observes the environment and produces an action sequence, forming an interaction trajectory. Formally, for each $\tau$, the trajectory $\xi_\tau$ is defined as:

$$\xi_\tau = \{(o_t, a_t) \mid a_t = \pi_\theta(o_t),\ t = 0, \ldots, T\} \tag{4}$$

where $o_t$ denotes the observation at time step $t$, and $T$ is the episode horizon. Based on the trajectory $\xi_\tau$, we generate the performance log as:

$$\ell(\tau, \pi_\theta) = \mathcal{M}(\xi_\tau) \tag{5}$$

where $\mathcal{M}(\cdot)$ is a structured evaluation mapping that records semantic failures, including route deviations, collisions, traffic violations, and other safety-critical infractions. Collectively, the pre-evaluation process produces a set of performance logs:

$$\mathcal{L} = \{\ell(\tau_i, \pi_\theta) \mid \tau_i \in \mathcal{T}\} \tag{6}$$

Unlike conventional scalar loss functions, performance logs $\mathcal{L}$ provide fine-grained semantic insights into agent behavior under real-world conditions. For instance, a log entry may explicitly document events such as "Agent ran a red light at (x=341.25, y=209.1, z=0.104)" or "Agent deviated from the route at (x=95.92, y=165.673, z=0.138)", with precise spatiotemporal metadata. These detailed descriptions form the foundation for downstream failure pattern extraction and targeted scenario retrieval.
3.4.2. Initial Scenario Recommendation
Upon obtaining the pre-evaluation performance logs $\mathcal{L}$, we employ a large language model (LLM) to perform semantic reasoning and extract a set of failure patterns $\mathcal{P}$. Each failure pattern $p \in \mathcal{P}$ characterizes a distinct safety-critical weakness identified in the driving policy $\pi_\theta$, such as improper lane merging under rain or delayed braking near intersections. Formally, the extraction process is expressed as:

$$\mathcal{P} = \mathrm{Analyze}_{\mathrm{LLM}}(\mathcal{L}) \tag{7}$$

where $\mathrm{Analyze}_{\mathrm{LLM}}(\cdot)$ denotes the LLM-based semantic interpretation of failure causes from the performance logs.
To recommend scenarios relevant to these failures, we define a relevance score $r(s, \mathcal{P})$ between each scenario $s \in \mathcal{B}$ in the Scenario Bank and the extracted failure patterns:

$$r(s, \mathcal{P}) = \frac{1}{|\mathcal{P}|} \sum_{p \in \mathcal{P}} \phi(T_s, p) \tag{8}$$

where $\phi(T_s, p)$ measures the semantic similarity between the scenario description $T_s$ and a failure pattern $p$, implemented via embedding-based similarity or LLM-driven reasoning.
The initial candidate set $\mathcal{C}$ is then obtained by selecting the top-$K$ scenarios with the highest cumulative relevance:

$$\mathcal{C} = \underset{\mathcal{C} \subseteq \mathcal{B},\ |\mathcal{C}| = K}{\arg\max} \sum_{s \in \mathcal{C}} r(s, \mathcal{P}) \tag{9}$$

This retrieval ensures that the candidate scenarios $\mathcal{C}$ semantically align with the agent's failure behaviors, providing a focused basis for subsequent scenario refinement and self-evolving adaptation, as sketched below.
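A minimal sketch of Eqs. (8)-(9) using cosine similarity over precomputed embeddings; the embedding vectors stand in for whatever encoder or LLM-driven scorer implements $\phi$, and all names here are illustrative.

```python
import numpy as np

def relevance(scenario_vec: np.ndarray, pattern_vecs: np.ndarray) -> float:
    """Eq. (8): mean cosine similarity between one scenario and all failure patterns."""
    sims = pattern_vecs @ scenario_vec / (
        np.linalg.norm(pattern_vecs, axis=1) * np.linalg.norm(scenario_vec) + 1e-9)
    return float(sims.mean())

def recommend(bank_vecs: np.ndarray, pattern_vecs: np.ndarray, k: int) -> list[int]:
    """Eq. (9): indices of the top-K scenarios by cumulative relevance.

    Because the objective is additive over scenarios, the argmax over K-subsets
    reduces to picking the K highest-scoring scenarios individually.
    """
    scores = np.array([relevance(v, pattern_vecs) for v in bank_vecs])
    return list(np.argsort(-scores)[:k])

# Toy usage: a bank of 4 embedded scenarios, 2 embedded failure patterns.
rng = np.random.default_rng(0)
bank, patterns = rng.normal(size=(4, 8)), rng.normal(size=(2, 8))
print(recommend(bank, patterns, k=2))
```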
3.4.3. LLM-based Reflection
To further enhance the quality of the initially recommended scenarios $\mathcal{C}$, we introduce an LLM-based Reflection module. This module performs expert-level semantic evaluation by jointly analyzing the initial
scenario candidates $\mathcal{C}$ and the extracted failure patterns $\mathcal{P}$. Specifically, the reflection process leverages the reasoning capabilities of LLMs to assess the coverage adequacy of $\mathcal{C}$ and diagnose any critical failure aspects that remain insufficiently addressed. Based on this analysis, the reflection module outputs a set of improvement suggestions $\mathcal{R}$:

$$\mathcal{R} = \mathrm{Reflect}_{\mathrm{LLM}}(\mathcal{C}, \mathcal{P}) \tag{10}$$

These suggestions indicate necessary modifications to enhance scenario coverage, semantic diversity, and failure-targeted relevance.
3.4.4. Final Scenario Refinement
Based on the improvement suggestions $\mathcal{R}$ generated by the reflection module, we refine the initial scenario set $\mathcal{C}$ to obtain the final selection $\mathcal{C}'$:

$$\mathcal{C}' = \mathrm{Refine}_{\mathrm{LLM}}(\mathcal{C}, \mathcal{R}) \tag{11}$$

where $\mathrm{Refine}_{\mathrm{LLM}}(\cdot)$ denotes an LLM-guided adjustment process. Specifically, $\mathcal{R}$ provides targeted operations including replacing redundant scenarios, augmenting uncovered failure cases, and prioritizing high-risk contexts. By explicitly incorporating these reflection-driven adjustments, the refined set $\mathcal{C}'$ achieves tighter alignment with critical failure patterns while maintaining semantic diversity, thereby maximizing the effectiveness of subsequent scenario repair.
3.5. Self-Evolving Scenario Repair
The final refined scenario set $\mathcal{C}'$ drives the Self-Evolving Scenario Repair phase, explicitly targeting the safety-critical failures identified during pre-evaluation. Unlike traditional fine-tuning strategies that rely purely on minimizing generic training losses, our method leverages semantically aligned scenarios to perform targeted adaptation.
Formally, given the pretrained autonomous driving model $\pi_\theta$ with parameters $\theta$, the updated parameters $\theta^*$ are obtained by minimizing the failure-specific risk over $\mathcal{C}'$:

$$\theta^* = \underset{\theta}{\arg\min}\ \mathbb{E}_{(o,a) \sim \mathcal{C}'}\left[\mathcal{L}_{\mathrm{fail}}(\pi_\theta(o), a)\right] \tag{12}$$

where $\mathcal{L}_{\mathrm{fail}}$ measures deviation from safe behavior under the curated failure scenarios. This self-evolving repair mechanism ensures that model updates are directly aligned with critical failure modes, leading to substantial improvements in robustness, safety, and generalization to long-tail driving conditions.

Algorithm 1 SERA: Self-Evolving Scenario Repair for Autonomous Driving
1: Input: Pre-trained model $\theta$, scenario bank $\mathcal{B}$, pre-evaluation set $\mathcal{T} = \{\tau_1, \tau_2, \ldots, \tau_M\}$
2: Output: Self-evolved model $\theta^*$ after targeted repair
3: for each route $\tau$ in $\mathcal{T}$ do
4:   Execute policy $\pi_\theta$ on route $\tau$, collect performance log $\mathcal{L}_\tau$
5:   Extract failure patterns $\mathcal{P}_\tau$ by analyzing $\mathcal{L}_\tau$ with an LLM
6:   Retrieve initial candidate scenarios $\mathcal{C}_\tau$ from $\mathcal{B}$ via failure-aware relevance scoring
7:   Generate reflection-driven improvement suggestions $\mathcal{R}_\tau$ by LLM-based reflection on $\mathcal{C}_\tau$ and $\mathcal{P}_\tau$
8:   Refine candidate scenarios to obtain the final repair set $\mathcal{C}'_\tau = \mathrm{Refine}_{\mathrm{LLM}}(\mathcal{C}_\tau, \mathcal{R}_\tau)$
9:   Update $\theta$ by minimizing the semantic repair loss over $\mathcal{C}'_\tau$: $\theta \leftarrow \arg\min_\theta \mathbb{E}_{(o,a) \sim \mathcal{C}'_\tau}[\mathcal{L}_{\mathrm{fail}}(\pi_\theta(o), a)]$
10: end for
11: return Final repaired model $\theta^*$
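Read end to end, Algorithm 1 is a per-route loop of log collection, LLM analysis, retrieval, reflection, and a short fine-tune. A compact sketch of that loop is below; `llm`, `evaluate_route`, `retrieve`, and `fine_tune` are hypothetical stand-ins for the LLM calls, the CARLA pre-evaluation harness, the Eq. (8)-(9) retrieval, and the gradient update of Eq. (12).

```python
def sera_repair(policy, routes, llm, evaluate_route, retrieve, fine_tune):
    """One pass of Algorithm 1: pre-evaluate, diagnose, retrieve, reflect, repair."""
    for route in routes:                                  # line 3
        log = evaluate_route(policy, route)               # line 4: performance log
        patterns = llm.analyze(log)                       # line 5: Eq. (7)
        candidates = retrieve(patterns)                   # line 6: Eqs. (8)-(9) over the bank
        suggestions = llm.reflect(candidates, patterns)   # line 7: Eq. (10)
        repair_set = llm.refine(candidates, suggestions)  # line 8: Eq. (11)
        policy = fine_tune(policy, repair_set)            # line 9: Eq. (12)
    return policy                                         # line 11: repaired model
```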
4. Experiments
4.1. Experimental Datasets and Baselines
We conduct our experiments using Bench2Drive [18], a closed-loop evaluation protocol integrated into the CARLA Leaderboard 2.0, specifically designed for end-to-end autonomous driving (E2E-AD) tasks. The base dataset, comprising 1,000 driving clips, serves as the scenario bank, while performance is evaluated across the official set of 220 benchmark routes.
To comprehensively validate the effectiveness of our proposed SERA, we benchmark it against several state-of-the-art E2E-AD methods:
• UniAD [16]: A transformer-based method that explicitly models perception and prediction using Transformer Queries to enable effective information flow.
• VAD [19]: Leverages a vectorized scene representation and Transformer Queries to improve both inference efficiency and driving performance.
• AD-MLP [40]: A lightweight baseline that feeds historical ego-vehicle states into a Multi-Layer Perceptron (MLP) for trajectory prediction.
• TCP [37]: Fuses front-camera images and ego-vehicle states to jointly predict trajectories and control commands.
4.2. Implementation Details
We employ LLaMA-3 8B as the base large language model (LLM) within our SERA framework. All experiments are conducted on a computational setup equipped with four NVIDIA RTX 4090 GPUs. For closed-loop evaluation, each autonomous driving model is executed in CARLA across the standard set of 220 benchmark routes. During the fine-tuning stage, scenarios recommended by SERA, selected from the scenario bank, are used to iteratively refine each baseline model. The hyperparameters for fine-tuning are set as follows: a learning rate of 1e-5, a batch size of 2, and a total of 2 training epochs.
4.3. Evaluation Metrics
To ensure comprehensive evaluation, we utilize the official CARLA simulator test server under diverse weather conditions. We adopt the standard CARLA Leaderboard metrics:
• Driving Score (DS): A composite metric that non-linearly penalizes infractions as the vehicle progresses along the designated route.
• Route Completion (RC): The percentage of the total route distance completed.
• Infraction Score (IS): Measures the number and severity of driving infractions incurred during navigation.
For consistency in local evaluations, we additionally adopt the Bench2Drive benchmark based on CARLA version 0.9.15. Bench2Drive consists of 220 shorter routes (approximately 150 meters each) spanning Town01 through Town15 under diverse weather conditions. Its official evaluation metrics include:
• Driving Score: A composite metric similar to the CARLA Leaderboard DS, adapted for short-distance routes.
• Success Rate: The percentage of routes successfully completed without encountering critical infractions.
• Efficiency: Assesses time and path optimality throughout the route.
• Comfortness: Measures driving smoothness and passenger comfort.
4.4. Experimental Results
4.4.1. Quantitative Comparison
Table 2 reports the performance comparison between baseline models and their SERA-enhanced versions across four key metrics. Overall, SERA consistently improves robustness, efficiency, and comfort across diverse model architectures.

Table 2. Overall performance comparison of baseline models with and without SERA.
| Method | Input | Driving Score ↑ | Success Rate (%) ↑ | Efficiency ↑ | Comfortness ↑ |
|---|---|---|---|---|---|
| AD-MLP [40] | Ego State | 7.83 | 0.00 | 44.89 | 26.36 |
| AD-MLP + SERA | Ego State | 8.28 (+5.75%) | 0.00 (+0) | 50.66 (+12.86%) | 30.03 (+15.71%) |
| UniAD [16] | Ego State + 6 Cameras | 33.64 | 8.12 | 98.50 | 42.00 |
| UniAD + SERA | Ego State + 6 Cameras | 35.10 (+4.34%) | 9.58 (+1.46) | 102.34 (+3.90%) | 43.50 (+3.57%) |
| VAD [19] | Ego State + 6 Cameras | 34.21 | 8.51 | 105.00 | 43.20 |
| VAD + SERA | Ego State + 6 Cameras | 35.64 (+4.18%) | 9.89 (+1.38) | 109.20 (+4.00%) | 44.70 (+3.47%) |
| TCP [37] | Ego State + Front Cameras | 31.78 | 22.34 | 76.50 | 18.00 |
| TCP + SERA | Ego State + Front Cameras | 33.03 (+3.93%) | 23.92 (+1.58) | 79.51 (+3.93%) | 19.50 (+8.33%) |

Specifically, UniAD demonstrates substantial improvements, with Driving Score increasing by +4.34% and Success Rate by +1.46 points. Efficiency and Comfortness also rise by +3.90% and +3.57%, respectively, indicating that failure-driven repair not only enhances goal completion but also optimizes control smoothness. Similarly, VAD achieves +4.18% Driving Score and +4.00% Efficiency gains, suggesting that SERA remains effective even for models with strong baseline robustness.
TCP-traj, despite an already high Success Rate, benefits from an +8.33% improvement in Comfortness, highlighting that scenario repair through SERA improves not just success likelihood but also the qualitative aspects of driving behavior. This reflects the framework's ability to enrich long-tail conditions impacting ride quality.
For AD-MLP, although Success Rate remains at 0%, Efficiency and Comfortness improve by +12.86% and +14.00%, respectively. This suggests that while simpler models may struggle with complex scenario repair due to perceptual limitations, they still benefit from targeted fine-tuning in smoother control behavior.
Beyond global metrics, Table 3 further evaluates fine-grained driving skills. SERA consistently enhances critical capabilities such as Emergency Braking, Traffic Sign handling, and Merging across major baselines. Notably, UniAD and TCP-traj show marked improvements in Emergency Braking (+3.30% and +3.01%), confirming that failure-aware repair mechanisms reinforce safety-critical reactions under challenging scenarios.
These findings collectively demonstrate that SERA not only boosts quantitative driving performance but also systematically strengthens specific competencies essential for reliable and safe autonomous driving under real-world conditions.

Table 3. Performance breakdown across different scenarios (Ability (%) ↑).
| Method | Merging | Overtaking | Emergency Brake | Give Way | Traffic Sign | Avg. |
|---|---|---|---|---|---|---|
| AD-MLP [40] | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.000 |
| AD-MLP + SERA | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.000 |
| UniAD [16] | 9.46 | 12.50 | 20.00 | 30.00 | 23.03 | 18.598 |
| UniAD + SERA | 12.10 (2.64↑) | 14.00 (1.50↑) | 23.30 (3.30↑) | 32.50 (2.50↑) | 25.20 (2.17↑) | 21.820 (3.22↑) |
| VAD [19] | 0.13 | 17.50 | 14.54 | 30.00 | 25.55 | 17.544 |
| VAD + SERA | 0.50 (0.37↑) | 19.20 (1.70↑) | 17.00 (2.46↑) | 32.40 (2.40↑) | 27.90 (2.35↑) | 19.800 (2.26↑) |
| TCP [37] | 24.29 | 15.00 | 29.09 | 50.00 | 51.67 | 34.810 |
| TCP + SERA | 27.20 (2.91↑) | 16.60 (1.60↑) | 32.10 (3.01↑) | 52.80 (2.80↑) | 54.20 (2.53↑) | 36.980 (2.17↑) |

Figure 3. Ability-wise success rate comparison of UniAD under different selection strategies (Random, Initial Rec., and Full SERA). Full SERA consistently improves performance across various driving abilities.

Table 4. Ablation study of Reflection Module.
| Method | Driving Score ↑ | Success Rate ↑ |
|---|---|---|
| UniAD + Random | 32.20 | 6.85 |
| UniAD + Initial Rec. | 33.25 | 7.52 |
| UniAD + Full SERA | 35.10 | 9.58 |
| VAD + Random | 32.85 | 7.10 |
| VAD + Initial Rec. | 33.90 | 7.90 |
| VAD + Full SERA | 35.64 | 9.89 |
| AD-MLP + Random | 7.10 | 0.00 |
| AD-MLP + Initial Rec. | 7.50 | 0.00 |
| AD-MLP + Full SERA | 8.28 | 0.00 |
| TCP + Random | 29.90 | 20.00 |
| TCP + Initial Rec. | 30.90 | 21.20 |
| TCP + Full SERA | 33.03 | 23.92 |

4.4.2. Ablation Study
To assess the contribution of the reflection module in the SERA framework, we conduct an ablation study across all baseline models. As shown in Table 4, three variants are compared: (i) Random Selection, which randomly samples scenarios from the bank without semantic reasoning; (ii) Initial Recommendation, which retrieves scenarios based solely on failure pattern relevance; and (iii) Full SERA, which incorporates both initial recommendation and reflection-based refinement.
Results show that Random Selection consistently achieves the lowest performance, highlighting that naive scenario augmentation is insufficient for effective model adaptation. For instance, UniAD achieves a Driving Score of only 32.20 and a Success Rate of 6.85% under random sampling. Initial Recommendation improves upon this baseline, reaching 33.25 and 7.52% respectively, demonstrating that targeted retrieval based on failure patterns is beneficial.
Notably, Full SERA yields the best performance across all models. With reflection-enhanced refinement, UniAD further improves to a Driving Score of 35.10 and a Success Rate of 9.58%. Similar trends are observed for VAD and TCP, where reflection consistently provides an additional gain beyond initial retrieval. For simpler models such as AD-MLP, while Success Rate remains unchanged, reflection still leads to measurable improvements in Driving Score, confirming its value even in limited-capacity settings.
These results validate the critical role of the reflection module in the SERA pipeline. By semantically auditing and refining initial recommendations, reflection enhances both the relevance and diversity of selected scenarios, resulting in more effective self-evolving model repair.
To further evaluate the effectiveness of the reflection module across different driving abilities, we present an ability-wise success rate comparison in Figure 3. The radar chart shows that Full SERA consistently achieves superior success rates compared to both Random Selection and Initial Recommendation across all evaluated abilities, including Overtaking, Merging, Emergency Brake, Give Way, and Traffic Signs. In particular, substantial improvements are observed in complex tasks such as Emergency Brake and Give Way, where Full SERA demonstrates a significant advantage. This comprehensive ability-level analysis further confirms that the reflection mechanism delivers broad and consistent performance gains across diverse and challenging driving scenarios.
4.4.3. Qualitative Analysis
To further validate the effectiveness of our proposed method, we present a qualitative comparison between VAD and SERA in Figure 4. Each row shows the predicted future behavior under different driving scenarios, with snapshots taken at successive timestamps (t, t+1s, t+2s, and t+3s). Red and green dashed borders represent predictions from VAD and SERA, respectively.

Figure 4. Qualitative comparison between VAD (red dashed borders) and SERA (green dashed borders) across various autonomous driving scenarios. Each column represents a future timestamp (t, t+1s, t+2s, t+3s), showing the behavioral differences between the two methods. SERA demonstrates more consistent and safer navigation compared to VAD.

As illustrated, SERA consistently exhibits safer and more stable driving behaviors compared to VAD across diverse environments. In the highway overtaking scenario (top two rows), VAD fails to properly react to the adjacent vehicle, leading to a side collision at t+3s. In contrast, SERA maintains a safe longitudinal distance and smoothly avoids the hazard. Similarly, in the intersection crossing scenario (bottom two rows), VAD aggressively proceeds despite the presence of a crossing vehicle, resulting in a collision. Meanwhile, SERA anticipates the dynamic obstacle and successfully yields, preventing a potential accident.
These observations demonstrate that SERA not only models the multi-agent dynamics more accurately but also generates future plans that are risk-aware and adaptive to complex situations. This qualitative analysis further substantiates the quantitative performance gains reported earlier.
5. Conclusion
In this paper, we proposed SERA, a self-evolving scenario repair framework that systematically enhances autonomous driving systems by addressing failure cases through LLM-driven efficient scenario recommendation. Unlike traditional retraining or static scenario generation approaches, SERA leverages large language models to analyze pre-evaluation performance logs, identify critical failure patterns, and recommend semantically aligned scenarios from a structured scenario bank. Through a reflection-driven refinement process, SERA ensures high relevance and diversity in its recommendations, enabling efficient and safety-critical model adaptation. Extensive experiments demonstrate that SERA consistently improves key driving performance metrics across multiple baselines, even under incomplete scenario bank conditions. Our ablation studies further validate the importance of the reflection mechanism in achieving robust and targeted scenario recovery.
References
[1] Xuan Cai, Xuesong Bai, Zhiyong Cui, Danmu Xie, Daocheng Fu, Haiyang Yu, and Yilong Ren. 2025. Text2Scenario: Text-Driven Scenario Generation for Autonomous Driving Test. arXiv preprint arXiv:2503.02911 (2025).
[2] Yue Cao and CS Lee. 2023. Robot behavior-tree-based task generation with large language models. arXiv preprint arXiv:2302.12927 (2023).
[3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In European Conference on Computer Vision. Springer, 213–229.
[4] Cheng Chang, Dongpu Cao, Long Chen, Kui Su, Kuifeng Su, Yuelong Su, Fei-Yue Wang, Jue Wang, Ping Wang, Junqing Wei, et al. 2022. MetaScenario: A framework for driving scenario data description, storage and indexing. IEEE Transactions on Intelligent Vehicles 8, 2 (2022), 1156–1175.
[5] Baiming Chen, Xiang Chen, Qiong Wu, and Liang Li. 2021. Adversarial evaluation of autonomous vehicles in lane-change scenarios. IEEE Transactions on Intelligent Transportation Systems 23, 8 (2021), 10333–10342.
[6] Li Chen, Penghao Wu, Kashyap Chitta, Bernhard Jaeger, Andreas Geiger, and Hongyang Li. 2024. End-to-end autonomous driving: Challenges and frontiers. IEEE Transactions on Pattern Analysis and Machine Intelligence (2024).
[7] Shaoyu Chen, Bo Jiang, Hao Gao, Bencheng Liao, Qing Xu, Qian Zhang, Chang Huang, Wenyu Liu, and Xinggang Wang. 2024. Vadv2: End-to-end vectorized autonomous driving via probabilistic planning. arXiv preprint arXiv:2402.13243 (2024).
[8] Yao Deng, Jiaohong Yao, Zhi Tu, Xi Zheng, Mengshi Zhang, and Tianyi Zhang. 2023. Target: Traffic rule-based test generation for autonomous driving systems. arXiv preprint arXiv:2305.06018 (2023).
[9] Wenhao Ding, Chejian Xu, Mansur Arief, Haohong Lin, Bo Li, and Ding Zhao. 2023. A survey on safety-critical driving scenario generation: a methodological perspective. IEEE Transactions on Intelligent Transportation Systems 24, 7 (2023), 6971–6988.
[10] Yan Duan, Marcin Andrychowicz, Bradly Stadie, OpenAI Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, and Wojciech Zaremba. 2017. One-shot imitation learning. Advances in Neural Information Processing Systems 30 (2017).
[11] Daniel J Fremont, Edward Kim, Yash Vardhan Pant, Sanjit A Seshia, Atul Acharya, Xantha Bruso, Paul Wells, Steve Lemke, Qiang Lu, and Shalin Mehta. 2020. Formal scenario-based testing of autonomous vehicles: From simulation to the real world. In 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC). IEEE, 1–8.
[12] Mathias Gehrig, Willem Aarents, Daniel Gehrig, and Davide Scaramuzza. 2021. DSEC: A stereo event camera dataset for driving scenarios. IEEE Robotics and Automation Letters 6, 3 (2021), 4947–4954.
[13] Zahra Ghodsi, Siva Kumar Sastry Hari, Iuri Frosio, Timothy Tsai, Alejandro Troccoli, Stephen W Keckler, Siddharth Garg, and Anima Anandkumar. 2021. Generating and characterizing scenarios for safety testing of autonomous vehicles. In 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE, 157–164.
[14] Sascha Hornauer, Baladitya Yellapragada, Arian Ranjbar, and Stella Yu. 2019. Driving scene retrieval by example from large-scale data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. 25–28.
[15] Peiyun Hu, Aaron Huang, John Dolan, David Held, and Deva Ramanan. 2021. Safe local motion planning with self-supervised freespace forecasting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12732–12741.
[16] Yihan Hu, Jiazhi Yang, Li Chen, Keyu Li, Chonghao Sima, Xizhou Zhu, Siqi Chai, Senyao Du, Tianwei Lin, Wenhai Wang, et al. 2023. Planning-oriented autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 17853–17862.
[17] Yu Huang and Yue Chen. 2020. Autonomous driving with deep learning: A survey of state-of-art technologies. arXiv preprint arXiv:2006.06091 (2020).
[18] Xiaosong Jia, Zhenjie Yang, Qifeng Li, Zhiyuan Zhang, and Junchi Yan. 2024. Bench2Drive: Towards multi-ability benchmarking of closed-loop end-to-end autonomous driving. arXiv preprint arXiv:2406.03877 (2024).
[19] Bo Jiang, Shaoyu Chen, Qing Xu, Bencheng Liao, Jiajie Chen, Helong Zhou, Qian Zhang, Wenyu Liu, Chang Huang, and Xinggang Wang. 2023. VAD: Vectorized scene representation for efficient autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision. 8340–8350.
[20] Quanyi Li, Zhenghao Mark Peng, Lan Feng, Zhizheng Liu, Chenda Duan, Wenjie Mo, and Bolei Zhou. 2023. ScenarioNet: Open-source platform for large-scale traffic scenario simulation and modeling. Advances in Neural Information Processing Systems 36 (2023), 3894–3920.
[21] Shu Li, Honglin He, Jingxuan Yang, Jianming Hu, Yi Zhang, and Shuo Feng. 2024. Few-Shot Testing of Autonomous Vehicles with Scenario Similarity Learning. arXiv preprint arXiv:2409.14369 (2024).
[22] Xincheng Li, Zhaoyi Wang, Yanjun Huang, and Hong Chen. 2023. A survey on self-evolving autonomous driving: a perspective on data closed-loop technology. IEEE Transactions on Intelligent Vehicles 8, 11 (2023), 4613–4631.
[23] Zhiqi Li, Wenhai Wang, Hongyang Li, Enze Xie, Chonghao Sima, Tong Lu, Qiao Yu, and Jifeng Dai. 2024. BEVFormer: learning bird's-eye-view representation from lidar-camera via spatiotemporal transformers. IEEE Transactions on Pattern Analysis and Machine Intelligence (2024).
[24] Weibo Mao, Chenxin Xu, Qi Zhu, Siheng Chen, and Yanfeng Wang. 2023. Leapfrog diffusion model for stochastic trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5517–5526.
[25] Antonio Valerio Miceli-Barone, Alex Lascarides, and Craig Innes. 2023. Dialogue-based generation of self-driving simulation scenarios using Large Language Models. arXiv preprint arXiv:2310.17372 (2023).
[26] Daniel Omeiza, Helena Webb, Marina Jirotka, and Lars Kunze. 2021. Explanations in autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems 23, 8 (2021), 10142–10162.
|
https://arxiv.org/abs/2505.22067v1
|
2021. Explanations in autonomous driv- ing: A survey. IEEE Transactions on Intelligent Transportation Systems 23, 8 (2021), 10142–10162. 1 [27] Sagar Pathrudkar, Saadhana Venkataraman, Deepika Kanade, Aswin Ajayan, Palash Gupta, Shehzaman Khatib, Vijaya Sarathi Indla, and Saikat Mukherjee. 2023. SAFR-A V: Safety Analysis of Autonomous Vehicles using Real World Data–An end-to-end so- lution for real world data driven scenario-based test- ing for pre-certification of A V stacks. arXiv preprint arXiv:2302.14601 (2023). 2 [28] Hongping Ren, Hui Gao, He Chen, and Guangzhen Liu. 2022. A survey of autonomous driving scenar- ios and scenario databases. In 2022 9th International Conference on Dependable Systems and Their Appli- cations (DSA) . IEEE, 754–762. 1 [29] Siyu Teng, Xuemin Hu, Peng Deng, Bai Li, Yuchen Li, Yunfeng Ai, Dongsheng Yang, Lingxi Li, Zhe Xu- anyuan, Fenghua Zhu, et al. 2023. Motion planning for autonomous driving: The state of the art and fu- ture perspectives. IEEE Transactions on Intelligent Vehicles 8, 6 (2023), 3692–3711. 2[30] Haoxiang Tian, Xingshuo Han, Guoquan Wu, Yuan Zhou, Shuo Li, Jun Wei, Dan Ye, Wei Wang, and Tian- wei Zhang. 2024. An llm-enhanced multi-objective evolutionary search for autonomous driving test sce- nario generation. arXiv preprint arXiv:2406.10857 (2024). 1 [31] Hanlin Tian, Kethan Reddy, Yuxiang Feng, Mo- hammed Quddus, Yiannis Demiris, and Panagiotis Angeloudis. 2024. Enhancing autonomous vehicle training with language model integration and critical scenario generation. arXiv preprint arXiv:2404.08570 (2024). 1 [32] Simon Ulbrich, Till Menzel, Andreas Reschka, Fabian Schuldt, and Markus Maurer. 2015. Defining and substantiating the terms scene, situation, and scenario for automated driving. In 2015 IEEE 18th interna- tional conference on intelligent transportation sys- tems. IEEE, 982–988. 1, 2 [33] Xin Wang, Thomas E. Huang, Trevor Darrell, Joseph E. Gonzalez, and Fisher Yu. 2020. Frustrat- ingly simple few-shot object detection. In Proceed- ings of the 37th International Conference on Ma- chine Learning (ICML’20) . JMLR.org, Article 920, 10 pages. 3 [34] Xinghua Wang, Yong Peng, Tuo Xu, Qian Xu, Xian- hui Wu, Guoliang Xiang, Shengen Yi, and Honggang Wang. 2022. Autonomous driving testing scenario generation based on in-depth vehicle-to-powered two- wheeler crash data in China. Accident Analysis & Pre- vention 176 (2022), 106812. 1 [35] Yue Wang, Vitor Campagnolo Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, and Justin Solomon. 2022. Detr3d: 3d object detection from multi-view images via 3d-to-2d queries. In Conference on Robot Learning . PMLR, 180–191. 2 [36] Yujin Wang, Quanfeng Liu, Jiaqi Fan, Jinlong Hong, Hongqing Chu, Mengjian Tian, Bingzhao Gao, and Hong Chen. 2024. RAC3: Retrieval-Augmented Corner Case Comprehension for Autonomous Driv- ing with Vision-Language Models. arXiv preprint arXiv:2412.11050 (2024). 1 [37] Penghao Wu, Xiaosong Jia, Li Chen, Junchi Yan, Hongyang Li, and Yu Qiao. 2022. Trajectory-guided control prediction for end-to-end autonomous driving: A simple yet strong baseline. Advances in Neural In- formation Processing Systems 35 (2022), 6119–6132. 6, 7, 8 [38] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yi- wen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. 2025. The rise and po- tential of large language model based agents: A sur- vey. Science China Information Sciences 68, 2
|
https://arxiv.org/abs/2505.22067v1
|
(2025), 121101. 1 11 [39] Pengfei Yao, Yinglong Zhu, Huikun Bi, Tianlu Mao, and Zhaoqi Wang. 2024. TrajCLIP: Pedestrian trajec- tory prediction method using contrastive learning and idempotent networks. Advances in Neural Informa- tion Processing Systems 37 (2024), 77023–77037. 2 [40] Jiang-Tian Zhai, Ze Feng, Jinhao Du, Yongqiang Mao, Jiang-Jiang Liu, Zichang Tan, Yifu Zhang, Xiaoqing Ye, and Jingdong Wang. 2023. Rethinking the open- loop evaluation of end-to-end autonomous driving in nuscenes. arXiv preprint arXiv:2305.10430 (2023). 6, 7, 8 [41] Gongjie Zhang, Zhipeng Luo, Kaiwen Cui, Shijian Lu, and Eric P Xing. 2022. Meta-detr: Image- level few-shot detection with inter-class correlation exploitation. IEEE transactions on pattern analy- sis and machine intelligence 45, 11 (2022), 12832– 12843. 3 [42] Yifan Zhang, Bingyi Kang, Bryan Hooi, Shuicheng Yan, and Jiashi Feng. 2023. Deep Long-Tailed Learn- ing: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence 45, 9 (2023), 10795–10816. https://doi.org/10.1109/TPAMI.2023. 3268118 1 [43] Ziyuan Zhong, Davis Rempe, Yuxiao Chen, Boris Ivanovic, Yulong Cao, Danfei Xu, Marco Pavone, and Baishakhi Ray. 2023. Language-guided traffic simula- tion via scene-level diffusion. In Conference on Robot Learning . PMLR, 144–177. 3 [44] Rui Zhou, Yuping Liu, Kai Zhang, and Ou Yang. 2022. Genetic algorithm-based challenging scenarios generation for autonomous vehicle testing. IEEE Jour- nal of Radio Frequency Identification 6 (2022), 928– 933. 2 12
|
https://arxiv.org/abs/2505.22067v1
|
arXiv:2505.22068v1 [cs.CL] 28 May 2025

Beyond Path Selection: Better LLMs for Scientific Information Extraction with MimicSFT and Relevance and Rule-induced (R2) GRPO

Ran Li (HKUST, Hong Kong SAR, China, rlibb@connect.ust.hk), Shimin Di (SEU, Jiangsu, China, shimin.di@seu.edu.cn), Yuchen Liu (HKUST(GZ), Guangzhou, China, yliu356@connect.hkust-gz.edu.cn), Chen Jing (Zhipu AI, Beijing, China, chen.jing@aminer.cn), Yu Qiu (Zhipu AI, Beijing, China, yu.qiu@aminer.cn), Lei Chen (HKUST(GZ), HKUST, Guangzhou, China, leichen@hkust-gz.edu.cn)

Abstract

Previous studies suggest that powerful Large Language Models (LLMs) trained with Reinforcement Learning with Verifiable Rewards (RLVR) only refine the reasoning path without improving reasoning capacity on math tasks, whereas supervised fine-tuning (SFT) with distillation can. We study this question from the perspective of scientific information extraction (SciIE), where both regular and reasoning LLMs underperform small BERT-based models. SciIE requires both reasoning and memorization. We argue that, on SciIE, both SFT and RLVR can refine the reasoning path and improve reasoning capacity. We propose a two-stage training recipe: 1. MimicSFT, which uses structured reasoning templates without requiring high-quality chain-of-thought data; 2. R2GRPO, which adds relevance and rule-induced rewards. Experiments on scientific IE benchmarks show that both methods improve reasoning capacity, and R2GRPO combined with MimicSFT surpasses baseline LLMs and specialized supervised models on relation extraction. Our code is available at https://github.com/ranlislz/R2GRPO.

Preprint. Under review.

1 Introduction

Reasoning Large Language Models (LLMs) [12, 16, 11], trained with Reinforcement Learning from Verifiable Rewards (RLVR), have shown strong performance on complex tasks such as mathematical reasoning and code generation. By integrating chain-of-thought (CoT) prompting with RL-driven path optimization, models such as DeepSeek-R1 [12], OpenAI o1/o3 [11, 16], and Google Gemini 2.0 Thinking [9] iteratively refine their reasoning trajectories, achieving human-level performance in select domains. On the one hand, recent studies suggest RLVR optimizes output distributions rather than imparting new abilities: RL-tuned LLMs exhibit improved pass@1 rates on math benchmarks but reduced pass@k diversity, implying narrower path selection instead of enriched reasoning capacity [39]. Conversely, supervised fine-tuning (SFT) with knowledge distillation [13] from larger models has been shown to enhance reasoning breadth [39, 12]. On the other hand, information extraction (IE), which remains underexplored for reasoning LLMs, presents an ideal task for investigating this question.

[Figure 1: Our two-stage training for scientific IE (right) and the performance gain (left). Stage 1 (SFT) contrasts vanilla SFT, distillation SFT with CoT, and MimicSFT with pseudo CoT; stage 2 (RLVR) contrasts GRPO rewards (accuracy, format) with R2GRPO rewards (F1, format, relevance, rule). MimicSFT with pseudo CoT improves performance; R2GRPO improves reasoning capacity beyond path selection.]

[Figure 2: Best F1@K scores (reasoning capacity) and Avg@K scores (reasoning ability) for NER and RE on SciER (small).]

[Figure 3: Best F1@K scores (reasoning capacity) and Avg@K scores (reasoning ability) for NER and RE on OOD (small).]
As shown in Figure 1, even state-of-the-art LLMs underperform supervised BERT-based models [10, 3, 35, 37, 43] on scientific IE (SciIE) benchmarks like SciER [41]. The task demands precise recall of domain-specific entities (memorization) and systematic reasoning to infer implicit relations. LLMs often perform poorly on entity span detection
and relational inference, highlighting a misalignment between their training objectives and IE's dual requirements. We argue that IE's hybrid nature, combining knowledge memorization with contextual rule reasoning, makes it an ideal lens for studying what RLVR truly learns. [6] states that SFT is better at memorization while RL is better at generalization. However, [26] provides a unified representation of SFT and RLVR for LLM post-training. From this view, both RLVR and SFT should be able to memorize knowledge and optimize reasoning in a simple way. It is then also interesting whether we can raise LLM performance to be comparable with supervised models. Our systematic study of RLVR's impact on IE reveals that it can enhance both knowledge retention and reasoning when applied under task-aware conditions. Contrary to prior assumptions, RLVR and SFT exhibit complementary effects: RLVR improves both Best@1 and Best@k performance, suggesting it enhances both reasoning ability and reasoning capacity [39]. Furthermore, we discover that SFT with structured reasoning templates (which we call MimicSFT) can significantly boost performance without requiring high-quality CoT data, challenging previous views drawn from math datasets [39]. We also observe that, for reasoning in a constrained generation problem, structure and relevance matter more than length. Based on these insights, we propose R2GRPO, a two-phase training method that combines MimicSFT with a novel reward function incorporating relevance and rule mining to jointly optimize knowledge grounding and reasoning paths. Our hierarchical reasoning approach decomposes complex SciIE tasks into more tractable sub-problems, guiding the model through structured reasoning steps that satisfy both schema and factual constraints. Our key contributions are:

•We show that RLVR can enhance both knowledge memorization and systematic reasoning in LLMs for SciIE, challenging the view that RLVR only refines the reasoning path.

•We propose MimicSFT, a simple adaptation that uses pseudo reasoning templates, without requiring high-quality CoT data, to improve the reasoning ability of models on SciIE.

•We develop R2GRPO, which integrates MimicSFT with a composite reward function encouraging relevance and rule induction during reasoning, achieving state-of-the-art IE performance among reasoning models and results comparable with supervised models.

2 Related Work

Post-training LLMs for Enhanced Capabilities. Large Language Models [1] are pre-trained on vast data, acquiring abundant knowledge and diverse abilities [17]. However, aligning these models with specific downstream tasks or desired behaviors often requires post-training [42, 14, 28, 29], primarily Supervised Fine-Tuning (SFT) [25, 5, 31, 7, 44] and Reinforcement Learning (RL) [46, 24, 12, 26]. Post-training adapts models to specific input-output formats using labeled examples, effectively injecting task-specific knowledge or styles [24, 2]. Chain-of-thought prompting [32] and RLVR methods like GRPO [26], which encourage models to self-generate the reasoning chain, have significantly boosted LLMs' performance on reasoning tasks like mathematics and code generation. [6] states that SFT is mainly for memorization while RL improves generalization. However, [39] argue that RLVR mainly optimizes the reasoning paths that the base model already possesses through its pre-trained knowledge. [18] observes that the structured reasoning pattern is more important than the content. [27] shows that CoT helps on math and symbolic reasoning
but is not necessary for other types of tasks. On the other hand, some argue that SFT can also help models gain reasoning capacity [20]. So is there a clear gap between SFT and RLVR, and what does RLVR really learn? Our work investigates this question within the less explored domain of information extraction, where both knowledge and reasoning are crucial.

Scientific Information Extraction (SciIE). IE aims to automatically extract structured information, such as entities, relations, and events, from unstructured text. Traditional approaches often rely on supervised learning with sequence labeling models like BiLSTMs or Transformer-based architectures (e.g., BERT) trained on domain-specific annotated datasets [10]. These methods achieve high performance on current benchmarks, particularly for Named Entity Recognition (NER) and Relation Extraction (RE), but require significant domain annotation data. Recently, LLMs have been explored for IE tasks, leveraging their zero-shot [33, 19, 21, 34, 38] or in-context learning capabilities [4, 40, 45] or undergoing supervised fine-tuning [30, 8, 22]. While LLMs offer flexibility, they often underperform specialized supervised models [43, 35, 37] trained with domain knowledge, especially in scientific domains (SciIE) [3, 41, 8] requiring specialized knowledge or complex relational inference. Few studies have examined how to adapt the reasoning ability of LLMs through post-training to improve performance on SciIE, a gap our work aims to fill by examining how SFT and RL differentially contribute to performance on scientific NER and RE.

3 Methodology

3.1 Problem Formulation

We focus on two fundamental information extraction tasks: Named Entity Recognition (NER) and Relation Extraction (RE).

Named Entity Recognition (NER): Given an input text $x = \{x_1, x_2, \ldots, x_n\}$, NER identifies entity spans $e_i = \{x_j, \ldots, x_k\}$ and assigns each a type $t_i \in \mathcal{T}$, where $\mathcal{T}$ is a predefined set of entity types (e.g., Task, Method, Dataset in scientific literature).

Relation Extraction (RE): For a pair of entities $(e_i, e_j)$ identified in the text, RE determines whether a relation exists and, if so, classifies it into a relation type $r_{ij} \in \mathcal{R}$, where $\mathcal{R}$ is the set of possible relation types (e.g., Used-For, Compare-With).

End-to-End IE: This combines both tasks, requiring models to first identify entities and then determine relations between them, making it particularly challenging as errors in entity recognition propagate to relation extraction.

Constrained Generation View: We can view IE as a constrained generation problem in which the model must generate outputs $y$ that satisfy both:

•Schema constraints: answers must conform to predefined entity and relation types and follow the required structure (e.g., valid JSON format).

•Factual constraints: answers must come from the original content.

Formally, we can define the constrained generation problem as finding

$y^* = \arg\max_{y \in \mathcal{Y}} P(y \mid x; \theta) \quad \text{s.t.} \quad C_{\mathrm{schema}}(y) = 1 \land C_{\mathrm{factual}}(y, x) = 1$ (1)

where $C_{\mathrm{schema}}$ and $C_{\mathrm{factual}}$ are binary constraint functions. This formulation is challenging for standard LLMs, as they must simultaneously satisfy structural constraints while maintaining factual accuracy. All fine-tuning is performed using Low-Rank Adaptation (LoRA) [15] for computational efficiency.

3.2 Supervised Fine-Tuning and MimicSFT

Standard SFT adapts a pre-trained LLM by maximizing the conditional probability of target outputs given inputs:

$\mathcal{L}_{\mathrm{SFT}}(\theta) = -\sum_{(x,y) \in \mathcal{D}_{\mathrm{SFT}}} \log P(y \mid x; \theta)$ (2)

where $\mathcal{D}_{\mathrm{SFT}}$ is the supervised fine-tuning dataset.
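To make the two constraint functions in Eq. (1) concrete, the following is a minimal sketch (illustrative only, not the released implementation; the JSON keys follow the output format specified in Appendix A.4, and the helper names are ours):

```python
import json

ENTITY_TYPES = {"Task", "Method", "Dataset"}
RELATION_TYPES = {"Part-Of", "SubClass-Of", "SubTask-Of", "Benchmark-For",
                  "Trained-With", "Evaluated-With", "Synonym-Of",
                  "Used-For", "Compare-With"}

def c_schema(output: str) -> bool:
    """Schema constraint C_schema: the answer parses as JSON and uses only
    the predefined entity and relation types."""
    try:
        pred = json.loads(output)
    except json.JSONDecodeError:
        return False
    ner_ok = all(len(p) == 2 and p[1] in ENTITY_TYPES for p in pred.get("ner", []))
    rel_ok = all(len(t) == 3 and t[1] in RELATION_TYPES for t in pred.get("rel", []))
    return ner_ok and rel_ok

def c_factual(output: str, source_text: str) -> bool:
    """Factual constraint C_factual: every predicted entity span must occur
    verbatim in the source sentence (assumes c_schema already passed)."""
    pred = json.loads(output)
    return all(span in source_text for span, _ in pred.get("ner", []))
```

An output is valid in the sense of Eq. (1) exactly when both checks return True.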
In terms of Equation 11, $o = y$, $\mathcal{D} = \mathcal{D}_{\mathrm{SFT}}$, and $GC(x, y, t, \pi_{\mathrm{ref}}) = 1$ for all tokens. To improve generalization, we decompose IE into distinct sub-tasks (NER only, RE with gold entities, RE only, end-to-end IE) and employ a multi-task learning approach:

$\mathcal{L}_{\mathrm{MT\text{-}SFT}}(\theta) = -\sum_{k=1}^{K} \sum_{(x,y) \in \mathcal{D}_{\mathrm{SFT}}, T_k} \log P(y \mid x, T_k; \theta)$ (3)

where $T_k$ indicates the task type in the prompt.

MimicSFT: Structured Reasoning Without CoT Data. We introduce MimicSFT to encourage structured reasoning without requiring high-quality CoT annotations. The model is trained to produce a templated reasoning block $z$ (enclosed in <reasoning>...</reasoning> tags) before generating the final output $y$:

$\mathcal{L}_{\mathrm{MimicSFT}}(\theta) = -\sum_{(x,y') \in \mathcal{D}_{\mathrm{MimicSFT}}} \log P(y' \mid x; \theta)$ (4)

where $y' = (z, y)$ is the concatenation of reasoning steps and final output. The reasoning template follows a general IE process (e.g., 1. Identify entities, 2. Consider relations, 3. Formulate extraction).

3.3 R2GRPO: Reinforcement Learning with Relevance and Rule-Induction

3.3.1 GRPO Framework

R2GRPO builds on Group Relative Policy Optimization (GRPO) [26], a PPO variant that normalizes rewards based on group performance. The GRPO objective is:

$\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim \mathcal{D},\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(O \mid q)} \Big[ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \big( A_{\mathrm{clip}}(o_{i,t}, q, \hat{A}_{i,t}) - \beta\, D_{\mathrm{KL}}(\pi_\theta(\cdot \mid q, o_{i,<t}) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid q, o_{i,<t})) \big) \Big]$ (5)

where $q$ is an input prompt from the IE dataset; $\{o_i\}_{i=1}^{G}$ is a group of $G$ outputs sampled from the policy $\pi_{\theta_{\mathrm{old}}}$; $A_{\mathrm{clip}}(o_{i,t}, q, \hat{A}_{i,t}) = \min\big( r_t(\theta) \hat{A}_{i,t},\ \mathrm{clip}(r_t(\theta), 1-\epsilon, 1+\epsilon) \hat{A}_{i,t} \big)$; $r_t(\theta) = \pi_\theta(o_{i,t} \mid q, o_{i,<t}) / \pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q, o_{i,<t})$ is the probability ratio; $\hat{A}_{i,t} = (R_i - \mathrm{mean}(R)) / \mathrm{std}(R)$ is the normalized advantage; and $\beta$ controls the KL divergence penalty from the reference policy $\pi_{\mathrm{ref}}$.

3.3.2 Composite Reward Function

R2GRPO's core innovation is a composite reward function for IE tasks:

$R(o_i, x, y_{\mathrm{gold}}) = w_1 R_{\mathrm{F1}}(o_i, y_{\mathrm{gold}}) + w_2 R_{\mathrm{span}}(o_i, y_{\mathrm{gold}}) + w_3 R_{\mathrm{relevancy}}(o_i, x) + w_4 R_{\mathrm{rule}}(o_i, x)$ (6)

where the $w_j$ are tunable weights and:

•F1 Score Reward: $R_{\mathrm{F1}}$ measures the F1 score between predicted and gold extractions: $R_{\mathrm{F1}}(o_i, y_{\mathrm{gold}}) = \text{F1-score}(o_i, y_{\mathrm{gold}})$.

•Entity Span Reward: $R_{\mathrm{span}}$ encourages precise boundaries: $R_{\mathrm{span}}(o_i, y_{\mathrm{gold}}) = \frac{1}{N_e} \sum_{j=1}^{N_e} \mathrm{Jaccard}(\mathrm{span}(e_{\mathrm{pred},j}), \mathrm{span}(e_{\mathrm{gold},j}))$, where $N_e$ is the number of matched entities and $\mathrm{Jaccard}(\cdot)$ [23] measures the word-level overlap between predicted and ground-truth entities.

•Rule-pattern Reward: $R_{\mathrm{rule}}$ rewards logical/domain pattern adherence: $R_{\mathrm{rule}}(o_i, x) = \sum_k w_k \cdot \mathbb{I}(\text{pattern}_k \text{ satisfied by } o_i \text{ given } x)$. It encourages the think content to follow rule-based reasoning patterns such as "cause", "leads to", "rule implies", or an explanation of the relation inference.

•Relevancy Reward: $R_{\mathrm{relevancy}}$ promotes evidence-based extraction: $R_{\mathrm{relevancy}}(o_i, x) = \mathrm{Map}(c_i, \mathrm{evidence}_{\mathrm{gold}}) - \lambda_{\mathrm{penalty}} \cdot \big( \mathrm{length}(c_i) / \mathrm{length}(x_{\mathrm{sentence}}) \big)^2 \cdot \mathbb{I}(\mathrm{length}(c_i) > \mathrm{threshold})$, where $c_i$ is the cited evidence. For Map, we check whether the cited content appears in the original text.

3.3.3 Training Strategy

To enhance efficiency, we employ:

Curriculum Learning: starting with simpler IE tasks and gradually introducing complexity. We define difficulty by the number of entities and relation triples within one sentence.

Data Selection: prioritizing instances where SFT performs poorly but clear reward signals exist. This makes R2GRPO training efficient enough to run for more epochs. The selected subset preserves the distribution of entity and relation types of the whole dataset and also maintains the distribution of samples across difficulty levels.

3.4 Theoretical Analysis: Why Structured Reasoning Works

Why does hierarchical reasoning with templated steps improve performance?
We provide a theoretical explanation from multiple perspectives.

Constraint Satisfaction Through Decomposition. The hierarchical reasoning approach transforms the constrained
generation problem into a more tractable form by decomposing it into stages. For MimicSFT with a single reasoning level $z_1$:

$P(y \mid x; \theta) \approx \sum_{z_1 \in Z_1} P(y \mid z_1, x; \theta)\, P(z_1 \mid x; \theta)$ (7)

where $Z_1$ is the space of valid reasoning templates. This decomposition allows the model to first focus on generating valid reasoning ($z_1$) that satisfies intermediate constraints before producing the final output ($y$). For R2GRPO with two reasoning levels:

$P(y \mid x; \theta) \approx \sum_{z_1 \in Z_1} \sum_{z_2 \in Z_2(z_1)} P(y \mid z_2, z_1, x; \theta)\, P(z_2 \mid z_1, x; \theta)\, P(z_1 \mid x; \theta)$ (8)

where $Z_2(z_1)$ is the space of valid second-level reasoning conditioned on $z_1$. This further decomposition allows for more refined constraint satisfaction:

•$z_1$ (<reasoning>...</reasoning>) establishes the general reasoning framework, addressing schema constraints;

•$z_2$ (<think>...</think>) refines the reasoning with task-specific details, addressing factual constraints;

•$y$ produces the final structured output based on both reasoning levels.

3.4.1 Multi-Level Reasoning in R2GRPO

R2GRPO extends MimicSFT by adding a second level of reasoning optimization. If $z_1$ is the fixed reasoning template (from MimicSFT) and $z_2$ is the RL-optimized reasoning, the full generation becomes $y' = (z_1, z_2, y)$, creating a hierarchical structure.

Improved Constraint Satisfaction. We can show that this hierarchical approach improves the constraint satisfaction probability. Let $\mathcal{C} = \{y : C_{\mathrm{schema}}(y) = 1 \land C_{\mathrm{factual}}(y, x) = 1\}$ be the set of outputs satisfying all constraints. The probability of generating a valid output is:

$P(y \in \mathcal{C} \mid x; \theta) = \sum_{y \in \mathcal{C}} P(y \mid x; \theta)$ (9)

For the hierarchical model with reasoning steps $z_1$ and $z_2$, we assume that

$P(y \in \mathcal{C} \mid x; \theta_{\mathrm{hier}}) \ge P(y \in \mathcal{C} \mid x; \theta_{\mathrm{direct}})$ (10)

when the reasoning steps are optimized to guide the model toward constraint satisfaction. We verify this later through the experiments shown in Figure 2 and Figure 3.

Unified Gradient Framework. Both SFT and RL update the model parameters $\theta$ by following a gradient. Following Shao et al. [26], we conceptualize these post-training algorithms under a unified gradient expression:

$\nabla_\theta \mathcal{J}(\theta) = \mathbb{E}_{(x,o) \sim \mathcal{D}} \Big[ \frac{1}{|o|} \sum_{t=1}^{|o|} GC(x, o, t, \pi_{\mathrm{ref}})\, \nabla_\theta \log \pi_\theta(o_t \mid x, o_{<t}) \Big]$ (11)

where $(x, o)$ is an input-output pair from distribution $\mathcal{D}$, $\pi_\theta(o_t \mid x, o_{<t})$ is the probability of generating token $o_t$ given input $x$ and previous tokens $o_{<t}$, $GC(x, o, t, \pi_{\mathrm{ref}})$ is the gradient coefficient determining update magnitude and direction, and $\mathcal{D}$ represents the data source (human-annotated for SFT, model-generated for RL). For SFT, $GC = 1$ for all tokens in the target sequence, while for RL methods like GRPO, $GC$ is derived from reward signals and advantage estimates. Based on this, both SFT and GRPO update the model parameters from data. Since GRPO can refine the reasoning path and SFT (with distillation) can improve reasoning capacity, SFT should also be able to refine the reasoning process in a simple way, and GRPO should also be able to improve reasoning capacity and memorize knowledge from the input data. Our method is one step in this direction.

4 Experiments

4.1 Experimental Setup

Training Settings. Base Model: All our fine-tuning experiments are conducted by adapting the Qwen2.5-7B-Instruct model [36]. SFT (Supervised Fine-Tuning): standard fine-tuning on the target IE tasks. MimicSFT (Multi-Task): an SFT approach that encourages pseudo-reasoning steps and leverages multi-task learning across different IE sub-tasks (e.g., NER only, RE with gold entities, end-to-end IE) as described in Section 3.2. GRPO-only: reinforcement learning using Group Relative Policy Optimization [26] with a basic
F1 score as the reward signal. R2GRPO: our proposed reinforcement learning framework, R2GRPO (Relevance and Rule-Induction Group Relative Policy Optimization), incorporating the composite reward function detailed in Section 3.3. The overall prompt can be seen in Appendix A.4. The system prompt for R2GRPO training:

System Prompt

Respond in the following format:
<reasoning>
Provide step-by-step reasoning to solve the task based on the given instructions and sentence.
</reasoning>
<think>
Cite the specific sentence part (e.g., phrase, verb, or structure) supporting the relation. Articulate a symbolic pattern you discovered (e.g., "The verb 'achieves' suggests a Method is applied to a Task, implying a relation"). Explain how this pattern leads to the predicted relation, referencing the relationship definition. Use concise, logical chains (e.g., "X performs Y → relation Z because of definition").
</think>
<answer>
Provide the final answer in JSON format as specified in the instruction.
</answer>

Implementation Details. All models are fine-tuned using the LoRA approach, with a rank of 16 and alpha of 32 for SFT and a rank of 64 and alpha of 128 for R2GRPO, applied to all linear layers in the transformer blocks. For SFT and MimicSFT, we train for 3 epochs with a learning rate of $2\times10^{-5}$ and a batch size of 32 (accumulated over gradient accumulation steps). For R2GRPO, the learning rate for the policy updates is set to $1\times10^{-6}$. More detail can be found in the appendix.

Evaluation Metrics. For Named Entity Recognition (NER) and Relation Extraction (Rel and Rel+), we report the standard micro F1-score. NER: an entity is correct if its span and type match a gold entity. Rel: a relation is correct if the spans of both entities and the relation type match a gold relation. Rel+: it further requires that the entity types in the triple are correct. To understand upper-bound and average performance characteristics, especially for RL-finetuned models, we employ metrics analogous to the pass@K used in mathematical reasoning. We report: Best F1@K, the best F1 score among K generated outputs for a given input, which assesses the model's capability to produce a correct extraction within its top K hypotheses; and Avg@K, the average F1 score over K generated outputs, which provides insight into the general quality and consistency of the model's generations. Unless otherwise specified, K is set to 1 for Best F1@K in the main result tables; for the detailed Best F1@K analysis in Section 4.3, we explore a wider range of K values. For the main results of our models, we set the temperature to 0; for the baseline models, we use their default settings as documented. For the Best@K evaluation, to allow better exploration, we set the temperature to 1.0 for all compared models. We show more analysis of temperature in the experiments.

Baseline Models. We compare with: proprietary or large (>72B) LLMs: regular LLMs like Gemini2.0-flash and DeepSeek-V3, and reasoning LLMs like DeepSeek-R1 and Gemini2.0-flash-thinking; small regular LLMs (≤72B): Qwen2.5-7B-Instruct (our base model) and Qwen2.5-32B-Instruct; small reasoning LLMs obtained through distillation (≤72B): DeepSeek-R1-Distill-Qwen 7B and 32B; supervised BERT-based models: results from SciBERT (Beltagy et al., 2019) fine-tuned on SciER are included for context.
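To make the Best F1@K and Avg@K definitions above concrete, here is a minimal per-instance sketch (our own illustration, not the paper's evaluation code; exact matching of extracted items is assumed):

```python
def micro_f1(pred: list, gold: list) -> float:
    """F1 over extracted items (e.g., [subject, relation, object] triples),
    computed per instance with exact match."""
    pred_set, gold_set = set(map(tuple, pred)), set(map(tuple, gold))
    if not pred_set or not gold_set:
        return 0.0
    tp = len(pred_set & gold_set)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred_set), tp / len(gold_set)
    return 2 * precision * recall / (precision + recall)

def best_and_avg_at_k(generations: list, gold: list) -> tuple:
    """Best F1@K (upper-bound capability) and Avg@K (consistency)
    over K sampled generations for one input."""
    scores = [micro_f1(g, gold) for g in generations]
    return max(scores), sum(scores) / len(scores)
```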
General-purpose LLMs are evaluated zero-shot.

Dataset. We conduct our experiments primarily on the SciER dataset and its out-of-distribution (OOD) test set [41]. SciER is a benchmark for information extraction in the scientific domain. It contains 24k entities and 12k relations over 106 scientific publications, and features diverse entity types (e.g., Task, Method, Dataset) and relation types (e.g., Used-For, Compare-With, Feature-Of, Evaluated-With, ...). We use the standard splits for training. Detailed dataset statistics are shown in Table 2.

Table 1: Test F1 scores of different baselines on SciER and in the OOD setting. "Rel" and "Rel+" denote relation extraction under boundary and strict evaluation, respectively. R2GRPO* is the combination of MimicSFT and R2GRPO. "Best@5" denotes the best score among 5 generations. Our training is based on Qwen2.5-7B-Instruct.

                                       SciER                    OOD
Methods                         NER    Rel    Rel+       NER    Rel    Rel+
Supervised Baselines
PURE [43]                      81.60  53.27  52.67      71.99  50.44  49.46
PL-Marker [37]                 83.31  60.06  59.24      73.93  59.02  56.68
HGERE [35]                     86.85  62.32  61.10      81.32  61.31  58.32
Zero-Shot LLM-based Baselines
DeepSeek-V3                    42.45  18.76  18.76      57.40  22.66  22.02
DeepSeek-R1                    60.27  27.98  27.16      65.95  32.82  32.25
Gemini2.0                      69.85  38.38  38.12      58.53  27.74  26.93
Gemini2.0 thinking             61.43  32.30  31.44      64.75  30.62  30.33
Qwen2.5-32B                    56.67  17.10  17.10      36.85   8.72   8.72
DeepSeek-R1-Distill-Qwen-32B   57.63  17.62  17.11      49.00  10.79   9.98
Qwen2.5-7B                     41.24   7.09   7.09      44.88   4.20   4.20
DeepSeek-R1-Distill-Qwen-7B    32.01   4.60   4.60      30.25   2.88   2.88
Fine-tuned LLMs
SFT                            80.76  42.22  41.01      70.13  19.45  18.12
GRPO                           76.18  48.84  48.02      68.93  42.34  41.76
Ours
Mimic-SFT                      81.70  56.02  55.34      73.71  50.74  49.95
R2GRPO                         77.55  54.59  53.65      70.05  45.72  44.67
R2GRPO*                        84.36  66.81  65.95      77.84  55.08  54.29
R2GRPO* (Best@5)               88.90  74.38  74.03      81.27  66.74  64.12

4.2 Main Results

We present the overall performance for Named Entity Recognition (NER) and end-to-end Relation Extraction (Rel) on the SciER test set and the OOD dataset in Table 1. The results show that R2GRPO significantly boosts the performance of Qwen2.5-7B-Instruct on SciER and OOD for both NER and RE; on relation extraction in SciER in particular, it outperforms all supervised baselines. That Mimic-SFT achieves higher relation extraction scores than plain SFT shows that the pseudo CoT can activate the model's 'reasoning' ability, i.e., constrained generation refines the reasoning path. Note that we use temperature 0 for our models and the default settings for the baseline models; for the supervised baselines, we use the results reported in the original paper [41]. Similarly, R2GRPO outperforms GRPO in this setting. MimicSFT also shows strong performance, often outperforming standard SFT, highlighting the benefit of the proposed structured pseudo-reasoning.

4.3 What Do Reasoning Models Learn? Analysis of Best F1@K

To delve deeper into what reasoning models learn, particularly through Reinforcement Learning with Verifiable Rewards (RLVR) like R2GRPO, we analyze the Best F1@K performance. We selected a subset of 50 challenging samples from the SciER test set and evaluated model outputs with K ranging over 1, 4, 16, 32, 64, 128, 512, and 1024. This analysis aims to understand the upper-bound capabilities of the models and how SFT and RLVR shape their knowledge and reasoning. The results for NER and RE are visualized in Figure
2.

RLVR and SFT both Enhance Reasoning Capacity: From Figure 2 and Figure 3, both RLVR-based models (GRPO, R2GRPO) and SFT-based models (SFT, MimicSFT) significantly outperform the base Qwen2.5-7B-Instruct model across all K values. This contradicts the hypothesis that RLVR merely optimizes path selection without improving underlying capabilities [39]. Instead, our results demonstrate that both SFT and RLVR enable models to acquire domain-specific knowledge and enhance reasoning capabilities relevant to IE tasks. The consistent improvement in Best F1@K scores, even at large K values, indicates a genuine expansion of the model's knowledge boundaries rather than just better prioritization of existing knowledge.

Hierarchical Reasoning Improves Knowledge Integration: MimicSFT consistently outperforms standard SFT, and similarly, R2GRPO outperforms basic GRPO across both in-domain and OOD settings. This validates our theoretical analysis in Section 3.4 that structured decomposition of reasoning facilitates better constraint satisfaction. The templated reasoning approach creates intermediate representations that guide the model toward valid outputs, effectively narrowing the search space while maintaining exploration capabilities. This verifies our assumption in Eq. 10.

[Figure 4: Performance vs. temperature.]

Complementary Effects of SFT and RLVR: While SFT-based models (particularly MimicSFT) achieve slightly higher Best F1@K at very large K values, RLVR models demonstrate superior Avg@K and Best F1@1 scores. This reveals a fundamental trade-off: SFT expands the model's knowledge boundaries, while RLVR optimizes the probability distribution to prioritize high-quality outputs. The combination in R2GRPO* achieves the best of both worlds, maintaining high Best F1@K (knowledge breadth) while significantly improving Best F1@1 (practical performance).

Structured Reasoning Enhances Generalization: The performance gap between our methods and the baselines widens in OOD settings (Figure 3), demonstrating that hierarchical reasoning improves generalization. This aligns with our theoretical framework: by decomposing complex extraction tasks into structured sub-problems, models learn more generalizable patterns rather than memorizing specific input-output mappings. The structured attention allocation mechanism described in Section 3.4 enables more effective feature extraction across different domains.

[Figure 5: Response length (a) and reward (b) vs. training steps for R2GRPO.]

Exploration-Exploitation Balance: The slightly lower Best F1@K of R2GRPO compared to MimicSFT at very large K values reflects an intentional trade-off. R2GRPO optimizes for high-reward trajectories within a practical exploration budget, focusing computational resources on promising reasoning paths. This is particularly valuable in real-world applications where generating hundreds of candidates is impractical. The higher Avg@K scores of R2GRPO indicate more consistent performance across generations, making it more reliable in production environments.

4.3.1 Ablation Studies and Parameter Sensitivity

[Figure 6: Performance vs. response token length; the deeper the color, the higher the temperature.]

Component Contribution Analysis: Table 1 demonstrates the progressive improvement from SFT to MimicSFT and from GRPO to R2GRPO, validating each component's contribution.
The most substantial gains come from combining MimicSFT with R2GRPO (R2GRPO*), which achieves a 24.59-point improvement in Rel F1 over standard SFT and a 17.97-point improvement over basic GRPO. This synergistic effect confirms our hypothesis that knowledge acquisition (primarily through SFT) and reasoning refinement (primarily through RL) are complementary processes that can be jointly optimized.

Temperature Sensitivity:
Figure 4 shows that our models consistently outperform baselines across different temperature settings. The optimal performance is at lower temperatures (<0.6). This indicates that SciIE benefits from more deterministic generation, since the task requires precise entity boundary detection and relation inference grounded in the content. The performance degradation at higher temperatures suggests that excessive exploration introduces more noise into the structured extraction process. We also find that completion length increases with higher temperature (Figure 7a): the thinking length grows with the noise, and the resulting unstable thinking content can harm performance.

Response Length Analysis: In Figure 5, as training proceeds, the response length first increases and then decreases. This differs from the results of [26], where response length keeps increasing. Figure 6 reveals an interesting relationship between response length and performance: performance does not generally benefit from longer responses, and the long responses produced at high temperature (1.5) lead to poor performance. This suggests that for tasks like SciIE, a constrained reasoning process is better than long but noisy think content. It supports our assumption: effective IE requires concise, targeted reasoning that focuses on relevant constraints rather than exhaustive exploration. The hierarchical reasoning approach in R2GRPO guides the model to generate more efficient reasoning paths, avoiding unnecessary elaboration while maintaining extraction accuracy. More training detail is shown in Figure 7.

5 Limitation and Future Work

Our study mainly focuses on SciIE. The effectiveness of MimicSFT's pseudo-reasoning templates and R2GRPO might vary across different types of information extraction tasks or languages. In the future, we will explore the adaptability of R2GRPO to broader domains and investigate more automated methods for generating or refining reasoning templates. Further research can also examine how structured reasoning influences knowledge acquisition and path selection in diverse LLM architectures, and explore the scalability of our approach to larger models or datasets from different domains.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[2] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.

[3] Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676, 2019.

[4] Zhen Bi, Jing Chen, Yinuo Jiang, Feiyu Xiong, Wei Guo, Huajun Chen, and Ningyu Zhang. CodeKGC: Code language model for generative knowledge graph construction. ACM Transactions on Asian and Low-Resource Language Information Processing, 23(3):1–16, 2024.

[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

[6] Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V Le, Sergey Levine, and Yi Ma.
SFT memorizes, RL generalizes: A comparative study of foundation model post-training. arXiv preprint arXiv:2501.17161,
2025.

[7] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53, 2024.

[8] John Dagdelen, Alexander Dunn, Sanghoon Lee, Nicholas Walker, Andrew S Rosen, Gerbrand Ceder, Kristin A Persson, and Anubhav Jain. Structured information extraction from scientific text with large language models. Nature Communications, 15(1):1418, 2024.

[9] Google DeepMind. Gemini 2.0 Flash Thinking, 2024. URL https://deepmind.google/technologies/gemini/flash-thinking/. Accessed: 2025-04-15.

[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019.

[11] Ahmed El-Kishky, Alexander Wei, Andre Saraiva, Borys Minaiev, Daniel Selsam, David Dohan, Francis Song, Hunter Lightman, Ignasi Clavera, Jakub Pachocki, et al. Competitive programming with large reasoning models. arXiv preprint arXiv:2502.06807, 2025.

[12] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[13] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

[14] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.

[15] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. ICLR, 1(2):3, 2022.

[16] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.

[17] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.

[18] Dacheng Li, Shiyi Cao, Tyler Griggs, Shu Liu, Xiangxi Mo, Eric Tang, Sumanth Hegde, Kourosh Hakhamaneshi, Shishir G Patil, Matei Zaharia, et al. LLMs can easily learn to reason from demonstrations: structure, not content, is what matters!, 2025. URL https://arxiv.org/abs/2502.07374.

[19] Guozheng Li, Peng Wang, and Wenjun Ke. Revisiting large language models as zero-shot relation extractors. arXiv preprint arXiv:2310.05028, 2023.

[20] Jiaxiang Li, Siliang Zeng, Hoi-To Wai, Chenliang Li, Alfredo Garcia, and Mingyi Hong. Getting more juice out of the SFT data: Reward learning from human demonstration improves SFT for LLM alignment. Advances in Neural Information Processing Systems, 37:124292–124318, 2024.

[21] Keming Lu, Xiaoman Pan, Kaiqiang Song, Hongming Zhang, Dong Yu, and Jianshu Chen. Pivoine: Instruction tuning for open-world information extraction. arXiv preprint arXiv:2305.14898, 2023.

[22] Yansong Ning and Hao Liu.
UrbanKGent: A unified large language model agent framework for urban knowledge graph construction. arXiv preprint arXiv:2402.06861,
2024.

[23] Suphakit Niwattanakul, Jatsada Singthongchai, Ekkachai Naenudorn, and Supachanun Wanapu. Using of Jaccard coefficient for keywords similarity. In Proceedings of the International MultiConference of Engineers and Computer Scientists, volume 1, pages 380–384, 2013.

[24] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

[25] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.

[26] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

[27] Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and Greg Durrett. To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning. arXiv preprint arXiv:2409.12183, 2024.

[28] Gemini Team, Petko Georgiev, Ving Ian Lei, Ryan Burnell, Libin Bai, Anmol Gulati, Garrett Tanzer, Damien Vincent, Zhufeng Pan, Shibo Wang, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.

[29] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

[30] Shuhe Wang, Xiaofei Sun, Xiaoya Li, Rongbin Ouyang, Fei Wu, Tianwei Zhang, Jiwei Li, and Guoyin Wang. GPT-NER: Named entity recognition via large language models. arXiv preprint arXiv:2304.10428, 2023.

[31] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.

[32] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[33] Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, et al. Zero-shot information extraction via chatting with ChatGPT. arXiv e-prints, arXiv–2302, 2023.

[34] Tingyu Xie, Qi Li, Jian Zhang, Yan Zhang, Zuozhu Liu, and Hongwei Wang. Empirical study of zero-shot NER with ChatGPT. arXiv preprint arXiv:2310.10035, 2023.

[35] Zhaohui Yan, Songlin Yang, Wei Liu, and Kewei Tu. Joint entity and relation extraction with span pruning and hypergraph neural networks. arXiv preprint arXiv:2310.17238, 2023.

[36] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.

[37] Deming Ye, Yankai Lin, Peng Li, and Maosong Sun. Packed levitated marker for entity and relation extraction. arXiv preprint arXiv:2109.06067, 2021.

[38] Chenhan Yuan, Qianqian Xie, and Sophia Ananiadou. Zero-shot temporal relation
extraction with ChatGPT. arXiv preprint arXiv:2304.05454, 2023.

[39] Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? arXiv preprint arXiv:2504.13837, 2025.

[40] Bowen Zhang and Harold Soh. Extract, define, canonicalize: An LLM-based framework for knowledge graph construction. arXiv preprint arXiv:2404.03868, 2024.

[41] Qi Zhang, Zhijia Chen, Huitong Pan, Cornelia Caragea, Longin Jan Latecki, and Eduard Dragut. SciER: An entity and relation extraction dataset for datasets, methods, and tasks in scientific documents. arXiv preprint arXiv:2410.21155, 2024.

[42] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.

[43] Zexuan Zhong and Danqi Chen. A frustratingly easy approach for joint entity and relation extraction. arXiv preprint arXiv:2010.12812, 2020.

[44] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. LIMA: Less is more for alignment. Advances in Neural Information Processing Systems, 36:55006–55021, 2023.

[45] Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Huajun Chen, and Ningyu Zhang. LLMs for knowledge graph construction and reasoning: Recent capabilities and future opportunities. World Wide Web, 27(5):58, 2024.

[46] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019.

A Appendix

A.1 R2GRPO Training

A.2 Dataset Statistics

We show here the detailed statistics of the datasets used. For the Best@K evaluation, we select 50 samples each from the SciER and OOD test sets; for all other evaluations we use the full datasets.

Table 2: Detailed distribution of the datasets.

Entity/Relation Type   Train   Dev    SciER Test   OOD Test   Total
Entity Types
Method                 11424   1549   1890         1018       15881
DATASET                 3220    269    370           83        3942
TASK                    3397    416    688          194        4695
Total                  18041   2234   2948         1295       24518
Relation Types
PART-OF                 1865    214    304          111        2494
USED-FOR                2398    343    546          167        3454
EVALUATED-WITH           863     78    131           49        1121
SYNONYM-OF               880     76    170           89        1215
COMPARE-WITH             875    175    114           54        1218
SUBCLASS-OF              697    114    176           73        1060
BENCHMARK-FOR            551     64     85           28         728
SUBTASK-OF               210     31     65            9         315
TRAINED-WITH             404     37     35            2         478
Total                   8743   1132   1626          582       12083

A.3 Training details

We trained the model on a 1K-sample subset of SciER for RL. The selection is based on a balanced distribution of entity and relation types and on samples of varying lengths and varying numbers of entities and relation triples. The training curves are shown in Fig. 7. Training fits within 24 GB of VRAM with the LoRA adapter; however, larger group sizes require more VRAM (a configuration sketch is given below). We train for 3 epochs on the full SciER dataset and for 10 RL epochs on the 1K subset.

A.4 Prompt

We adapt the instruction from SciER [41].
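Complementing A.3, here is a minimal sketch of the two LoRA configurations described there and in Section 4.1, using the Hugging Face peft library. This is our own illustration, not the authors' released code; the target-module names for Qwen2.5-style blocks and the dropout value are assumptions.

```python
from peft import LoraConfig

# All linear layers in the Qwen2.5 transformer blocks (assumed module names).
TARGET_MODULES = ["q_proj", "k_proj", "v_proj", "o_proj",
                  "gate_proj", "up_proj", "down_proj"]

# SFT / MimicSFT: rank 16, alpha 32 (lr 2e-5, 3 epochs, effective batch size 32).
sft_lora = LoraConfig(r=16, lora_alpha=32, target_modules=TARGET_MODULES,
                      lora_dropout=0.0, task_type="CAUSAL_LM")

# R2GRPO: rank 64, alpha 128 (policy lr 1e-6, 10 epochs on the 1K subset).
rl_lora = LoraConfig(r=64, lora_alpha=128, target_modules=TARGET_MODULES,
                     lora_dropout=0.0, task_type="CAUSAL_LM")
```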
[Figure 7: R2GRPO training detail vs. steps: (a) completion length, (b) NER reward, (c) Rel reward, (d) reasoning reward, (e) total reward, (f) standard deviation.]

NER Background

Extract specific entities from the following sentence. The entities to be identified are: 'Dataset', 'Task', and 'Method'.

### Entity Definitions:
- 'Task': A task in machine learning refers to the specific problem or type of problem that a ML/AI model/method is designed to solve. Tasks can be broad, like classification, regression, or clustering, or they can be very specific, such as Pedestrian Detection, Autonomous Driving, Sentiment Analysis, Named Entity Recognition, and Relation Extraction.
- 'Method': A method entity refers to the approach, algorithm, or technique used to solve a specific task/problem. Methods encompass the computational algorithms, model architectures, and the training procedures that are employed to make predictions or decisions based on data. For example, Convolutional Neural Networks, Dropout, data augmentation, recurrent neural networks.
- 'Dataset': A realistic collection of data that is used for training, validating, or testing the algorithms. These datasets can consist of various forms of data such as text, images, videos, or structured data. For example, MNIST, COCO, AGNews, IMDb.

### Other Notes:
- Generics cannot be used independently to refer to any specific entities, e.g., 'This task', 'the dataset', and 'a public corpus' are not entities.
- The determiners should not be part of an entity span. For example, given the span 'the SQuAD v1.1 dataset', the determiner 'the' should be excluded from the entity span.
- If both the full name and the abbreviation are present in the sentence, annotate the abbreviation and its corresponding full name separately. For instance, '20-newsgroup (20NG)'.
- Only annotate "factual, content-bearing" entities. Task, dataset, and method entities normally have specific names and their meanings are consistent across different papers. For example, "CoNLL03" and "SNLI" are factual entities. Annotators should annotate only the minimum necessary to represent the original meaning of task/dataset/metric (e.g., "The", "dataset", "public", 'method', 'technique' are often omitted).

Based on the given sentence and the entities with their types, determine the relationship between each pair. The potential relations are: ['Part-Of', 'SubClass-Of', 'SubTask-Of', 'Benchmark-For', 'Trained-With', 'Evaluated-With', 'Synonym-Of', 'Used-For', 'Compare-With']. If no relationship exists between a pair, do not include it in the output.

Rel Background

### Relationship Definitions:
- 'Part-Of': This relationship denotes that one entity (e.g., a Method) is a component or a part of another entity (e.g., another Method).
- 'SubClass-Of': Specifies that one entity is a subclass or a specialized version of another entity.
- 'SubTask-Of': Indicates that one Task is a subset or a specific aspect of another broader Task.
- 'Benchmark-For': Shows that a Dataset serves as a standard or benchmark for evaluating the performance of a Method on a Task.
- 'Trained-With': Indicates that a Method is trained using a Dataset.
- 'Evaluated-With': This relationship denotes that a Method is evaluated using a Dataset to test its performance or conduct experiments.
- 'Synonym-Of': Indicates that two entities are considered to have the same or very similar meaning, such as abbreviations.
- 'Used-For': Shows
that one entity (e.g., a Method) is utilized for achieving or performing another entity (e.g., a Task). This relationship is highly flexible.
- 'Compare-With': This relationship is used when one entity is compared with another to highlight differences, similarities, or both.

### Notes:
- Determine the 'Relationship' that best describes how the subject and object are related, based on the sentence context.
- Please do not annotate negative relations (e.g., X is not used in Y).
- Annotate a relationship only if there is direct evidence or clear implication in the text. Avoid inferring relationships that are not explicitly mentioned or clearly implied.

Task

Given the sentence: "sentence"

Extract entities and their relations.

### Instruction:
- Think step-by-step to identify entities ('Dataset', 'Task', 'Method') and their relationships.
- Return the results in JSON format with:
  - "ner": a list of [entity, type] pairs.
  - "rel": a list of [subject, relation, object] triples.

In general, the prompt consists of the background definitions of the entities and relations and the instruction for the task.
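To illustrate how these pieces fit together, a minimal sketch of assembling the full prompt and parsing the model's answer (our own illustration; the function names and exact concatenation order are assumptions consistent with the description above):

```python
import json
import re

def build_prompt(ner_background: str, rel_background: str, sentence: str) -> str:
    """Concatenate the background definitions with the per-sentence task
    instruction, as described in Appendix A.4."""
    task = (
        f'Task\nGiven the sentence: "{sentence}"\n'
        "Extract entities and their relations.\n"
        "### Instruction:\n"
        "- Think step-by-step to identify entities ('Dataset', 'Task', 'Method') "
        "and their relationships.\n"
        "- Return the results in JSON format with:\n"
        '  - "ner": a list of [entity, type] pairs.\n'
        '  - "rel": a list of [subject, relation, object] triples.'
    )
    return "\n\n".join([ner_background, rel_background, task])

def parse_answer(completion: str) -> dict:
    """Extract the JSON payload from the <answer>...</answer> block of the
    system-prompt format used for R2GRPO."""
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    return json.loads(match.group(1)) if match else {"ner": [], "rel": []}
```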
arXiv:2505.22074v1 [cs.LG] 28 May 2025

The Resurrection of the ReLU

Coşku Can Horuz¹*, Geoffrey Kasenbacher¹,²*, Saya Higuchi¹, Sebastian Kairat¹, Jendrik Stoltz¹, Moritz Pesl¹, Bernhard A. Moser³,⁴, Christoph Linse⁵, Thomas Martinetz⁵, Sebastian Otte¹

¹Institute of Robotics and Cognitive Systems, University of Lübeck; ²Mercedes-Benz AG; ³Software Competence Center Hagenberg (SCCH); ⁴Institute of Signal Processing, Johannes Kepler University of Linz (JKU); ⁵Institute of Neuro- and Bioinformatics, University of Lübeck

{cosku.horuz, sa.higuchi, sebastian.kairat, c.linse, thomas.martinetz, sebastian.otte}@uni-luebeck.de; {jendrik.stoltz, moritz.pesl}@student.uni-luebeck.de; geoffrey.kasenbacher@mercedes-benz.com; bernhard.moser@scch.at

*Equal contribution, corresponding authors.

Abstract

Modeling sophisticated activation functions within deep learning architectures has evolved into a distinct research direction. Functions such as GELU, SELU, and SiLU offer smooth gradients and improved convergence properties, making them popular choices in state-of-the-art models. Despite this trend, the classical ReLU remains appealing due to its simplicity, inherent sparsity, and other advantageous topological characteristics. However, ReLU units are prone to becoming irreversibly inactive (a phenomenon known as the dying ReLU problem), which limits their overall effectiveness. In this work, we introduce surrogate gradient learning for ReLU (SUGAR) as a novel, plug-and-play regularizer for deep architectures. SUGAR preserves the standard ReLU function during the forward pass but replaces its derivative in the backward pass with a smooth surrogate that avoids zeroing out gradients. We demonstrate that SUGAR, when paired with a well-chosen surrogate function, substantially enhances generalization performance over convolutional network architectures such as VGG-16 and ResNet-18, providing sparser activations while effectively resurrecting dead ReLUs. Moreover, we show that even in modern architectures like Conv2NeXt and Swin Transformer, which typically employ GELU, substituting these with SUGAR yields competitive and even slightly superior performance. These findings challenge the prevailing notion that advanced activation functions are necessary for optimal performance. Instead, they suggest that the conventional ReLU, particularly with appropriate gradient handling, can serve as a strong, versatile revived classic across a broad range of deep learning vision models.

Preprint. Under review.

1 Introduction

The choice of activation function in deep neural networks has a substantial effect on the convergence and performance of a model. Prior to the suggestion to apply the Rectified Linear Unit (ReLU) as an activation function in [7], concerted effort was made to tackle the saturation and vanishing-gradient problems of sigmoidal activations. ReLU has been shown to accelerate convergence and often also to enhance generalization. As a landmark moment in deep learning history, ReLU was used in AlexNet [19], which significantly outperformed its competitors on the ImageNet [3] benchmark; that work explicitly studied the superiority of ReLU over the hyperbolic tangent (tanh) in a deep convolutional network. Since then, ReLU has been used in numerous tasks including image classification and segmentation, reinforcement learning, natural language processing, and speech recognition [10, 32, 13, 36, 14, 39].
Mathematically, recent research has established a deep connection between tropical geometry and feedforward neural networks with ReLU activations [42], unraveling an underlying algebraic structure of ReLU: the tropical semiring, which replaces addition with the maximum operation and multiplication with addition. As a result, the decision boundaries formed by ReLU networks correspond to tropical hypersurfaces, which are fundamental objects in tropical geometry. Such neural networks are equivalent to tropical rational maps.
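As a concrete reading of the SUGAR mechanism summarized in the abstract (exact ReLU in the forward pass, smooth surrogate derivative in the backward pass), here is a minimal PyTorch sketch. It is our own illustration, not the authors' implementation: the choice of a sigmoid-derivative surrogate and its steepness are assumptions, since the excerpt does not specify them.

```python
import torch

class SUGARReLU(torch.autograd.Function):
    """Forward: exact ReLU. Backward: a smooth surrogate derivative instead of
    the Heaviside step, so gradients keep flowing through inactive units."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.relu(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Illustrative surrogate: derivative of a steep sigmoid. It is nonzero
        # for x < 0, which is what "resurrects" dead ReLUs during training.
        beta = 4.0  # assumed steepness; the excerpt does not fix this choice
        sig = torch.sigmoid(beta * x)
        return grad_output * beta * sig * (1.0 - sig)
```

A module can call SUGARReLU.apply(x) wherever torch.relu(x) would otherwise be used; because only the backward pass changes, inference behavior is identical to a standard ReLU.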