Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, and Sai Qian Zhang. Parameter-efficient fine-tuning for large models: A comprehensive survey. Transactions on Machine Learning Research, 2024.
Xuehai He, Chunyuan Li, Pengchuan Zhang, Jianwei Yang, and Xin Eric Wang. Parameter-efficient model adaptation for vision transformers. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 817–825, 2023.
Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Introducing EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. In IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, pages 204–207, 2018.
Christian Herglotz, Matthias Kränzler, Rubens Mons, and André Kaup. Beyond Bjøntegaard: Limits of video compression performance comparisons. In International Conference on Image Processing (ICIP), 2022.
Christian Herglotz, Hannah Och, Anna Meyer, Geetha Ramasubbu, Lena Eichermüller, Matthias Kränzler, Fabian Brand, Kristian Fischer, Dat Thanh Nguyen, Andy Regensky, and André Kaup. The Bjøntegaard bible: Why your way of comparing video codecs may be wrong. IEEE Transactions on Image Processing, 33:987–1001, 2024.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning, pages 2790–2799. PMLR, 2019.
Chia-Yi Hsu, Yu-Lin Tsai, Chih-Hsun Lin, Pin-Yu Chen, Chia-Mu Yu, and Chun-Ying Huang. Safe LoRA: The silver lining of reducing safety risks when finetuning large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
Berivan Isik, Hermann Kumbong, Wanyi Ning, Xiaozhe Yao, Sanmi Koyejo, and Ce Zhang. GPT-zip: Deep compression of finetuned large language models. In Workshop on Efficient Systems for Foundation Models @ ICML2023, 2023.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
Yiping Ji, Hemanth Saratchandran, Cameron Gordon, Zeyu Zhang, and Simon Lucey. Efficient learning with sine-activated low-rank matrices. In The Thirteenth International Conference on Learning Representations, 2025.
Yanfeng Jiang, Zelan Yang, Bohua Chen, Shen Li, Yong Li, and Tao Li. DeltaDQ: Ultra-high delta compression for fine-tuned LLMs via group-wise dropout and separate quantization, 2024.
Shibo Jie, Haoqing Wang, and Zhi-Hong Deng. Revisiting the parameter efficiency of adapters from the perspective of precision redundancy. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 17171–17180, 2023.
Rabeeh Karimi Mahabadi, James Henderson, and Sebastian Ruder. Compacter: Efficient low-rank hypercomplex adapter layers. In Advances in Neural Information Processing Systems, pages 1022–1035. Curran Associates, Inc., 2021.
Ayush Kaushal, Tejas Vaidhya, Arnab Kumar Mondal, Tejas Pandey, Aaryan Bhagat, and Irina Rish. Surprising effectiveness of pretraining ternary language model at scale. In
The Thirteenth International Conference on Learning Representations, 2025.
Soroush Abbasi Koohpayegani, Navaneet K L, Parsa Nooralinejad, Soheil Kolouri, and Hamed Pirsiavash. NOLA: Compressing LoRA using linear combination of random basis. In The Twelfth International Conference on Learning Representations, 2024.
Dawid Jan Kopiczko, Tijmen Blankevoort, and Yuki M Asano. VeRA: Vector-based random matrix adaptation. In The Twelfth International Conference on Learning Representations, 2024.
Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In 2013 IEEE International Conference on Computer Vision Workshops, pages 554–561, 2013.
Min Li, Zihao Huang, Lin Chen, Junxing Ren, Miao Jiang, Fengfa Li, Jitao Fu, and Chenghua Gao. Contemporary advances in neural network quantization: A survey. In 2024 International Joint Conference on Neural Networks (IJCNN), pages 1–10, 2024a.
Yinqiao Li, Linqi Song, and Hanxu Hou. LoRAN: Improved low-rank adaptation by a non-linear transformation. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 3134–3143, Miami, Florida, USA, 2024b. Association for Computational Linguistics.
James Liu, Guangxuan Xiao, Kai Li, Jason D. Lee, Song Han, Tri Dao, and Tianle Cai. BitDelta: Your fine-tune may only be worth one bit. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024a.
Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin, Pavlo Molchanov, Yu-Chiang Frank Wang, Kwang-Ting Cheng, and Min-Hung Chen. DoRA: Weight-decomposed low-rank adaptation. arXiv preprint arXiv:2402.09353, 2024b.
Zechun Liu, Changsheng Zhao, Igor Fedorov, Bilge Soran, Dhruv Choudhary, Raghuraman Krishnamoorthi, Vikas Chandra, Yuandong Tian, and Tijmen Blankevoort. SpinQuant: LLM quantization with learned rotations. In The Thirteenth International Conference on Learning Representations, 2025.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft, 2013.
Yuren Mao, Yuhang Ge, Yijiang Fan, Wenyi Xu, Yu Mi, Zhonghao Hu, and Yunjun Gao. A survey on LoRA of large language models. Frontiers of Computer Science, 19(7):197605, 2025.
Per-Gunnar Martinsson and Joel A Tropp. Randomized numerical linear algebra: Foundations and algorithms. Acta Numerica, 29:403–572, 2020.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? A new dataset for open book question answering. arXiv preprint arXiv:1809.02789, 2018.
Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722–729, 2008.
Omkar M. Parkhi, Andrea Vedaldi, Andrew Zisserman, and C. V. Jawahar. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
Bowen Ping, Shuo Wang, Hanqing Wang, Xu Han, Yuzhuang Xu, Yukun Yan, Yun Chen, Baobao Chang, Zhiyuan Liu, and Maosong Sun. Delta-CoMe: Training-free delta-compression with mixed-precision for large language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In
Proceedings of the 38th International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. In ECCV, 2016.
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22500–22510, 2023.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3):211–252, 2015.
Simo Ryu, Seunghyun Seo, and Jaejun Yoo. Efficient storage of fine-tuned models via low-rank approximation of weight residuals, 2023.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. Communications of the ACM, 64(9):99–106, 2021.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. SocialIQA: Commonsense reasoning about social interactions. arXiv preprint arXiv:1904.09728, 2019.
Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. UCF101: A dataset of 101 human actions classes from videos in the wild, 2012.
Daniel Steinberg. Kmeans1d: Globally optimal efficient 1D k-means, 2019. Accessed: 2025-02-18.
Danying Wang, Wenjie Zhu, Yingzhan Xu, Yiling Xu, and Le Yang. Visual quality optimization for view-dependent point cloud compression. In 2021 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1–5, 2021a.
Jianqiang Wang, Hao Zhu, Haojie Liu, and Zhan Ma. Lossy point cloud geometry compression via end-to-end learning. IEEE Transactions on Circuits and Systems for Video Technology, 31(12):4909–4923, 2021b.
Xiaolin Wu. Optimal quantization by matrix searching. Journal of Algorithms, 12(4):663–673, 1991.
Jianxiong Xiao, Krista A. Ehinger, James Hays, Antonio Torralba, and Aude Oliva. SUN Database: Exploring a Large Collection of Scene Categories. International Journal of Computer Vision, 119(1):3–22, 2016.
Yuhui Xu, Lingxi Xie, Xiaotao Gu, Xin Chen, Heng Chang, Hengheng Zhang, Zhengsu Chen, Xiaopeng Zhang, and Qi Tian. QA-LoRA: Quantization-aware low-rank adaptation of large language models. In The Twelfth International Conference on Learning Representations, 2024.
Xiaozhe Yao and Ana Klimovic. DeltaZip: Multi-tenant language model serving via delta compression. arXiv preprint arXiv:2312.05215, 2023.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830, 2019.
Frederic Z. Zhang, Paul Albert, Cristian Rodriguez-Opazo, Anton van den Hengel, and Ehsan Abbasnejad. Knowledge composition using task vectors with learned anisotropic scaling. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. A survey on model compression for large language models, 2024.
Yongshuo Zong, Ondrej Bohdal, Tingyang Yu, Yongxin Yang, and Timothy Hospedales. Safety fine-tuning at (almost) no cost: A baseline for vision large language models. In Forty-first International Conference on Machine Learning, 2024.
A Bjøntegaard Delta

Bjøntegaard Delta (BD) is a commonly applied evaluation for comparing video and image compression codecs Bjøntegaard (2001);
Herglotz et al. (2022, 2024), and has occasionally been applied to other modalities such as point cloud compression Wang et al. (2021a,b); Herglotz et al. (2024); Barman et al. (2022) or neural radiance field compression Ji et al. (2025). The metric involves evaluating two codecs at a set of rate and performance positions. These are interpolated, and the measure is evaluated as the integral between the interpolated curves. Figure 4 shows visually how the Bjøntegaard Delta is calculated. Standard metrics include BD-Rate (the change in rate at a given performance) and BD-PSNR (the gain in performance at a given rate). These extend without difficulty to evaluation metrics used for language models, such as average task accuracy (BD-Accuracy).

Large language model compression is an area of relatively recent interest, in contrast to more established modalities such as images and video Zhu et al. (2024); Hadish et al. (2024). As a result, the evaluation of compression changes has yet to be standardised: it is common to present model gains visually to show Pareto improvements, or to compare performance changes at an (approximately) equivalent parameter or performance position (e.g. '. . . performance with 25% fewer parameters'). This leads to difficulties where parameters are not directly comparable, or where algorithm performance varies across memory levels. In contrast, BD analysis accounts for small inconsistencies in parameters and for the natural rate-performance trade-off that occurs during compression. We suggest that BD analysis is applicable to evaluating compression performance in large language models, including delta compression.

A.1 Mathematical Description

Figure 4: Bjøntegaard Delta is calculated by taking (bitrate, performance) pairs for two codecs and evaluating the (horizontal or vertical) integral between the interpolated curves. Left: BD-Accuracy. Right: BD-Rate.

Given RD points (R_i, D_i), we interpolate the RD curves using a smooth function f(R). The BD-Rate is computed as:

\Delta R = A \int_{D_{min}}^{D_{max}} \left( f_2^{-1}(D) - f_1^{-1}(D) \right) dD \qquad (16)

Similarly, BD-Accuracy is defined over the inverse axis:

\Delta D = B \int_{R_{min}}^{R_{max}} \left( f_2(R) - f_1(R) \right) dR \qquad (17)

where A = 1/(D_{max} - D_{min}) and B = 1/(R_{max} - R_{min}). Typically, a cubic polynomial is used for f(\cdot); however, recent works have argued for Akima interpolation due to its increased stability Herglotz et al. (2022, 2024). For our evaluation we use the bjontegaard Python library, accessible at https://github.com/FAU-LMS/bjontegaard.
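For language-model evaluation, the two integrals can be computed directly from (bitrate, accuracy) pairs. The following is a minimal sketch of eqs. (16)-(17) using SciPy's Akima interpolator, in the spirit of (but not reproducing) the bjontegaard library referenced above; the function names, the log-domain rate convention, and the assumption of at least four monotonically increasing RD points are ours.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

def bd_accuracy(rates_a, acc_a, rates_b, acc_b, n=1000):
    """Eq. (17): mean vertical gap between the two interpolated
    rate-accuracy curves over their shared (log-)rate range."""
    la, lb = np.log10(rates_a), np.log10(rates_b)
    f1, f2 = Akima1DInterpolator(la, acc_a), Akima1DInterpolator(lb, acc_b)
    lo, hi = max(la.min(), lb.min()), min(la.max(), lb.max())
    r = np.linspace(lo, hi, n)
    return np.trapz(f2(r) - f1(r), r) / (hi - lo)

def bd_rate(rates_a, acc_a, rates_b, acc_b, n=1000):
    """Eq. (16): mean horizontal gap (average log-rate difference at
    equal accuracy), reported as a percentage rate change."""
    g1 = Akima1DInterpolator(acc_a, np.log10(rates_a))  # f1^{-1}
    g2 = Akima1DInterpolator(acc_b, np.log10(rates_b))  # f2^{-1}
    lo, hi = max(min(acc_a), min(acc_b)), min(max(acc_a), max(acc_b))
    d = np.linspace(lo, hi, n)
    avg = np.trapz(g2(d) - g1(d), d) / (hi - lo)
    return (10 ** avg - 1) * 100

```

Under this convention, a negative bd_rate indicates that the second codec needs fewer bits for the same accuracy.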
B Quantization Details

Choice of Quantization Function For our experiments we use k-means quantization. The choice of quantization function involves a number of design decisions Gholami et al. (2021), balancing computational limitations, hardware requirements, and the fidelity of quantization Gray and Neuhoff (1998); Gersho and Gray (1991). Quantization schemes often make a distributional assumption on the data. Uniform quantizers equally partition data between the minimum and maximum values of the data range. Gaussian quantizers calibrate according to the data mean and variance, and include popular LLM quantization methods such as BNB FP4 Dettmers et al. (2024). The half-quadratic quantization scheme of HQQ similarly makes an assumption on tensor distributions Badri and Shaji (2024). The advantage of these
explicit distributional schemes is that they are easy to standardise within a new data type, fast to compute, and require no additional codebook for dequantization. The disadvantage is that they perform poorly in the presence of outlier values or non-Gaussian distributions Ashkboos et al. (2024); Liu et al. (2025); Gersho and Gray (1991). In addition, work has shown that a k-means optimizer can lead to performance improvements over these methods in very low bits-per-parameter settings Gordon et al. (2023). As base models contain billions of parameters, and weight tensors follow approximately Gaussian distributions, the use of a non-optimal quantizer is a trade-off that is frequently employed in practice Dettmers et al. (2023). As our objective is to quantize low-rank adapters, which contain a far more manageable number of weights (28.3M for LLAMA-3-8B at rank 16), we choose a k-means quantizer for its theoretically optimal accuracy Gersho and Gray (1991); Gray and Neuhoff (1998). To upper-bound compute we use 10,000 sampled values per tensor for the k-means calculation. We use the kmeans1d package Steinberg (2019), which provides an efficient wrapper for a fast k-means solver that runs in O(kn + n log n) for n 1-D data points and k clusters, based on Wu (1991); Grønlund et al. (2018).

Table 5: Comparison between k-means and uniform quantization (vision tasks, 1-shot average).

Model                Rank  1-Bit  2-Bit  3-Bit  4-Bit  5-Bit  8-Bit  Full   Params
SineLoRA (Uniform)    2     1.8   21.3   60.9   73.7   77.1   77.1   77.0   123 K
SineLoRA (K-means)    2    63.5   68.1   74.2   76.3   76.9   77.0   77.0   123 K
SineLoRA (Uniform)    5     2.0   23.1   68.7   77.1   78.9   78.8   78.9   307 K
SineLoRA (K-means)    5    66.9   74.1   77.5   78.6   78.7   78.9   78.9   307 K
SineLoRA (Uniform)   10     1.9   18.7   72.9   78.7   79.8   79.8   79.8   614 K
SineLoRA (K-means)   10    68.7   76.3   78.8   79.4   79.6   79.8   79.8   614 K
SineLoRA (Uniform)   16     2.2   18.7   75.0   79.1   80.2   80.2   80.3   983 K
SineLoRA (K-means)   16    68.3   77.4   79.5   80.0   80.3   80.2   80.3   983 K

K-means clustering satisfies a property known as Lloyd optimality. A set of cluster centroids {c_1, c_2, ..., c_k} is Lloyd-optimal if it satisfies two conditions: 1) each centroid c_i is the mean of all data points in its cluster (the centroid condition); and 2) each point x is assigned to the cluster with the nearest centroid (the nearest-neighbor condition). Together, these conditions ensure that quantized values are allocated to the correct partition, with values that optimally represent the underlying distribution, minimizing the total quantization error Gersho and Gray (1991); Gray and Neuhoff (1998). Note that uniform and Gaussian quantization satisfy the second (nearest-neighbor) condition, but not necessarily the first (that quantized values are centroids). It is well known that 1-D k-means quantization satisfies both conditions, producing a Lloyd-optimal partition when appropriately converged Grønlund et al. (2018); Steinberg (2019). Table 5 provides a comparison between k-means quantization and uniform quantization on the 1-shot vision-language classification task. We can see that k-means substantially outperforms uniform quantization in the low-bit region.
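Concretely, the per-tensor procedure described above can be sketched as follows. This is a minimal illustration assuming the kmeans1d package's cluster(values, k) interface; the subsampling step, function names, and midpoint-boundary assignment are our additions, not the paper's released code.

```python
import numpy as np
import kmeans1d

def kmeans_quantize(weights, bits, max_samples=10_000, seed=0):
    """Quantize a weight tensor to 2**bits levels with 1-D k-means.
    Returns integer codes and the float centroid codebook."""
    flat = np.asarray(weights, dtype=np.float64).ravel()
    k = 2 ** bits
    # Fit centroids on a bounded subsample to cap compute (the paper
    # uses up to 10,000 sampled values per tensor).
    rng = np.random.default_rng(seed)
    sample = rng.choice(flat, size=min(max_samples, flat.size), replace=False)
    _, centroids = kmeans1d.cluster(sample.tolist(), k)
    centroids = np.asarray(centroids)  # assumed ascending from the 1-D solver
    # Nearest-centroid assignment via midpoint decision boundaries.
    bounds = (centroids[1:] + centroids[:-1]) / 2
    codes = np.searchsorted(bounds, flat)
    return codes.reshape(np.shape(weights)), centroids

def dequantize(codes, centroids):
    # Each weight is replaced by its cluster centroid.
    return centroids[codes]

```

At b bits per parameter, each weight then stores only its b-bit code plus a shared per-tensor codebook of 2^b floating-point centroids.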
Consistent with theory, this gap narrows at higher bit-rates, with equivalent performance at 8 bits. Some individual evaluations at 5 bits show slightly improved performance despite lower quantization error, due to the downstream classification evaluation.

C Additional Stable Diffusion Results

C.1 Qualitative Results

Figure 5: Dreambooth Stable Diffusion comparison (image grids of LoRA vs. SineLoRA at 1-, 2-, 3-, and 5-bit quantization). Top: prompt "A stuffed animal floating in an ocean of milk" for the category bear plushie. Bottom: prompt "A wet bowl" for the category berry bowl. We find that SineLoRA exhibits greater consistency with target images (left) than LoRA.

C.2 Category-Level Comparison

Figure 6: Category-level comparison (% difference between SineLoRA and LoRA per quantization level) for Stable Diffusion on the DINO metric. DINO measures similarity between generated images and DINO embeddings of the target image Caron et al. (2021). Despite category-level variation in performance, SineLoRA strongly outperforms LoRA in most categories and quantization levels.

Figure 7: Category-level comparison (% difference between SineLoRA and LoRA) for Stable Diffusion on the CLIP-I metric. CLIP-I measures similarity between generated images and CLIP embeddings of the target image Radford et al. (2021). Despite category-level variation in performance, SineLoRA strongly outperforms LoRA in most categories and quantization levels.

Figure 8: Category-level comparison (% difference between SineLoRA and LoRA) for the CLIP-T metric. CLIP-T measures the similarity between generated images and the text prompt. We find similar performance between SineLoRA and LoRA, with each model ahead in individual categories. Note that as CLIP-I and DINO measure image consistency with the target image, they are preferred over CLIP-T for evaluating Dreambooth performance Ruiz et al. (2023).

D Additional Vision-Language Classification Results

Figure 9: Performance on CLIP fine-tuning (averaged over 11 tasks) against bits-per-parameter, for ranks 2, 5, and 16 (DoRA, LoRA, SineLoRA). Top: 1-shot evaluation. Bottom: 16-shot evaluation. Consistent with Ji et al.
(2025), the improvement for SineLoRA is most pronounced for low-rank adapters, with performance observed to converge at higher ranks and quantization levels.
E Additional LLAMA-3 Results

E.1 Non-Quantized Base Model

Table 6: LLAMA-3-8B performance on Commonsense Reasoning evaluated for ranks (1, 2, 4, 8, 16) and PTQ bit-widths (2, 3, 5). Memory in KB. Training conducted with a non-quantized base model; PTQ applied to model adapters. Results are the fraction (correct / total) for each task.

Rank  Model     Bits  ARC-C  ARC-E  BoolQ  HS     OBQA   PiQA   SQA    WG     Avg.   Mem.
1     DoRA      2     0.065  0.092  0.354  0.041  0.118  0.164  0.087  0.330  0.156  758
1     LoRA      2     0.689  0.851  0.629  0.613  0.722  0.824  0.641  0.607  0.697  569
1     SineLoRA  2     0.724  0.885  0.625  0.556  0.708  0.817  0.676  0.609  0.700  573
1     DoRA      3     0.096  0.161  0.090  0.054  0.166  0.112  0.200  0.324  0.150  1096
1     LoRA      3     0.730  0.891  0.647  0.486  0.740  0.823  0.670  0.615  0.700  793
1     SineLoRA  3     0.743  0.889  0.664  0.468  0.748  0.823  0.672  0.631  0.705  794
1     DoRA      5     0.735  0.886  0.610  0.205  0.732  0.835  0.667  0.630  0.663  1770
1     LoRA      5     0.731  0.890  0.643  0.427  0.748  0.826  0.671  0.613  0.694  1245
1     SineLoRA  5     0.745  0.891  0.673  0.422  0.744  0.826  0.654  0.625  0.698  1246
2     DoRA      2     0.037  0.063  0.383  0.046  0.094  0.131  0.113  0.405  0.159  1303
2     LoRA      2     0.723  0.885  0.622  0.611  0.734  0.830  0.638  0.639  0.710  1107
2     SineLoRA  2     0.761  0.893  0.645  0.683  0.762  0.840  0.682  0.627  0.737  1125
2     DoRA      3     0.250  0.393  0.145  0.058  0.326  0.126  0.210  0.394  0.238  1852
2     LoRA      3     0.747  0.893  0.621  0.659  0.762  0.832  0.688  0.646  0.731  1547
2     SineLoRA  3     0.765  0.892  0.653  0.680  0.774  0.832  0.701  0.658  0.744  1561
2     DoRA      5     0.756  0.888  0.621  0.457  0.764  0.831  0.686  0.651  0.707  2922
2     LoRA      5     0.752  0.894  0.622  0.648  0.762  0.837  0.690  0.646  0.731  2392
2     SineLoRA  5     0.765  0.895  0.639  0.647  0.790  0.840  0.705  0.672  0.744  2404
4     DoRA      2     0.106  0.117  0.184  0.029  0.138  0.137  0.062  0.412  0.148  2385
4     LoRA      2     0.758  0.896  0.652  0.708  0.766  0.840  0.692  0.661  0.747  2182
4     SineLoRA  2     0.769  0.898  0.622  0.741  0.774  0.836  0.700  0.670  0.751  2225
4     DoRA      3     0.409  0.581  0.616  0.036  0.382  0.466  0.396  0.491  0.422  3354
4     LoRA      3     0.764  0.894  0.675  0.703  0.786  0.840  0.702  0.677  0.755  3047
4     SineLoRA  3     0.770  0.897  0.622  0.744  0.802  0.838  0.721  0.681  0.759  3089
4     DoRA      5     0.765  0.892  0.622  0.604  0.784  0.840  0.707  0.669  0.735  5222
4     LoRA      5     0.770  0.895  0.677  0.695  0.790  0.842  0.702  0.676  0.756  4686
4     SineLoRA  5     0.777  0.899  0.622  0.741  0.798  0.838  0.724  0.686  0.761  4720
8     DoRA      2     0.052  0.057  0.418  0.058  0.068  0.213  0.049  0.405  0.165  4521
8     LoRA      2     0.773  0.900  0.624  0.753  0.774  0.841  0.695  0.659  0.752  4316
8     SineLoRA  2     0.782  0.900  0.618  0.779  0.792  0.843  0.714  0.680  0.764  4422
8     DoRA      3     0.352  0.489  0.509  0.222  0.360  0.417  0.320  0.502  0.396  6344
8     LoRA      3     0.784  0.901  0.623  0.761  0.812  0.845  0.713  0.681  0.765  6028
8     SineLoRA  3     0.796  0.902  0.676  0.792  0.784  0.843  0.723  0.700  0.777  6144
8     DoRA      5     0.784  0.894  0.655  0.596  0.794  0.835  0.713  0.689  0.745  9797
8     LoRA      5     0.786  0.903  0.622  0.762  0.814  0.847  0.718  0.684  0.767  9275
8     SineLoRA  5     0.799  0.904  0.698  0.787  0.792  0.843  0.727  0.699  0.781  9340
16    DoRA      2     0.092  0.114  0.328  0.081  0.108  0.276  0.062  0.394  0.182  8789
16    LoRA      2     0.786  0.902  0.672  0.801  0.790  0.845  0.705  0.686  0.773  8582
16    SineLoRA  2     0.782  0.902  0.622  0.839  0.804  0.841  0.724  0.716  0.779  8788
16    DoRA      3     0.505  0.721  0.615  0.295  0.572  0.719  0.498  0.500  0.553  12347
16    LoRA      3     0.796  0.901  0.692  0.806  0.818  0.850  0.713  0.699  0.784  12024
16    SineLoRA  3     0.789  0.903  0.622  0.843  0.820  0.847  0.729  0.736  0.786  12222
16    DoRA      5     0.788  0.904  0.701  0.768  0.806  0.850  0.710  0.691  0.777  18964
16    LoRA      5     0.793  0.902  0.698  0.808  0.818  0.851  0.720  0.701  0.786  18426
16    SineLoRA  5     0.791  0.905  0.622  0.837  0.830  0.843  0.736  0.739  0.788  18537

E.2 Quantized Base Model

Table 7: LLAMA-3-8B performance on Commonsense Reasoning evaluated for ranks (1, 2, 4, 8, 16) and PTQ bit-widths (2, 3, 5). Memory in KB. Training conducted with a 4-bit FP quantized base model (QLoRA, QDoRA, QSineLoRA); PTQ applied to model adapters. Results are the fraction (correct / total) for each task.

Rank  Model      Bits  ARC-C  ARC-E  BoolQ  HS     OBQA   PiQA   SQA    WG     Avg.   Mem.
1     QDoRA      2     0.061  0.088  0.269  0.035  0.110  0.149  0.066  0.301  0.135  758
1     QLoRA      2     0.656  0.834  0.622  0.591  0.678  0.820  0.568  0.597  0.671  569
1     QSineLoRA  2     0.648  0.825  0.655  0.478  0.674  0.532  0.582  0.613  0.626  573
1     QDoRA      3     0.102  0.138  0.281  0.088  0.158  0.157  0.142  0.369  0.179  1096
1     QLoRA      3     0.722  0.876  0.629  0.390  0.732  0.830  0.669  0.615  0.683  793
1     QSineLoRA  3     0.729  0.876  0.660  0.405  0.738  0.675  0.632  0.638  0.669  795
1     QDoRA      5     0.735  0.884  0.623  0.271  0.732  0.828  0.671  0.632  0.672  1771
1     QLoRA      5     0.729  0.885  0.640  0.355  0.748  0.826  0.671  0.626  0.685  1244
1     QSineLoRA  5     0.724  0.878  0.650  0.385  0.742  0.545  0.639  0.630  0.649  1245
2     QDoRA      2     0.020  0.040  0.137  0.038  0.058  0.142  0.040  0.393  0.109  1302
2     QLoRA      2     0.706  0.872  0.622  0.524  0.718  0.816  0.612  0.627  0.687  1107
2     QSineLoRA  2     0.743  0.896  0.622  0.701  0.770  0.828  0.688  0.642  0.736  1126
2     QDoRA      3     0.225  0.343  0.589  0.025  0.286  0.018  0.188  0.470  0.268  1854
2     QLoRA      3     0.744  0.894  0.622  0.541  0.756  0.834  0.681  0.648  0.715  1547
2     QSineLoRA  3     0.754  0.896  0.654  0.646  0.774  0.826  0.695  0.660  0.738  1561
2     QDoRA      5     0.758  0.891  0.605  0.615  0.744  0.830  0.685  0.655  0.723  2922
2     QLoRA      5     0.753  0.894  0.622  0.521  0.756  0.835  0.691  0.648  0.715  2393
2     QSineLoRA  5     0.757  0.894  0.667  0.627  0.776  0.827  0.697  0.668  0.739  2405
4     QDoRA      2     0.075  0.111  0.262  0.046  0.142  0.091  0.132  0.439  0.162  2386
4     QLoRA      2     0.749  0.892  0.622  0.728  0.776  0.836  0.690  0.650  0.743  2184
4     QSineLoRA  2     0.772  0.901  0.681  0.751  0.786  0.839  0.704  0.662  0.762  2224
4     QDoRA      3     0.358  0.491  0.544  0.082  0.358  0.381  0.380  0.493  0.386  3353
4     QLoRA      3     0.770  0.894  0.626  0.739  0.792  0.845  0.700  0.680  0.756  3044
4     QSineLoRA  3     0.786  0.904  0.682  0.731  0.798  0.850  0.715  0.681  0.768  3088
4     QDoRA      5     0.772  0.892  0.622  0.580  0.776  0.839  0.694  0.661  0.730  5218
4     QLoRA      5     0.776  0.896  0.639  0.735  0.802  0.845  0.706  0.676  0.759  4687
4     QSineLoRA  5     0.786  0.904  0.680  0.718  0.802  0.851  0.721  0.679  0.768  4719
8     QDoRA      2     0.064  0.090  0.335  0.043  0.112  0.190  0.063  0.381  0.160  4525
8     QLoRA      2     0.770  0.896  0.631  0.755  0.780  0.838  0.693  0.659  0.753  4316
8     QSineLoRA  2     0.784  0.909  0.681  0.766  0.796  0.850  0.707  0.678  0.771  4423
8     QDoRA      3     0.536  0.719  0.460  0.091  0.548  0.379  0.466  0.484  0.460  6345
8     QLoRA      3     0.776  0.898  0.652  0.770  0.798  0.845  0.705  0.684  0.766  6029
8     QSineLoRA  3     0.798  0.908  0.700  0.743  0.800  0.849  0.718  0.689  0.776  6144
8     QDoRA      5     0.771  0.897  0.674  0.618  0.788  0.846  0.702  0.684  0.748  9807
8     QLoRA      5     0.778  0.898  0.634  0.766  0.800  0.848  0.708  0.677  0.764  9275
8     QSineLoRA  5     0.797  0.908  0.708  0.725  0.800  0.849  0.717  0.689  0.774  9343
16    QDoRA      2     0.038  0.053  0.466  0.034  0.044  0.280  0.066  0.483  0.183  8792
16    QLoRA      2     0.783  0.897  0.624  0.793  0.802  0.845  0.716  0.682  0.768  8584
16    QSineLoRA  2     0.788  0.908  0.623  0.791  0.824  0.857  0.714  0.710  0.777  8798
16    QDoRA      3     0.449  0.664  0.591  0.291  0.562  0.658  0.368  0.513  0.512  12350
16    QLoRA      3     0.799  0.901  0.645  0.791  0.802  0.850  0.727  0.699  0.777  12015
16    QSineLoRA  3
Co-Saving: Resource Aware Multi-Agent Collaboration for Software Development

Rennai Qiu⋆†, Chen Qian♣†, Ran Li⋆, Yufan Dang⋆, Weize Chen⋆, Cheng Yang♠, Yingli Zhang♢, Ye Tian♡, Xuantang Xiong♡, Lei Han♡, Zhiyuan Liu⋆B, Maosong Sun⋆B
⋆Tsinghua University  ♣Shanghai Jiao Tong University  ♠Beijing University of Posts and Telecommunications  ♢Siemens  ♡Tencent Robotics X
qrn22@mails.tsinghua.edu.cn  qianc@sjtu.edu.cn  liuzy@tsinghua.edu.cn  sms@tsinghua.edu.cn

Abstract

Recent advancements in Large Language Models (LLMs) and autonomous agents have demonstrated remarkable capabilities across various domains. However, standalone agents frequently encounter limitations when handling complex tasks that demand extensive interactions and substantial computational resources. Although Multi-Agent Systems (MAS) alleviate some of these limitations through collaborative mechanisms such as task decomposition, iterative communication, and role specialization, they typically remain resource-unaware, incurring significant inefficiencies due to high token consumption and excessive execution time. To address these limitations, we propose a resource-aware multi-agent system, Co-Saving (meaning that multiple agents collaboratively engage in resource-saving activities), which leverages experiential knowledge to enhance operational efficiency and solution quality. Our key innovation is the introduction of "shortcuts": instructional transitions learned from historically successful trajectories, which allow the system to bypass redundant reasoning agents and expedite the collective problem-solving process. Experiments on software development tasks demonstrate significant advantages over existing methods. Specifically, compared to the state-of-the-art MAS ChatDev, our method achieves an average reduction of 50.85% in token usage and improves overall code quality by 10.06%.

1 Introduction

In recent years, Large Language Models (LLMs) have achieved remarkable success in various domains, including text generation, code synthesis, and long-context comprehension [1, 2, 3]. However, the inherent limitations of standalone LLMs become apparent when they confront complex tasks that extend beyond conversational interactions, often exhibiting behaviors that are not sufficiently robust or adaptive [4]. Recent research in autonomous agents has extended LLMs with features such as contextual memory [5], multi-step planning [6], and utilization of external tools [7]. Although these enhanced agents represent a significant leap forward, the increasing complexity of many challenges often surpasses the capabilities of any single agent. This necessitates a further evolution towards collaborative approaches, providing strong motivation for the development of MAS.

† Equal Contribution. B Corresponding Author. Preprint. Under review.

MAS collaborate through mechanisms such as role assignment, task decomposition, and iterative communication [5, 8, 9], forming a chat chain between agents and thus achieving sophisticated goals that would be intractable for a single agent. MAS offer clear advantages: superior modularity, allowing for specialized agent roles; enhanced scalability, enabling the distribution of tasks across numerous agents; and increased robustness, providing resilience through redundancy and collective problem solving.
These benefits have led to notable advancements in complex scenarios such as collaborative software development [10, 11, 9], graphical user interface (GUI) automation [12], social simulation [5, 13, 14], game playing [15, 16, 17, 18], and scientific research [19, 20].

Figure 1: A schematic representation of the executing process, including the reference chain and the inference chain. The reference chain is based on historically excellent trajectories, while the inference chain is the execution process of the current task.

However, MAS are often unaware [21] of resource usage, incurring substantial token
consumption and excessive time costs, which directly reduce system efficiency. As the scale of tasks expands and the number of participating agents increases, the frequency and complexity of agent interactions grow correspondingly, exacerbating operational overhead. Effectively managing and reducing this overhead, while simultaneously enhancing resource efficiency, therefore becomes imperative for MAS.

To address these limitations, we propose Co-Saving, a resource-aware multi-agent collaboration framework that leverages experiential knowledge to enhance both operational efficiency and solution quality. Our key innovation lies in introducing the concept of "shortcuts": instructional transitions mined from historically successful trajectories. A shortcut serves as a learned "fast track", enabling agents to effectively bypass redundant reasoning agents and accelerate the problem-solving process, particularly in familiar task contexts. As the interaction process of agents in the MAS proceeds, a chat chain is formed, where nodes correspond to the solutions generated by agents and edges represent the instructions exchanged during the interaction. To fully utilize shortcuts in advancing the current task, we design a comprehensive evaluation of shortcuts covering both effectiveness and efficiency, and implement shortcut filtering accordingly, shown schematically in Figure 1. A forced termination mechanism is also integrated to prevent resource exhaustion.

Experiments are conducted on the SRDD dataset [9] for software development tasks. Compared to baselines including a single-agent framework (e.g., GPT-Engineer [11]) and existing multi-agent systems (e.g., MetaGPT [22], ChatDev [9]), our method achieves higher quality, evaluated following co-learning [23], at lower cost. Specifically, compared to ChatDev, Co-Saving achieves an average reduction of 50.85% in token consumption, along with a 10.06% improvement in overall code quality.

2 Method

In task-solving scenarios, particularly when addressing newly assigned tasks, it is often challenging to accurately estimate their inherent complexity or the resources required for successful completion by a multi-agent system (e.g., time, token consumption). To enhance the monitoring and management of task progress, we propose a strategy that retrieves reference tasks from historical records. These reference tasks function as a form of memory, guiding the agent in its current task execution.

To leverage these references effectively, experiential knowledge is extracted from a repository of past tasks and integrated into the task-solving process. However, not all prior experiences are directly transferable or beneficial to the task at hand. Consequently, a critical step in this strategy is the evaluation and selection of relevant experiences, aimed at optimizing task execution efficiency.

Figure 2: Overview of the reference chain and inference chain, representing the shortcut filtering process. Once selected, evaluated, and applied, shortcuts guide the current task to completion in multiple steps.

2.1 Shortcut Formalization

We introduce a type of instruction, termed a "shortcut", which connects two nodes within a reasoning chain by bypassing certain intermediate reasoning steps. This design aims to reduce the overall length of the reasoning chain, thereby enhancing reasoning efficiency.
Figure 2 shows an illustration of shortcut filtering. To validate the effectiveness of this approach, it is essential to conduct a comprehensive
and quantitative evaluation of the shortcut mechanism. To enable a more rigorous representation and analysis of the multi-agent collaboration process, we abstract each complete task execution as a directed graph. During the interaction, an instructor issues a series of instructions I = {i_1, i_2, ..., i_n}, and an assistant generates corresponding solutions as responses. Each instruction includes comments or feedback on the preceding solution, while each solution represents a complete software code snippet. Accordingly, the entire collaboration process can be formally represented by a directed graph G = (N, E), as defined below:

N = \{ n_j \mid j = 0, 1, \dots, n \}, \quad E = \{ (n_j, i_{j+1}, n_{j+1}) \mid n_j, n_{j+1} \in N,\ i_{j+1} \in I \} \qquad (1)

Here, N denotes the set of nodes, each corresponding to a solution state, with n_0 representing the initial state (typically an empty solution). E denotes the set of directed edges, where each edge represents an instruction guiding the transition from one solution s_j to its subsequent modification s_{j+1}, based on the instruction i_{j+1}.

To enhance task completion efficiency, we aim for agents to achieve equivalent outcomes with fewer reasoning steps. For example, a solution that originally evolves through two steps (from s_0 to s_1 to s_2) could be optimized into a single-step transition (from s_0 directly to s_2). To this end, we introduce the concept of a shortcut, which is also modeled as a directed edge in graph G. A shortcut connects two non-adjacent nodes, always pointing forward in the interaction sequence, effectively bypassing intermediate reasoning steps while preserving the correctness of the final solution. Let S denote the set of all shortcuts, formally defined as:

S = \{ (n_i, n_j) \mid n_i, n_j \in N,\ i < j \} \qquad (2)

We extract shortcuts from all tasks in the training set and store them in the form of instructions, serving as experiential knowledge. Subsequently, these shortcuts are incorporated into the agent's memory, allowing the agent to leverage prior experiences to enhance task-solving performance.

2.2 Shortcut Filtering

Not all extracted shortcuts are effective and efficient in improving solution generation or reducing resource consumption for a given task. Therefore, evaluating and selecting appropriate shortcuts is essential. We heuristically score shortcuts across multiple dimensions and ultimately derive a comprehensive metric to assess their overall utility.

Throughout task execution, we continuously monitor current resource consumption, including time and token usage. When considering a shortcut, agents are guided to refer to its content and provide feedback accordingly, facilitating the optimization of candidate solutions. Shortcuts whose estimated resource consumption exceeds the remaining available resources are discarded; only the feasible subset of shortcuts is retained for further evaluation. This selection process can be formalized as:

S \leftarrow \{ s \mid s \in S,\ t_s < t_r,\ \tau_s < \tau_r \} \qquad (3)

where t_s and \tau_s denote the time and tokens required to generate the shortcut, respectively, while t_r and \tau_r represent the currently remaining time and tokens.
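To make the formalization concrete, the following is a minimal sketch of the shortcut representation and the feasibility filter of eq. (3). The names (Shortcut, filter_feasible) and the idea of carrying per-shortcut cost estimates as fields are our illustrative assumptions, not the paper's released implementation.

```python
from dataclasses import dataclass

@dataclass
class Shortcut:
    src: int           # index of source node n_i
    dst: int           # index of target node n_j (forward edge: dst > src)
    instruction: str   # stored instruction describing the transition
    est_time: float    # estimated time cost t_s of taking the shortcut
    est_tokens: int    # estimated token cost tau_s of taking the shortcut

def filter_feasible(shortcuts, time_left, tokens_left):
    """Eq. (3): discard shortcuts whose estimated cost exceeds the
    remaining time and token budget; keep only the feasible subset."""
    return [s for s in shortcuts
            if s.est_time < time_left and s.est_tokens < tokens_left]

```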
Value The contribution of a shortcut is primarily reflected in the transition it
facilitates between two solutions, specifically the transition from one node to another in the solution graph. For a given solution s_j located at node n_j, we define its score as:

w(n_j) = \mathrm{sim}(n_j, \mathrm{task}) \times \mathrm{sim}(n_j, s_{|N|}) \times [\![ s_j ]\!] \qquad (4)

Here, s_{|N|} denotes the solution at the final node in the graph, representing the ultimate goal, and task refers to the original software development requirement expressed in natural language. The two similarity terms are computed as the cosine similarity between the embedding vectors of the corresponding texts or code. The indicator function [\![ \cdot ]\!] is binary: it equals 1 if the code corresponding to s_j can be successfully compiled using an external compiler, and 0 otherwise. Based on this node-level score, the value of a shortcut (n_i, n_j) is defined as:

v(n_i, n_j) = w(n_j) - w(n_i) \qquad (5)

This value quantifies the incremental benefit that the shortcut brings to the software development process by enabling a more effective and efficient transition between solutions.

Cost In multi-agent task-solving, the primary cost components are of two distinct types: time and tokens. These represent different dimensions of resource consumption and exhibit distinct distribution patterns within the dataset. To enable a unified evaluation, we normalize their raw values into percentile ranks based on their empirical distributions in the dataset, and integrate the normalized values into a composite metric referred to as cost. For a given shortcut s_0, let its normalized time and token consumption be t_0 and \tau_0, respectively. Denote by T the set of normalized time values for all shortcuts S in the dataset, and by \mathcal{T} the corresponding set of normalized token values. We define the relative rankings of s_0 in terms of time and tokens as:

\alpha = \frac{|\{ t \in T \mid t < t_0 \}|}{|S|}, \qquad \beta = \frac{|\{ \tau \in \mathcal{T} \mid \tau < \tau_0 \}|}{|S|} \qquad (6)

The composite cost is then computed as the harmonic mean of \alpha and \beta:

C = F_\gamma(\alpha, \beta) = \frac{2\alpha\beta}{\alpha + \beta} \qquad (7)

This formulation balances the trade-off between time and token efficiency, where \gamma is the emergency factor introduced below.

Emergency Factor The value and cost metrics represent two distinct dimensions in evaluating task execution: value reflects the improvement in solution quality, while cost measures the efficiency of task completion. At different stages of task execution, the relative importance of these two aspects may vary. For instance, during the early stages, when resources are still abundant, the primary focus is typically on achieving high-quality solutions. Conversely, as resources approach depletion, the emphasis shifts toward completing the task promptly and within budget. To accommodate these dynamic shifts in priority, we introduce the emergency factor \gamma, which regulates the relative weighting of value and cost throughout task execution. Unlike value and cost, which are determined solely by the characteristics of the shortcuts and the dataset, the emergency factor is explicitly linked to the user-defined resource budget, rendering it inherently dynamic and adaptive. Let t and \tau denote the allocated budgets for time and tokens, respectively, and let t_u and \tau_u represent the corresponding amounts consumed thus far. The emergency factor \gamma is then defined as:

\gamma_t := \frac{t_u}{t}, \qquad \gamma_\tau := \frac{\tau_u}{\tau}, \qquad \gamma = F_1(\gamma_t, \gamma_\tau) = \frac{2\gamma_t\gamma_\tau}{\gamma_t + \gamma_\tau} \qquad (8)
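The scoring pipeline of eqs. (4)-(8) can be sketched compactly as below. This is a minimal illustration under stated assumptions: the embeddings are assumed to come from a model such as text-embedding-ada-002, the compiles flag stands in for the external compiler check, and all function names are ours rather than the paper's implementation.

```python
import numpy as np

def cosine(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def node_score(sol_emb, task_emb, final_emb, compiles):
    # Eq. (4): similarity to the requirement times similarity to the
    # final solution, gated by the binary compilability indicator.
    return cosine(sol_emb, task_emb) * cosine(sol_emb, final_emb) * (1.0 if compiles else 0.0)

def shortcut_value(w_src, w_dst):
    # Eq. (5): incremental benefit of jumping from n_i to n_j.
    return w_dst - w_src

def shortcut_cost(t0, tau0, all_times, all_tokens):
    # Eqs. (6)-(7): percentile ranks of the shortcut's normalized time
    # and token consumption, combined via a harmonic mean.
    n = len(all_times)
    alpha = sum(t < t0 for t in all_times) / n
    beta = sum(tau < tau0 for tau in all_tokens) / n
    return 2 * alpha * beta / (alpha + beta) if alpha + beta > 0 else 0.0

def emergency_factor(t_used, t_budget, tok_used, tok_budget):
    # Eq. (8): harmonic mean of the fractions of each budget consumed;
    # it grows toward 1 as resources approach depletion.
    gt, gtau = t_used / t_budget, tok_used / tok_budget
    return 2 * gt * gtau / (gt + gtau) if gt + gtau > 0 else 0.0

```

In this sketch, the harmonic mean stays low unless both of its components are high, matching the balancing role the cost and emergency factor play in the description above.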
3 Experiments

Baselines To evaluate the effectiveness of
our method, we select a diverse set of representative LLM-driven software engineering methods and pure LLMs to facilitate a comprehensive multidimensional comparison:

• GPT-3.5-Turbo [24], GPT-4 [25], and LLaMA 3 70B [26] are widely adopted foundation models that serve as baselines for pure LLM performance, covering a range of capabilities from efficient instruction-following to strong multimodal reasoning and open-source adaptability.
• GPT-Engineer [11] exemplifies a single-agent approach and serves as a foundational framework in this domain. Its key strength lies in its ability to interpret natural language requirements and autonomously perform development tasks such as code generation and execution through single-step reasoning.
• ReAct [27] integrates reasoning and acting within LLMs by jointly generating reasoning traces and environment-interacting actions. Unlike approaches that separate thought and execution, ReAct enables LLMs to iteratively refine their understanding and update the environment through interleaved reasoning and action steps.
• MetaGPT [22] adopts a MAS design, introducing a novel role-assignment mechanism where agents are assigned specific responsibilities. These agents collaborate through a standardized communication protocol to accomplish software engineering tasks.
• ChatDev [9] presents a comprehensive multi-agent collaboration framework that decomposes the software development lifecycle into distinct phases, including demand analysis, code implementation, code review, and system testing. Within this framework, agents engage in multi-turn dialogues to iteratively propose instructions and solutions, thereby enhancing the quality and robustness of the software development process.

Datasets We use a subset of the SRDD [9] as our experimental corpus, containing diverse software development requirements. The dataset is organized into five primary categories (Education, Work, Life, Game, and Creation) and further divided into 40 fine-grained subcategories. We partition it into a training set for shortcut extraction and a test set for evaluation and data collection.

Metrics Our primary research objective is to enhance both the quality and efficiency of task completion in MAS, using software development as the application context. Accordingly, we evaluate task outcomes, specifically code generation, along two key dimensions: quality and efficiency. For quality assessment, we adopt a comprehensive evaluation framework inspired by co-learning [23], which integrates multiple dimensions into a unified metric for holistic evaluation. Efficiency is measured by the Budgeted Completion Rate (BCR), defined as the proportion of tasks completed within the specified resource constraints.

• Completeness: Measures whether the generated code provides a structurally complete implementation of the software requirement. It is quantified as the proportion of source files that do not contain placeholders such as "TODO".
• Executability: Assesses the ability of the generated software to compile and run successfully in a real operating system environment. It is calculated as the ratio of programs that compile and execute without errors.
• Consistency: Evaluates the semantic alignment between the generated code and the original natural language requirement, computed as the cosine similarity between their respective embedding vectors.
• Granularity: Assesses the level of detail in the generated code.
Given the inherent challenges in objectively quantifying code granularity and completeness, especially across tasks of varying complexity, we adopt the average number of lines of code per task as a
practical proxy. A higher value indicates greater code detail.
• Quality: A comprehensive metric obtained by integrating completeness, executability, consistency, and granularity. Specifically, it is defined as the product of these four metrics, serving as an overall indicator of code quality.
• Budgeted Completion Rate (BCR): Measures the proportion of tasks completed within the predefined resource budget (time and tokens). It reflects resource efficiency without considering the quality of the generated solution; thus, even low-quality code produced quickly is counted under this metric.

Implementation Details The software development process is divided into multiple phases, including demand analysis, language selection, code completion, code review, and system testing. Our work primarily focuses on the phases directly related to code generation. For these tasks, we adopt GPT-3.5-Turbo as the base model. For node evaluation, metric consistency computation, and reference task retrieval, we employ text-embedding-ada-002 as the semantic embedder, due to its strong performance on both textual and code-related embeddings. Python 3.9.19 serves as the external feedback environment, enabling compilation, execution, and assessment of generated code. Throughout the experiments, we monitor agent interactions and implicitly construct the interaction graph. The number of edges in the graph corresponds to the number of interaction rounds. To prevent excessive interactions, once the current interaction graph reaches or exceeds the number of edges in the reference task graph, we forcibly terminate the task.

Table 1: Overall performance of the selected baselines and our Co-Saving. The highest scores are formatted in bold and the second-highest scores are underlined.

Method         Completeness  Executability  Consistency  Granularity  Quality  BCR
GPT-3.5-Turbo  0.9200        0.8600         0.8076       0.2882       0.1842   1.0000
GPT-4          0.9800        0.8800         0.8053       0.3036       0.2109   0.6200
LLaMA 3 70B    0.9000        0.7600         0.8032       0.4333       0.2058   1.0000
GPT-Engineer   0.5200        0.5680         0.7863       0.3379       0.0785   1.0000
ReAct          0.9600        1.0000         0.8033       0.2548       0.1965   1.0000
MetaGPT        0.7040        0.1120         0.7731       0.9721       0.0593   0.2720
ChatDev        0.9040        0.3680         0.7897       0.5746       0.1510   0.0160
Co-Saving      0.8160        0.6880         0.8034       0.5743       0.2515   0.7280

3.1 Overall Performance

As shown in Table 1, our proposed approach (denoted as Co-Saving¹) significantly outperforms all baselines in terms of Quality and surpasses the other multi-agent baselines in BCR. These results indicate that Co-Saving effectively accelerates the reasoning trajectory toward generating high-quality solutions.

As single-agent frameworks, GPT-Engineer and ReAct typically do not decompose or subdivide tasks based on user instructions. Instead, they perform code generation through a one-shot reasoning process. Consequently, they exhibit low execution time and resource consumption. The same observation holds for pure LLM-based paradigms. However, for more complex software development tasks, these approaches often fail to produce functionally complete code. In many cases, they define interfaces or modules related to complex requirements but leave them partially or entirely unimplemented. This limitation artificially inflates the Executability metric, as syntactically correct but semantically incomplete code can still compile and run. Such shortcomings are reflected in the relatively low Granularity scores, which indicate insufficient implementation detail.
In contrast, ChatDev adopts a multi-stage reasoning paradigm that iteratively refines solutions, leading to more complete implementations. However, this iterative process incurs higher resource consumption, resulting in a lower BCR. MetaGPT achieves a BCR between GPT-Engineer and ChatDev. It leverages multi-agent collaboration
through role-based coordination to perform multi-step reasoning, but still struggles to generate logically coherent code for complex tasks, leading to a relatively lower Executability score.

For the Completeness metric, ChatDev slightly outperforms Co-Saving. We hypothesize that this advantage stems from Co-Saving's resource-awareness and dynamic execution control. When encountering tasks that exceed the available resource budget, Co-Saving may opt to terminate reasoning early, prioritizing efficiency over completeness. In contrast, ChatDev lacks such resource sensitivity and continues execution regardless of task complexity, achieving higher completeness at the expense of increased resource usage.

Additionally, Consistency scores across all four experimental settings show only minor differences, with Co-Saving achieving a modest improvement. This result may reflect the limitations of current embedding models in capturing fine-grained semantic distinctions between code and textual requirements. Consequently, these models are insufficiently sensitive to subtle inconsistencies, highlighting the need for more precise evaluation methods to better assess code-text alignment.

3.2 Ablation Study

In the Method section, we introduced the key components of our approach: shortcut selection, cost design, and the emergency factor. To validate the effectiveness of each component, we design corresponding ablation studies. The results of the full model and the ablation variants are summarized in Table 2.

Table 2: Ablation study on the main designs in Co-Saving; "\" denotes the removal operation. The three ablations remove shortcut selection, cost, and the emergency factor (γ), respectively.

Method      Completeness  Executability  Consistency  Granularity  Quality  BCR
Co-Saving   0.9250        0.8500         0.8106       0.8556       0.5453   0.8000
\selection  0.8500        0.8000         0.8073       0.8791       0.4826   0.6000
\cost       0.8250        0.9250         0.8061       0.9412       0.5789   0.7500
\γ          0.8250        0.8500         0.8087       0.8484       0.4811   0.8250

As we can see, removing the cost-based shortcut selection mechanism results in all candidate shortcuts being retained for evaluation, including those that significantly exceed the available resource budget. Consequently, this variant exhibits a substantially lower BCR compared to other configurations. In the second ablation, where cost is removed from the value-cost evaluation metric (i.e., only value is considered), the system achieves relatively good performance in Executability and Granularity. However, the lack of resource awareness makes it difficult to complete tasks within time constraints, leading to lower Completeness and a reduced BCR. In the third ablation, the emergency factor is excluded. Without this dynamic adjustment, the system continues to prioritize high-value shortcuts even under resource-limited conditions. Although the BCR remains relatively high due to the forced termination mechanism, both Completeness and Granularity are lower compared to the full Co-Saving configuration, indicating suboptimal task outcomes.

¹ Co-Saving means that multiple agents collaboratively engage in resource-saving activities.

Figure 3: Distribution of path length, time cost, and number of tokens. Experiments with and without Co-Saving are indicated in red and blue, respectively, as shown in the legend.

3.3 Resource Distribution Shift

To further evaluate the effectiveness of Co-Saving, we conducted a comparative study between software development MAS with and without Co-Saving.
Specifically, we analyzed the distribution of path lengths—defined as the number of edges in the execution graph, reflecting the number of reasoning iterations—on the same dataset. Additionally, we examined the distribution of resource consumption, including execution time and token usage. The experimental results are
presented in Figure 3. The inclusion of the Co-Saving algorithm results in a significant reduction in the number of reasoning iterations required for task execution. Additionally, both total execution time and token consumption are notably decreased. These findings demonstrate that Co-Saving effectively streamlines the multi-agent reasoning process, accelerating task execution and enhancing overall development efficiency. This improvement is largely attributed to Co-Saving's ability to accurately assess and utilize shortcuts. By extracting precise and efficient instructions from reference tasks, Co-Saving enables agents to make more informed decisions, thereby reducing the occurrence of inefficient or ineffective actions.

3.4 Case Study

To illustrate how Co-Saving operates within a MAS, we present a case study of a specific task. Using ChatDev as the underlying software development framework, we select the task "Photo Defogger" as an example. At the initial stage, the system retrieves the reference task "Background Blur Editor" from the training dataset. This reference task forms an execution graph comprising three rounds of reasoning. For the current task, after the programmer generates node n_0 in the Code Complete stage, our system evaluates the shortcuts (n_0, n_1), (n_0, n_2), and (n_0, n_3) to select the optimal path. Eventually, (n_0, n_2) is chosen.

In the reference task, the transition from n_0 to n_1 involves fixing a function to prevent file overwrite issues and adding necessary import statements. For the current task, given the programmer's initial code and the shortcut (n_0, n_1) as input, the code reviewer generates an instruction to adjust function details to avoid file overwrites. Based on this instruction, the programmer produces a new solution, corresponding to node n_2 in the reference task. It is worth noting that the shortcut (n_0, n_2) is not a simple merge of the edges (n_0, i_1, n_1) and (n_1, i_2, n_2); it relates n_0 and n_2 directly, containing more complete and detailed information about how to transition from the source to the target. For instance, a shortcut says: "To transition from the initial code version to the final version, follow these instructions: Modules and Classes: 1. In the game.py file, add the following import statement at the top... Data Structure: 1. In the player.py file, add the following attribute to the Player class... Main Program Flow: 1. In the game.py file, modify the take_turn method as follows... Exception Handling...". Without the shortcut input, the code reviewer would only output short, abbreviated feedback.

Next, the shortcut originating from n_2 in the reference task, specifically (n_2, n_3), is considered. After evaluation, this shortcut is selected for code review, leading to the generation of another solution in the subsequent code modification stage. At this point, the number of reasoning steps in the current task reaches the predefined limit (matching the reference task's path length), prompting termination of further inference. The execution processes of both the current and reference tasks are illustrated in Figure 2. Ultimately, Co-Saving successfully generates an executable program with a correct GUI interface and essential functions within three iterations. In contrast, ChatDev requires more iterations to produce a comparable solution, incurring higher token consumption.

4 Related Work

Understanding and processing natural language remains a central challenge in artificial intelligence. LLMs [2, 1, 3, 28, 26, 29, 30, 31, 32, 24, 33, 34, 35],
4 Related Work

Understanding and processing natural language remains a central challenge in artificial intelligence. LLMs [2, 1, 3, 28, 26, 29, 30, 31, 32, 24, 33, 34, 35], empowered by large-scale pretraining and parameter-rich architectures, have achieved remarkable advancements in this area. With the rapid development of LLMs, there is increasing interest in building autonomous agents [36, 15, 5, 13, 37, 4, 11] that leverage LLMs for domain-specific tasks. These agents combine LLMs' reasoning and language-understanding capabilities with external tools [7, 38, 39, 40, 41], context memory management [5, 42], and task decomposition and planning strategies [11, 43, 44, 6], enabling them to tackle increasingly complex problems [45, 36, 46, 47, 48, 49, 50]. In parallel, techniques such as self-evolving [51], self-instruct [52], and other enhancement methods [53, 54, 55, 56, 57, 58, 59] have been proposed to further improve agent capabilities.

Beyond single-agent research, MAS have emerged as a critical area of study [22, 14, 60, 61, 43, 62, 63, 49, 64]. Unlike single-agent frameworks, which attempt to solve complex problems independently, MAS introduce greater variability and design flexibility. This includes assigning distinct roles and identities to different agents, designing workflows for decomposing complex tasks into subtasks, and establishing communication protocols, information-exchange pathways, and coordination structures to facilitate collaborative task execution. Recent studies have explored how the number and structure of agents influence the performance and scalability of MAS [57]. As agent count and task complexity increase, interaction frequency and resource consumption also grow. This highlights key challenges in enhancing resource utilization, minimizing redundant communication, and designing efficient collaboration mechanisms. For instance, AgentDropout [65] improves communication efficiency by pruning redundant agents and interactions in multi-round dialogues, enhancing token efficiency and task performance. BTP (Budget-Constrained Tool Learning with Planning) [66] formulates budget-aware tool-selection strategies to maximize utility under resource constraints. TimeArena [21] provides a simulated environment with complex temporal dynamics, revealing that current LLMs lack robust temporal reasoning, especially in multitasking or concurrent scenarios, underscoring the need for more temporally aware agent designs.

5 Conclusion

In this paper, we proposed Co-Saving, a resource-aware multi-agent collaboration framework designed to address the inherent lack of resource sensitivity in multi-agent collaboration. By introducing shortcuts mined from successful historical trajectories, our system enables agents to leverage prior experience to optimize resource usage and accelerate task completion. The framework adopts a graph-based representation of task execution, where shortcuts connect effective reasoning paths to support efficient navigation through familiar problem-solving contexts. Additionally, we incorporate an emergency factor for dynamic shortcut management and a forced-termination mechanism to prevent resource exhaustion. Experimental results on the SRDD demonstrate that Co-Saving significantly improves both efficiency and solution quality compared to single-agent and multi-agent baselines. Overall, this work underscores the importance of resource-aware multi-agent collaboration, offering a scalable and effective paradigm for deploying LLM-powered MAS in complex real-world applications.

References

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is All You Need. In I. Guyon, U.
Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems (NeurIPS), volume 30, 2017.
[2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems (NeurIPS), volume 33, pages 1877–1901, 2020.
[3] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of Artificial General Intelligence: Early Experiments with GPT-4. In arXiv preprint arXiv:2303.12712, 2023.
[4] Toran Bruce Richards. AutoGPT. In https://github.com/Significant-Gravitas/AutoGPT, 2023.
[5] Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative Agents: Interactive Simulacra of Human Behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST), pages 1–22, 2023.
[6] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. Chain-of-thought Prompting Elicits Reasoning in Large Language Models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems (NeurIPS), volume 35, pages 24824–24837, 2022.
[7] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language Models Can Teach Themselves to Use Tools. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023.
[8] Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023.
[9] Chen Qian, Xin Cong, Wei Liu, Cheng Yang, Weize Chen, Yusheng Su, Yufan Dang, Jiahao Li, Juyuan Xu, Dahai Li, Zhiyuan Liu, and Maosong Sun. ChatDev: Communicative Agents for Software Development. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
[10] Harlan D Mills. Software development. In IEEE Transactions on Software Engineering, number 4, pages 265–273, 1976.
[11] Anton Osika. GPT-Engineer. In https://github.com/AntonOsika/gpt-engineer, 2023.
[12] Yao Zhang, Zijian Ma, Yunpu Ma, Zhen Han, Yu Wu, and Volker Tresp. WebPilot: A versatile and autonomous multi-agent system for web task execution with strategic exploration. In Toby Walsh, Julie Shah, and Zico Kolter, editors, AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, pages 23378–23386. AAAI Press, 2025.
[13] Zhilin Wang, Yu Ying Chiu, and Yu Cheung Chiu. Humanoid Agents: Platform for Simulating Human-like Generative Agents. In Yansong Feng and Els Lefever, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (EMNLP), pages 167–176, December 2023.
[14] Wenyue Hua, Lizhou Fan, Lingyao Li, Kai Mei, Jianchao Ji, Yingqiang Ge, Libby Hemphill, and Yongfeng Zhang. War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars. In arXiv preprint arXiv:2311.17227, 2023.
[15] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay
Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An Open-Ended Embodied Agent with Large Language Models. In Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023, 2023.
[16] Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Yu Qiao, Zhaoxiang Zhang, and Jifeng Dai. Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory. In arXiv preprint arXiv:2305.17144, 2023.
[17] Shenzhi Wang, Chang Liu, Zilong Zheng, Siyuan Qi, Shuo Chen, Qisen Yang, Andrew Zhao, Chaofei Wang, Shiji Song, and Gao Huang. Avalon's Game of Thoughts: Battle Against Deception through Recursive Contemplation. In Findings of the Association for Computational Linguistics: ACL 2024, 2024.
[18] Ran Gong, Qiuyuan Huang, Xiaojian Ma, Hoi Vo, Zane Durante, Yusuke Noda, Zilong Zheng, Song-Chun Zhu, Demetri Terzopoulos, Li Fei-Fei, and Jianfeng Gao. MindAgent: Emergent Gaming Interaction. In arXiv preprint arXiv:2309.09971, 2023.
[19] Qian Huang, Jian Vora, Percy Liang, and Jure Leskovec. Benchmarking Large Language Models As AI Research Agents. In arXiv preprint arXiv:2310.03302, 2023.
[20] Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Ding, Xinyu Yang, Kailas Vodrahalli, Siyu He, Daniel Smith, Yian Yin, Daniel McFarland, and James Zou. Can Large Language Models Provide Useful Feedback on Research Papers? A Large-Scale Empirical Analysis. In arXiv preprint arXiv:2310.01783, 2023.
[21] Yikai Zhang, Siyu Yuan, Caiyu Hu, Kyle Richardson, Yanghua Xiao, and Jiangjie Chen. TimeArena: Shaping efficient multitasking language agents in a time-aware simulation. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3894–3916, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
[22] Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[23] Chen Qian, Yufan Dang, Jiahao Li, Wei Liu, Zihao Xie, YiFei Wang, Weize Chen, Cheng Yang, Xin Cong, Xiaoyin Che, Zhiyuan Liu, and Maosong Sun. Experiential co-learning of software-developing agents. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5628–5640, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
[24] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. Training Language Models to Follow Instructions with Human Feedback. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems (NeurIPS), volume 35, pages 27730–27744. Curran Associates, Inc., 2022.
[25] OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Jan Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O'Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael Pokorny, Michelle Pokrass, Vitchyr H. Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine B.
Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben
Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. GPT-4 technical report, 2024.
[26] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and Efficient Foundation Language Models. In arXiv preprint arXiv:2302.13971, 2023.
[27] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models, 2023.
[28] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners. In OpenAI blog, volume 1, page 9, 2019.
[29] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent Abilities of Large Language Models. In Transactions on Machine Learning Research, 2022.
[30] Murray Shanahan, Kyle McDonell, and Laria Reynolds. Role Play with Large Language Models. In Nature, volume 623, pages 493–498, 2023.
[31] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating Large Language Models Trained on Code. In arXiv preprint arXiv:2107.03374, 2021.
[32] Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. Large Language Models in Machine Translation. In Jason Eisner, editor, Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 858–867, June 2007.
[33] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V. Le, Denny Zhou, and Xinyun Chen. Large Language Models as Optimizers. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[34] Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, and Michael Bendersky. Large Language Models are Effective Text Rankers with Pairwise Ranking Prompting. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2024.
[35] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling Laws for Neural Language Models. In arXiv preprint arXiv:2001.08361, 2020.
[36] Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. WebArena: A Realistic Web Environment for Building Autonomous Agents. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[37] Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric P. Xing, and Zhiting Hu. PromptAgent: Strategic Planning with Language Models
Enables Expert-level Prompt Optimization. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[38] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large Language Models as Tool Makers. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[39] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-World APIs. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[40] Jingqing Ruan, YiHong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Qing Du, Shiwei Shi, Hangyu Mao, Xingyu Zeng, and Rui Zhao. TPTU: Task Planning and Tool Usage of Large Language Model-based AI Agents. In NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023.
[41] Rui Yang, Lin Song, Yanwei Li, Sijie Zhao, Yixiao Ge, Xiu Li, and Ying Shan. GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023.
[42] Theodore Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas Griffiths. Cognitive architectures for language agents. In Transactions on Machine Learning Research (TMLR), 2024. Survey Certification.
[43] Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, et al. AgentVerse: Facilitating Multi-agent Collaboration and Exploring Emergent Behaviors in Agents. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[44] Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, and Silvio Savarese. BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents. In ICLR 2024 Workshop on Large Language Model (LLM) Agents, 2023.
[45] Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang. ExpeL: LLM Agents Are Experiential Learners. In Proceedings of the AAAI Conference on Artificial Intelligence, number 17, pages 19632–19642, March 2024.
[46] Kaixin Ma, Hongming Zhang, Hongwei Wang, Xiaoman Pan, and Dong Yu. LASER: LLM agent with state-space exploration for web navigation. In NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023.
[47] An Zhang, Leheng Sheng, Yuxin Chen, Hao Li, Yang Deng, Xiang Wang, and Tat-Seng Chua. On Generative Agents in Recommendation. In arXiv preprint arXiv:2310.10108, 2023.
[48] Lei Wang, Jingsen Zhang, Hao Yang, Zhiyuan Chen, Jiakai Tang, Zeyu Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, Jun Xu, Zhicheng Dou, Jun Wang, and Ji-Rong Wen. When Large Language Model based Agent Meets User Behavior Analysis: A Novel User Simulation Paradigm. In arXiv preprint arXiv:2306.02552, 2023.
[49] Shiying Ding, Xinyi Chen, Yan Fang, Wenrui Liu, Yiwu Qiu, and Chunlei Chai. DesignGPT: Multi-Agent Collaboration in Design. In 2023 16th International Symposium on Computational Intelligence and Design (ISCID), pages 204–208, 2023.
[50] Lilian Weng. LLM-powered Autonomous Agents. In lilianweng.github.io, 2023.
[51] Yue Hu, Yuzhu Cai, Yaxin Du, Xinyu Zhu, Xiangrui Liu, Zijie Yu, Yuchen Hou, Shuo Tang, and Siheng Chen. Self-evolving multi-agent collaboration networks for software development.
In The Thirteenth International Conference on Learning Representations, 2025.
[52] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning Language Models with Self-Generated Instructions. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), pages 13484–13508, July 2023.
[53] Dayuan Fu, Keqing He, Yejie Wang, Wentao Hong, Zhuoma GongQue, Weihao Zeng, Wei Wang, Jingang Wang, Xunliang Cai, and Weiran Xu. AgentRefine: Enhancing agent generalization through refinement tuning. In The Thirteenth International Conference on Learning Representations, 2025.
[54] Vijay Lingam, Behrooz Omidvar Tehrani, Sujay Sanghavi, Gaurav Gupta, Sayan Ghosh, Linbo Liu, Jun Huan, and Anoop Deoras. Enhancing language model agents using diversity of thoughts. In The Thirteenth International Conference on Learning Representations, 2025.
[55] Weize Chen, Ziming You, Ran Li, Yitong Guan, Chen Qian, Chenyang Zhao, Cheng Yang, Ruobing Xie, Zhiyuan Liu, and Maosong Sun. Internet of agents: Weaving a web of heterogeneous agents for collaborative intelligence. In The Thirteenth International Conference on Learning Representations, 2025.
[56] Shengbin Yue, Siyuan Wang, Wei Chen, Xuanjing Huang, and Zhongyu Wei. Synergistic multi-agent framework with trajectory learning for knowledge-intensive tasks. In Toby Walsh, Julie Shah, and Zico Kolter, editors, AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, pages 25796–25804. AAAI Press, 2025.
[57] Chen Qian, Zihao Xie, YiFei Wang, Wei Liu, Kunlun Zhu, Hanchen Xia, Yufan Dang, Zhuoyun Du, Weize Chen, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Scaling large language model-based multi-agent collaboration. In The Thirteenth International Conference on Learning Representations, 2025.
[58] Zijing Shi, Meng Fang, and Ling Chen. Monte Carlo planning with large language model for text-based game agents. In The Thirteenth International Conference on Learning Representations, 2025.
[59] Benyamin T. Tabarsi. Developing LLM-powered trustworthy agents for personalized learning support. In Toby Walsh, Julie Shah, and Zico Kolter, editors, AAAI-25, Sponsored by the Association for the Advancement of Artificial Intelligence, February 25 - March 4, 2025, Philadelphia, PA, USA, pages 29301–29302. AAAI Press, 2025.
[60] Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Beibin Li, Erkang Zhu, Li Jiang, Xiaoyun Zhang, Shaokun Zhang, Jiale Liu, Ahmed Hassan Awadallah, Ryen W White, Doug Burger, and Chi Wang. AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation Framework. In arXiv preprint arXiv:2308.08155, 2023.
[61] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. In arXiv preprint arXiv:2305.19118, 2023.
[62] Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. ChatEval: Towards Better LLM-based Evaluators through Multi-agent Debate. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
[63] Dake Chen, Hanbin Wang, Yunhao Huo, Yuzhao Li, and Haoyang Zhang. GameGPT: Multi-agent Collaborative Framework for Game Development. In arXiv preprint arXiv:2310.08067, 2023.
[64] Taicheng Guo, Xiuying Chen, Yaqi Wang,
Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, and Xiangliang Zhang. Large language model based multi-agents: A survey of progress and challenges. 2024.
[65] Zhexuan Wang, Yutong Wang, Xuebo Liu, Liang Ding, Miao Zhang, Jie Liu, and Min Zhang. AgentDropout: Dynamic agent elimination for token-efficient and high-performance LLM-based multi-agent collaboration, 2025.
[66] Yuanhang Zheng, Peng Li, Ming Yan, Ji Zhang, Fei Huang, and Yang Liu. Budget-constrained tool learning with planning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics: ACL 2024, pages 9039–9052, Bangkok, Thailand, August 2024. Association for Computational Linguistics.
CAST: Contrastive Adaptation and Distillation for Semi-Supervised Instance Segmentation

Pardis Taghavi (Texas A&M University, ptgh@tamu.edu), Tian Liu (Texas A&M University, ltmask@tamu.edu), Renjie Li (Texas A&M University, renjie@tamu.edu), Reza Langari (Texas A&M University, rlangari@tamu.edu), Zhengzhong Tu (Texas A&M University, tzz@tamu.edu)

Abstract

Instance segmentation demands costly per-pixel annotations and large models. We introduce CAST, a semi-supervised knowledge distillation (SSKD) framework that compresses pretrained vision foundation models (VFMs) into compact experts using limited labeled and abundant unlabeled data. CAST unfolds in three stages: (1) domain adaptation of the VFM teacher(s) via self-training with contrastive pixel calibration, (2) distillation into a compact student via a unified multi-objective loss that couples standard supervision and pseudo-labels with our instance-aware pixel-wise contrastive term, and (3) fine-tuning on labeled data to remove residual pseudo-label bias. Central to CAST is an instance-aware pixel-wise contrastive loss that fuses mask and class scores to mine informative negatives and enforce clear inter-instance margins. By maintaining this contrastive signal across both adaptation and distillation, we align teacher and student embeddings and fully leverage unlabeled images. On Cityscapes and ADE20K, our ≈11× smaller student surpasses its adapted VFM teacher(s) by +3.4 AP (33.9 vs. 30.5) and +1.5 AP (16.7 vs. 15.2) and outperforms state-of-the-art semi-supervised approaches.

A Introduction

Pixel-level instance segmentation is notoriously expensive: annotating detailed masks can take hours per image, and training state-of-the-art detectors often requires hundreds of GPU hours, putting many applications out of reach [12, 15]. At the same time, recent advancements in VFMs [22, 20, 43, 19] have significantly elevated the capabilities of computer vision systems, enabling exceptional performance across diverse perception benchmarks [2].

Motivation. Despite remarkable achievements, foundation models still cannot serve specific downstream tasks sufficiently well, due to two major issues: (1) the heavy computational overhead during deployment, which makes these models impractical for environments with limited resources [37]; and (2) their inherently generic nature, which leads to suboptimal performance on tasks that demand domain-specific expertise. The latter stems from foundation models being optimized to perform well across a wide variety of tasks, rather than being finely tuned for the nuanced requirements of specialized applications [5]. This challenge is particularly prominent in applications involving outdoor environments, such as autonomous driving, and indoor settings, such as robotic perception [13]. SSKD for instance segmentation seeks to compress large models into efficient student models by leveraging both limited labeled data and abundant unlabeled images. Current distillation methods either treat VFMs as fixed feature extractors with simple pseudo-labeling or focus on coarse semantic tasks, failing to exploit the rich structure of large unlabeled datasets to refine per-pixel predictions. Consequently, adjacent instances remain poorly separated and accuracy degrades sharply under scarce labels.
We address these issues by adapting VFMs via self-training to enhance pseudo-label fidelity, and by injecting an instance-aware pixel-wise contrastive loss that leverages unlabeled data to enforce clear inter-instance margins, yielding sharper masks and superior performance in the low-label regime.

Status quo. Knowledge distillation has evolved from task-agnostic compression [16, 8] to adapting VFMs for downstream tasks. For classification and semantic segmentation, Vemulapalli et al. [
30] distill a VFM by matching its output on an unlabeled transfer set, and SAM-CLIP [31] fuses CLIP and SAM. However, neither method targets per-pixel instance masks nor exploits dense self-supervision from the unlabeled pool. Pure semi-supervised instance segmentation methods, such as [17, 3], train teachers from scratch, doubling GPU cost, and still produce blurred boundaries under scarce labels. To our knowledge, no prior work unifies VFM adaptation, unlabeled data-driven pixel-wise refinement, and extreme student compression for high-fidelity instance segmentation.

Contributions. We summarize our main contributions as follows:
1. We introduce an instance-aware pixel-wise contrastive loss that fuses mask and class predictions to drive stronger inter-instance separation, and show how to sample negatives efficiently in an instance-centric setting.
2. We propose CAST, an SSKD pipeline with three phases: (i) self-training a foundation teacher with contrastive calibration, (ii) distilling into a compact student using supervised, pseudo-label, and pixel-wise contrastive objectives, and (iii) final supervised fine-tuning to remove bias, thereby unifying supervised, semi-supervised, and self-supervised signals.
3. We conduct extensive experiments on Cityscapes and ADE20K, demonstrating that our ≈11× smaller student surpasses the self-trained teacher by +3.4 AP (33.9 vs. 30.5) and +2.1 AP (16.7 vs. 14.6), and outperforms state-of-the-art semi-supervised instance segmentation methods under the same data splits, with lower training cost.

B Related Work

Vision Foundation Models. Vision foundation models (VFMs) [22, 20, 25, 41, 4] have revolutionized computer vision through large-scale pretraining. In parallel, recent trends focus on combining VFMs to extend their capabilities [26, 43]. While these models excel in open-set recognition and transfer learning, their computational demands still hinder edge deployment. Recent efforts merge VFMs via distillation: Wang et al. [31] unify SAM and CLIP via multi-task learning, while Zhang et al. [44] distill CLIP and DINOv2 into a compact model with moderate-scale data distillation. We extend these paradigms by leveraging VFMs for instance segmentation, focusing on balancing robustness with computational efficiency.

Knowledge Distillation in Vision. Knowledge distillation (KD) has become a ubiquitous technique to transfer knowledge from high-capacity teachers to lightweight students for efficient deployment. Early methods distilled softened logits or intermediate features [16] in a task-agnostic way, while later feature-based approaches capture structured spatial cues (e.g., pixel-wise similarity, channel distributions) [23, 27]. Modern methods tackle VFMs' scale and complexity: [28, 38] distill VFMs to impart zero-shot and multimodal capabilities, and multi-teacher approaches [18, 39] combine complementary expertise. Vemulapalli et al. [30] adapt a VFM to the target task and then distill on a large unlabeled set for classification and semantic segmentation. Building on these advances in vision KD, we base our method on the insight that a strong teacher (or ensemble of teachers) can guide a lightweight instance segmentation model to high performance. Our approach explicitly integrates semi-supervised learning and pixel-level contrastive losses for instance segmentation, bridging the gap between the rich representations of VFMs and compact, efficient student networks.

Semi-Supervised Learning.
Self-training (or pseudo-labeling) has become a foundational paradigm in semi-supervised learning (SSL), in which a model leverages its own high-confidence predictions to iteratively refine itself [35].
This approach has proven effective across vision tasks, improving image classification performance [35] and boosting object detection accuracy when annotation budgets are tight [21]. To counteract error accumulation from noisy pseudo-labels, [29] uses an exponential moving average of label predictions, and [6] employs curriculum labeling schemes that gradually incorporate harder examples. More recent work applies pseudo-labeling to large pretrained models through targeted fine-tuning and adaptive pseudo-label selection strategies [14]. While many SSL methods focus on classification or detection, several have extended this paradigm to dense prediction tasks [10, 40].

Figure 1: CAST framework overview. Top: Three-stage pipeline: (1) adapt a pretrained VFM teacher to the target domain via self-training with contrastive pixel calibration; (2) transfer knowledge to a compact student using instance-aware pixel-wise contrastive sampling; (3) fine-tune the student on labeled data to correct residual pseudo-label bias. Bottom: Detailed view of stage (2): fused mask and class score maps produce anchor pixels, which are sampled across weak/strong views to form positive and negative pairs; an MLP projects features and computes the pixel-wise contrastive loss. Dashed arrows denote no gradient flow; red components are trainable, blue are frozen.

We study self-training with self-supervised contrastive learning and task-specific adaptation. Global contrastive frameworks such as SimCLR [7], MoCo [9], and their detection extensions [34] established the value of large-scale visual discrimination learning. Per-pixel contrastive approaches [32, 36, 46, 33, 1] have further shown promise in retaining spatial sensitivity, though they still conflate pixels from different instances of the same class. We extend these advances by synergizing self-training and self-supervised contrastive learning, and introduce a novel instance-aware negative sampling strategy designed specifically for the demands of instance segmentation.

C Method

C.1 Overview

In semi-supervised settings, we are given a small labeled set and a substantially larger unlabeled pool: $\mathcal{D}_l = \{(x_i^l, y_i^l)\}_{i=1}^{N_l}$ and $\mathcal{D}_u = \{x_j^u\}_{j=1}^{N_u}$, with $N_u \gg N_l$, where each $y_i^l$ consists of binary masks and class labels for every instance. Our goal is to distill knowledge from a large, pretrained VFM into a compact student $f_{\theta_s}$, matching or surpassing the teacher's accuracy with far fewer labels and less compute. We propose CAST, a three-stage SSKD pipeline that hinges on two core innovations:
❶ Contrastive Calibration. We fine-tune a large VFM teacher via self-training, but rather than relying on simple pseudo-labels we inject a pixel-wise contrastive head to sharpen mask boundaries.
❷ Debiased, Instance-Aware Sampling. During both adaptation and distillation, we mine hard negatives via a joint mask-/class-probability embedding, focusing repulsion on informative inter-instance pairs tailored for instance segmentation.
These two ideas are realized in three concise stages (see Fig. 1):
1. Teacher Adaptation. Self-train the VFM with pseudo-labels and pixel-wise contrastive calibration to produce masks specialized to the target domain.
2. Knowledge Transfer. Freeze this calibrated teacher and distill
into a lightweight student under a unified loss that harmonizes ground-truth, pseudo-label, and contrastive terms, guided by our debiased sampling.
3. Student Refinement. Fine-tune the student on labeled data to remove residual pseudo-label bias.
Sec. C.2 formalizes our instance-aware pixel-wise contrastive loss, which is used in both Teacher Adaptation and Knowledge Transfer to enforce intra-instance cohesion and inter-instance separation; Sec. C.3 then details the three stages of the CAST pipeline.

C.2 Pixel-wise Contrastive Loss

Standard supervised and pseudo-label losses enforce correct mask predictions while ignoring pixel-level feature relationships, which underutilizes unlabeled data and amplifies pseudo-label noise. We therefore inject a self-supervised pixel-wise contrastive loss as an additional supervisory signal on both labeled and unlabeled images, sharpening feature discrimination and regularizing against noisy labels.

Let $z^{\mathrm{weak}}, z^{\mathrm{strong}} \in \mathbb{R}^{B \times N \times D}$ be $\ell_2$-normalized embeddings from two views of each image, where $B$ is the number of images in one mini-batch, $N = h \times w$ the number of pixels, and $D$ the embedding dimension. For each pixel $p \in \{1, \dots, N\}$ and image index $b \in \{1, \dots, B\}$, the corresponding embedding vector is denoted $z_{b,p} \in \mathbb{R}^D$. We construct the positive pair by sampling the weak and strong embeddings of the same pixel; the positive similarity between the two views is

$$s^{+}_{b,p} = \langle z^{\mathrm{weak}}_{b,p},\, z^{\mathrm{strong}}_{b,p} \rangle / T.$$

Negatives are sampled by our instance-aware sampler (described below), producing indices $\{(b', q_r)\}_{r=1}^{R}$ and corresponding similarities

$$s^{-}_{b,p,r} = \langle z^{\mathrm{weak}}_{b,p},\, z^{\mathrm{strong}}_{b',q_r} \rangle / T, \qquad r = 1, \dots, R.$$

The pixel-wise contrastive loss is then the standard NT-Xent over all anchors:

$$\mathcal{L}_{\mathrm{pxl}} = -\frac{1}{BN} \sum_{b=1}^{B} \sum_{p=1}^{N} \log \frac{\exp(s^{+}_{b,p})}{\exp(s^{+}_{b,p}) + \sum_{r=1}^{R} \exp(s^{-}_{b,p,r})}.$$
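A minimal PyTorch sketch of this loss, assuming negatives have already been drawn by the sampler described next; tensor names mirror the notation above, while the function name and index layout are our own.

```python
import torch

def pixel_nt_xent(z_weak, z_strong, neg_idx, T=0.2):
    """Pixel-wise NT-Xent loss (a sketch of L_pxl above).

    z_weak, z_strong: (B, N, D) l2-normalized pixel embeddings of two views.
    neg_idx: (B, N, R, 2) long tensor; neg_idx[b, p, r] = (b', q_r) indices
             of the R sampled negatives for anchor pixel (b, p).
    """
    B, N, D = z_weak.shape
    R = neg_idx.shape[2]

    # Positive similarity: the same pixel across the two views.
    s_pos = (z_weak * z_strong).sum(-1) / T                      # (B, N)

    # Gather negative embeddings z_strong[b', q_r] for every anchor.
    flat = z_strong.reshape(B * N, D)                            # (B*N, D)
    lin = neg_idx[..., 0] * N + neg_idx[..., 1]                  # (B, N, R)
    z_neg = flat[lin.reshape(-1)].reshape(B, N, R, D)            # (B, N, R, D)
    s_neg = torch.einsum('bnd,bnrd->bnr', z_weak, z_neg) / T     # (B, N, R)

    # NT-Xent: -log exp(s+) / (exp(s+) + sum_r exp(s-_r)).
    logits = torch.cat([s_pos.unsqueeze(-1), s_neg], dim=-1)     # (B, N, 1+R)
    loss = -torch.log_softmax(logits, dim=-1)[..., 0]            # (B, N)
    return loss.mean()
```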
Debiased Pixel-Level Negative Sampling. To mine true inter-instance pairs without quadratic cost, we derive a per-pixel sampling distribution by fusing mask and class probabilities. Let $M \in \mathbb{R}^{B \times K \times H \times W}$ and $L \in \mathbb{R}^{B \times K \times (C+1)}$ be the model's mask and class logits, respectively. We first resize $M$ to the feature resolution $(h \times w)$ and then normalize the logits into probability distributions $P_m$ and $P_c$ via softmax along the instance and class dimensions, respectively. For each pixel index $(b, p)$ we compute the expected class distribution $F_c$ as an aggregated class vote. To avoid losing instance identity in this aggregation, we concatenate the mask distribution and the class cues into a single joint pseudo-probability vector $y[b,p]$; this richer embedding lets the contrastive head learn arbitrary interactions between mask and class:

$$F_c[b,p,c] = \sum_{k=1}^{K} P_m[b,k,p]\, P_c[b,k,c], \qquad y[b,p] = \begin{bmatrix} P_m[b,\,1{:}K,\,p] \\ F_c[b,\,p,\,1{:}C{+}1] \end{bmatrix} \in \mathbb{R}^{K+(C+1)}.$$

We score any two pixels $(b,p) \neq (b',q)$ using $\tilde{y}$, the $\ell_2$-normalized pseudo-probability vector:

$$s_{\mathrm{deb}}\big((b,p),(b',q)\big) = \max\!\big(0,\; 1 - \langle \tilde{y}[b,p],\, \tilde{y}[b',q] \rangle\big).$$

We draw $R$ negatives $\{q_r\}$ for each anchor $(b,p)$ by sampling proportionally to $s_{\mathrm{deb}}$, and then plug these into the NT-Xent denominator of $\mathcal{L}_{\mathrm{pxl}}$.
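A sketch of this sampler under the same assumptions (the function name is hypothetical, and the multinomial draw is one plausible reading of "sampling proportionally to $s_{\mathrm{deb}}$"; for clarity it materializes all pairwise scores, which is the quadratic cost the paper's per-pixel distribution is designed to avoid).

```python
import torch
import torch.nn.functional as F

def sample_debiased_negatives(mask_logits, class_logits, feat_hw, R=256):
    """Draw R negatives per anchor proportional to s_deb (illustrative sketch).

    mask_logits:  (B, K, H, W) per-instance mask logits M.
    class_logits: (B, K, C+1) per-instance class logits L.
    feat_hw:      (h, w) feature resolution of the pixel embeddings.
    Returns neg_idx: (B, N, R, 2) with (image, pixel) indices, N = h*w.
    """
    B = mask_logits.shape[0]
    h, w = feat_hw
    m = F.interpolate(mask_logits, size=(h, w), mode='bilinear')
    P_m = m.flatten(2).softmax(dim=1).transpose(1, 2)        # (B, N, K)
    P_c = class_logits.softmax(dim=-1)                       # (B, K, C+1)

    # Expected class distribution F_c and joint pseudo-probability y.
    F_c = torch.einsum('bnk,bkc->bnc', P_m, P_c)             # (B, N, C+1)
    y = F.normalize(torch.cat([P_m, F_c], dim=-1), dim=-1)   # (B, N, K+C+1)

    # s_deb over all pixel pairs in the batch (quadratic here for clarity).
    y_all = y.reshape(B * h * w, -1)                         # (B*N, K+C+1)
    s_deb = (1.0 - y_all @ y_all.t()).clamp_min(0)           # (B*N, B*N)
    s_deb.fill_diagonal_(1e-8)                               # exclude self-pairs

    idx = torch.multinomial(s_deb, R, replacement=True)      # (B*N, R)
    neg_idx = torch.stack([idx // (h * w), idx % (h * w)], dim=-1)
    return neg_idx.reshape(B, h * w, R, 2)
```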
Theoretical Insight. To give a formal rationale for augmenting our pixel-wise contrastive loss, we show that even under a mild negative-sampling guarantee, each gradient step on our contrastive term provably increases the expected inter-instance margin.

Assumption C.1 (Negative Sampling Guarantee). When sampling a negative under our instance-aware scheme, the probability that it originates from a different instance is at least $p > 0.5$, where $p$ can be estimated empirically (see Sec. D.3).

Proposition C.1 (Expected Margin Growth). Under Assumption C.1, one gradient update on $\mathcal{L}_{\mathrm{pxl}}$ increases the expected inter-instance margin $\Delta_{\mathrm{emp}}$ by $\varepsilon = \Theta(p\,\lambda_{\mathrm{pxl}}) > 0$. This expectation holds even when pseudo-labels are imperfect, provided negatives are sampled using our instance-aware strategy.

In practice, raising $\lambda_{\mathrm{pxl}}$ enhances margin growth but also increases training cost. If $\lambda_{\mathrm{pxl}}$ is too large, it can overemphasize inter-instance separation at the expense of intra-instance cohesion. We validate this effect in Sec. D.3 and provide a proof sketch in Appendix C.

C.3 CAST Framework

We cast teacher adaptation, student distillation, and student refinement as special cases of the same three-term objective. Let

$$J(\theta; \mathcal{D}_l, \mathcal{D}_u; \lambda_{\mathrm{semi}}, \lambda_{\mathrm{pxl}}) = \underbrace{\frac{1}{N_l}\sum_{i=1}^{N_l} \ell\big(f_\theta(x_i^l),\, y_i^l\big)}_{\mathcal{L}_{\mathrm{sup}}} \;+\; \lambda_{\mathrm{semi}} \underbrace{\frac{1}{N_u}\sum_{j=1}^{N_u} \ell\big(f_\theta(x_j^u),\, \hat{y}_j^u\big)}_{\mathcal{L}_{\mathrm{semi}}} \;+\; \lambda_{\mathrm{pxl}}\, \mathcal{L}_{\mathrm{pxl}}\big(\theta; \mathcal{D}_l \cup \mathcal{D}_u\big),$$

where $\mathcal{D}_u = \emptyset$ makes the middle term zero.

Teacher adaptation. Starting from pretrained weights $\theta_T^0$, we first fine-tune on the labeled set $\mathcal{D}_l$:

$$\theta_T' = \arg\min_{\theta} J\big(\theta; \mathcal{D}_l, \emptyset;\, 0,\, \lambda_{\mathrm{pxl}}\big).$$

We then generate pseudo-labels $\hat{y}_j^u = f_{\theta_T'}(x_j^u)$, reset to $\theta_T^0$, and fine-tune on $\mathcal{D}_l \cup \{(x_j^u, \hat{y}_j^u)\}$:

$$\theta_T'' = \arg\min_{\theta} J\big(\theta; \mathcal{D}_l, \mathcal{D}_u;\, 1,\, \lambda_{\mathrm{pxl}}\big).$$

This two-step contrastive calibration yields a specialized teacher whose pseudo-labels are both accurate and spatially consistent for the target domain.

Knowledge transfer. With the calibrated teacher $\theta_T''$ frozen, the student $\theta_s$ is trained via the unified objective:

$$\theta_s^{*} = \arg\min_{\theta_s} J\big(\theta_s; \mathcal{D}_l, \mathcal{D}_u;\, \lambda_{\mathrm{semi}},\, \lambda_{\mathrm{pxl}}\big). \tag{1}$$

Here, $\mathcal{L}_{\mathrm{sup}}$ enforces ground-truth supervision on $\mathcal{D}_l$, $\mathcal{L}_{\mathrm{semi}}$ distills pseudo-labels from $\mathcal{D}_u$, and $\mathcal{L}_{\mathrm{pxl}}$ imposes our pixel-wise contrastive regularizer across both sets. The coefficients $\lambda_{\mathrm{semi}}$ and $\lambda_{\mathrm{pxl}}$ balance these signals, guiding the student to approach the teacher's accuracy with far fewer parameters.

Student Refinement. Although joint distillation yields a strong initialization, residual pseudo-label noise and the contrastive pretext task can introduce bias. As a final step, we fine-tune the student on labeled data alone:

$$\theta_s^{\dagger} = \arg\min_{\theta_s^{*}} J\big(\theta_s^{*}; \mathcal{D}_l, \emptyset;\, 0,\, 0\big).$$

This pass removes pseudo-label drift and sharpens decision boundaries for in-domain data.
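Since all three stages differ only in which terms of $J$ are active, a schematic Python sketch can make the shared structure explicit. Everything here (function name, argument layout, default weights) is a placeholder; the pixel-wise term is assumed precomputed by the earlier sketch.

```python
def unified_objective(model, labeled, unlabeled, pseudo_labels, ell,
                      L_pxl=0.0, l_semi=1.0, l_pxl=0.1):
    """J(theta; D_l, D_u; lambda_semi, lambda_pxl): the shared three-term loss.

    labeled:       list of (x, y) pairs from D_l.
    unlabeled:     list of x from D_u; empty in stages 1a and 3.
    pseudo_labels: teacher predictions y_hat for each unlabeled x.
    ell:           task loss comparing model output to (pseudo-)labels.
    L_pxl:         precomputed pixel-wise contrastive term (see sketch above).
    """
    L_sup = sum(ell(model(x), y) for x, y in labeled) / max(len(labeled), 1)
    L_semi = sum(ell(model(x), yh)
                 for x, yh in zip(unlabeled, pseudo_labels)) / max(len(unlabeled), 1)
    return L_sup + l_semi * L_semi + l_pxl * L_pxl

# How the stages instantiate J (weights shown symbolically):
#   Stage 1a, teacher on labels:      J(theta; D_l, {};  lambda_semi=0, lambda_pxl)
#   Stage 1b, teacher self-training:  J(theta; D_l, D_u; lambda_semi=1, lambda_pxl)
#   Stage 2,  student distillation:   J(theta; D_l, D_u; lambda_semi,   lambda_pxl)
#   Stage 3,  student refinement:     J(theta; D_l, {};  lambda_semi=0, lambda_pxl=0)
```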
D Experiments

D.1 Experimental Protocol

Datasets. We evaluate CAST on two standard instance segmentation benchmarks. Cityscapes [12] contains 2,975 training and 500 validation images of urban street scenes, annotated with 19 semantic categories (8 "thing" classes and 11 "stuff" classes). ADE20K [47] comprises 20,210 training and 2,000 validation images spanning diverse indoor and outdoor environments, annotated with 150 semantic categories (100 "thing" and 50 "stuff" classes). For our semi-supervised experiments, we randomly sample 10% of the Cityscapes training set as labeled and use the remaining images as unlabeled. For ADE20K, to reduce computational cost we use stratified sampling to select 20% of the training pool while preserving the original per-category distribution, yielding 3,537 images. From this subset, 1,000 images (10 per instance class) serve as labeled data and the remaining 2,537 as unlabeled. Detailed statistics are provided in Appendix A, and we will release the exact split files alongside our code.

Teacher and Student Architectures. Our teacher is a fused ensemble of Grounding-DINO-Large [20] and SAM2-L [25]. Since the official Grounding-DINO is closed-source, we use its open-source counterpart
mm-Grounding-DINO [45]. For the student, we pair a DINOv2-S encoder [22] with a DPT-S decoder head [24], followed by a lightweight transformer decoder module in the spirit of Mask2Former [11]. Our choice of the DINOv2+DPT backbone is motivated by the recent successes of Depth Anything V2 in monocular depth estimation [41] and UniMatchV2 in semantic segmentation [42], and aims to facilitate future multimodal fusion work. We evaluate the impact of different student designs in Sec. D.4, and defer the complete optimizer settings, learning-rate schedules, and other hyperparameters to Appendix B.

Implementation Details. All experiments were conducted on Ubuntu 22.04 with Python 3.10 and PyTorch 2.6.0 (CUDA 12.6). Teacher adaptation runs were executed on 2× NVIDIA A100 GPUs, while student training and fine-tuning runs used 2× NVIDIA GeForce RTX 4090 GPUs. For instance, a single fine-tuning run of the teacher (Grounding-DINO) on the supervised Cityscapes split required ≈3.5 GPU hours; a single student training run for this dataset took ≈17 GPU hours without the pixel-wise contrastive branch and ≈22 GPU hours with it. All results reported in the following tables are averages over three independent runs with different random seeds.

D.2 Main Results

We extensively evaluate a range of knowledge distillation (KD) strategies, from purely supervised to state-of-the-art semi-supervised baselines, and benchmark them against our CAST pipeline. Table 1 reports maskAP and AP50 on Cityscapes and ADE20K. In the teacher adaptation stage (568 M parameters), adding our pixel-level contrastive loss boosts Cityscapes maskAP from 29.8 to 30.5 (+0.7) and AP50 from 54.9 to 56.6 (+1.7); on ADE20K, maskAP rises from 14.6 to 15.2 (+0.6) and AP50 from 23.6 to 24.5 (+0.9). These improvements confirm that pixel-wise supervision sharply improves feature discrimination and reduces pseudo-label noise.

Figure 2 plots key pipeline efficiency metrics on a logarithmic scale for both teacher and student models. The underlying values:

  Metric        Teacher   Student   Change
  Params (M)    568       52        -90.8%
  FLOPS (G)     880       200       -77.3%
  Latency (ms)  350.87    76.92     -78.1%
  Memory (GB)   5.6       0.8       -85.7%
  FPS           2.85      13.0      +356.1%
  Speed-up      1.0       4.56      +356.0%
  AP            30.5      33.9      +11.1%

Figure 2: Comparison of teacher and student model efficiency metrics (log scale).

In the student distillation stage (52 M parameters, just 9% of the teacher), our semi-supervised model with the pixel-level loss achieves 32.2 maskAP and 56.5 AP50 on Cityscapes, outperforming the Guided dist. baseline by +1.4 maskAP. After fine-tuning, the student reaches 33.9 maskAP (+3.4 over the best teacher) and 58.7 AP50. On ADE20K, it records 16.1 maskAP and 27.4 AP50 semi-supervised, and 16.7 maskAP (+1.5) with 28.0 AP50 after fine-tuning, underscoring CAST's robustness across domains.

Table 1: Instance segmentation results on Cityscapes and ADE20K. We report teacher adaptation (568 M params) and student distillation (52 M params) under each method.

  Method                          Data Regime           Cityscapes          ADE20K
                                                        maskAP   AP50      maskAP   AP50
  Teacher Adaptation
  0-shot (baseline)               None (pretrained)     22.0     42.3      8.1      18.2
  Supervised                      Labeled only          28.7     53.4      14.2     23.5
  Self-training [35]              Labeled + Unlabeled   29.7     54.9      14.6     23.6
  + EMA-Augmented [21]            Labeled + Unlabeled   29.8     54.9      14.8     23.7
  + pixel-level loss (ours)       Labeled + Unlabeled   30.5     56.6      15.2     24.5
  Student Distillation
  Supervised (baseline)           Labeled only          21.1     38.7      13.9     24.2
  PAIS [17]                       Labeled + Unlabeled   22.9     44.9      10.3     18.3
  Guided dist. [3]                Labeled + Unlabeled   30.8     52.9      14.2     23.8
  + unlabeled KD [30]             Unlabeled only        24.4     45.6      5.1      9.3
  + labeled+unlabeled KD (ours)   Labeled + Unlabeled   30.7     54.9      14.4     25.2
  + pixel-level loss (ours)       Labeled + Unlabeled   32.2     56.5      16.1     27.4
  Fine-tuned (best)               Labeled only          33.9     58.7      16.7     28.0

D.3 Empirical Validation

We validate Proposition C.1 by monitoring the false negative rate (FNR), the fraction of sampled negatives that actually belong to the same instance as their anchor, and the empirical margin $\Delta_{\mathrm{emp}} = \mathrm{NegMean} - \mathrm{PosMean}$. Defining $p = 1 - \mathrm{FNR}$ as the success probability of sampling a true negative, Figure 3 shows the empirical margin every 10k iterations for $\lambda_{\mathrm{pxl}} \in \{0.01, 0.1, 0.2, 0.5\}$ (left), the false negative rate for $\lambda_{\mathrm{pxl}} = 0.1$ (center, dashed at $p = 0.5$), and the raw contrastive loss for $\lambda_{\mathrm{pxl}} = 0.1$ (right). Throughout training we observe $p > 0.9$ and a linear increase of $\Delta_{\mathrm{emp}}$ with $\lambda_{\mathrm{pxl}}$, in agreement with Proposition C.1.

Figure 3: (Left) Empirical margin (NegMean − PosMean) every 10k iterations for various $\lambda_{\mathrm{pxl}}$. (Center) False negative rate (FNR) for $\lambda_{\mathrm{pxl}} = 0.1$, dashed at $p = 0.5$. (Right) Contrastive loss for $\lambda_{\mathrm{pxl}} = 0.1$.
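A sketch of how these two diagnostics could be computed at each logging interval, assuming ground-truth instance ids are available at feature resolution (the function name and tensor layout are ours, not the paper's):

```python
import torch

def margin_and_fnr(s_pos, s_neg, anchor_ids, neg_ids):
    """Diagnostics used to check Proposition C.1 (illustrative only).

    s_pos: (B, N) positive similarities; s_neg: (B, N, R) negative similarities.
    anchor_ids: (B, N) ground-truth instance id of each anchor pixel.
    neg_ids:    (B, N, R) instance id of each sampled negative.
    """
    # Delta_emp = NegMean - PosMean, following the definition in Sec. D.3.
    delta_emp = s_neg.mean() - s_pos.mean()

    # FNR: fraction of sampled "negatives" sharing their anchor's instance;
    # p = 1 - FNR is the success probability in Assumption C.1.
    fnr = (neg_ids == anchor_ids.unsqueeze(-1)).float().mean()
    return delta_emp.item(), (1.0 - fnr).item()
```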
D.4 Ablation Studies

We perform a series of ablation experiments on the Cityscapes dataset to isolate the contributions of the different components of the CAST pipeline and the pixel-level contrastive loss.

Impact of Loss Components. In the knowledge distillation stage, the overall loss is a weighted sum of three components: the supervised loss ($\mathcal{L}_{\mathrm{sup}}$), the semi-supervised pseudo-label loss ($\mathcal{L}_{\mathrm{semi}}$), and the pixel-level self-supervised contrastive loss ($\mathcal{L}_{\mathrm{pxl}}$). Table 2 presents an ablation study assessing the contribution of each loss term. The results indicate that adding the pseudo-label loss substantially improves performance over the supervised baseline, and that further inclusion of the self-supervised loss yields the highest overall performance.

Table 2: Loss ablation: impact of the supervised ($\mathcal{L}_{\mathrm{sup}}$), pseudo-label ($\mathcal{L}_{\mathrm{semi}}$), and pixel-level ($\mathcal{L}_{\mathrm{pxl}}$) terms on teacher vs. student maskAP.

  Method                   Lsup  Lsemi  Lpxl   Teacher maskAP (%)  Student maskAP (%)
  (a) Supervised only      ✓     -      -      28.7                21.1
  (b) + Pseudo-label       ✓     ✓      -      29.7                30.7
  (c) + pixel-level loss   ✓     -      ✓      29.6                27.5
  (d) = (b) + (c)          ✓     ✓      ✓      30.5                32.2

Ablation of Negative Sampling via Various Probability Maps. To validate the negative sampling strategy in our pixel-level contrastive loss, Table 3a compares four sampling methods. Uniform: negatives sampled uniformly across the image. Mask-Only: the probability map is derived solely from mask predictions, with class probabilities assumed uniform. Class-Only: the map is generated only from class predictions, assuming a uniform spatial distribution for the mask. Fusion: combining both mask
and class predictions. The fusion strategy achieves the best results, with 32.2 maskAP and 56.5 AP50.

Table 3: Ablation of negative sampling strategies on Cityscapes. (a) Quantitative results for uniform, mask-only, class-only, and fusion samplers (maskAP and AP50). (b) Schematic sketch of the corresponding pixel-level sampling probability distributions over anchor, intra-instance, intra-class, background, boundary-adjacent, and inter-class pixels, with a dynamic threshold for the top-R negative samples.

  (a) Negative sampling strategies
  Method      maskAP (%)  AP50 (%)
  Uniform     29.4        50.2
  Mask-Only   30.6        55.0
  Class-Only  31.1        55.3
  Fusion      32.2        56.5

Hyperparameter Sensitivity. We evaluate CAST's sensitivity to three key hyperparameters on Cityscapes: the contrastive weight $\lambda_{\mathrm{pxl}}$, the number of negatives per anchor $K$, and the temperature $T$, measuring both teacher and student maskAP (%) and AP50 (%). Table 4 reports the full sweep. We find that $\lambda_{\mathrm{pxl}} = 0.2$ and $T = 0.2$ consistently maximize performance. For the number of negatives, $K = 256$ offers the best trade-off: although $K = 512$ yields a slight increase in teacher maskAP (30.9 vs. 30.5) and AP50 (57.1 vs. 56.6), and comparable student metrics, the marginal gains saturate relative to the increased sampling cost. We therefore adopt $K = 256$ throughout.

Table 4: Hyperparameter ablation on Cityscapes.

                    Contrastive weight λpxl         Negatives per anchor K   Temperature T
  Model    Metric   0     0.01  0.1   0.2   0.5     128   256   512          0.1   0.2   0.4
  Teacher  AP       29.7  29.9  30.2  30.5  30.1    30.4  30.5  30.9         30.1  30.5  29.8
           AP50     55.3  55.7  56.1  56.6  56.1    56.3  56.6  57.1         55.9  56.6  55.3
  Student  AP       30.7  30.8  32.1  32.2  30.9    29.8  32.2  32.1         31.9  32.2  31.7
           AP50     54.9  55.2  56.2  56.5  55.7    55.3  56.5  56.6         56.0  56.5  55.8

Student Architecture Variants. We evaluate two design axes for the student model under the CAST distillation protocol: (i) the encoder backbone (with a fixed DPT decoder), and (ii) the decoder head (with a fixed DINOv2-S encoder). Table 5 reports accuracy along with parameter counts on the Cityscapes validation set. The combination of a DINOv2-S encoder and a DPT head achieves the best accuracy with a compact footprint.

Table 5: Architecture ablations on Cityscapes. (a) Encoder backbone (fixed DPT decoder). (b) Decoder head (fixed DINOv2-S encoder).

  (a) Encoder backbone                    (b) Decoder head
  Encoder    maskAP  AP50  Params (M)     Decoder  maskAP  AP50  Params (M)
  ResNet50   25.5    49.3  24             FPN      28.9    52.4  18
  SAM2-S     22.1    39.2  35             DPT      30.7    54.9  22
  DINOv2-S   30.7    54.9  22

Figure 4: Qualitative results on Cityscapes. Guided dist. [3] (top) vs. CAST (bottom).

E Conclusions

We have introduced CAST, a rigorously designed SSKD pipeline that fuses self-training, instance-aware pixel-wise contrastive learning, and final supervised fine-tuning to compress large VFMs into compact student experts with comparable performance. Empirically, our ≈11× smaller student exceeds its adapted teacher by +3.4 maskAP on Cityscapes and +1.5 maskAP on ADE20K, while cutting compute and parameter counts, demonstrating that dense contrastive supervision can unlock substantial gains in low-label regimes. Our theoretical analysis
further guarantees that our negative sampling scheme provably increases inter-instance margins under mild assumptions. Looking forward, streamlining CAST into a single unified objective, extending its evaluation to diverse domains, and integrating uncertainty quantification will be critical steps toward safe, equitable, and broadly deployable segmentation solutions.

Figure 5: Performance–complexity radar chart (normalized), comparing CAST variants (self-training, self-training + pixel loss, student + pixel loss, and fine-tuned student) across Cityscapes and ADE20K maskAP and AP50, parameter count, and FPS.

Broader Impacts. This paper focuses on developing high-fidelity instance segmentation models under tight annotation and compute budgets, running efficiently on workstation GPUs and thus practical for domains such as autonomous driving, mobile and industrial robotics, augmented reality, and agricultural or infrastructure monitoring. These safety-critical applications are especially vulnerable to biases inherited from generalized VFMs (e.g., underrepresented object classes or ambiguous category definitions), which can lead to degraded performance or unfair outcomes. By distilling task-specific expert models, CAST helps reduce these generic failure modes, but careful domain-specific validation, bias auditing, and uncertainty quantification remain essential for deployment.

Limitations. While our CAST framework achieves significant performance improvements, the multi-stage training pipeline introduces additional complexity. Future work could focus on streamlining these stages and further reducing the training cost.

Figure 6: Qualitative results on ADE20K.

Figure 7: Qualitative bias reduction in stage-wise distillation. Top row: pseudo-labels generated by the pretrained teacher. Bottom row: student predictions after distillation and refinement, demonstrating reduced pseudo-label bias and sharper instance boundaries.

References

[1] Inigo Alonso, Alberto Sabater, David Ferstl, Luis Montesano, and Ana C Murillo. Semi-supervised semantic segmentation with pixel-level contrastive learning from a class-wise memory bank. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8219–8228, 2021.
[2] Muhammad Awais, Muzammal Naseer, Salman Khan, Rao Muhammad Anwer, Hisham Cholakkal, Mubarak Shah, Ming-Hsuan Yang, and Fahad Shahbaz Khan. Foundation models defining a new era in vision: a survey and outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025.
[3] Tariq Berrada, Camille Couprie, Karteek Alahari, and Jakob Verbeek. Guided distillation for semi-supervised instance segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 475–483, 2024.
[4] Aleksei Bochkovskii, Amaël Delaunoy, Hugo Germain, Marcel Santos, Yichao Zhou, Stephan R Richter, and Vladlen Koltun. Depth Pro: Sharp monocular metric depth in less than a second. arXiv preprint arXiv:2410.02073, 2024.
[5] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
[6] Paola Cascante-Bonilla, Fuwen Tan, Yanjun Qi, and Vicente Ordonez. Curriculum labeling: Revisiting pseudo-labeling for semi-supervised learning.
In Proceedings of the AAAI conference on artificial intelligence, pages 6912–6920, 2021. [7] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR, 2020. [8] Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models
are strong semi-supervised learners. Advances in neural information processing systems, 33:22243–22255, 2020. [9] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision, pages 9640–9649, 2021. [10] Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang. Semi-supervised semantic segmentation with cross pseudo supervision. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2613–2622, 2021. [11] Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1290–1299, 2022. [12] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. [13] Roya Firoozi, Johnathan Tucker, Stephen Tian, Anirudha Majumdar, Jiankai Sun, Weiyu Liu, Yuke Zhu, Shuran Song, Ashish Kapoor, Karol Hausman, et al. Foundation models in robotics: Applications, challenges, and the future. The International Journal of Robotics Research, page 02783649241281508, 2023. [14] Kai Gan and Tong Wei. Erasing the bias: Fine-tuning foundation models for semi-supervised learning. arXiv preprint arXiv:2405.11756, 2024. [15] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017. [16] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. [17] Jie Hu, Chen Chen, Liujuan Cao, Shengchuan Zhang, Annan Shu, Guannan Jiang, and Rongrong Ji. Pseudo-label alignment for semi-supervised instance segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16337–16347, 2023. [18] Yuxuan Jiang, Chen Feng, Fan Zhang, and David Bull. Mtkd: Multi-teacher knowledge distillation for image super-resolution. In European Conference on Computer Vision, pages 364–382. Springer, 2024. [19] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv:2304.02643, 2023. [20] Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, Chunyuan Li, Jianwei Yang, Hang Su, et al. Grounding dino: Marrying dino with grounded pre-training for open-set object detection. In European Conference on Computer Vision, pages 38–55. Springer, 2024. [21] Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, and Peter Vajda. Unbiased teacher for semi-supervised object detection. arXiv preprint arXiv:2102.09480, 2021. [22] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023. [23] Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Mubarak Shah. Self-supervised knowledge distillation for few-shot learning.
arXiv preprint arXiv:2006.09785, 2020. [24] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In Proceedings of
the IEEE/CVF international conference on computer vision, pages 12179–12188, 2021. [25] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714, 2024. [26] Tianhe Ren, Shilong Liu, Ailing Zeng, Jing Lin, Kunchang Li, He Cao, Jiayu Chen, Xinyu Huang, Yukang Chen, Feng Yan, et al. Grounded sam: Assembling open-world models for diverse visual tasks. arXiv preprint arXiv:2401.14159, 2024. [27] Changyong Shu, Yifan Liu, Jianfei Gao, Zheng Yan, and Chunhua Shen. Channel-wise knowledge distillation for dense prediction. In Proceedings of the IEEE/CVF international conference on computer vision, pages 5311–5320, 2021. [28] Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, and Xide Xia. Dime-fm: Distilling multimodal and efficient foundation models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15521–15533, 2023. [29] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Advances in neural information processing systems, 30, 2017. [30] Raviteja Vemulapalli, Hadi Pouransari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari, and Oncel Tuzel. Knowledge transfer from vision foundation models for efficient training of small task-specific models. ICML 2024, 2024. [31] Haoxiang Wang, Pavan Kumar Anasosalu Vasu, Fartash Faghri, Raviteja Vemulapalli, Mehrdad Farajtabar, Sachin Mehta, Mohammad Rastegari, Oncel Tuzel, and Hadi Pouransari. Sam-clip: Merging vision foundation models towards semantic and spatial understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3635–3647, 2024. [32] Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3024–3033, 2021. [33] Xuehui Wang, Kai Zhao, Ruixin Zhang, Shouhong Ding, Yan Wang, and Wei Shen. Contrastmask: Contrastive learning to segment every thing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11604–11613, 2022. [34] Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Peize Sun, Zhenguo Li, and Ping Luo. Detco: Unsupervised contrastive learning for object detection. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8392–8401, 2021. [35] Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10687–10698, 2020. [36] Zhenda Xie, Yutong Lin, Zheng Zhang, Yue Cao, Stephen Lin, and Han Hu. Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16684–16693, 2021. [37] Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, and Tianyi Zhou. A survey on knowledge distillation of large language models. arXiv preprint arXiv:2402.13116, 2024. [38] Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Boyu Diao, and Yongjun Xu.
Clip-kd: An empirical study of clip model distillation. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition, pages 15952–15962, 2024. [39] Chuanguang Yang, Xinqiang Yu, Han Yang, Zhulin An, Chengqing Yu, Libo Huang, and Yongjun Xu. Multi-teacher knowledge distillation with reinforcement learning for visual recognition. arXiv preprint arXiv:2502.18510, 2025. [40] Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, and Yinghuan Shi. Revisiting weak-to-strong consistency in semi-supervised semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7236–7246, 2023. [41] Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. arXiv preprint arXiv:2406.09414, 2024. [42] Lihe Yang, Zhen Zhao, and Hengshuang Zhao. Unimatch v2: Pushing the limit of semi-supervised semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025. [43] Haobo Yuan, Xiangtai Li, Tao Zhang, Zilong Huang, Shilin Xu, Shunping Ji, Yunhai Tong, Lu Qi, Jiashi Feng, and Ming-Hsuan Yang. Sa2va: Marrying sam2 with llava for dense grounded understanding of images and videos. arXiv preprint arXiv:2501.04001, 2025. [44] Yitian Zhang, Xu Ma, Yue Bai, Huan Wang, and Yun Fu. Accessing vision foundation models via imagenet-1k. In The Thirteenth International Conference on Learning Representations, 2025. [45] Xiangyu Zhao, Yicheng Chen, Shilin Xu, Xiangtai Li, Xinjiang Wang, Yining Li, and Haian Huang. An open and comprehensive pipeline for unified object grounding and detection. arXiv preprint arXiv:2401.02361, 2024. [46] Yuanyi Zhong, Bodi Yuan, Hong Wu, Zhiqiang Yuan, Jian Peng, and Yu-Xiong Wang. Pixel contrastive-consistent semi-supervised semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7273–7282, 2021. [47] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. International Journal of Computer Vision, 127:302–321, 2019.

Supplementary Material

This document provides additional details to support the main paper, including dataset statistics, full hyperparameter settings, a formal proof, extended training protocols, and additional ablation studies.

A Dataset Splits

Table 6 summarizes the datasets used in our experiments. We use a 10% labeled split of Cityscapes' 2,975 training images (298 labeled / 2,677 unlabeled) and a stratified 20% split of ADE20K's 20,210 training images (1,000 labeled / 2,537 unlabeled). Standard validation sets are retained (500 images for Cityscapes, 2,000 for ADE20K). Exact image-ID lists will be released with our code.

Table 6: Semi-supervised splits used in our experiments.

Dataset      # Classes   Labeled / Unlabeled   Validation
Cityscapes   8           298 / 2,677           500
ADE20K       100         1,000 / 2,537         2,000

B Hyperparameters

Key teacher and student hyperparameters are summarized in Table 7.

Table 7: Hyperparameter Settings

Parameter                     Teacher                                Student
Learning rate                 5.0×10⁻⁵                               Encoder: 5.0×10⁻⁶; Decoder: 5.0×10⁻⁵
Scheduler                     Multi-step (milestones at 0.9, 0.95)   PolyLR (power 0.9)
Batch size                    4                                      8
Weight decay                  0.01                                   0.05
Contrastive loss weight       0.2                                    0.2
Pseudo-label threshold        0.3                                    0.3
Dropout rate                  —                                      0.1
Gradient clipping             —                                      ℓ2 norm 0.1
Optimizer                     AdamW (β1 = 0.9, β2 = 0.999) for both
Augmentations                 Weak: flip, resize; Strong: random resized crop, jitter, grayscale, blur
Loss weights (mask / class)   5 / 2

C Proof Sketch of Proposition 3.1
Proof Sketch. Let $z_a$, $z^+$, and $\{z^-_r\}_{r=1}^{R}$ be the unit-norm embeddings of an anchor pixel, its positive, and $R$ negatives. Define

$$s^+ = \langle z_a, z^+ \rangle, \qquad s^-_r = \langle z_a, z^-_r \rangle,$$

and the pixel-wise contrastive loss

$$\ell(z_a) = -\log \frac{\exp(s^+)}{\exp(s^+) + \sum_{r=1}^{R} \exp(s^-_r)}.$$

Let

$$Z = \exp(s^+) + \sum_{r=1}^{R} \exp(s^-_r), \qquad \alpha_r = \frac{\exp(s^-_r)}{Z}.$$

A straightforward gradient computation gives

$$\nabla_{z_a} \ell = \sum_{r=1}^{R} \alpha_r \, (z^-_r - z^+).$$

Applying one gradient-descent step with step size $\lambda_{\mathrm{pxl}}$:

$$z'_a = z_a - \lambda_{\mathrm{pxl}} \nabla_{z_a} \ell = z_a + \lambda_{\mathrm{pxl}} \sum_{r=1}^{R} \alpha_r \, (z^+ - z^-_r).$$

For a randomly chosen negative $z^-$,

$$\Delta s^+ = \langle z'_a - z_a, z^+ \rangle = \lambda_{\mathrm{pxl}} \sum_{r=1}^{R} \alpha_r \big(1 - \langle z^-_r, z^+ \rangle\big),$$

$$\Delta s^- = \langle z'_a - z_a, z^- \rangle = \lambda_{\mathrm{pxl}} \sum_{r=1}^{R} \alpha_r \big(\langle z^+, z^- \rangle - \langle z^-_r, z^- \rangle\big).$$

By Assumption 3.1, each negative embedding $z^-_r$ is inter-instance with probability $p$, in which case $\langle z^-_r, z^+ \rangle \approx 0$, and intra-instance with probability $1-p$, in which case $\langle z^-_r, z^+ \rangle \approx 1$. Hence

$$\mathbb{E}\big[1 - \langle z^-_r, z^+ \rangle\big] = p \cdot 1 + (1-p) \cdot 0 = p,$$

and since $\sum_{r=1}^{R} \alpha_r = 1$, it follows that

$$\mathbb{E}[\Delta s^+] = \lambda_{\mathrm{pxl}} \sum_{r=1}^{R} \alpha_r \, \mathbb{E}\big[1 - \langle z^-_r, z^+ \rangle\big] = p \, \lambda_{\mathrm{pxl}}.$$

Meanwhile, every term in $\Delta s^-$ involves an inter-instance inner product, either $\langle z^+, z^- \rangle$ or $\langle z^-_r, z^- \rangle$, each of which vanishes in expectation, so $\mathbb{E}[\Delta s^-] \approx 0$. Therefore

$$\mathbb{E}[\Delta s^+ - \Delta s^-] = p \, \lambda_{\mathrm{pxl}} - 0 = \Theta(p \, \lambda_{\mathrm{pxl}}) = \varepsilon > 0,$$

i.e., one update on $\mathcal{L}_{\mathrm{pxl}}$ increases the expected inter-instance margin by $\varepsilon$.

Remark C.1 (Why $\langle z^+, z^- \rangle \approx 0$ holds). Under the InfoNCE objective (§3.2), the normalized weights for negative pairs, $\alpha_r = e^{s^-_r} / \big(e^{s^+} + \sum_r e^{s^-_r}\big)$, vanish at convergence, i.e., $\alpha_r \approx 0$. Moreover, in high-dimensional embeddings, random unit vectors have inner products concentrating near zero, and contrastive training further pushes these negative similarities into a tight, small-magnitude distribution [7]. Thus it is reasonable to approximate $\langle z^+, z^- \rangle \approx 0$ up to $O(1/\sqrt{D})$ fluctuations.

D More Training Details

All teacher models are fine-tuned for 1k iterations on the labeled set, followed by 5k iterations of self-training with pseudo-labels. For student models, training on the Cityscapes dataset spans 90k iterations (consistent with prior work [11]), while the mini-ADE20K dataset is trained for 80k iterations. Finally, both datasets undergo an additional supervised fine-tuning phase of 2k iterations.

E Additional Ablations

E.1 Loss Variant: InfoNCE vs. Margin Hinge

This ablation evaluates whether enforcing a fixed positive–negative margin can match or improve upon InfoNCE. Replacing our asymmetric InfoNCE (§3.2) with a margin-based hinge loss (margin m = 0.2) yields identical maskAP (32.2%) and +0.6 AP50, at the cost of 1.6× longer training.

Table 8: Loss Variant Ablation. Default InfoNCE vs. margin-based hinge (m = 0.2).

Loss Variant                 maskAP (%)   AP50 (%)
Asymmetric InfoNCE (§3.2)    32.2         56.5
Margin hinge (m = 0.2)       32.2         57.1

E.2 Ablation: Debias Score Formulation

We evaluate three instantiations of the debias score function $s_{\mathrm{deb}}$ (§3.2):

• Original $s_{\mathrm{deb}}$: fusion of mask and class confidences (ours).
• $s_{\mathrm{deb}}^2$: square each score to amplify high-confidence negatives.
• $\sqrt{s_{\mathrm{deb}}}$: take the square root of each score to temper the bias.

Table 9: Debias Score Formulation Ablation. Original vs. squared vs. square-root debias scores.

Score Variant   maskAP (%)   AP50 (%)
Original        32.2         56.5
Squared         32.0         56.3
Square-root     31.9         56.2

E.3 Ablation: Negative Sampling Scope

We evaluate two negative sampling scopes: (i) sampling only within the current mini-batch vs. (ii) sampling from a small memory bank of past pixel embeddings.
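To ground the quantities used in the proof sketch of Appendix C and the loss variants compared in E.1, the following is a minimal numeric sketch. Random unit-norm embeddings, the dimension D = 128, R = 256 negatives, and a mean-over-negatives hinge form are all illustrative assumptions, not choices taken from the paper.

```python
# Minimal numeric sketch of the pixel-wise InfoNCE loss (Appendix C) and the
# margin-hinge variant (E.1). All concrete values here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D, R = 128, 256                                   # embedding dim, negatives per anchor
unit = lambda v: v / np.linalg.norm(v)
z_a = unit(rng.normal(size=D))                    # anchor pixel embedding z_a
z_p = unit(z_a + 0.1 * rng.normal(size=D))        # positive z+: close to the anchor
z_n = np.stack([unit(rng.normal(size=D)) for _ in range(R)])  # negatives z-_r

s_pos = z_a @ z_p                                 # s+
s_neg = z_n @ z_a                                 # s-_r, shape (R,)

# InfoNCE: -log exp(s+) / (exp(s+) + sum_r exp(s-_r))
logits = np.concatenate(([s_pos], s_neg))
info_nce = -s_pos + np.log(np.exp(logits).sum())

# Margin hinge (m = 0.2): penalize negatives within margin m of the positive.
m = 0.2
hinge = np.maximum(0.0, m - (s_pos - s_neg)).mean()

print(f"s+ = {s_pos:.3f}  InfoNCE = {info_nce:.3f}  hinge(m=0.2) = {hinge:.3f}")
```

In this toy setting the random negatives have near-zero similarity to the positive, which is exactly the concentration behavior that Remark C.1 appeals to.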
Vision-Language-Action Model with Open-World Embodied Reasoning from Pretrained Knowledge

Zhongyi Zhou¹,²*, Yichen Zhu¹*†, Junjie Wen¹, Chaomin Shen², Yi Xu¹
¹Midea Group  ²East China Normal University
chatvla-2.github.io

*Equal Contribution. Work done during Zhongyi Zhou's internship at Midea Group. †Corresponding Author. Preprint. Under review.

Figure 1: Our proposed ChatVLA-2 model enables generalized open-world embodied reasoning and reasoning-following abilities. We designed two tasks (a math matching game and a toy placement experiment) to demonstrate its generalization ability.

Abstract

Vision-language-action (VLA) models have emerged as the next generation of models in robotics. However, despite leveraging powerful pre-trained Vision-Language Models (VLMs), existing end-to-end VLA systems often lose key capabilities during fine-tuning as the model adapts to specific robotic tasks. We argue that a generalizable VLA model should retain and expand upon the VLM's core competencies: 1) Open-world embodied reasoning: the VLA should inherit the knowledge of the VLM, i.e., recognize anything that the VLM can recognize, be capable of solving math problems, and possess visual-spatial intelligence; 2) Reasoning following: effectively translating open-world reasoning into actionable steps for the robot. In this work, we introduce ChatVLA-2, a novel mixture-of-expert VLA model coupled with a specialized three-stage training pipeline designed to preserve the VLM's original strengths while enabling actionable reasoning. To validate our approach, we design a math-matching task wherein a robot interprets math problems written on a whiteboard and picks corresponding number cards from a table to solve equations. Remarkably, our method exhibits exceptional mathematical reasoning and OCR capabilities, despite these abilities not being explicitly trained within the VLA.
Furthermore, we demonstrate that the VLA possesses strong spatial reasoning skills, enabling it to interpret novel directional instructions involving previously unseen objects. Overall, our method showcases reasoning and comprehension abilities that significantly surpass state-of-the-art imitation learning methods such as OpenVLA, DexVLA, and π0. This work represents a substantial advancement toward developing truly generalizable robotic foundation models endowed with robust reasoning capacities.

1 Introduction

If I have seen further, it is by standing on the shoulders of giants.
Isaac Newton

Vision-language-action models (VLAs) have become a popular approach for tasks in robotic manipulation, navigation, and even full-body control. They have demonstrated remarkable capabilities in learning dexterous manipulation, tackling long-horizon tasks [1, 2], and enabling open-world generalization [3, 4]. The success of VLAs, in contrast to traditional imitation learning methods, lies in their integration of pre-trained Vision-Language Models (VLMs). By leveraging the mature neural architectures of language models and multimodal networks, along with advanced training techniques and
pre-trained knowledge from VLMs, VLAs significantly enhance robotic learning. This allows robots to better understand and interact with the world while improving their ability to perform complex physical tasks. Intuitively, building a VLA model on a powerful pre-trained VLM, such as PaliGemma [5] or Qwen-VL [6], should equip the robot not only with stronger vision-language feature embeddings but also with the comprehensive capabilities inherent to VLMs, including recognizing everyday objects, reasoning about spatial relationships, and solving mathematical problems.

Consider a simple task: writing down the answer to the equation 10 + 11 = . Such a task is trivially easy for humans. A conventional hierarchical model would first leverage a pre-trained VLM to produce the answer (21), then invoke a low-level policy network to physically write it down. However, why might a VLA model struggle with such a simple task if it has never encountered the specific equation in its training data? In practice, fine-tuning on robotics-specific datasets often erodes the original pre-trained knowledge of the VLM. For example, ChatVLA [7] illustrates that adapting a VLA model specifically for robotic control can cause previously acquired general knowledge to degrade significantly. As a result, the VLA model may fail to accomplish tasks that seem trivial to humans, simply because these tasks were absent from the training dataset. Such a gap leads to a natural question: How can we build VLA models that both keep their VLM prior intact and actively leverage it to achieve superior generalization in robotic control?

In this study, we introduce ChatVLA-2, a significant advancement toward a truly generalizable robotic foundation model. The goal of ChatVLA-2 is not to construct an omnipotent robot model capable of executing every conceivable task. Instead, our primary objective is to demonstrate the feasibility of leveraging the pre-trained knowledge embedded within the VLM backbone. By doing so, we enable end-to-end robotic systems to generalize across diverse tasks that traditionally require explicit planning by an external agent. We argue that this generalization can be achieved by adhering to two fundamental principles:

• Identifying overlapping feature spaces between multimodal understanding and robot control. Image-text data and robotic control data generally reside in distinct feature spaces, often resulting in competition for shared parameter space within models. ChatVLA addresses this by employing separate static experts, one dedicated to multimodal understanding and another specialized for robotic control, to ensure the clear separation of these tasks into distinct feature spaces. This separation allows VLA models to excel independently in both domains. However, the isolated nature of these feature spaces currently limits the transfer of pre-trained knowledge to robotic control tasks. If mutually beneficial features could be effectively preserved and distinct task-specific features disentangled, the VLA model would be better positioned to intuitively leverage its pre-trained knowledge, thus significantly enhancing its generalization capability in robotic control.

• Ensuring VLA models act according to their internal reasoning. Although VLA models demonstrate the capability for sophisticated internal reasoning, it remains uncertain whether their generated robotic actions accurately reflect this internal thought process. Previous
research indicates that even large language models frequently produce outputs inconsistent with their own thinking process. By ensuring that the actions output by VLA models reliably follow their reasoning processes, we can substantially enhance their ability to generalize effectively across diverse and previously unseen tasks.

To achieve this, we propose a novel VLA model architecture employing a dynamic mixture-of-experts within the VLM backbone. This design explicitly disentangles the feature spaces related to multimodal understanding and robotic action while adaptively identifying and preserving their shared representations. Additionally, we introduce a straightforward reasoning-enhancement module designed to align the action expert's output more closely with the model's internal reasoning process. Furthermore, we implement a two-stage training strategy: the initial stage preserves pre-trained multimodal knowledge while simultaneously training robotic actions and establishing connections between these components; during the second stage, the VLM backbone is frozen, and only the action expert remains trainable, explicitly enabling it to learn to generate actions consistent with the internal reasoning derived from the upper levels of the model.

To demonstrate the open-world reasoning and understanding capabilities of ChatVLA-2, we designed two tasks: a math matching game and a toy placement experiment. In the math matching game, we placed a whiteboard in front of the robot and wrote down a mathematical equation for the robot to solve. Several potential answers were placed before the robot, from which it had to select the correct solution and place it on the whiteboard. Importantly, we evaluated the robot entirely on out-of-distribution scenarios, meaning the presented equations never appeared in the training dataset. For evaluating spatial reasoning, we conducted a toy placement experiment. In this task, the robot was instructed to pick up a toy and place it at specific positions relative to various reference objects (e.g., to the right, left, front, behind, top, or bottom of objects). Many of the objects and directional instructions were entirely unseen during training. Therefore, this task required the model to accurately interpret the visual scene, reason about novel spatial instructions, and execute appropriate actions.

Our experiments clearly illustrate the superior generalization capabilities of ChatVLA-2, particularly in reasoning and understanding tasks, surpassing existing imitation-learning approaches such as OpenVLA [8], DexVLA [2], and π0 [1]. This work represents a significant step toward the development of truly generalizable robotic foundation models that transcend the limitations of fine-tuning data by effectively leveraging pre-trained VLM knowledge.

2 Related Work

Vision-language-action models in robot learning. Vision-language-action models (VLAs) form a growing body of research within imitation learning [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] that leverages pre-trained vision-language models (VLMs) as a backbone to enable both language comprehension and observational understanding. These methods typically fine-tune large pre-trained VLMs to predict robot actions [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53].
These methods have demonstrated strong performance across various simulated and real-world tasks, covering diverse robotic embodiments such as bimanual robots, mobile manipulators, legged robots, and humanoids. They also exhibit generalization capabilities across different environments and various objects. However, existing VLA models still lack the ability to generalize beyond the scope of their training data. Despite incorporating pretrained vision-language models (VLMs) as their
backbone, current VLA approaches fail to effectively utilize the pretrained knowledge from these VLMs, limiting robots' capabilities for open-world manipulation. Consequently, this significantly undermines the rationale behind employing pretrained VLMs within large-scale models. In this paper, we introduce ChatVLA-2, a novel model designed specifically to retain and leverage pretrained VLM knowledge, thus enabling robots to perform open-world tasks effectively through pretrained reasoning and extensive general knowledge.

Figure 2: Model architecture. Left: A reasoning-following enhancement module is incorporated to ensure that the VLA model adheres to logical reasoning when performing actions. Right: Our method leverages a dynamic mixture-of-experts architecture to disentangle conflicting features between multimodal understanding and robotic control, while effectively integrating mutually beneficial features.

Embodied Reasoning in VLA models. A substantial amount of research has been dedicated to enhancing vision-language-action (VLA) models by incorporating the chain-of-thought (CoT) [54] methodology, inspired by the recent successes of large language models (LLMs) in various cognitive and reasoning tasks. The primary motivation behind adopting CoT is to replicate the sophisticated reasoning and decision-making capabilities of LLMs within robotic systems, enabling robots to perform more complex, context-aware actions in dynamic, real-world environments. For instance, Embodied-CoT [55] and CoA-VLA [56] utilize structured textual instructions enriched with spatial localization information; CoT-VLA [57] and VPP [58] integrate reasoning via generated visual imagery; and DiffusionVLA [59], DexVLA [2], and π0.5 [3] rely on plain language instructions. However, in these models, reasoning, whether represented through textual instructions or visual cues, is explicitly trained and consequently limited to knowledge contained within the training datasets, restricting their capacity for broader generalization. In this work, we significantly advance this line of research by leveraging pretrained knowledge from VLMs, thereby empowering VLA models with enhanced open-world reasoning and generalization capabilities.

3 Methodology

This section introduces our proposed ChatVLA-2 and is organized into three parts. Section 3.1 provides preliminary background on vision-language-action (VLA) models, Section 3.2 details the neural architecture, and Section 3.3 presents the two-stage training strategy. Together, these components empower the VLA model with open-world reasoning and understanding capabilities.

3.1 Preliminary: Vision-Language-Action Model

VLA models, leveraging pre-trained VLM perception, are becoming a dominant approach in robotic control. Benefiting from large-scale multi-modal pre-training, VLAs demonstrate significant advantages in bimanual manipulation [1, 2], long-horizon task planning [1, 59], and mobile manipulation [3]. We adopt DexVLA [2] as our foundational model architecture. Specifically, we employ the Qwen2-VL [60, 6] model as its core VLM. The image encoders project the robot's visual observations into the same embedding space as the language tokens.
When handling multiple camera views, the visual embeddings from each view are concatenated. The VLM component produces two types of outputs: reasoning tokens and action tokens. The action tokens undergo further processing through a projection module composed of two linear layers and a LayerNorm layer. Additionally, we employ the pre-trained 1B ScaleDP [61] module as our action expert.
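For reference, here is a minimal sketch of the action-token projection module just described (two linear layers followed by a LayerNorm). The class name and dimensions are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the action-token projection module (two linear layers
# and a LayerNorm, per the text). All dimensions here are assumptions.
import torch.nn as nn

class ActionTokenProjector(nn.Module):
    def __init__(self, d_vlm=1536, d_action=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_vlm, d_action),
            nn.Linear(d_action, d_action),
            nn.LayerNorm(d_action),
        )

    def forward(self, action_tokens):    # action_tokens: (batch, n_tokens, d_vlm)
        return self.proj(action_tokens)  # conditioning input for the action expert
```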
We chose DexVLA because it is among the few open-source VLA models that output unstructured textual reasoning, allowing our approach to effectively harness the VLM's pre-trained knowledge and enabling the VLA model to generalize across diverse scenes.

3.2 Model Architecture

Dynamic mixture-of-experts. Typically, VLA models use a dense vision-language backbone as their foundational architecture. Prior research [7] indicates that multi-modal understanding and robotic manipulation tasks often compete within the parameter space, causing dense VLA models to exhibit erosion of multi-modal comprehension capabilities. To this end, we integrate a Dynamic Mixture-of-Experts (MoE) [62] architecture to effectively handle the diverse and complex multi-modal inputs encountered across tasks. Specifically, our approach uses an adaptive routing strategy in which expert modules are dynamically selected based on the characteristics of the visual and textual inputs. Ideally, we anticipate that some experts will specialize in task-specific features, such as multi-modal understanding or robot control; these experts focus exclusively on particular tasks, enabling them to learn specialized feature representations through dedicated sets of weights. Conversely, other experts may capture mutually beneficial features shared across multiple tasks, such as spatial reasoning, which is critical for both scene understanding and manipulation. We also expect the gating network to use learned criteria to intelligently evaluate input data, selecting the most appropriate subset of experts for activation. This adaptive strategy ensures efficient allocation of computational resources and reduces unnecessary computation. We use the pre-trained MLP weights to initialize the MLP layers of the experts.

Why are static/shared experts not used? The key to enabling VLA models to generalize in open-world robotic manipulation lies in preserving the pre-trained knowledge. For architectures like Qwen2-VL, whose LLM component lacks native MoE support, introducing static or shared experts would disrupt the original model structure. Such architectural alterations risk rapidly degrading the VLM's pre-trained knowledge, compromising its reasoning capabilities. Dynamic MoE circumvents this issue by preserving the LLM's intact architecture while selectively activating expert modules. This approach ensures the foundational knowledge remains undisturbed while enabling task-specific adaptation. Our empirical studies in Table 3 confirm that dynamic MoE is critical for maintaining the open-world reasoning necessary for generalizable manipulation, as it balances knowledge retention with adaptive learning. In practice, we use a total of eight experts and dynamically select two experts during inference.
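For concreteness, the following is a minimal sketch of a dynamic top-2 mixture-of-experts MLP layer of the kind described above (eight experts, two active per token). The dimensions, class name, and routing details are illustrative assumptions rather than the authors' code.

```python
# A minimal sketch of a dynamic top-2 MoE MLP layer (8 experts, 2 active per
# token). Concrete sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMoEMLP(nn.Module):
    def __init__(self, d_model=1536, d_ff=8960, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Per the text, each expert MLP is initialized from the VLM's
        # pre-trained MLP weights so the original knowledge is preserved.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                # x: (batch, seq, d_model)
        gate_logits = self.router(x)                     # (B, S, n_experts)
        weights, idx = gate_logits.topk(self.top_k, -1)  # pick top-2 experts per token
        weights = F.softmax(weights, dim=-1)             # renormalize over selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e               # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out                                       # combined expert output (residual added outside)
```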
Reasoning-following enhancement module. A distinctive feature of our method is that the model not only follows given instructions but also aligns robotic actions closely with the generated reasoning. Prior approaches, such as DiffusionVLA [59] and DexVLA [2], utilize FiLM layers to incorporate reasoning tokens. These methods primarily handle in-domain reasoning scenarios typically encountered during training, making FiLM layers sufficient for reasoning alignment. In contrast, our approach deals with diverse, novel reasoning types not encountered in the training data. Therefore, our method requires a more robust and flexible VLA model capable of effectively following complex, out-of-distribution reasoning. We introduce an enhanced reasoning-following module designed to improve the reasoning capabilities of action models. Specifically, we replace the original observation embedding with reasoning tokens projected through an MLP. This reasoning representation is then combined with the current timestep embedding and used to condition the generation of scale and shift parameters, effectively injecting reasoning context into the model. Importantly, we incorporate this mechanism exclusively into the latter half of the layers, rather than uniformly across all layers. This design choice aligns with findings from prior studies, such as PointVLA [63] and GR00T N1 [64], which suggest that modifications to the deeper layers of action experts have a smaller impact on robot control. Our results demonstrate that this selective integration allows the model to robustly handle open-world reasoning scenarios without sacrificing in-domain accuracy.
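A minimal sketch of the reasoning-conditioned scale/shift modulation described above, for one block of the action expert, follows. Mean-pooling the reasoning tokens, the dimensions, and the class name are assumptions for illustration, not the authors' code.

```python
# A minimal sketch of reasoning-conditioned scale/shift (adaLN-style) for one
# action-expert block. Pooling strategy and sizes are assumptions.
import torch.nn as nn

class ReasoningConditionedBlock(nn.Module):
    def __init__(self, d_model=1024, d_reason=1536):
        super().__init__()
        self.reason_proj = nn.Linear(d_reason, d_model)        # MLP projection of reasoning tokens
        self.to_scale_shift = nn.Linear(d_model, 2 * d_model)  # produces (scale, shift)
        self.norm = nn.LayerNorm(d_model, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def forward(self, h, reasoning_tokens, t_emb):
        # h: (B, T, d_model); reasoning_tokens: (B, L, d_reason); t_emb: (B, d_model)
        cond = self.reason_proj(reasoning_tokens.mean(dim=1)) + t_emb
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        h_mod = self.norm(h) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return h + self.mlp(h_mod)  # applied only in the latter half of the blocks, per the paper
```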
3.3 Training Strategy

The previous section introduced the neural architecture of ChatVLA-2, which primarily focuses on enabling the VLA model to extract common knowledge from pre-trained data and robot actions more effectively, guiding the robot to adhere more closely to the generated reasoning. However, we argue that this alone is insufficient for effectively training a general-purpose VLA model. Specifically, mixing image-text data and robot data during training makes it challenging to control the learning process effectively. To address this, we propose a dual-stage training strategy designed to enhance the smoothness of robotic control and increase the success rate of task completion.

Figure 3: Training Strategy. We leverage a two-stage training strategy. In the first stage, we perform co-training on image-text data and robot data to empower the VLA with open-world reasoning capabilities. In the second stage, we freeze the entire VLM and train only the action expert, thereby preserving open-world reasoning while enhancing instruction-following abilities in the VLA.

Empowering VLA with open-world embodied reasoning and understanding. Co-training on image-text and robot data is essential for enabling the robot foundation model to reason about and understand scenes in the wild. During this stage, we train the model on both tasks, specifically using the COCO [65], TextVQA [66], and GQA [67] datasets. We also construct a dataset of image-text pairs involving robotics scenarios for fine-tuning purposes. Additional details are provided in the Appendix. We apply text augmentation techniques to increase query diversity across all training data. We deliberately avoid selecting training data to bias the VLA toward specific skills such as OCR, mathematical reasoning, or spatial reasoning, as our goal is to utilize pre-trained knowledge for open-world manipulation. For robot data, we collect 600 trajectories from the math-matching game and 300 trajectories from the toy placement experiment. Similar to DexVLA and π0.5, all robot data are annotated with reasoning phrases. We maintain an image-text to robot data ratio of 1:3, following previous methods. The model is trained for 50k steps, beginning with an initial learning rate of 2e-5 and a warm-up phase for the first 3k steps; subsequently, we apply a cosine learning-rate scheduler, scaling the learning rate down to 2e-6.
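As a concrete reading of the schedule just described (3k-step warm-up to 2e-5, cosine decay to 2e-6 over 50k steps), here is a minimal sketch; the linear warm-up shape and the function name are assumptions, since the paper only states the endpoints and the scheduler family.

```python
# A minimal sketch of the stage-1 learning-rate schedule described above.
# Only the endpoints and the cosine family come from the paper.
import math

def stage1_lr(step, total_steps=50_000, warmup_steps=3_000,
              peak_lr=2e-5, final_lr=2e-6):
    if step < warmup_steps:
        return peak_lr * step / warmup_steps                 # linear warm-up (assumed)
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))      # decays from 1 to 0
    return final_lr + (peak_lr - final_lr) * cosine

# e.g. stage1_lr(3_000) ≈ 2e-5 and stage1_lr(50_000) == 2e-6
```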
Enhancing reasoning-following in VLA. By jointly training the model on both image-text data and robot data, it learns to reason, recognize, and act effectively in open-world scenarios, and this initial stage preserves a significant portion of the pretrained knowledge. However, since our method aims for robots to perform tasks in open-world environments, the reasoning required may not be present in the training data. Thus, it becomes particularly crucial to strengthen the connection between reasoning and action, ensuring that actions accurately follow and execute the reasoning outcomes for generalizable robot control. Specifically, we freeze the pretrained VLM and train only the action expert. By keeping the VLM fixed, we effectively preserve the pretrained knowledge acquired in the initial training stage. Consequently, the robot's actions are guided not just by the initial language instructions and image observations but also, significantly, by the reasoning outputs generated by the upper layers of the model. We found this strategy particularly beneficial in enhancing the model's understanding of and responsiveness to previously unseen reasoning scenarios.

Figure 4: Experimental setup for the math matching game and toy placement. We use a Franka Emika robot equipped with a Robotiq gripper to pick and place items at specified target locations, and ARX R5 bimanual robots with a top RealSense L515 camera. Our experiments demonstrate that the proposed method successfully completes tasks involving previously unseen spatial instructions and novel objects.

4 Experiments

In this section, we conduct extensive real-robot experiments to demonstrate that the end-to-end model is capable of open-world reasoning and understanding and can effectively transfer this knowledge to interactions with the physical world. We do not evaluate on simulation benchmarks, as the VLA capabilities demonstrated by our approach exceed what current simulation benchmarks can assess. We specifically select two scenarios, the math matching game and the toy placement task, to comprehensively evaluate our proposed method. These experiments examine the model's proficiency in mathematical reasoning, spatial reasoning, optical character recognition (OCR), and object recognition and localization, mostly within an open-world context involving scenarios that were not part of the training dataset.

4.1 Mathematical Reasoning: Math Matching Game

Evaluation metrics. We report three types of metrics to evaluate the ability of ChatVLA-2 in manipulation, reasoning, and understanding, in both in-domain and open-world settings. 1) Manipulation success rate: we report the average success rate to measure whether the model completes the task. 2) OCR: we assign 1 point for correctly recognizing hand-written numbers, 1 point for identifying card values and their positions, and 2 points for correctly recognizing the sign. 3) Mathematical reasoning: we assign 1 point for a correct answer and 1 point for correctly selecting the card.

Experimental setup. We consider both in-domain and open-world settings. Specifically, for the in-domain evaluation, all numbers and mathematical symbols exactly match those in the training dataset. However, since numbers and symbols are handwritten, variations in calligraphic style inevitably occur.
For the open-world setting, the mathematical equations tested are entirely absent from the training data.

Robot setup. We utilize the bimanual, ALOHA-style ARX R5 robot arm system, featuring two arms, each with 6 degrees of freedom (6-DoF), and equipped with a top RealSense L515 camera. This configuration results in a 14-dimensional combined state and action space (six joints plus one gripper per arm). Data collection is performed through teleoperation
equipment at a frequency of 50 Hz.

Experimental results. The experimental results are presented in Table 1. We compare our method against several state-of-the-art models, including Octo [68], Diffusion Policy [30], OpenVLA [8], GR00T N1 [64], DexVLA [2], ChatVLA [7], and π0 [1]. We first examine the in-domain performance.

Table 1: Results on the math matching game. We evaluate multiple models in both the in-domain setting, where the data is presented in the training data, and the open-world setup. We report the average OCR score (4 points total), the mathematical reasoning score (2 points total), and the average success rate of task execution in both setups.

                         In-Domain                        Open-World
Method                   Reasoning Score   Success Rate   OCR Score   Math Reasoning Score   Success Rate
Octo [68]                /                 2/13           /           /                      0/52
Diffusion Policy [30]    /                 7/13           /           /                      3/52
OpenVLA [29]             /                 2/13           /           /                      0/52
GR00T N1 [64]            /                 4/13           /           /                      3/52
DexVLA [2]               5.2/6             12/13          0.21/4      0.06/2                 10/52
ChatVLA [7]              5.8/6             10/13          1.08/4      0.42/2                 4/52
π0 [1]                   /                 12/13          /           /                      8/52
ChatVLA-2 (Ours)         6.0/6             11/13          3.58/4      1.73/2                 43/52

For mathematical reasoning and OCR, only a few models, such as DexVLA and ChatVLA, can output language-based responses. They demonstrate reasonable accuracy on reasoning and OCR, achieving performance comparable to ChatVLA-2. Similarly, in manipulation, ChatVLA-2 does not significantly outperform models like π0 and DexVLA, which already exhibit near-perfect performance. However, substantial differences emerge in open-world scenarios. Even ChatVLA, despite its multimodal understanding capability, fails these tasks when the robot control expert is activated. Consequently, the compared methods completed almost no manipulation tasks in open-world conditions. In contrast, ChatVLA-2 achieves meaningful performance: 3.58 in OCR accuracy, 1.73 in mathematical reasoning accuracy, and an 82.7% manipulation success rate. These experiments highlight the core contribution of our approach: although it may not significantly outperform others on well-trained (in-domain) manipulation tasks, ChatVLA-2 demonstrates substantial superiority in open-world scenarios, successfully handling novel mathematical equations and unfamiliar typography. This represents a significant advancement from zero to effective generalization capability.

4.2 Spatial Reasoning: Toy Placement

Evaluation metrics. We measure the model with three metrics. First, as in the previous experiment, we report the average success rate of robot actions. Additionally, we report open-world object recognition performance in the reasoning process: in the output reasoning, the model must produce bounding boxes for the targeted objects.

Experimental setup. We consider both in-domain and open-world settings. For the in-domain evaluation, all objects appear in the training set. For the open-world evaluation, the target and reference objects are entirely unseen during training. The model must recognize all objects in an open-world setting, identify the reference objects mentioned in the instruction, understand spatial relations, and execute the placement accordingly.

Robot setup. We utilize a 7-degree-of-freedom Franka Emika robot equipped with a Robotiq gripper and one ZED 2 camera positioned on the right side. Data collection is performed using teleoperation equipment at a frequency of 15 Hz.

Experimental results. The experimental results are presented in Table 2.
In the in-domain setting, our proposed method performs comparably to DexVLA and π0. While ChatVLA was capable of recognizing novel objects in the open-world setting, its object recognition score (0.71) remained much lower than our method's 0.94. For action execution, models other than our method and π0 exhibited near-random success rates in this setting. Even ChatVLA, despite demonstrating some reasoning ability, showed limited open-world robot manipulation ability. In contrast, our method achieved an average success rate of 81.4%, representing a 3.52× improvement over DexVLA. This result highlights the strong spatial reasoning and reasoning-following capabilities of our method in open-world scenarios.

Table 2: Results on the toy placement task. We evaluate multiple models in both the in-domain setting, where the data is presented in the training data, and the open-world setup. We report the average object recognition score, spatial affordance score, and task success rate in both setups.

                         In-Domain                                                 Open-World
Method                   Object recognition   Spatial Affordance   Success Rate   Object recognition   Spatial Affordance   Success Rate
Octo [68]                /                    /                    19/67          /                    /                    13/156
Diffusion Policy [30]    /                    /                    52/67          /                    /                    17/156
OpenVLA [8]              /                    /                    23/67          /                    /                    10/156
GR00T N1 [64]            /                    /                    31/67          /                    /                    12/156
DexVLA [2]               1                    0.97                 63/67          0.23                 0.12                 36/156
ChatVLA [7]              1                    0.97                 60/67          0.71                 0.35                 22/156
π0 [1]                   /                    /                    61/67          /                    /                    25/156
ChatVLA-2 (Ours)         1                    0.99                 61/67          0.94                 0.88                 127/156

Table 3: Ablation on mixture-of-experts (math matching game, open-world).

Method                     OCR      Math     Avg. Success Rate
Dynamic MoE                3.58/4   1.73/2   43/52
Static MoE + Dynamic MoE   2.38/4   0.92/2   11/52
Shared MoE + Dynamic MoE   3.07/4   1.12/2   25/52
3B Dense Model             0.04/4   0.00/2   2/52
7B Dense Model             0.08/4   0.00/2   8/52

Table 4: Ablation on training strategy (math matching game).

Stage 1   Stage 2   OCR    Math   Avg. Success Rate
✓                   3.20   1.33   12/52
          ✓         0.15   0.04   3/52
✓         ✓         3.58   1.73   43/52

4.3 Ablation Study

How important is the mixture-of-experts in VLA? This section investigates whether the mixture-of-experts (MoE) mechanism is crucial for enabling VLA models to generalize in reasoning and understanding in an open-world setting. Specifically, using the exact same training configuration, we compare against baseline models that do not incorporate MoE. Since MoE introduces additional computational overhead during inference, we further compare against a larger VLA configuration, specifically a 7B VLM, which has a significantly higher number of parameters at test time. The experimental results are presented in Table 3. We conducted experiments on the math matching game and observed a significant drop in the average success rate without dynamic MoE. We hypothesize that this decline is due to conflicts in the parameter space between robotic actions and reasoning/understanding; the mixture-of-experts approach effectively disentangles the feature spaces associated with these conflicting objectives. Furthermore, we find that increasing the number of parameters to 7B does not alleviate these conflicts. Upon investigating the cause of the failure, we discovered that both dense models fail completely on unseen mathematical equations. Examining the mathematical reasoning and OCR scores, we find that when the dense models encounter unseen equations, they often fail to arrive at the correct answer and, in most cases, recognize the wrong answer instead.

Ablation study on two-stage training. Our paper proposes a
two-stage training strategy designed explicitly to enable VLA models to act effectively in open-world scenarios and consistently follow the generated reasoning. Table 4 presents the ablation study isolating the effects of Stage 1 and Stage 2 on model performance in the math matching game. When Stage 2 was excluded, the model's robotic control performance in open-world scenarios dropped to 23% under the same number of training steps. This suggests that while open-world reasoning is generated in Stage 1, it has not been effectively injected into action execution. In contrast, removing Stage 1 resulted in near-zero open-world reasoning scores, on both the OCR and mathematical tasks, which highlights the critical role of co-training with image-text data.

5 Conclusion

Imitation learning typically requires extensive data to master specialized skills for particular tasks. Developing models capable of reasoning and general understanding within open-world scenarios remains a frontier research topic that has yet to be thoroughly explored. In this work, we introduce ChatVLA-2, which endows vision-language-action (VLA) models with the capability to perform diverse tasks by leveraging innate reasoning and understanding abilities derived from pretrained vision-language models in an end-to-end manner. Our core contribution is the introduction of a dynamic Mixture-of-Experts (MoE) module integrated atop a pretrained vision-language backbone. This module efficiently manages different task requirements: certain experts share common multimodal features, while others are dedicated to task-specific representations. Additionally, we propose a two-stage training strategy: initially, we guide the VLA model to establish connections between pretrained multimodal knowledge and robotic actions; subsequently, we introduce a reasoning-following stage, enabling the model to comprehend reasoning outputs and effectively translate them into corresponding actions.

References

[1] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, et al. pi_0: A vision-language-action flow model for general robot control. arXiv preprint arXiv:2410.24164, 2024. [2] Junjie Wen, Yichen Zhu, Jinming Li, Zhibin Tang, Chaomin Shen, and Feifei Feng. Dexvla: Vision-language model with plug-in diffusion expert for general robot control. arXiv preprint arXiv:2502.05855, 2025. [3] Physical Intelligence, Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, et al. pi0.5: a vision-language-action model with open-world generalization. arXiv preprint arXiv:2504.16054, 2025. [4] Minjie Zhu, Yichen Zhu, Jinming Li, Zhongyi Zhou, Junjie Wen, Xiaoyu Liu, Chaomin Shen, Yaxin Peng, and Feifei Feng. Objectvla: End-to-end open-world object manipulation without demonstration. arXiv preprint arXiv:2502.19250, 2025. [5] Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, et al. Paligemma: A versatile 3b vlm for transfer. arXiv preprint arXiv:2407.07726, 2024. [6] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 1(2):3, 2023.
[7] Zhongyi Zhou, Yichen Zhu, Minjie Zhu, Junjie Wen, Ning Liu, Zhiyuan Xu, Weibin Meng, Ran Cheng, Yaxin Peng, Chaomin Shen, et al. Chatvla: Unified multimodal understanding and robot control with vision-language-action model. arXiv preprint arXiv:2502.14420, 2025.
[8] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024. [9] Yanjie Ze, Gu Zhang, Kangning Zhang, Chenyuan Hu, Muhan Wang, and Huazhe Xu. 3d diffusion policy: Generalizable visuomotor policy learning via simple 3d representations. In ICRA 2024 Workshop on 3D Visual Representations for Robot Manipulation, 2024. [10] Yanjie Ze, Nicklas Hansen, Yinbo Chen, Mohit Jain, and Xiaolong Wang. Visual reinforcement learning with self-supervised 3d representations. IEEE Robotics and Automation Letters, 8(5):2890–2897, 2023. [11] Kun Wu, Yichen Zhu, Jinming Li, Junjie Wen, Ning Liu, Zhiyuan Xu, Qinru Qiu, and Jian Tang. Discrete policy: Learning disentangled action space for multi-task robotic manipulation. arXiv preprint arXiv:2409.18707, 2024. [12] Qixiu Li, Yaobo Liang, Zeyu Wang, Lin Luo, Xi Chen, Mozheng Liao, Fangyun Wei, Yu Deng, Sicheng Xu, Yizhong Zhang, et al. Cogact: A foundational vision-language-action model for synergizing cognition and action in robotic manipulation. arXiv preprint arXiv:2411.19650, 2024. [13] Yichen Zhu, Zhicai Ou, Feifei Feng, and Jian Tang. Any2policy: Learning visuomotor policy with any-modality. Advances in Neural Information Processing Systems, 37:133518–133540, 2024. [14] Chuan Wen, Xingyu Lin, John So, Kai Chen, Qi Dou, Yang Gao, and Pieter Abbeel. Any-point trajectory modeling for policy learning. arXiv preprint arXiv:2401.00025, 2023. [15] Yichen Zhu, Zhicai Ou, Xiaofeng Mou, and Jian Tang. Retrieval-augmented embodied agents. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17985–17995, 2024. [16] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, et al. Rt-1: Robotics transformer for real-world control at scale. arXiv preprint arXiv:2212.06817, 2022. [17] Soroush Nasiriany, Sean Kirmani, Tianli Ding, Laura Smith, Yuke Zhu, Danny Driess, Dorsa Sadigh, and Ted Xiao. Rt-affordance: Affordances are versatile intermediate representations for robot manipulation, 2024. [18] Songming Liu, Lingxuan Wu, Bangguo Li, Hengkai Tan, Huayu Chen, Zhengyi Wang, Ke Xu, Hang Su, and Jun Zhu. Rdt-1b: a diffusion foundation model for bimanual manipulation. arXiv preprint arXiv:2410.07864, 2024. [19] Yueru Jia, Jiaming Liu, Sixiang Chen, Chenyang Gu, Zhilue Wang, Longzan Luo, Lily Lee, Pengwei Wang, Zhongyuan Wang, Renrui Zhang, et al. Lift3d foundation policy: Lifting 2d large-scale pretrained models for robust 3d robotic manipulation. arXiv preprint arXiv:2411.18623, 2024. [20] Kun Wu, Chengkai Hou, Jiaming Liu, Zhengping Che, Xiaozhu Ju, Zhuqin Yang, Meng Li, Yinuo Zhao, Zhiyuan Xu, Guang Yang, et al. Robomind: Benchmark on multi-embodiment intelligence normative data for robot manipulation. arXiv preprint arXiv:2412.13877, 2024. [21] Xiaogang Jia, Qian Wang, Atalay Donat, Bowen Xing, Ge Li, Hongyi Zhou, Onur Celik, Denis Blessing, Rudolf Lioutikov, and Gerhard Neumann. Mail: Improving imitation learning with selective state space models. In 8th Annual Conference on Robot Learning. [22] Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, and Sergey Levine. Robotic control via embodied chain-of-thought reasoning. arXiv preprint arXiv:2407.08693, 2024.
[23] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Xi Chen, Krzysztof Choromanski, Tianli Ding, Danny Driess, Avinava Dubey, Chelsea Finn,
et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818, 2023. [24] Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, et al. Vision-language foundation models as effective robot imitators. arXiv preprint arXiv:2311.01378, 2023. [25] Jiangyong Huang, Silong Yong, Xiaojian Ma, Xiongkun Linghu, Puhao Li, Yan Wang, Qing Li, Song-Chun Zhu, Baoxiong Jia, and Siyuan Huang. An embodied generalist agent in 3d world. In ICLR 2024 Workshop: How Far Are We From AGI. [26] Junjie Wen, Yichen Zhu, Jinming Li, Minjie Zhu, Kun Wu, Zhiyuan Xu, Ning Liu, Ran Cheng, Chaomin Shen, Yaxin Peng, et al. Tinyvla: Towards fast, data-efficient vision-language-action models for robotic manipulation. arXiv preprint arXiv:2409.12514, 2024. [27] Karl Pertsch, Kyle Stachowicz, Brian Ichter, Danny Driess, Suraj Nair, Quan Vuong, Oier Mees, Chelsea Finn, and Sergey Levine. Fast: Efficient action tokenization for vision-language-action models. arXiv preprint arXiv:2501.09747, 2025. [28] Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Lucy Xiaoyang Shi, James Tanner, Quan Vuong, Anna Walling, Haohuan Wang, and Ury Zhilinsky. π0: A vision-language-action flow model for general robot control, 2024. [29] Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, and Chelsea Finn. Openvla: An open-source vision-language-action model. arXiv preprint arXiv:2406.09246, 2024. [30] Cheng Chi, Siyuan Feng, Yilun Du, Zhenjia Xu, Eric Cousineau, Benjamin Burchfiel, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. arXiv preprint arXiv:2303.04137, 2023. [31] Minjie Zhu, Yichen Zhu, Jinming Li, Junjie Wen, Zhiyuan Xu, Ning Liu, Ran Cheng, Chaomin Shen, Yaxin Peng, Feifei Feng, et al. Scaling diffusion policy in transformer to 1 billion parameters for robotic manipulation. arXiv preprint arXiv:2409.14411, 2024. [32] Yixiao Wang, Yifei Zhang, Mingxiao Huo, Ran Tian, Xiang Zhang, Yichen Xie, Chenfeng Xu, Pengliang Ji, Wei Zhan, Mingyu Ding, et al. Sparse diffusion policy: A sparse, reusable, and flexible policy for robot learning. arXiv preprint arXiv:2407.01531, 2024. [33] Aaditya Prasad, Kevin Lin, Jimmy Wu, Linqi Zhou, and Jeannette Bohg. Consistency policy: Accelerated visuomotor policies via consistency distillation. arXiv preprint arXiv:2405.07503, 2024. [34] Kevin Black, Michael Janner, Yilun Du, Ilya Kostrikov, and Sergey Levine. Training diffusion models with reinforcement learning. arXiv preprint arXiv:2305.13301, 2023. [35] Kevin Black, Mitsuhiko Nakamoto, Pranav Atreya, Homer Walke, Chelsea Finn, Aviral Kumar, and Sergey Levine. Zero-shot robotic manipulation with pretrained image-editing diffusion models. arXiv preprint arXiv:2310.10639, 2023. [36] Sudeep Dasari, Oier Mees, Sebastian Zhao, Mohan Kumar Srirama, and Sergey Levine. The ingredients for robotic diffusion transformers. arXiv preprint arXiv:2410.10088, 2024. [37] Fanqi Lin, Yingdong Hu, Pingyue Sheng, Chuan Wen, Jiacheng You, and Yang Gao.
Data scaling laws in imitation learning for robotic manipulation, 2024.
[38] Moritz Reuss, Ömer Erdinç Yağmurlu,
Fabian Wenzel, and Rudolf Lioutikov. Multimodal diffusion transformer: Learning versatile behavior from multimodal goals. 2024.
[39] Tony Z Zhao, Jonathan Tompson, Danny Driess, Pete Florence, Seyed Kamyar Seyed Ghasemipour, Chelsea Finn, and Ayzaan Wahid. Aloha unleashed: A simple recipe for robot dexterity. In 8th Annual Conference on Robot Learning.
[40] Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Tommaso Biancalani, and Sergey Levine. Fine-tuning of continuous-time diffusion models as entropy-regularized control. arXiv preprint arXiv:2402.15194, 2024.
[41] Masatoshi Uehara, Yulai Zhao, Kevin Black, Ehsan Hajiramezanali, Gabriele Scalia, Nathaniel Lee Diamant, Alex M Tseng, Sergey Levine, and Tommaso Biancalani. Feedback efficient online fine-tuning of diffusion models. arXiv preprint arXiv:2402.16359, 2024.
[42] Gemini Robotics Team, Saminda Abeyruwan, Joshua Ainslie, Jean-Baptiste Alayrac, Montserrat Gonzalez Arenas, Travis Armstrong, Ashwin Balakrishna, Robert Baruch, Maria Bauza, Michiel Blokzijl, et al. Gemini robotics: Bringing AI into the physical world. arXiv preprint arXiv:2503.20020, 2025.
[43] Pengxiang Ding, Jianfei Ma, Xinyang Tong, Binghong Zou, Xinxin Luo, Yiguo Fan, Ting Wang, Hongchao Lu, Panzhong Mo, Jinxin Liu, et al. Humanoid-vla: Towards universal humanoid control with visual integration. arXiv preprint arXiv:2502.14795, 2025.
[44] Can Cui, Pengxiang Ding, Wenxuan Song, Shuanghao Bai, Xinyang Tong, Zirui Ge, Runze Suo, Wanqi Zhou, Yang Liu, Bofang Jia, et al. Openhelix: A short survey, empirical analysis, and open-source dual-system vla model for robotic manipulation. arXiv preprint arXiv:2505.03912, 2025.
[45] Pengxiang Ding, Han Zhao, Wenjie Zhang, Wenxuan Song, Min Zhang, Siteng Huang, Ningxi Yang, and Donglin Wang. Quar-vla: Vision-language-action model for quadruped robots. In European Conference on Computer Vision, pages 352–367. Springer, 2024.
[46] Jiaming Liu, Hao Chen, Pengju An, Zhuoyang Liu, Renrui Zhang, Chenyang Gu, Xiaoqi Li, Ziyu Guo, Sixiang Chen, Mengzhen Liu, et al. Hybridvla: Collaborative diffusion and autoregression in a unified vision-language-action model. arXiv preprint arXiv:2503.10631, 2025.
[47] Qingwen Bu, Hongyang Li, Li Chen, Jisong Cai, Jia Zeng, Heming Cui, Maoqing Yao, and Yu Qiao. Towards synergistic, generalized, and efficient dual-system for robotic manipulation. arXiv preprint arXiv:2410.08001, 2024.
[48] Qingwen Bu, Yanting Yang, Jisong Cai, Shenyuan Gao, Guanghui Ren, Maoqing Yao, Ping Luo, and Hongyang Li. Univla: Learning to act anywhere with task-centric latent actions. arXiv preprint arXiv:2505.06111, 2025.
[49] Jiaming Liu, Mengzhen Liu, Zhenyu Wang, Pengju An, Xiaoqi Li, Kaichen Zhou, Senqiao Yang, Renrui Zhang, Yandong Guo, and Shanghang Zhang. Robomamba: Efficient vision-language-action model for robotic reasoning and manipulation. Advances in Neural Information Processing Systems, 37:40085–40110, 2024.
[50] Yang Yue, Yulin Wang, Bingyi Kang, Yizeng Han, Shenzhi Wang, Shiji Song, Jiashi Feng, and Gao Huang. Deer-vla: Dynamic inference of multimodal large language models for efficient robot execution. Advances in Neural Information Processing Systems, 37:56619–56643, 2024.
[51] Shengliang Deng, Mi Yan, Songlin Wei, Haixin Ma, Yuxin Yang, Jiayi Chen, Zhiqi Zhang, Taoyu Yang, Xuheng Zhang, Heming Cui, et al. Graspvla: A grasping foundation model pre-trained on billion-scale synthetic action data. arXiv preprint arXiv:2505.03233, 2025.
[52] Jiazhao Zhang, Kunyu Wang, Shaoan Wang, Minghan Li, Haoran Liu, Songlin Wei, Zhongyuan Wang, Zhizheng Zhang, and He Wang. Uni-navid: A
video-based vision-language-action model for unifying embodied navigation tasks. arXiv preprint arXiv:2412.06224, 2024.
[53] Yuhui Chen, Shuai Tian, Shugao Liu, Yingting Zhou, Haoran Li, and Dongbin Zhao. Conrft: A reinforced fine-tuning method for vla models via consistency policy. arXiv preprint arXiv:2502.05450, 2025.
[54] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[55] Michał Zawalski, William Chen, Karl Pertsch, Oier Mees, Chelsea Finn, and Sergey Levine. Robotic control via embodied chain-of-thought reasoning. arXiv preprint arXiv:2407.08693, 2024.
[56] Jinming Li, Yichen Zhu, Zhibin Tang, Junjie Wen, Minjie Zhu, Xiaoyu Liu, Chengmeng Li, Ran Cheng, Yaxin Peng, and Feifei Feng. Improving vision-language-action models via chain-of-affordance. arXiv preprint arXiv:2412.20451, 2024.
[57] Qingqing Zhao, Yao Lu, Moo Jin Kim, Zipeng Fu, Zhuoyang Zhang, Yecheng Wu, Zhaoshuo Li, Qianli Ma, Song Han, Chelsea Finn, et al. Cot-vla: Visual chain-of-thought reasoning for vision-language-action models. arXiv preprint arXiv:2503.22020, 2025.
[58] Yucheng Hu, Yanjiang Guo, Pengchao Wang, Xiaoyu Chen, Yen-Jen Wang, Jianke Zhang, Koushil Sreenath, Chaochao Lu, and Jianyu Chen. Video prediction policy: A generalist robot policy with predictive visual representations. arXiv preprint arXiv:2412.14803, 2024.
[59] Junjie Wen, Minjie Zhu, Yichen Zhu, Zhibin Tang, Jinming Li, Zhongyi Zhou, Chengmeng Li, Xiaoyu Liu, Yaxin Peng, Chaomin Shen, et al. Diffusion-vla: Scaling robot foundation models via unified diffusion and autoregression. arXiv preprint arXiv:2412.03293, 2024.
[60] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.
[61] Minjie Zhu, Yichen Zhu, Jinming Li, Junjie Wen, Zhiyuan Xu, Ning Liu, Ran Cheng, Chaomin Shen, Yaxin Peng, Feifei Feng, et al. Scaling diffusion policy in transformer to 1 billion parameters for robotic manipulation. arXiv preprint arXiv:2409.14411, 2024.
[62] Damai Dai, Chengqi Deng, Chenggang Zhao, RX Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Yu Wu, et al. Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models. arXiv preprint arXiv:2401.06066, 2024.
[63] Chengmeng Li, Junjie Wen, Yan Peng, Yaxin Peng, Feifei Feng, and Yichen Zhu. Pointvla: Injecting the 3d world into vision-language-action models. arXiv preprint arXiv:2503.07511, 2025.
[64] Johan Bjorck, Fernando Castañeda, Nikita Cherniadev, Xingye Da, Runyu Ding, Linxi Fan, Yu Fang, Dieter Fox, Fengyuan Hu, Spencer Huang, et al. Gr00t n1: An open foundation model for generalist humanoid robots. arXiv preprint arXiv:2503.14734, 2025.
[65] Tsung-Yi Lin, Michael Maire, et al. Microsoft coco: Common objects in context. In Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer, 2014.
[66] Amanpreet Singh, Vivek Natarajan, et al. Towards vqa models that can read. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8317–8326, 2019.
[67] Drew A Hudson and Christopher D Manning.
Gqa: A new dataset for real-world visual reasoning and compositional question answering.
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6700–6709, 2019.
[68] Octo Model Team, Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Charles Xu, Jianlan Luo, Tobias Kreiman, You Liang Tan, Lawrence Yunliang Chen, Pannag Sanketi, Quan Vuong, Ted Xiao, Dorsa Sadigh, Chelsea Finn, and Sergey Levine. Octo: An open-source generalist robot policy. In Proceedings of Robotics: Science and Systems, Delft, Netherlands, 2024.

Table 5: Ablation study on number of experts.

Expert numbers | Top-k numbers | OCR  | Math
8              | 2             | 3.58 | 1.73
6              | 3             | 2.42 | 1.26
4              | 2             | 1.87 | 0.94

Table 6: Ablation study on reasoning-following enhancement module.

Method                      | Avg. success rate
Latter-half-layer injection | 43/52
Full-layer injection        | 36/52
Former-half-layer injection | 22/52

A Limitation

Our work investigates how to retain the pre-trained knowledge of the vision-language model in a vision-language-action model. With this knowledge retained, the VLA is able to reason over the image observation and language instruction, and the action model is enforced to follow that reasoning. Currently, we are unable to fully retain the pre-trained knowledge of the VLM: inevitably, many capabilities degrade during fine-tuning on robot data. This is the most challenging part, and our current approach cannot fully resolve the problem; we leave it to future work. Also, our current method is mainly evaluated on tabletop tasks. We aim to extend the embodiment to mobile manipulators to perform longer-horizon and more complex real-world tasks in the future.

B Implementation Details

B.1 Training details.

We adopt mixed-precision training (FP16) and use the AdamW optimizer. For training stage 1, we co-train on image-text data and robot data, setting the initial learning rate to 2e-5 and training for 15k steps. For training stage 2, we freeze the VLM backbone. The model is trained for 50k steps, starting with a learning rate of 2e-5 and a warm-up phase over the first 3k steps. In both stages, we apply a cosine learning rate scheduler, scaling the learning rate down to 2e-6. The total training cost is 340 GPU hours.
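To make the stage-2 schedule above concrete, here is a minimal sketch of a warm-up-plus-cosine learning-rate function using the values reported in B.1 (3k warm-up steps, 50k total steps, 2e-5 decaying to 2e-6); the exact functional form of the authors' scheduler is an assumption.

import math

def stage2_lr(step: int, total_steps: int = 50_000, warmup_steps: int = 3_000,
              lr_max: float = 2e-5, lr_min: float = 2e-6) -> float:
    """Linear warm-up followed by cosine decay, per the recipe in B.1.
    The precise shape of the schedule is an illustrative assumption."""
    if step < warmup_steps:
        return lr_max * step / warmup_steps  # linear warm-up to lr_max
    t = (step - warmup_steps) / (total_steps - warmup_steps)  # progress in [0, 1]
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

# Example: attach to AdamW via a multiplicative LambdaLR in PyTorch:
#   scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lambda s: stage2_lr(s) / 2e-5)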
B.2 Data details.

Image-text data composition. The image-text dataset used in our experiments integrates samples from multiple established benchmarks, including COCO, TextVQA, and GQA, alongside additional data specifically constructed to align with our task formulation. To ensure balanced representation, we incorporate approximately 32k samples from COCO, 20k from TextVQA, and 54k from GQA. The robotics-related image-text pairs employ the reasoning template used in the toy placement task, as illustrated in B.2. Furthermore, we utilize data from RoboPoint, comprising approximately 2k samples collected within a simulated environment. Although the RoboPoint data exhibits lower visual quality due to visual discrepancies and camera viewpoints, our experiments indicate that including it enhances the vision-language-action (VLA) model's spatial understanding capabilities. Additionally, we gathered 5k samples from real-world environments, covering both tabletop setups and broader scenes. These samples follow an annotation format similar to the LLaVA dataset, using a question-answering structure. All collected data is combined and used jointly during training.

Data pre-processing. For the image-text data, we limit each example to a maximum of 5 dialogue turns. If an instance originally contains more than 5 turns, we retain the first turn and randomly sample four additional turns from the remainder. For the TextVQA dataset, we specifically select samples that do not contain numeric OCR tokens or mathematical operators, as our goal is to utilize pre-trained knowledge for open-world manipulation. We use an image resolution of 320 × 240.

Reasoning templates of robot data. All our robot data are annotated with sub-reasoning, similar to the approach used in π0.5 and DexVLA. We initialize these reasoning annotations with fixed templates and then augment them using GPT-4o, following a pipeline analogous to the one employed in training large language models. This keeps our reasoning phrases flexible, so that the action expert is not dominated by any particular template.

C More Ablation Studies

We have discussed the importance of some key components of our ChatVLA-2 in the main text, including the choice of mixture-of-experts and the two-stage training strategy. In this section, we further discuss the following questions:

C.1 Ablation study on number of experts.

We conduct experiments to determine how many experts we should use to best retain pretrained knowledge from the VLM while keeping resource consumption reasonable. As shown in Table 5, the experimental results indicate that increasing both the total number of experts and the number of experts selected during inference enhances the model's generalization ability in robotic scenarios. A possible explanation for this phenomenon is that a limited number of experts tends to develop selection biases toward visually similar task images in such scenarios. This can lead to overfitting on robot data and neglect of the pretrained VLM knowledge, ultimately degrading performance.

C.2 Ablation Study on Layers for Injecting Reasoning-Following Enhancement Module.

As described in the main text, we replace the original observation embedding with reasoning tokens and use them to condition the generation of scale and shift parameters in the latter half of the action expert's layers. This mechanism effectively injects reasoning context into the model. In this section, we conduct experiments on where to inject the reasoning. The results are shown in Table 6. They show that the former half of the action expert's layers significantly affects the stability of action generation: introducing reasoning information into these layers increases instability in the generated actions, which in turn significantly reduces task success rates. We hypothesize that this effect may be due to our design choice of replacing the original observation embedding with reasoning information. One possible explanation is that the observations themselves carry information critical for action generation, and removing them negatively affects performance.
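To make the latter-half-layer injection studied in C.2 concrete, the sketch below shows one plausible way to condition the scale and shift parameters of an action expert's blocks on reasoning tokens, in the style of adaptive layer normalization. The module layout, pooling choice, and dimensions are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ReasoningConditionedBlock(nn.Module):
    """One action-expert block whose normalized hidden states are modulated
    by scale/shift parameters predicted from a pooled reasoning embedding."""
    def __init__(self, d_model: int, use_reasoning: bool):
        super().__init__()
        self.use_reasoning = use_reasoning
        self.norm = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        if use_reasoning:
            # Predict per-channel (scale, shift) from the reasoning embedding.
            self.to_scale_shift = nn.Linear(d_model, 2 * d_model)

    def forward(self, h, reasoning_emb):
        x = self.norm(h)
        if self.use_reasoning:
            scale, shift = self.to_scale_shift(reasoning_emb).chunk(2, dim=-1)
            # FiLM/adaLN-style modulation of the block input.
            x = x * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return h + self.ff(x)

class ActionExpert(nn.Module):
    def __init__(self, d_model: int = 512, n_layers: int = 12):
        super().__init__()
        # Latter-half-layer injection: only layers n_layers//2 .. n_layers-1
        # receive the reasoning condition (the best setting in Table 6).
        self.blocks = nn.ModuleList(
            ReasoningConditionedBlock(d_model, use_reasoning=(i >= n_layers // 2))
            for i in range(n_layers))

    def forward(self, h, reasoning_tokens):
        reasoning_emb = reasoning_tokens.mean(dim=1)  # simple mean pooling; an assumption
        for blk in self.blocks:
            h = blk(h, reasoning_emb)
        return h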
arXiv:2505.21907v1 [cs.AI] 28 May 2025

MODELING AND OPTIMIZING USER PREFERENCES IN AI COPILOTS: A COMPREHENSIVE SURVEY AND TAXONOMY

Saleh Afzoon1, Zahra Jahanandish2, Phuong Thao Huynh1, Amin Beheshti1, Usman Naseem1
1School of Computing, Macquarie University, Sydney, Australia
2Department of Computer Engineering and Information Technology, Shiraz University of Technology, Shiraz, Iran
saleh.afzoon@hdr.mq.edu.au, zjahanandish@gmail.com, {jasmine.huynh, amin.beheshti, usman.naseem}@mq.edu.au

ABSTRACT

AI copilots, context-aware, AI-powered systems designed to assist users in tasks such as software development and content creation, are becoming integral to modern workflows. As these systems grow in capability and adoption, personalization has emerged as a cornerstone for ensuring usability, trust, and productivity. Central to this personalization is preference optimization: the ability of AI copilots to detect, interpret, and align with individual user preferences. While personalization techniques are well-established in domains like recommender systems and dialogue agents, their adaptation to interactive, real-time systems like AI copilots remains fragmented and underexplored. This survey addresses this gap by synthesizing research on how user preferences are captured, modeled, and refined within the design of AI copilots. We introduce a unified definition of AI copilots and propose a phase-based taxonomy of preference optimization strategies, structured around pre-interaction, mid-interaction, and post-interaction stages. We analyze techniques for acquiring preference signals, modeling user intent, and integrating feedback loops, highlighting both established approaches and recent innovations. By bridging insights from AI personalization, human-AI collaboration, and large language model adaptation, this survey provides a structured foundation for designing adaptive, preference-aware AI copilots. It offers a holistic view of the available preference resources, how they can be leveraged, and which technical approaches are most suited to each stage of system design.

Keywords Preference Detection, Personalization, Personalized Response Generation, Cognitive Assistants, AI Copilots, Human-Centric Evaluation, Human–Computer Interaction (HCI)

1 Introduction

AI-powered, context-aware systems, commonly referred to as AI copilots, are rapidly emerging as collaborative assistants for developers, analysts, and knowledge workers engaged in complex, high-context tasks. These systems are integrated into user workflows and designed to understand evolving goals, adapt to individual preferences, and provide real-time, personalized support. However, achieving meaningful personalization in such systems presents unique challenges. User intent is often ambiguous or changes over time, tasks may span multiple domains, and responses must be dynamically adapted to nuanced contexts. These systems must be capable of aligning with dynamic user preferences, as personalization is a key success factor for digital assistants, enabling them to adapt their responses and behavior to individual users' routines and context [1]. At the core of this personalization lies preference optimization: the ability of a system to detect, interpret, and align with user preferences [2]. While advances in preference modeling have gained traction in machine learning and recommender systems, their integration into real-time, interactive systems like AI copilots remains limited and fragmented.
Designing copilots that can continuously adapt to evolving preferences in high-context environments is still a major open challenge. This survey is driven by the need to systematically examine how preference optimization is realized in AI copilots. Our aim is to consolidate disparate research across assistant technologies, personalization, and alignment learning, and to provide a structured account of how preferences are acquired,
modeled, and refined in these systems. In doing so, we offer a comprehensive view of the emerging design space for adaptive, user-centered AI copilots.

Research related to this survey broadly falls into two categories: (i) AI systems and intelligent assistants, and (ii) preference optimization techniques. While each has been explored independently in the literature, their integration remains underexamined. The evolution of virtual assistants such as Google Assistant, Siri, Cortana, and Alexa has been well documented, with studies focusing on their core functionalities, user experience, and limitations in areas such as voice recognition, contextual awareness, and multi-turn dialogue management [3]. In enterprise contexts, digital assistants have been examined through the lens of business integration, workflow support, and human-computer collaboration, highlighting both their organizational potential and associated risks [1]. Concerns around privacy and security in these systems have also been widely addressed, particularly in relation to data handling and unauthorized access [4]. Although the design and evaluation of AI copilots in retail has recently been analyzed through Microsoft's case studies, that work is limited in scope and does not offer a comprehensive perspective on the broader cognitive assistant landscape [5]. On the other hand, preference optimization, especially in the context of aligning AI behavior with user intent, has seen significant theoretical and practical development. A recent comprehensive survey on Direct Preference Optimization (DPO) presents an in-depth analysis of its theoretical foundations, training methodologies, benchmark datasets, and diverse application scenarios [2]. However, such works primarily examine preference modeling in isolation and do not address its integration within interactive systems like AI copilots or digital assistants.

While existing studies contribute significantly to our understanding of intelligent assistant technologies and preference optimization methods, their integration in the context of AI copilots remains limited and underexplored. As a result, there is no consolidated view of how preference signals can be effectively incorporated into the behavior of real-time, context-aware systems. This survey aims to bridge these two streams of research by examining preference optimization techniques within the design of AI copilots: systems that operate collaboratively with users in high-context, evolving environments. Specifically, we address how preferences can be detected, modeled, and refined to enhance personalization and system responsiveness. The main contributions of this work are summarized as follows:

• A unified and literature-grounded definition of AI copilots is proposed, synthesizing diverse descriptions and terminology from recent studies across domains.
• A comprehensive analytical review of preference optimization in AI copilot design is provided, covering possible sources of user preferences, techniques for preference-aware response generation, and methods for feedback-driven adaptation.

We organize the content of our survey as follows: Section 2 provides foundational background by examining the historical development and evolving terminology associated with AI copilots. Section 3 introduces a unified, literature-informed definition of AI copilots, integrating insights from diverse fields.
Section 4 presents a conceptual AI copilot architecture focused on preference optimization, offering a structured analysis of core components, including preference sources, detection techniques, personalization strategies, and feedback-driven refinement. Finally, Section 5 concludes with a summary of key insights and implications for future research.
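Because DPO recurs throughout this survey as the reference point for preference optimization, the snippet below sketches its standard objective over a batch of preference pairs; it is a generic illustration (tensor names are ours), not the implementation of any system surveyed here.

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss. Inputs are summed log-probabilities
    of the preferred (chosen) and dispreferred (rejected) responses under the
    policy being tuned and under a frozen reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between the implicit rewards of chosen and rejected.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()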
2 Preliminaries and Background

To establish a clear foundation for defining the new generation of AI-powered systems, referred to as AI copilots, we analyze key concepts and terminology from the existing literature. This section provides a precise understanding of how AI copilots differ from prior intelligent systems in function, autonomy, interaction, and domain focus. Table 1 summarizes these distinctions to support a consistent and rigorous discussion throughout the paper.

2.1 AI Copilot Related Terms

2.1.1 Cognitive Assistants

One of the concepts related to the scope of our survey is the AI Assistant, which should not be used interchangeably with AI Copilot. As a general definition, Danaher [6] describes an AI Assistant as a program that tries to optimize a required target output over the range of possible options. This notion highlights the nature of AI assistants as optimization tools rather than as collaborative enhancers of human abilities. As a more recent idea, AI co-pilots aim to work alongside people to improve their decision-making and capacity in more complex scenarios. The AI Co-Pilot concept, as highlighted by Sellen et al. (2023) [7], is defined as a collaborative partner aiding humans, rather than just a tool. It emphasizes assisting humans with insightful support in professional and industrial settings, enhancing the human-AI partnership with its context-aware capabilities. Therefore, while AI Assistants serve the broader consumer base, AI Co-Pilots are specifically designed with developers in mind as their target user group.

Starting with Cognitive Computing, this broad term encompasses hardware or software systems designed to mimic human cognitive abilities, as explored by Kelly (2015) [8]. In this context, Metacognition, described by Schmorrow (2005) [9] as a nuanced expansion, extends the paradigm by fostering a symbiotic, closed-loop integration of humans and machines. In such a continuous feedback loop, information sharing and decision-making are carried out through cognitive functions. Turning to a practical application, the Joint Cognitive System, highlighted by Hollnagel (2005) [10], presents an interactive environment where goal-oriented human and machine elements collaboratively share and integrate their cognitive abilities within specific contexts. Rooted in a human-centric approach, this strategy accentuates the synergy of human creativity with AI's analytical capabilities, ensuring adaptability across scenarios. Cognitive assistants stand at the forefront of Human-AI Collaboration, merging human insight with AI's vast processing capacity to improve problem-solving and decision-making [11]. Through studies of hybrid intelligence systems and empathic AI-supported conversations, these technologies demonstrate their capacity to enhance teamwork, innovation, and human empathy [12, 13]. This synergy underscores the significant benefits of blending AI with human insights, positioning cognitive assistants as key facilitators of effective human-AI partnerships.

2.1.2 Digital Assistant

The term Digital Assistant (DA) is frequently interchanged with Virtual Assistant (VA), yet it represents a broader concept. In a study of its anatomy, Wells et al. [14] referred to it as a versatile tool and an amorphous compound technology. Mobility, rapid data analysis, and support for multiple interfaces are among the key features that make it well suited to task delegation and advice provision. The study by Knote et al. [15] highlights a key feature of
Digital Assistants (DAs): their ability to merge the simplicity of intuitive natural-language dialogue with personalized, context-specific assistance. This study focuses on categorizing the different forms of the Smart Personal Assistant (SPA), which is essentially a personalized Digital Assistant. Thus, the classification of SPAs into text-based systems, often termed Chatbots, and voice-based systems, as conducted by Cowan et al. [16], can be applied to DAs. By considering Chatbots a form of Digital Assistant, all the categories of technologies employed in their construction can be subsumed under DAs. This expands the boundaries of DAs to cover a wider range of technologies, from simple rule-based Chatbots with limited capabilities to intelligent voice-based VAs with speech recognition capability [17].

Table 1: Comparison of AI Copilot-Related Terms and Technologies

System Type         | Primary Function                                           | Level of Autonomy            | Interaction Modality                       | Domain Specialization
Cognitive Assistant | Advanced decision support through reasoning and learning  | Proactive / semi-autonomous  | Multimodal (text, voice, context-aware UI) | Often domain-specific (e.g., healthcare, law)
Digital Assistant   | Task management, scheduling, basic information retrieval  | Semi-autonomous              | Voice + UI                                 | General-purpose
Virtual Assistant   | Conversational help and simple task execution             | Reactive to semi-autonomous  | Voice, text                                | General-purpose
Chatbot             | Text-based conversational responses                       | Reactive                     | Text                                       | Often domain-specific (e.g., customer support, retail)
Expert System       | Rule-based decision-making using encoded expert knowledge | Low autonomy (deterministic) | Menu-driven or text-based                  | Highly domain-specific
AI Copilot          | Context-aware collaboration on complex or creative tasks  | Proactive                    | Multimodal (text, code, UI, APIs)          | Task- or profession-specific (e.g., software development, writing)

2.1.3 Virtual Assistant

As a useful transition from DAs to VAs, Apple Siri and Amazon Alexa, two popular intelligent assistants, are referred to as voice-based DAs [1, 18]. Their ability to perform hands-free operations, understand context, and handle complex conversations is among the key features noted in the study by Wellsandt et al. [14]. Besides their speech recognition ability, which lets them understand voice commands and communicate over the same channel [4], their voice-activation capability makes them suitable for integration into everyday devices [19]. Although they are mainly referred to as VAs, Smart Personal Assistant (SPA) is another term used for Apple Siri and Amazon Alexa as two device-integrated assistants, with a focus on their personalized assistance [15]. This positions them within the broader spectrum of Digital Assistants, primarily known for their voice-based interfaces but also encompassing text-based interaction as an additional feature, distinguishing another subset of such assistants [20].

2.1.4 Chatbot

Chatbots are specialized programs tailored for customer service, helping users find desired information [21]. This focused definition underscores their application and emphasizes their role, setting them apart from other digital assistants despite shared technologies and capabilities. A defining feature of chatbots, as highlighted in the literature [22], is simulating near-human conversations using natural language or text, to create the impression of engaging in dialogue with a human [23].
Emerging from the field of Human-Computer Interaction (HCI), which centres on enabling users to communicate with computers, Dialogue Systems is a term often used to describe Chatbots [21, 24]. Conversational Systems, Conversational Agents, and Chatterbot are among the other terms used interchangeably for the same concept [22]. While there are different criteria for chatbot classification in the literature, they are primarily classified by their applications,
falling into either task-oriented or non-task-oriented (open-ended or simple chit-chat) categories [21, 23]. Task-oriented chatbots generally consist of domain-specific hand-crafted rules with limited, focused analysis of the conversation context and are typically built into devices for convenience. Non-task-oriented, or informative, chatbots often handle arbitrary dialogue contexts without aiming for a specific goal and may require online search capabilities to operate effectively [24, 23]. Based on the knowledge domain, chatbots can be categorized as closed-domain or open-domain, where open-domain modeling, a challenging field, is predominantly addressed using neural network (NN) approaches, including both retrieval-based and generative (generation-based) chatbots [22]. Retrieval-based chatbots support immediate response generation in e-business by ranking candidate responses based on similarity measurements between the query and a knowledge base of QA pairs. Data-driven generative chatbots utilize Natural Language Processing (NLP) to extract content from queries and learn from entire dialogue contexts, rather than focusing solely on the current turn. These chatbots apply both supervised methods, which rely on annotated data and manual feature extraction, and unsupervised methods, which automatically learn features from unlabeled data. Responses are generated word by word using neural network techniques such as RNNs, LSTMs, and deep Seq2Seq models, all of which operate over a learned vocabulary [22]. In this regard, RAG-based approaches, as a more recent development, combine the previous techniques to build better open-domain chatbots and even inject more domain-specific knowledge through additional training [25].

2.1.5 Expert System

As one of the older terms in cognitive computing, an Expert System (ES) is defined as a computer program that emulates the behavior of a human expert [26]. Expert systems are a branch of AI (itself a high-level term) used to transfer task-specific knowledge from humans to computers in a way that continuously changes and acquires new knowledge [27]. Used in problem-oriented domains including social science, cognitive science, and human behaviour analysis, an ES has two main parts that enable it to produce expert-like advice, as sketched in the toy example below. The first main component, the knowledge base, constructed by knowledge engineers using a suitable notation or language, stores the codified collected knowledge. The inference engine, guided by a reasoning method chosen to ensure efficient use of the knowledge, then uses that knowledge to derive a conclusion [26]. For transparency, the system can benefit from an Explanation Facility that elucidates the logic behind the conclusion [28]. Although Rule-Based Systems (RBS) and Knowledge-Based Systems (KBS) are sometimes used to refer to the same concept [29, 28], others consider them the main streams of ES [27]. In an RBS, knowledge is represented as a collection of IF-THEN rules. A KBS, as a human-centred design, accordingly includes knowledge engineering tools and a user interface. Case-Based Reasoning (CBR) systems, one category of ESs, adapt solutions to new challenges by comparing them to past cases along set parameters. Fuzzy Expert Systems (Fuzzy ESs), another category, embrace the uncertainty in human reasoning, transcending binary logic for nuanced decision-making in complex situations where traditional approaches fall short [27].
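The toy example below illustrates the knowledge base / inference engine split described above: a handful of hand-written IF-THEN rules and a forward-chaining engine that fires them until no new facts can be derived. The rules and facts are illustrative assumptions, not drawn from any system cited here.

from dataclasses import dataclass

@dataclass
class Rule:
    conditions: frozenset  # IF all of these facts hold ...
    conclusion: str        # ... THEN assert this fact

def forward_chain(facts: set, rules: list) -> set:
    """The inference engine: repeatedly fire rules whose conditions are
    satisfied until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.conditions <= derived and rule.conclusion not in derived:
                derived.add(rule.conclusion)
                changed = True
    return derived

# The knowledge base, codified by a "knowledge engineer" (illustrative):
rules = [
    Rule(frozenset({"fever", "cough"}), "suspect_flu"),
    Rule(frozenset({"suspect_flu", "high_risk_patient"}), "recommend_visit"),
]
print(forward_chain({"fever", "cough", "high_risk_patient"}, rules))
# Derives both suspect_flu and recommend_visit (set order may vary).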
The Belief Rule-Based Expert System (BRBES) merges the accuracy of expert systems with the adaptability of data-driven models, adeptly handling quantitative and qualitative data to address complexities in uncertain environments [30]. It manages uncertainties such as fuzziness, randomness, and ignorance through belief degrees, and employs the Evidence Reasoning (ER) algorithm for seamless data integration. This approach makes BRBES a transparent and dependable solution for decision-making in situations marked by scarce or ambiguous information.

3 AI Copilot Definition

Figure 1: Conceptual design architecture of a preference-aware AI copilot, illustrating the flow from user input and preference modeling to personalized response generation and feedback-driven optimization.

The term AI copilot has gained significant attention across various domains, yet its definition remains context-dependent. To establish a clear understanding, this section reviews how recent studies describe AI copilots across both general and domain-specific contexts, aiming to distill a unified definition grounded in the literature. AI copilots have been described as intelligent assistants designed to support users in executing cognitive tasks [5]. Their architectures are typically composed of large language models, mechanisms for knowledge retrieval and action execution, orchestration modules to manage subtasks, structured system prompts, and responsible AI safeguards to ensure alignment with human intent. In domain-specific applications such as pathology, copilots have been tailored through fine-tuning on real-world instructions to address complex tasks like question answering [31]. These systems are often multimodal and interactive, capable of processing both visual and textual inputs, and are used to assist human-in-the-loop decision-making processes. In programming contexts, copilots have been implemented to provide real-time suggestions that support knowledge workers and improve productivity, while allowing users the autonomy to accept, reject, or modify the outputs [32]. A broader vision has been presented in which copilots are viewed as collaborative partners that operate alongside human users, each enhancing the other's capabilities [33]. This metaphor has been further extended to emphasize fluid, conversational interaction and limited autonomy in pursuing shared goals under human supervision [34]. Taken together, these perspectives highlight AI copilots as interactive, task-oriented systems that support users, particularly knowledge workers, through real-time assistance, domain adaptation, and human-guided decision-making. These shared characteristics form the foundation for the unified understanding of AI copilots presented in this survey.

4 Preference-based Response Generation

We treat user preferences in an AI copilot as a system-level attribute that must be addressed across different stages. Fig. 1 shows the conceptual design of an AI copilot from a preference-aware perspective. It illustrates the journey of user preferences from input channels, through the development of preference representations, to their use in response generation and, finally, feedback-driven optimization. In the subsequent sections, we review related studies for each part of this preference-aware design in detail.

4.1 Preference Detection Sources

The effectiveness of AI copilots in aligning with user needs largely depends on the quality and nature of the preference signals they rely on.
This section categorizes the primary sources of such signals, spanning both user-driven and model-generated inputs. Table 2 offers a comparative overview of their underlying mechanisms, applications, and associated limitations, providing a foundation for the analyses that follow.
Table 2: Taxonomy of Preference Detection Sources: Mechanisms, Use Cases, and Limitations

Source Category         | Core Mechanisms                                                                                                                               | Use Cases                                                                  | Limitations
Explicit Feedback       | Pairwise comparisons (e.g., Chatbot Arena) [35]; Choice prompts [36]; Satisfaction vs. engagement signals [37]                                | Direct preference labeling; Evaluation benchmarking; Task-specific tuning  | High annotation cost; User fatigue; Low scalability
Implicit Human Feedback | Clickstream analysis [38]; Gaze-based attention [39]; User edits as coactive signals [40]                                                     | Low-friction feedback; Behavioral modeling; Passive signal collection      | Privacy issues; Sparse supervision
Hybrid Feedback Models  | RLHF-style fusion [41]; Hashtag-enriched profiles [42]                                                                                        | Behavioral-explicit alignment; Long-term preference tracking               | Integration complexity; Data inconsistency; System dependence
Interactive Elicitation | Active LLM probing [43]; Bayesian acquisition [44]; Probabilistic elicitation [45, 46]                                                        | Uncertainty-aware querying; Adaptive tuning                                | Query design complexity; Limited domain coverage
LLM-Generated Signals   | Simulated user modeling [47]; Profile-conditioned tuning [48]; Alignment via divergence minimization [49]; Multi-turn preference tracking [50] | Synthetic supervision; Long-context adaptation                             | Lack of grounding; Hallucination bias; Evaluation ambiguity

4.1.1 Human-Driven Feedback Channels

Preference detection in personalized response generation systems predominantly relies on human-generated signals, which range from implicit behavioural cues to explicit user feedback. Implicit feedback is particularly valuable due to its non-intrusive nature, allowing preference inference without requiring active user involvement. Early research in this domain utilized web interaction data, applying web usage mining to infer user preferences from browsing behaviours [38]. Subsequent methods have evolved to integrate richer implicit signals, such as gaze tracking and physiological responses. For example, in a related study [39], gaze-tracking technology is used to predict user preferences for video content with significant accuracy, highlighting the utility of visual attention cues in multimedia contexts. Additionally, to reduce user burden while still capturing meaningful preferences, more nuanced forms of implicit feedback have been explored. In a recent study [40], a coactive learning approach was introduced, where minor user edits to AI-generated outputs are interpreted as weak preference signals. It was shown that incorporating these subtle modifications can incrementally refine language model outputs and enhance personalization without requiring extensive user input. Nonetheless, the passive collection of implicit signals raises significant privacy concerns. To address these, frameworks such as Heimdall [51] aggregate and anonymize user data securely, ensuring personalization benefits without compromising user privacy. However, implicit feedback inherently lacks granularity, motivating the integration of explicit preference elicitation strategies. Explicit feedback approaches directly solicit user preferences, typically via ratings or comparative evaluations, providing clear and interpretable signals for personalization. Platforms such as Chatbot Arena [35] exemplify the utility of explicit feedback, employing pairwise comparisons of model-generated outputs to obtain cognitively efficient yet precise preference signals. This method has proven highly effective for benchmarking language model performance based on human preference.
To mitigate limitations such as user fatigue inherent in traditional explicit methods, generative augmentation techniques have been introduced in recent work [36]. These techniques dynamically tailor choice-based prompts to increase user engagement and enhance the clarity and reliability of elicited preferences. The value of explicit feedback has been demonstrated through empirical evaluations [37], where extensive user studies revealed that satisfaction ratings outperform implicit engagement metrics in aligning with actual user preferences. These findings underscore the importance of explicit signals in achieving accurate personalization. In response, hybrid strategies have been developed to integrate the complementary strengths
of both explicit and implicit feedback. For instance, topical user profiling integrates explicit declarations of interest with implicit behavioural signals (e.g., hashtag usage), significantly enhancing the robustness and reliability of personalized recommendations [42, 41]. Despite their clarity, explicit methods inherently demand continuous active user engagement, motivating the development of interactive and hybrid elicitation approaches. Interactive and hybrid preference elicitation strategies aim to balance explicit clarity and implicit convenience, reducing user effort while maintaining high-quality feedback signals. One promising interactive method [43] involves self-exploring language models, which autonomously generate diverse output variations to probe user preferences. The effectiveness of such models has been demonstrated through reinforcement learning loops that iteratively refine responses based on real-time user reactions. This approach substantially reduces user effort while improving alignment accuracy. Furthermore, optimization-driven elicitation strategies utilizing Bayesian optimization have been proposed to systematically minimize user interactions. Specifically, LLM-based acquisition functions [44] have been introduced to strategically select the most informative queries, significantly decreasing interaction requirements without sacrificing the quality of preference inference. Complementary approaches have employed probabilistic reasoning [46] to dynamically adapt elicitation strategies based on uncertainty in user feedback, enabling effective handling of ambiguous interactions. In parallel, a Bayesian preference elicitation framework [45] has been developed to rigorously integrate prior knowledge with real-time user responses, enhancing the statistical validity and reliability of interactive preference modeling in conversational settings. Collectively, these interactive and hybrid elicitation methods represent a significant advancement in preference detection, optimizing user involvement and improving feedback clarity in personalized response generation systems.
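To illustrate how explicit pairwise votes of the kind collected by arena-style platforms [35] can be aggregated into preference estimates, the sketch below fits a Bradley-Terry model with minorization-maximization updates; the vote counts are made-up toy data, not results from any study cited here.

def fit_bradley_terry(wins, iters=100):
    """Bradley-Terry strengths from pairwise outcomes via MM updates.
    wins[i][j] = number of times response i was preferred over response j."""
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]  # normalize (strengths are scale-invariant)
    return p

# Illustrative vote counts among three candidate responses:
wins = [[0, 8, 6],
        [2, 0, 5],
        [4, 5, 0]]
print(fit_bradley_terry(wins))  # higher value = more preferred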
4.1.2 LLM-Generated Feedback Channels

Complementing human-generated signals, recent advances increasingly leverage autonomous large language models (LLMs) as intrinsic sources of preference detection. Structured persona and profile modeling constitutes a primary approach, wherein user contexts are systematically synthesized into structured intermediate representations prior to response generation. For example, a guided profile generation technique has been introduced [48] that demonstrates a notable improvement in preference alignment accuracy compared to direct raw-context utilization, highlighting the efficacy of structured user profiling for personalized dialogue response generation. Optimization-driven methodologies offer another dimension of LLM-generated feedback, explicitly aligning model outputs with user preferences through targeted minimization of divergence metrics. For instance, an f-divergence minimization technique [49] has been applied to finely control the alignment between user-preferred and model-generated responses, effectively balancing consistency and diversity in personalized outputs. Beyond structured and optimization-driven methodologies, conversational preference elicitation frameworks systematically harness LLMs to guide user interactions, capturing rich preference signals. The Coached Conversational Preference Elicitation (CCPE) framework [52] demonstrated that structured yet naturalistic conversational interactions capture significantly richer user preferences compared to traditional static feedback mechanisms. Extending this approach, a multi-turn interaction framework [50] has been presented wherein LLMs incrementally adapt to evolving user preferences through repeated conversational exchanges, resulting in improved alignment accuracy as shown on the ALOE benchmark. Additionally, the long-term retention of user preferences in LLMs has been evaluated using benchmarks like PrefEval [53], which revealed notable limitations in maintaining and applying preferences over extended interactions. These findings point to key challenges for future research in conversational personalization. Finally, generative
user simulation techniques, employing LLMs themselves as realistic user models, have emerged as crucial tools for the scalable and systematic evaluation of personalized response generation. Generative user simulations [47] have been shown to effectively emulate dynamic shifts in user preferences over time, enabling controlled experimental environments for rigorous evaluation and iterative refinement of personalization models without relying on extensive real-world testing. Taken together, these methodologies employing LLM-generated signals offer substantial promise for enhancing the scalability, adaptability, and precision of preference detection, crucially supporting robust and dynamic personalized AI response generation frameworks.

4.2 Preference Detection Techniques

Following the identification of preference signal sources, this section turns to techniques for detecting and modeling user preferences across different phases of interaction. These are typically categorized into pre-, mid-, and post-interaction stages, each offering distinct opportunities for adaptation. A summary of representative techniques and their characteristics is presented in Table 3.

4.2.1 Predefined Profiles and Persona Development

Before any user interaction begins, preference detection is typically performed through predefined user profiles. These profiles are constructed by collecting and analyzing structured data that includes demographic information, behavioral patterns, and previously recorded interactions. Such profiles allow AI systems to generate context-aware and personalized responses from the outset, enabling more efficient alignment with expected user behavior [54], [55]. The process of user profiling has undergone significant evolution. Initially, the focus was placed on static features, including age, gender, and interaction frequency. While these attributes provided a coarse approximation of user preferences, they lacked adaptability and generalization. The inability to account for contextual variation or multi-turn interaction led to the exploration of more advanced persona-aware techniques. In such approaches, user preferences were estimated before dialogue using aggregated observations and preference indicators [56]. To address the limitations of static profiling, neural models such as SessionBERT were introduced. This model incorporated session-level contextual cues to refine user representation over short sequences of interaction, even prior to any real-time dialogue [57]. Unlike earlier statistical models, SessionBERT adapted to usage behavior and allowed profile updates based on session data. By generating session embeddings and conditioning predictions on these dynamic vectors, SessionBERT improved personalization in early conversational stages. This resolved the rigidity seen in earlier demographic or rule-based systems and introduced a scalable method to encode latent preference variation [57], [56]. In parallel, collective behavior modeling was explored to capture user norms and group-aligned preferences. Methods based on collaborative filtering were applied to infer unknown preferences by observing similar users' selections, thereby enriching persona development prior to interaction [58]. Compared to isolated profiling, these techniques introduced a population-based signal that improved personalization accuracy and generalization. In this stage, persona development has played a central role in preparing systems for adaptive interaction.
It has been supported by a combination of statistical analysis, collaborative filtering, and multi-dimensional modeling of user characteristics [59]. These techniques have enabled the construction of detailed, structured user representations that anticipate needs even before a conversation begins. The shift from rigid user profiles to enriched, data-driven personas has improved alignment between initial system output and
user intent.

4.2.2 Real-Time Persona Extraction and Preference Adaptation

In contrast to predefined user profiles developed before interaction, the during-conversation phase focuses on dynamically identifying and adapting user preferences in real time. In early approaches, systems were only capable of reactive adaptation, adjusting responses based on superficial turn-level cues. These methods could not infer persistent user traits or adapt to evolving intent across dialogue segments. To address these shortcomings, neural collaborative filtering models were explored to implicitly capture preferences. One such method, SNCF (Social Neural Collaborative Filtering), was proposed to model user preferences from biased implicit feedback using neural embeddings [60]. In SNCF, user-item interaction histories were encoded into latent vectors, enabling the capture of nuanced behavioral patterns that could be refined during interactions. Although initially designed for recommendation tasks, the model's ability to represent evolving user preferences informed later approaches to conversational preference adaptation. As the need for more fine-grained, dialogue-level understanding grew, real-time persona extraction methods were introduced. PAED enabled persona attributes to be extracted directly from conversational content without relying on external profiles [61]. By detecting triplet-style attributes within dialogue turns, PAED facilitated lightweight personalization but remained limited to surface-level lexical features. To enrich the contextual modeling of persona traits, attention-based filtering was proposed in emotionally supportive dialogue systems. In this approach, structured user summaries were generated and used alongside dialogue content to modulate persona representations via attention mechanisms [62]. These architectures allowed AI agents to respond in emotionally appropriate ways while simultaneously inferring context-aware persona elements. Building on these foundations, in-dialogue learning (IDL) was proposed to enable the continuous inference of both static and dynamic personas during multi-turn interactions [63]. In IDL, dialogue history was segmented and analyzed to identify salient persona features, which were then incrementally updated throughout the session. Unlike prior systems that relied on fixed input or one-shot extraction, IDL allowed AI systems to revise their understanding of the user mid-dialogue without requiring predefined profile injection. To further enhance personalization, user engagement and subjective interest were modeled using neural signals during the interaction. In such models, latent cues such as emotion, surprise, or preference shift were tracked to inform real-time response adjustment [64]. By modeling engagement signals dynamically, these systems increased their sensitivity to subtle changes in user behavior and improved their alignment with user expectations. Recent approaches have extended real-time preference adaptation using reinforcement learning and selection-based fine-tuning mechanisms. Adaptive preference learning strategies have been proposed to select the most suitable personalized output among multiple candidates, based on dialogue context and evolving user profiles [65], [66]. These models have allowed AI systems to refine their behavior mid-session, transitioning from reactive adaptation to proactive, user-aligned generation.
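As a minimal illustration of the embedding-based preference models discussed above, the sketch below trains a small neural collaborative filtering scorer on implicit accept/click feedback. The architecture, dimensions, and data are illustrative assumptions, not the SNCF model itself [60].

import torch
import torch.nn as nn

class NCF(nn.Module):
    """Minimal neural collaborative filtering: user and item embeddings fed
    through an MLP to score implicit-feedback interactions."""
    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1))

    def forward(self, users, items):
        x = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return self.mlp(x).squeeze(-1)  # preference logit

# One training step on implicit feedback (clicked = 1, sampled negative = 0):
model = NCF(n_users=100, n_items=500)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
users = torch.tensor([0, 0, 1])
items = torch.tensor([10, 42, 7])
labels = torch.tensor([1.0, 0.0, 1.0])
loss = nn.functional.binary_cross_entropy_with_logits(model(users, items), labels)
opt.zero_grad(); loss.backward(); opt.step()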
4.2.3 Post-Interaction Feedback and Preference Refinement

While predefined profiling and real-time preference detection enable short-term personalization, they remain limited in their ability to support long-term user alignment. In both prior phases, adaptation is constrained by the information available before or during the interaction. However, the outcomes of a completed conversation provide valuable signals that can be used to refine future behavior. As
a result, preference detection techniques have evolved to include a post-conversation phase, where feedback from past interactions is utilized to optimize future responses. Initial methods in this space were built around lightweight feedback integration, where human judgments were used to revise system outputs. To structure this process, human-in-the-loop frameworks were introduced to determine when intervention should occur and how user feedback could be mapped to corrective strategies. In one such approach, the appropriateness of system suggestions was judged based on relevance to prior user intent, and subsequent adjustments were made to improve alignment in future turns [66]. This method demonstrated how user feedback could guide post-hoc tuning of suggestion timing and content, particularly in co-creative or supportive dialogue environments. To generalize feedback across users, active preference learning systems were developed. In these frameworks, user preferences were elicited through minimal feedback signals, such as pairwise rankings or accept/reject decisions. These signals were then used to guide updates in the model's preference space through reinforcement-like loops [65]. This strategy allowed systems to improve their personalization behavior over time, even when explicit supervision was unavailable. Compared to traditional tuning methods, these feedback-driven strategies required fewer labeled examples and facilitated continual learning from sparse interactions. A more principled foundation for feedback integration was formulated through models that incorporated preference alignment objectives at the optimization level. In one such method, preference optimization was performed using direct feedback criteria collected after interaction, enabling the model to adapt its internal persona representation based on past interaction quality [63]. This approach, referred to as DPOC (Direct Preference Optimization with Criterion), enhanced long-term alignment by assigning learning weights based on how well past dialogue aligned with persona-consistent objectives. To further extend alignment, frameworks such as AFSPP (Agent Framework for Shaping Preference and Personality) were proposed to enable continual preference shaping through long-term user engagement [67]. In these models, personality traits and user inclinations were refined across multiple sessions through a feedback loop involving both explicit evaluations and implicit user behavior. Unlike static persona models, AFSPP incorporated the gradual evolution of preference signals, leading to more adaptive and personalized AI agents. Ethical and conceptual implications of preference optimization were also explored, highlighting the tension between user satisfaction and long-term alignment goals. It was emphasized that feedback-driven systems must balance personalization with fairness and transparency, especially when user preferences evolve unpredictably or conflict with system goals [68]. These perspectives contributed to a more cautious and principled understanding of how feedback should inform preference modeling over time. Recent advances in retrieval-augmented systems have also demonstrated how generation quality can be improved through post-conversation evaluation. In one such architecture, response quality metrics such as BLEU and F1 were used to provide learning signals to the retriever, which was jointly trained with the generator [69].
This approach enabled the system to refine its input selection process based on the success of generated responses, leading to improved alignment in future interactions without requiring explicit human annotation. Together, these models represent a significant evolution in preference detection, shifting from static or real-time
adaptation to a reflective, feedback-driven process. By incorporating signals from completed dialogues, AI systems have been enabled to refine their personalization strategies over time, leading to more adaptive, consistent, and user-aligned behavior in future interactions. Table 3: Summary of Preference Detection Techniques Across Conversation Phases. Phase Techniques (Models/Methods) Personalization Focus Advantages Limitations Before the ConversationPredefined profile modeling [54] Gaussian Processes [56] Session embedding [57]Static profiling Probabilistic modeling Dynamic session embeddingsInitial preference modeling Profile construction Coarse personalization Population trends capturingFixed user profiles Limited adaptivity Low contextual sensitivity During the ConversationZero-shot persona attribute extraction [61] In-dialogue learning [63] Neural Collaborative Filtering [60]Persona extraction Dynamic behavioral modeling Embedding-based preference adaptationDynamic persona modeling Real-time context adaptation Fine-grained trait inference Latent behavior integrationHigh model complexity Content-bound inference Computationally intensive After the ConversationFeedback-driven persona optimization [63]; Continual personality shaping [67]; Active Preference Learning [65];Feedback integration Long-term persona refinement Active learning from sparse feedbackLong-term persona alignment Supports continual adaptation Efficient use of sparse feedbackRequires labeled data Prone to feedback bias User privacy risks 4.3 Personalized Response Generation Considering that the usable clues for preference optimization are now available through any of the previously discussed techniques, this section reviews different groups of approaches for response personalization. To this end, we have 9 selected the techniques based on their technical diversity, rather than following a progression-based perspective on how this context has advanced over time. Table 4 presents a range of response personalization studies applied at various levels of dialogue system generation. These include prompt-based approaches, which are model-agnostic, as well as more complex methods that propose entirely new architectures from scratch. In reviewing these studies, we considered a shared set of personalization aspects—such as consistency, coherence, and diversity—as identified in previous work [70, 71]. 
| Personalization Level | Core Mechanisms | Personalization Focus | Advantages | Limitations |
| Prompt-Based | Hybrid Graph-Augmented Prompting [72]; Topic-Focused Summarization [73]; RSA-Based Token Reweighting [74]; Dual Retrieval Prompting [75]; Selective Soft Prompting [76] | Persona Consistency; Persona Diversity; Persona-Grounded Knowledge; Dynamic Style Adaptation | Model-agnostic; Low computational cost; Flexible to context | Shallow persona integration; Dependent on retrieval quality |
| Fine-Tuning-Based | Emotion-Conditioned LoRA Fine-Tuning [77]; Cue-Adaptive Decoding [78]; Multi-Task Fine-Tuning [79]; Data-Centric Curriculum Fine-Tuning [80]; Prefix Tuning [81] | Interpretability; Emotion Alignment; Naturalness; Fluency | Memory-efficient; Fast adaptation; Low storage overhead | Limited deep reasoning; Risk of overfitting on small persona data |
| Architecture-Level | Concept Expansion and Filtering [82]; Latent Act Modeling [83]; Multi-Hop Split Memory Attention [84]; Multi-Hop Co-Attention Matching [85]; Addressee Prediction with Posterior Supervision [86]; Persona-Adaptive Cross-Attention [87]; Graph Transformer with Attribute Nodes [88] | Deep Persona Grounding; Multi-Party Dialogue Setup; Dialogue Act Control; Persona Coherency | High expressiveness; Fine-grained reasoning | High training cost; Requires large labeled datasets |

Table 4: Comparison of personalized response generation approaches organized by personalization level, mechanism, focus, advantages, and limitations.

4.3.1 Prompt-Based Personalization

As a straightforward class of methods, prompt-based approaches steer the model's behavior by modifying the input through prompt engineering or retrieval-augmented generation (RAG), rather than altering the model itself. Their key advantages include model-agnostic applicability, low computational cost, and flexibility at inference time, making them well-suited for lightweight or rapidly deployable systems. A variety of persona coverage and injection approaches have been proposed to carefully construct inputs and avoid altering the underlying language model.
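To make this recipe concrete, the following is a minimal sketch of how a persona-grounded prompt might be assembled for a frozen model. The `retrieve_facts` argument, the template wording, and the six-turn history window are illustrative assumptions, not details of any cited system.

```python
# Minimal sketch of prompt-based personalization: persona facts, retrieved
# knowledge, and recent dialogue history are assembled into the input of a
# frozen LLM, leaving the model weights untouched.

def build_personalized_prompt(persona_facts, dialogue_history, user_message, retrieve_facts):
    """Assemble a persona-grounded prompt; `retrieve_facts` is a hypothetical retriever."""
    # Retrieval-augmented grounding: fetch knowledge relevant to both the
    # persona and the current message (cf. dual retrieval prompting).
    knowledge = retrieve_facts(query=user_message, persona=persona_facts)

    persona_block = "\n".join(f"- {fact}" for fact in persona_facts)
    history_block = "\n".join(dialogue_history[-6:])  # keep a short context window
    knowledge_block = "\n".join(f"- {item}" for item in knowledge)

    return (
        "You are a dialogue agent with the following persona:\n"
        f"{persona_block}\n\n"
        f"Relevant background knowledge:\n{knowledge_block}\n\n"
        f"Conversation so far:\n{history_block}\n"
        f"User: {user_message}\nAgent:"
    )
```

Because the personalization lives entirely in the input, the same function can sit in front of any backbone, which is exactly the model-agnostic property noted above.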
In CKE (Cross-Graph Knowledge Exchange) [72], a hybrid structured prompt is constructed by first generating dialogue user graphs for both conversation participants and then performing cross-graph knowledge aggregation. Discrete and continuous representations are fused to form a prompt that encodes persona, dialogue, and concept-level relationships for input to a frozen LLM. Similarly, a topic-focused summarization method [73] is applied to distill relevant context segments from the dialogue history. These summaries, combined with structured profile attributes and topic-aligned dialogue segments, are assembled into a prompt to guide generation in domain-specific tasks. Beyond structural augmentation, [74] employs token-level probabilistic reasoning using the Rational Speech Acts (RSA) framework. A self-conscious speaker adjusts token probabilities by simulating an imaginary listener, supported by a distractor memory storing persona variants, to enforce utterance distinctiveness and consistency during decoding. In PK-ICR [75], a dual retrieval prompting strategy is implemented by leveraging both dialogue context and persona to retrieve knowledge facts through a neural QA retriever, enabling precise grounding for generation. Lastly, Selective Prompt Tuning (SPT) [76] introduces a soft prompt selection mechanism, where multiple trainable soft prompts are maintained and a dense retriever selects the most suitable prompt per input. Contrastive and fusion learning objectives are further applied to ensure prompt diversity and stability across dialogue contexts.

4.3.2 Fine-Tuning-Based Personalization

While prompt-based methods are simple and model-agnostic, they offer limited control over model behavior. In contrast, fine-tuning-based personalization enables more robust and persistent behavioral adaptation, making it suitable when style adaptation or domain-specific alignment is required beyond what input conditioning alone can achieve. To enhance persona consistency in generation, TransferTransfo [79] fine-tuned a pretrained Transformer using a multi-task objective that combined language modeling and next-utterance classification. Persona sentences were prepended to the input and processed through the model's existing embedding layer, allowing the system to learn persona-sensitive representations without architectural modification. Cue Adaptive Decoder (CueAD) [78] introduced a decoder-side mechanism using GRU+, a modified GRU with a gating function that selectively incorporated conditioning cues, such as persona or dialogue context, based on their relevance during decoding. This enabled dynamic and interpretable control over cue influence. The D³ framework [80] took a data-centric approach, applying a three-stage fine-tuning pipeline: distillation to ensure alignment, diversification to expand behavioral coverage, and curriculum training to manage input complexity. This structured process helped improve generalization and consistency. In PersonaPKT [81], prefix tuning was adopted to personalize generation through continuous prefix vectors injected into a frozen backbone. A shared prefix was first trained across multi-persona data and then specialized into persona-specific prefixes, enabling private, modular adaptation. Emotionally adaptive generation was later explored by Varma et al. [77], where LoRA-based fine-tuning was applied to an LLaMA-3 model.
An emotion classifier triggered prompt selection, guiding the generation of responses conditioned on emotional tone and demographic context. Extending this idea, Tan et al. [89] proposed a parameter-efficient method in which small, LoRA-based modules—referred to as pieces—are retrieved and composed at inference time. These modules enable storage-efficient, task-free personalization without altering the base model.
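Since several of these methods build on LoRA, a minimal PyTorch sketch of the underlying low-rank update may be useful; the rank, scaling, and initialization below are illustrative assumptions rather than the settings used in [77] or [89].

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained layer
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no drift at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank personalized correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: wrap an existing layer; only lora_A and lora_B receive gradients.
adapted = LoRALinear(nn.Linear(768, 768))
out = adapted(torch.randn(2, 768))  # shape (2, 768)
```

Keeping one frozen backbone and swapping small (A, B) pairs per persona is what makes the storage-efficient, modular adaptation described above possible.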
4.3.3 Architecture-Level Personalization

Despite the increased flexibility offered by fine-tuning methods, they remain bounded by the representational capacity of the underlying architecture. When personalization demands richer structural modeling, such as dynamically encoding speaker roles, incorporating graph-based knowledge, or explicitly handling multi-source inputs, modifications to the model's architecture become necessary. At this level of intervention, personalization is achieved by introducing new components or redesigning internal mechanisms to explicitly encode and integrate user-specific signals.

Architectural personalization methods have progressively evolved to overcome the limitations of earlier prompt- and fine-tuning-based techniques, particularly in representing structured user information and modeling dynamic conversational behavior. One of the earliest efforts introduced discrete latent variables to represent conversational intent [83], allowing the model to generate responses conditioned on latent dialogue acts. This enabled one-to-many response modeling and helped align persona traits with underlying communicative goals. Building on the need to capture different dimensions of user information, memory-based architectures were proposed [84], where user profiles and interaction histories were encoded into split memory networks, and multi-hop attention mechanisms allowed for selective retrieval of static and dynamic user signals during generation. To better model interactions in multi-party settings, graph-based structures were introduced [86], encoding utterances, speakers, and dialogue flow into a heterogeneous graph, while also incorporating an addressee prediction module trained with posterior supervision. This line of work was extended by incorporating user persona attributes directly as graph nodes, connected via meta-relational edges for fine-grained reasoning [88]. Meanwhile, matching-based models such as CoBERT [85] leveraged multi-hop co-attention to capture second-order interactions between context, persona, and candidate responses, supporting persona-consistent selection in retrieval-based setups. Further developments introduced adaptive attention mechanisms within generative models [87], where cross-attention over persona and dialogue context was modulated through dynamic weighting and masking to filter irrelevant signals at each decoding step. In parallel, broader architectural pipelines were designed to address sparse or fragmented persona inputs, combining concept-based persona generation with alignment modules before feeding curated persona content into the generation process [82]. Together, these advancements reflect a shift from modular intent modeling toward structurally integrated architectures capable of deeply encoding and reasoning over rich user information within the generation pipeline.

4.4 Feedback-Driven Preference Optimization

In contrast to the approaches discussed in the previous section—which personalize responses by conditioning on explicit user preferences—another line of research focuses on post-training, preference-based alignment methods. These methods aim to align model behavior with human preferences by optimizing the model based on feedback over its generated outputs. A widely adopted approach in this category is Reinforcement Learning from Human Feedback (RLHF) [90], which involves training a reward model from human-labeled preferences and then fine-tuning the language model through reinforcement learning.
In contrast, Direct Preference Optimization (DPO) [91] simplifies this process by eliminating the need for both a reward model and reinforcement learning. Instead, it directly optimizes a preference-based objective over response pairs using a supervised learning framework. In the following subsections, we review both RLHF and DPO in detail, highlighting their design choices, challenges, and recent advancements.

4.4.1 RLHF
While Supervised Fine-Tuning (SFT) can produce instruction-following behavior by training a model on labeled examples, it is inherently limited by the static nature of its training data, which does not reflect user preferences that arise during real-world deployment [90]. In contrast, Reinforcement Learning from Human Feedback (RLHF) augments this approach by introducing a post-training alignment phase, where the model learns to optimize for outputs judged more favorably by humans [92, 93]. Instead of relying solely on direct demonstrations, RLHF incorporates a reward model trained on human preference comparisons between multiple candidate responses. This reward signal guides further model updates via reinforcement learning [94], enabling the model to adapt to subjective and nuanced human values, especially in open-ended tasks lacking a single correct answer [94, 95]. RLHF thereby extends beyond SFT by embedding a dynamic feedback loop that allows ongoing refinement of model behavior based on human-aligned criteria, improving its helpfulness, harmlessness, and honesty over time [96, 97].

Table 5: Summary of Key Focus Areas in the RLHF Pipeline. RLAIF stands for Reinforcement Learning from AI Feedback.

| Category | Primary Goal | Key Techniques | Main Challenges |
| Reward Modeling | Preference estimation; Feedback interpretation | Pairwise comparison [98]; Logistic heads [92]; Reward ensembles [99]; Dense feedback via credit assignment [100]; Adaptive scaling [101] | Overfitting; Label noise; Misaligned incentives |
| Policy Optimization | Behavior alignment | Actor-Critic methods [102, 103]; KL-penalty regularization [104, 105]; Rejection sampling [106, 107] | Instability; Reward hacking; Exploration vs. alignment tradeoff |
| AI-Driven Feedback | Feedback automation; Human effort reduction | RLAIF [108]; Self-critiquing [109]; Synthetic annotations [110] | Bias propagation; Hallucination risks; Trustworthiness of AI-generated feedback |

A central pillar of the RLHF pipeline is reward modeling, which serves as the mechanism for translating human preferences into a trainable signal that guides policy optimization. This approach was first formalized through pairwise preference learning, in which reward models were trained using binary comparisons between model outputs, as shown in Deep RL from Human Preferences (2017) [98]. Subsequent improvements extended this method to domains such as language generation [92] and summarization [93], where human preferences were systematically incorporated to better capture quality and alignment. As the field has matured, scalability and fidelity have become pressing concerns. To address computational limitations, ensemble-based reward modeling techniques were developed to enhance performance without excessive overhead [99]. To improve feedback quality, attention-based credit assignment was proposed as a mechanism for generating dense reward signals from sparse human input [100]. In parallel, the Adaptive Preference Scaling framework introduced a dynamic way to modulate human preferences in multi-objective settings, helping models align with more nuanced value signals [101]. Conceptual challenges—such as reward hacking and misgeneralization—have also been systematically outlined in foundational critiques of RLHF reward modeling [111]. Together, these efforts reflect a clear evolution from handcrafted reward structures to scalable, robust, and statistically principled models capable of capturing complex, subjective human values—firmly establishing reward modeling as a cornerstone of alignment in RLHF.
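As a concrete illustration, the pairwise formulation at the root of this line of work reduces to a short loss function. The sketch below is a minimal PyTorch rendering of the Bradley-Terry-style objective behind pairwise reward modeling [98, 92]; the reward values shown are toy numbers.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style pairwise loss: -log sigmoid(r_w - r_l).

    Inputs are scalar reward-model outputs for the preferred (chosen) and
    dispreferred (rejected) responses to the same prompt.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy usage: a batch of four preference pairs.
r_w = torch.tensor([1.2, 0.3, 0.8, 2.0])
r_l = torch.tensor([0.5, 0.1, 1.0, 0.0])
loss = pairwise_reward_loss(r_w, r_l)  # small when the chosen reward dominates
```

Minimizing this loss pushes the reward model to score human-preferred outputs above rejected ones, which is the trainable signal the rest of the pipeline optimizes against.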
Policy optimization in RLHF is the stage where the learned reward signal is used to fine-tune the model's behavior, often under alignment constraints. A common approach involves actor-critic frameworks, which enable sample-efficient updates by decoupling policy learning from value estimation.
Recent work has extended this paradigm to constrained optimization settings to address reward overoptimization and instability in alignment objectives [102], and hierarchical actor-critic architectures have been proposed to manage multi-turn decision-making in complex language tasks [103]. To ensure the policy does not drift too far from its initial behavior, KL-penalty regularization is widely employed. While initially implemented as a global constraint, more recent advances have introduced token-level KL penalties that offer finer-grained control [104], alongside formal analyses that establish conditions under which KL regularization promotes both stability and alignment [105]. Another key technique, rejection sampling, has been explored as a safeguard against policy updates from low-quality feedback. Statistical rejection schemes have been proposed to filter unreliable preference comparisons, improving the fidelity of learning signals [106], while hybrid strategies like RS-DPO combine rejection filtering with direct preference optimization to enhance robustness in real-world deployment [107]. Collectively, these approaches illustrate how diverse optimization strategies are employed to balance safety, efficiency, and fidelity in RLHF-driven model alignment.

Recent research in RLHF has explored the use of AI-generated signals as alternatives or complements to direct human feedback, giving rise to the broader class of AI-driven feedback methods. In Reinforcement Learning from AI Feedback (RLAIF), preference data generated by language models is used to bootstrap reward models, reducing reliance on costly human annotation. This approach has shown promise in early experiments with self-generated comparisons guiding alignment [108]. A related line of work involves self-critiquing, where models learn to assess their own outputs or simulate preferences under uncertainty. Risk-aware methods have leveraged such self-feedback to refine policy training and improve robustness in alignment tasks [109]. Finally, techniques based on synthetic annotations aim to extract alignment signals from unlabeled or weakly labeled data sources. By leveraging proxy metrics or consistency within large corpora, these methods enable scalable preference modeling without explicit supervision [110]. Together, these techniques mark a shift toward more scalable and autonomous alignment strategies in the RLHF pipeline.

4.4.2 DPO

Although RLHF has proven effective in aligning large language models with human preferences, its reliance on reinforcement learning algorithms—often involving reward models, sampling loops, and KL-regularized objectives—can introduce complexity, instability, and inefficiency. To address these challenges, DPO has emerged as a simplified yet powerful alternative. Rather than learning a separate reward function and optimizing a policy through reinforcement learning, DPO directly fine-tunes the policy using pairwise human preference data. By framing alignment as a classification problem between preferred and dispreferred responses, DPO bypasses explicit reward modeling and policy rollouts, offering a more stable and scalable training pipeline. This paradigm shift not only streamlines implementation but also improves sample efficiency, making DPO an increasingly popular strategy for preference-based alignment in large-scale language models.
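A compact sketch of the standard DPO objective is given below, assuming that sequence-level log-probabilities under the trainable policy and a frozen reference model have already been computed; the β value is an illustrative default, not a prescribed setting.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO as pairwise classification over (chosen, rejected) responses.

    Each argument is a tensor of summed log p(y|x) values for the chosen (w)
    or rejected (l) response under the policy or the frozen reference model.
    """
    # Implicit reward of each response: beta * log-ratio against the reference.
    chosen_margin = policy_logp_w - ref_logp_w
    rejected_margin = policy_logp_l - ref_logp_l
    # Maximize the probability that the chosen response outranks the rejected one.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```

Notice that no reward model or sampling loop appears anywhere: the preference signal enters only through the log-probabilities of the two responses, which is what makes the training pipeline stable and easy to implement.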
| Variant | Scoring Function S(y_w, y_l, x) | Regularization Term R(y_w, y_l, x) | Motivating Limitation | Approach |
| DPO [91] | β·[log p(y_w|x) − log p(y_l|x)] | 0 | Lack of explicit reward model | Log-likelihood difference |
| CPO [112] | β·[log p(y_w|x) − log p(y_l|x)] | −log p(y_w|x) | Overconfident updates | Likelihood penalty |
| ORPO [113] | λ·[log(p₀(y_w|x)/(1 − p₀(y_w|x))) − log(p₀(y_l|x)/(1 − p₀(y_l|x)))] | −log p(y_w|x) | KL-dependence | Odds-ratio scoring |
| SimPO [114] | (β/|y_w|)·log p(y_w|x) − (β/|y_l|)·log p(y_l|x) − γ | 0 | Length bias | Length-normalized scoring |
| IRPO [115] | β·[log(p(c_w, y_w|x)/p_t(c_w, y_w|x)) − log(p(c_l, y_l|x)/p_t(c_l, y_l|x))] | −α·log p(c_w, y_w|x)/(|c_w| + |y_w|) | Poor generalization | Compression + coverage regularization |
| β-DPO [116] | β·[log p(y_w|x) − log p(y_l|x)], with adaptive β | 0 | Static optimization sharpness | Adaptive temperature |
| DPOC [63] | β·[log p(y_w|x) − log p(y_l|x)] | P(r_cho, r_crt) + P(r_crt, r_rej) | Preference misranking | Criterion-based penalty terms |

Table 6: Comparison of DPO variants by scoring functions, regularization strategies, motivating limitations, and optimization approaches. The DPOC regularization uses P(r_a, r_b) = −min(0, log r_a − log r_b). Table adapted from [2].

At the core of DPO lies a simple yet effective probabilistic framework that formalizes preference optimization without relying on explicit reward modeling. Given a pair of outputs—one preferred (y_w) and one rejected (y_l)—the objective encourages the model to assign higher likelihood to the preferred response relative to the rejected one. To support a unified view across emerging DPO variants, we generalize the original objective into the following form:

L(θ) = −log σ(S(y_w, y_l, x)) + R(y_w, y_l, x)

Here, S is a scoring function that quantifies the preference between responses, and R is an optional regularization term. The original DPO formulation is recovered by setting S(y_w, y_l, x) = β·[log p(y_w|x) − log p(y_l|x)] and R = 0. This closed-form structure captures the key intuition behind DPO: to directly align model behavior with human preferences in a stable, efficient, and extensible way.

Several algorithmic extensions to the original DPO formulation have been proposed to address practical limitations such as overconfidence, verbosity, prior dependency, and optimization sharpness. These are summarized in Table 6, which categorizes variants by their scoring and regularization formulations. To mitigate reward overoptimization and encourage more calibrated updates, Contrastive Preference Optimization (CPO) [112] adds a log-likelihood penalty term to the loss, discouraging the model from assigning excessively high probabilities to preferred responses. Odds-Ratio Preference Optimization (ORPO) [113], in contrast, replaces the standard log-likelihood difference with a log-odds scoring function derived from a reference model, thereby anchoring preferences relative to base policy probabilities and removing the need for explicit KL penalties. Simple Preference Optimization (SimPO) [114] targets verbosity by scaling log-probabilities inversely with response length, effectively normalizing the preference score to favor concise outputs. Iterative Reasoning Preference Optimization (IRPO) [115] integrates auxiliary compression and coverage terms into both scoring and regularization components, enhancing robustness to preference noise and improving generalization under limited supervision. β-DPO [116] revisits the temperature parameter in the scoring function, replacing the fixed β with either learnable or input-dependent temperature values to dynamically adjust alignment sensitivity across prompts. Most recently, Direct Preference Optimization with Criterion (DPOC) [63] addresses preference misranking by adding penalty terms based on criterion responses, ensuring that the preferred output meaningfully surpasses both the rejected response and an intermediate-quality reference.
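The sketch below instantiates the unified L(θ) = −log σ(S) + R template with a few of the scoring and regularization choices from Table 6; the dictionary keys and hyperparameter values are illustrative assumptions, and weighting terms used by the original papers are omitted.

```python
import torch
import torch.nn.functional as F

def generalized_preference_loss(scoring_fn, regularizer_fn, batch):
    """Unified objective: L = -log sigmoid(S(y_w, y_l, x)) + R(y_w, y_l, x)."""
    return (-F.logsigmoid(scoring_fn(batch)) + regularizer_fn(batch)).mean()

# Original DPO (per Table 6): S = beta * (log p(y_w|x) - log p(y_l|x)), R = 0.
def dpo_score(batch, beta=0.1):
    return beta * (batch["logp_w"] - batch["logp_l"])

# SimPO-style scoring: length-normalized log-likelihoods minus a margin gamma.
def simpo_score(batch, beta=2.0, gamma=0.5):
    return (beta / batch["len_w"]) * batch["logp_w"] \
         - (beta / batch["len_l"]) * batch["logp_l"] - gamma

# CPO-style regularizer: penalize low likelihood of the preferred response.
def cpo_regularizer(batch):
    return -batch["logp_w"]

def no_regularizer(batch):
    return torch.zeros_like(batch["logp_w"])

# Toy batch of two preference pairs (summed log-probabilities and lengths).
batch = {
    "logp_w": torch.tensor([-12.0, -8.5]), "logp_l": torch.tensor([-14.0, -9.0]),
    "len_w": torch.tensor([24.0, 17.0]), "len_l": torch.tensor([30.0, 15.0]),
}
loss_dpo = generalized_preference_loss(dpo_score, no_regularizer, batch)
loss_simpo = generalized_preference_loss(simpo_score, no_regularizer, batch)
loss_cpo = generalized_preference_loss(dpo_score, cpo_regularizer, batch)
```

Keeping S and R pluggable makes the family resemblance among the variants in Table 6 explicit: each row is just a different pair of callables passed to the same loss.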
To support controllable alignment across multiple preference dimensions—such as helpfulness, honesty, and harmlessness—multi-objective frameworks have been explored. One such approach is MODPO [117], which generalizes DPO to accommodate vectorized preference signals, enabling simultaneous optimization over multiple alignment axes without collapsing them into a single scalar reward.

5 CONCLUSION

This survey examined the role of preference optimization in the design of AI copilots—AI-powered, context-aware systems that support users in real-time, high-context tasks. We began by establishing a unified definition of AI copilots that consolidates diverse interpretations across domains, providing a conceptual basis for consistent study and development.
Building on this, we offered a phase-based view of preference optimization, structured around how user preferences are identified, interpreted, and used to drive personalized interaction throughout the lifecycle of user engagement. By synthesizing methods for detecting preferences, generating contextually appropriate responses, and refining behavior through feedback, this work provides a coherent framework for understanding and advancing personalization in AI copilots. Beyond conceptual synthesis, the survey offers practical guidance for system designers by mapping available preference sources, outlining how they can be integrated, and analyzing technique selection across each stage of a preference-aware AI copilot. As a future direction, this perspective could be extended by systematically exploring implementation pathways and cost trade-offs. Each stage involves choices—such as platform selection, model customization, or data infrastructure—that carry implications for scalability, latency, and operational cost. Studying these dimensions would yield more actionable strategies for building effective and efficient preference-aware AI copilots.

Acknowledgments

We acknowledge the Centre for Applied Artificial Intelligence at Macquarie University (Sydney, Australia) for funding this research.

References

[1] Alexander Maedche, Christine Legner, Alexander Benlian, Benedikt Berger, Henner Gimpel, Thomas Hess, Oliver Hinz, Stefan Morana, and Matthias Söllner. Ai-based digital assistants: Opportunities, threats, and research perspectives. Business & Information Systems Engineering, 61:535–544, 2019. [2] Wenyi Xiao, Zechuan Wang, Leilei Gan, Shuai Zhao, Wanggui He, Luu Anh Tuan, Long Chen, Hao Jiang, Zhou Zhao, and Fei Wu. A comprehensive survey of direct preference optimization: Datasets, theories, variants, and applications. arXiv preprint arXiv:2410.15595, 2024. [3] Amrita S Tulshan and Sudhir Namdeorao Dhage. Survey on virtual assistant: Google assistant, siri, cortana, alexa. In Advances in Signal Processing and Intelligent Recognition Systems: 4th International Symposium SIRS 2018, Bangalore, India, September 19–22, 2018, Revised Selected Papers 4, pages 190–201. Springer, 2019. [4] Tom Bolton, Tooska Dargahi, Sana Belguith, Mabrook S Al-Rakhami, and Ali Hassan Sodhro. On the security and privacy challenges of virtual assistants. Sensors, 21(7):2312, 2021. [5] Michal Furmakiewicz, Chang Liu, Angus Taylor, and Ilya Venger. Design and evaluation of ai copilots–case studies of retail copilot templates. arXiv preprint arXiv:2407.09512, 2024. [6] John Danaher. Toward an ethics of ai assistants: An initial framework. Philosophy & Technology, 31(4):629–653, 2018. [7] Abigail Sellen and Eric Horvitz. The rise of the ai co-pilot: Lessons for design from aviation and beyond. arXiv preprint arXiv:2311.14713, 2023. [8] John E Kelly. Computing, cognition and the future of knowing. IBM Research. Oct, 13(2015):12, 2015. [9] Dylan Schmorrow. Foundations of augmented cognition. Springer, 2005. [10] Erik Hollnagel and David D Woods. Joint cognitive systems: Foundations of cognitive systems engineering. CRC Press, 2005. [11] Carlo Reverberi, Tommaso Rigon, Aldo Solari, Cesare Hassan, Paolo Cherubini, and Andrea Cherubini. Experimental evidence of effective human–ai collaboration in medical decision-making. Scientific Reports, 12(1):14952, 2022. [12] Dominik Dellermann, Adrian Calma, Nikolaus Lipusch, Thorsten Weber, Sascha Weigel, and Philipp Ebel.
The future of human-ai collaboration: a taxonomy of design knowledge for hybrid intelligence systems. arXiv preprint arXiv:2105.03354 , 2021. [13] Ashish Sharma, Inna W Lin, Adam S Miner, David C Atkins, and Tim Althoff. Human–ai collaboration enables more empathic conversations in text-based peer-to-peer mental
health support. Nature Machine Intelligence, 5(1):46–57, 2023. [14] Stefan Wellsandt, Karl Hribernik, and Klaus-Dieter Thoben. Anatomy of a digital assistant. In Advances in Production Management Systems. Artificial Intelligence for Sustainable and Resilient Production Systems: IFIP WG 5.7 International Conference, APMS 2021, Nantes, France, September 5–9, 2021, Proceedings, Part IV, pages 321–330. Springer, 2021. [15] Robin Knote, Andreas Janson, Matthias Söllner, and Jan Marco Leimeister. Classifying smart personal assistants: An empirical cluster analysis. 2019. [16] Benjamin R Cowan, Nadia Pantidi, David Coyle, Kellie Morrissey, Peter Clarke, Sara Al-Shehri, David Earley, and Natasha Bandeira. "What can I help you with?" Infrequent users' experiences of intelligent personal assistants. In Proceedings of the 19th international conference on human-computer interaction with mobile devices and services, pages 1–12, 2017. [17] Guendalina Caldarini, Sardar Jaf, and Kenneth McGarry. A literature survey of recent advances in chatbots. Information, 13(1):41, 2022. [18] Luca Hernández Acosta and Delphine Reinhardt. A survey on privacy issues and solutions for voice-controlled digital assistants. Pervasive and Mobile Computing, 80:101523, 2022. [19] Reza Rawassizadeh, Taylan Sen, Sunny Jung Kim, Christian Meurisch, Hamidreza Keshavarz, Max Mühlhäuser, and Michael Pazzani. Manifestation of virtual assistants and robots into daily life: Vision and challenges. CCF Transactions on Pervasive Computing and Interaction, 1:163–174, 2019. [20] Andrea L Guzman. Voices in and of the machine: Source orientation toward mobile virtual assistants. Computers in Human Behavior, 90:343–350, 2019. [21] Ebtesam H Almansor and Farookh Khadeer Hussain. Survey on intelligent chatbots: State-of-the-art and future research directions. In Complex, Intelligent, and Software Intensive Systems: Proceedings of the 13th International Conference on Complex, Intelligent, and Software Intensive Systems (CISIS-2019), pages 534–543. Springer, 2020. [22] Ritu Agarwal and Mani Wadhwa. Review of state-of-the-art design techniques for chatbots. SN Computer Science, 1(5):246, 2020. [23] Shafquat Hussain, Omid Ameri Sianaki, and Nedal Ababneh. A survey on conversational agents/chatbots classification and design techniques. In Web, Artificial Intelligence and Network Applications: Proceedings of the Workshops of the 33rd International Conference on Advanced Information Networking and Applications (WAINA-2019) 33, pages 946–956. Springer, 2019. [24] Eleni Adamopoulou and Lefteris Moussiades. Chatbots: History, technology, and applications. Machine Learning with Applications, 2:100006, 2020. [25] Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Tharindu Kaluarachchi, Rajib Rana, and Suranga Nanayakkara. Improving the domain adaptation of retrieval augmented generation (rag) models for open domain question answering. Transactions of the Association for Computational Linguistics, 11:1–17, 2023. [26] Bassem Abu-Nasser. Medical expert systems survey. International Journal of Engineering and Information Systems (IJEAIS), 1(7):218–224, 2017. [27] Shu-Hsien Liao. Expert system methodologies and applications—a decade review from 1995 to 2004. Expert systems with applications, 28(1):93–103, 2005. [28] J Joy and K Sreekumar. A survey on expert system in agriculture. International journal of computer science and information technologies, 5:7861–7864, 2014. [29] Joseph Giarratano and Gary Riley.
Expert systems: Principles and programming (fourth edition). Canada: Thomson, 2005. [30] Zhi-Jie Zhou, Guan-Yu Hu, Chang-Hua Hu, Cheng-Lin Wen, and Lei-Lei Chang. A survey of belief rule-base expert system. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 51(8):4944–4958, 2019. [31]
Ming Y Lu, Bowen Chen, Drew FK Williamson, Richard J Chen, Melissa Zhao, Aaron K Chow, Kenji Ikemura, Ahrong Kim, Dimitra Pouli, Ankush Patel, et al. A multimodal generative ai copilot for human pathology. Nature, 634(8033):466–473, 2024. [32] Hussein Mozannar, Gagan Bansal, Adam Fourney, and Eric Horvitz. When to show a suggestion? integrating human feedback in ai-assisted programming. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 10137–10144, 2024. [33] Paul R Daugherty and H James Wilson. Human + machine: Reimagining work in the age of AI. Harvard Business Press, 2018. [34] Abigail Sellen and Eric Horvitz. The rise of the ai co-pilot: Lessons for design from aviation and beyond. Communications of the ACM, 67(7):18–23, 2024. [35] Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios N. Angelopoulos, Tianle Li, Dacheng Li, Banghua Zhu, Hao Zhang, Michael I. Jordan, Joseph E. Gonzalez, and Ion Stoica. Chatbot arena: An open platform for evaluating llms by human preference. arXiv preprint, 2024. [36] Benedikt Loepp and Jürgen Ziegler. Exploring the potential of generative ai for augmenting choice-based preference elicitation in recommender systems. In Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization (UMAP Adjunct '24), pages 114–120, 2024. [37] Xin Zhao, Meng Wang, Xiangnan He, Ming Gao, and Liqiang He. Explicit or implicit feedback? engagement or satisfaction? In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18), pages 24–32, 2018. [38] Bamshad Mobasher. Automatic personalization based on web usage mining. ACM SIGWEB Newsletter, 2003. [39] Melanie Heck, Janick Edinger, Jonathan Bünemann, and Christian Becker. Exploring gaze-based prediction strategies for preference detection in videos. In Proceedings of the 2021 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR), pages 129–138, 2021. [40] Aaron D. Tucker, Kiante Brantley, Adam Cahall, and Thorsten Joachims. Coactive learning for large language models using implicit user feedback. In Proceedings of the 41st International Conference on Machine Learning (ICML), 2024. [41] Minsoo Park, Hyunsu Kim, Sungdong Lee, and Seung-won Kim. Rlhf from heterogeneous feedback via personalization and preference aggregation. arXiv preprint, 2024. [42] Xiaowen Cao, Yuan Fang, Feida Zhu, Chuan Zhang, and Ee-Peng Chang. What are you known for? learning user topical profiles with implicit and explicit footprints. In Proceedings of the 10th ACM International Conference on Web Search and Data Mining (WSDM '17), pages 439–448, 2017. [43] Shenao Zhang, Donghan Yu, Hiteshi Sharma, Han Zhong, Zhihan Liu, Ziyi Yang, Shuohang Wang, Hany Hassan, and Zhaoran Wang. Self-exploring language models: Active preference elicitation for online alignment. In Proceedings of the 41st International Conference on Machine Learning (ICML), 2024. [44] David Eric Austin, Anton Korikov, Armin Toroghi, and Scott Sanner. Bayesian optimization with llm-based acquisition functions for natural language preference elicitation. In Proceedings of the 18th ACM Conference on Recommender Systems (RecSys), 2024. [45] Kunal Handa, Yarin Gal, Ellie Pavlick, Noah Goodman, Jacob Andreas, Alex Tamkin, and Belinda Z. Li. Bayesian preference elicitation with language models. arXiv preprint, 2024. [46] Wasu Top Piriyakulkij, Volodymyr Kuleshov, and Kevin Ellis. Active
preference inference using language models and probabilistic reasoning. arXiv preprint, 2023. [47] Se-eun Yoon, Zhankui He, Jessica Maria Echterhoff, and Julian McAuley. Evaluating large language models as generative user simulators for conversational recommendation. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), 2024. [48] Yi Zhang, Menghan Liu, Liwei Wang, Qianyu Cheng, and Minlie Huang. Guided profile generation improves personalization with llms. In Findings of the Association for Computational Linguistics: EMNLP 2024, 2024. [49] Kihyuk Go, Jaewook Choe, and Se-Young Kim. Aligning language models with preferences through f-divergence minimization. In Proceedings of the 40th International Conference on Machine Learning (ICML), 2023. [50] Zhen Wu, Fangzhou Li, Yuchen Zhang, Honglak Lee, and Kyunghyun Cho. Aligning llms with individual preferences via interaction. In Proceedings of the 29th International Conference on Computational Linguistics (COLING), 2025. [51] Anonymous. Heimdall: A privacy-respecting implicit preference collection framework. In ACM WSDM, 2017. [52] Filip Radlinski, Saravan Chandar, and Hyun Jeon. Coached conversational preference elicitation: A case study in understanding movie preferences. In Proceedings of the 2019 Annual Meeting of the Association for Computational Linguistics (ACL), 2019. [53] Siyan Zhao, Mingyi Hong, Yang Liu, Devamanyu Hazarika, and Kaixiang Lin. Do llms recognize your preferences? evaluating personalized preference following in llms. arXiv preprint, 2025. [54] Christopher Ifeanyi Eke, Azah Anir Norman, Liyana Shuib, and Henry Friday Nweke. A survey of user profiling: State-of-the-art, challenges, and solutions. IEEE Access, 7:144907–144928, 2019. [55] Jiahui Zhou, Weinan Zhang, and Jun Wang. User modeling and user profiling: A comprehensive survey. ACM Transactions on Information Systems (TOIS), 40(4):1–49, 2022. [56] Wei Chu and Zoubin Ghahramani. Preference learning with gaussian processes. Advances in neural information processing systems, 18, 2005. [57] Jeonghwan Choi, Jaewook Moon, and Jihie Lee. User persona identification and new service adaptation recommendation. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, pages 56–64, 2021. [58] Lakshmi Venkatesh, Shubham Gupta, Molly O'Neill, and Partha Talukdar. Collaborative filtering to capture ai user's preferences as norms. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 1516–1526, 2022. [59] Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. A survey on accuracy-oriented neural recommendation: From collaborative filtering to information-rich recommendation. ACM Computing Surveys (CSUR), 54(8):1–38, 2021. [60] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering for user preference discovery from biased implicit feedback. In WWW, pages 173–182, 2017. [61] Zhu, Li, Mao, Pandelea, and Cambria. PAED: Zero-shot persona attribute extraction in dialogues. ACL Anthology, 2023. [62] Kong Hao. Enhancing emotional support conversations: a framework for dynamic. ACL Anthology, 2025. [63] Chuanqi Cheng, Quan Tu, Shuo Shang, Cunli Mao, Zhengtao Yu, Wei Wu, and Rui Yan. "In dialogues we learn": Towards personalized dialogue without pre-defined profiles through in-dialogue learning. arXiv preprint arXiv:2403.03102, 2024.
[64] Qing Zhao et al. Towards personalized human-ai interaction: Adapting the behavior of ai agents using neural
signatures of subjective interest. Proceedings of the National Academy of Sciences, 120(12), 2023. [65] Nathan Lee, Arun Suggala, et al. Active preference learning for large language models. arXiv preprint arXiv:2310.XXXX, 2023. [66] Weiyan Xu, Abigail See, et al. When to show a suggestion? integrating human feedback in ai-assisted programming. arXiv preprint arXiv:2306.XXXX, 2023. [67] Lilian Weng et al. Afspp: An agent framework for shaping preference and personality with llms. arXiv preprint arXiv:2308.XXXX, 2023. [68] James Lang and Gary Marchant. Preferences in ai. AI & Society, 2020. [69] Liu Huang, Fu et al. Learning retrieval augmentation for personalized dialogue generation. arXiv preprint arXiv:2406.18847, 2024. [70] Saleh Afzoon, Usman Naseem, Amin Beheshti, and Zahra Jamali. Persobench: Benchmarking personalized response generation in large language models. arXiv preprint arXiv:2410.03198, 2024. [71] Yi-Pei Chen, Noriki Nishida, Hideki Nakayama, and Yuji Matsumoto. Recent trends in personalized dialogue generation: A review of datasets, methodologies, and evaluations. arXiv preprint arXiv:2405.17974, 2024. [72] Yuezhou Dong, Ke Qin, Pei Ke, Shuang Liang, and Guangchun Luo. Cross-graph knowledge exchange for personalized response generation in dialogue systems. IEEE Internet of Things Journal, 2025. [73] Zhengyuan Liu, Siti Salleh, Pavitra Krishnaswamy, and Nancy Chen. Context aggregation with topic-focused summarization for personalized medical dialogue generation. In Proceedings of the 6th Clinical Natural Language Processing Workshop, pages 310–321, 2024. [74] Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. Will i sound like me? improving persona consistency in dialogues through pragmatic self-consciousness. arXiv preprint arXiv:2004.05816, 2020. [75] Minsik Oh, Joosung Lee, Jiwei Li, and Guoyin Wang. Pk-icr: Persona-knowledge interactive multi-context retrieval for grounded dialogue. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16383–16395, 2023. [76] Qiushi Huang, Xubo Liu, Tom Ko, Bo Wu, Wenwu Wang, Yu Zhang, and Lilian Tang. Selective prompting tuning for personalized conversations with llms. arXiv preprint arXiv:2406.18187, 2024. [77] Sandeep Varma, Shivam Shivam, Sarun Natarajan, Biswarup Ray, Bagesh Kumar, and Om Dabral. Talk to your brain: Artificial personalized intelligence for emotionally adaptive ai interactions. In 2024 IEEE International Conference on Computer Vision and Machine Intelligence (CVMI), pages 1–6. IEEE, 2024. [78] Weichao Wang, Shi Feng, Wei Gao, Daling Wang, and Yifei Zhang. A cue adaptive decoder for controllable neural response generation. In Proceedings of the Web Conference 2020, pages 2570–2576, 2020. [79] Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149, 2019. [80] Yu Cao, Wei Bi, Meng Fang, Shuming Shi, and Dacheng Tao. A model-agnostic data manipulation method for persona-based dialogue generation. arXiv preprint arXiv:2204.09867, 2022. [81] Xu Han, Bin Guo, Yoon Jung, Benjamin Yao, Yu Zhang, Xiaohu Liu, and Chenlei Guo. Personapkt: Building personalized dialogue agents via parameter-efficient knowledge transfer. arXiv preprint arXiv:2306.08126, 2023. [82] Jungwoo Lim, Myunghoon Kang, Jinsung Kim, Jeongwook Kim, Yuna Hur, and Heui-Seok Lim. Beyond candidates: adaptive dialogue agent utilizing persona and knowledge.
In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 7950–7963, 2023. [83] Siqi Bao,
Huang He, Fan Wang, Hua Wu, and Haifeng Wang. Plato: Pre-trained dialogue generation model with discrete latent variable. arXiv preprint arXiv:1910.07931, 2019. [84] Yuwei Wu, Xuezhe Ma, and Diyi Yang. Personalized response generation via generative split memory network. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1956–1970, 2021. [85] Peixiang Zhong, Chen Zhang, Hao Wang, Yong Liu, and Chunyan Miao. Towards persona-based empathetic conversational models. arXiv preprint arXiv:2004.12316, 2020. [86] Dongshi Ju, Shi Feng, Pengcheng Lv, Daling Wang, and Yifei Zhang. Learning to improve persona consistency in multi-party dialogue generation via text knowledge enhancement. In Proceedings of the 29th International Conference on Computational Linguistics, pages 298–309, 2022. [87] Qiushi Huang, Yu Zhang, Tom Ko, Xubo Liu, Bo Wu, Wenwu Wang, and H Tang. Personalized dialogue generation with persona-adaptive attention. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 12916–12923, 2023. [88] Khyati Mahajan and Samira Shaikh. Persona-aware multi-party conversation response generation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12712–12723, 2024. [89] Zhaoxuan Tan, Zheyuan Liu, and Meng Jiang. Personalized pieces: Efficient personalized large language models through collaborative efforts. arXiv preprint arXiv:2406.10471, 2024. [90] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems, 35:27730–27744, 2022. [91] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023. [92] Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. [93] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in neural information processing systems, 33:3008–3021, 2020. [94] Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. Fine-grained human feedback gives better rewards for language model training. Advances in Neural Information Processing Systems, 36:59008–59033, 2023. [95] Hao Lang, Fei Huang, and Yongbin Li. Fine-tuning language models with reward learning on policy. arXiv preprint arXiv:2403.19279, 2024. [96] Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Vinayak Bhalerao, Christopher Buckley, Jason Phang, Samuel R Bowman, and Ethan Perez. Pretraining language models with human preferences. In International Conference on Machine Learning, pages 17506–17533. PMLR, 2023. [97] Alexander Havrilla, Maksym Zhuravinskyi, Duy Phung, Aman Tiwari, Jonathan Tow, Stella Biderman, Quentin Anthony, and Louis Castricato. trlx: A framework for large scale reinforcement learning from human feedback.
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8578–8595, 2023. [98] Paul F Christiano, Jan Leike, Tom Brown, Miljan
Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30, 2017. [99] Shun Zhang, Zhenfang Chen, Sunli Chen, Yikang Shen, Zhiqing Sun, and Chuang Gan. Improving reinforcement learning from human feedback with efficient reward model ensemble. arXiv preprint arXiv:2401.16635, 2024. [100] Alex J Chan, Hao Sun, Samuel Holt, and Mihaela Van Der Schaar. Dense reward for free in reinforcement learning from human feedback. arXiv preprint arXiv:2402.00782, 2024. [101] Ilgee Hong, Zichong Li, Alexander Bukharin, Yixiao Li, Haoming Jiang, Tianbao Yang, and Tuo Zhao. Adaptive preference scaling for reinforcement learning with human feedback. Advances in Neural Information Processing Systems, 37:107249–107269, 2024. [102] Ted Moskovitz, Aaditya K Singh, DJ Strouse, Tuomas Sandholm, Ruslan Salakhutdinov, Anca D Dragan, and Stephen McAleer. Confronting reward model overoptimization with constrained rlhf. arXiv preprint arXiv:2310.04373, 2023. [103] Yifei Zhou, Andrea Zanette, Jiayi Pan, Sergey Levine, and Aviral Kumar. Archer: Training language model agents via hierarchical multi-turn rl. arXiv preprint arXiv:2402.19446, 2024. [104] Wei Xiong, Hanze Dong, Chenlu Ye, Ziqi Wang, Han Zhong, Heng Ji, Nan Jiang, and Tong Zhang. Iterative preference learning from human feedback: Bridging theory and practice for rlhf under kl-constraint. arXiv preprint arXiv:2312.11456, 2023. [105] Rui Zhao et al. Sharp analysis for kl-regularized contextual bandits and rlhf. arXiv preprint arXiv:2411.04625, 2024. [106] Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J Liu, and Jialu Liu. Statistical rejection sampling improves preference optimization. arXiv preprint arXiv:2309.06657, 2023. [107] Saeed Khaki, JinJin Li, Lan Ma, Liu Yang, and Prathap Ramachandra. Rs-dpo: A hybrid rejection sampling and direct preference optimization method for alignment of large language models. arXiv preprint arXiv:2402.10038, 2024. [108] Chenghua Huang, Zhizhen Fan, Lu Wang, Fangkai Yang, Pu Zhao, Zeqi Lin, Qingwei Lin, Dongmei Zhang, Saravan Rajmohan, and Qi Zhang. Self-evolved reward learning for llms. arXiv preprint arXiv:2411.00418, 2024. [109] Yujie Zhao, Jose Aguilar Escamilla, Weyl Lu, and Huazheng Wang. Ra-pbrl: Provably efficient risk-aware preference-based reinforcement learning. Advances in Neural Information Processing Systems, 37:60835–60871, 2024. [110] Zi Liang, Pinghui Wang, Ruofei Zhang, Haibo Hu, Shuo Zhang, Qingqing Ye, Nuo Xu, Yaxin Xiao, Chen Zhang, and Lizhen Cui. Exploring intrinsic alignments within text corpus. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 27455–27463, 2025. [111] Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, Jérémy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, Pedro Freire, et al. Open problems and fundamental limitations of reinforcement learning from human feedback. arXiv preprint arXiv:2307.15217, 2023. [112] Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation. arXiv preprint arXiv:2401.08417, 2024. [113] Jiwoo Hong, Noah Lee, and James Thorne. Orpo: Monolithic preference optimization without reference model. arXiv preprint arXiv:2403.07691, 2024. [114] Yu Meng, Mengzhou Xia, and Danqi Chen.
Simpo: Simple preference optimization with a reference-free reward. Advances in Neural Information Processing Systems , 37:124198–124235, 2024. [115] Richard
Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 116617–116637. Curran Associates, Inc., 2024. [116] Junkang Wu, Yuexiang Xie, Zhengyi Yang, Jiancan Wu, Jinyang Gao, Bolin Ding, Xiang Wang, and Xiangnan He. β-DPO: Direct Preference Optimization with Dynamic β. Advances in Neural Information Processing Systems, 37:129944–129966, 2024. [117] Zhanhui Zhou, Jie Liu, Chao Yang, Jing Shao, Yu Liu, Xiangyu Yue, Wanli Ouyang, and Yu Qiao. Beyond one-preference-for-all: Multi-objective direct preference optimization. 2023.
arXiv:2505.21908v1 [cs.LG] 28 May 2025

Reinforcement Learning for Out-of-Distribution Reasoning in LLMs: An Empirical Study on Diagnosis-Related Group Coding

Hanyin Wang1,2, Zhenbang Wu2, Gururaj Kolar3, Hariprasad Korsapati1, Brian Bartlett1, Bryan Hull4, Jimeng Sun2
1Mayo Clinic Health System  2University of Illinois Urbana-Champaign  3Mayo Clinic Rochester  4Mayo Clinic Phoenix
wang.hanyin@mayo.edu

Abstract

Diagnosis-Related Group (DRG) codes are essential for hospital reimbursement and operations but require labor-intensive assignment. Large Language Models (LLMs) struggle with DRG coding due to the out-of-distribution (OOD) nature of the task: pretraining corpora rarely contain private clinical or billing data. We introduce DRG-SAPPHIRE, which uses large-scale reinforcement learning (RL) for automated DRG coding from clinical notes. Built on Qwen2.5-7B and trained with Group Relative Policy Optimization (GRPO) using rule-based rewards, DRG-SAPPHIRE introduces a series of RL enhancements to address domain-specific challenges not seen in previous mathematical tasks. Our model achieves state-of-the-art accuracy on the MIMIC-IV benchmark and generates physician-validated reasoning for DRG assignments, significantly enhancing explainability. Our study further sheds light on broader challenges of applying RL to knowledge-intensive, OOD tasks. We observe that RL performance scales approximately linearly with the logarithm of the number of supervised fine-tuning (SFT) examples, suggesting that RL effectiveness is fundamentally constrained by the domain knowledge encoded in the base model. For OOD tasks like DRG coding, strong RL performance requires sufficient knowledge infusion prior to RL. Consequently, scaling SFT may be more effective and computationally efficient than scaling RL alone for such tasks.1

[Figure 1 appears here: panel (A) is a bar chart comparing DRG-Sapphire (ours) with DRG-LLaMA-7B, ClinicalBERT, CAML, GPT-4o, O3-Mini, and R1-Distill-Qwen-32B; panel (B) plots best RL accuracy against SFT sample size.]

Figure 1: Main Results. (A) Accuracy of DRG coding on the MIMIC-IV test set (N=26,244). DRG-SAPPHIRE outperforms proprietary reasoning models and the previous SOTA model, DRG-LLaMA. Notably, classification models could not generate reasoning for DRG code assignments. (B) Best RL performance increases linearly with the logarithm of the SFT sample sizes. Dashed line marks where 50% of training data was used for SFT. Best results from vanilla GRPO runs are shown.

1Our code is available at https://github.com/hanyin88/DRG-Sapphire.

Preprint. Under review.

1 Introduction

Medical codes such as DRG play pivotal roles in modern healthcare. DRG codes are fundamental to the inpatient prospective payment system, directly influencing hospital reimbursement and key quality metrics [28]. Currently, assigning DRG codes from clinical notes remains a costly and labor-intensive task, performed manually by highly trained coding specialists. With the emergence of LLMs, there has been growing interest in leveraging these models for automated medical coding [9, 31, 34, 22, 40]. However, DRG coding remains a particularly challenging task for LLMs (Figure 1A), with prior attempts yielding limited success [34, 31].
A primary difficulty arises because DRG coding represents an out-of-distribution (OOD) task for off-the-shelf LLMs. Due to the private nature of medical records, most LLMs likely have minimal exposure to patient notes or billing data during pretraining. Additionally, DRG coding is inherently challenging due to: (1) a high-dimensional search space with over 700
DRG codes; (2) advanced clinical reasoning required to link diagnoses with hospital resource use and disease severity; and (3) strict hierarchical rules governing DRG assignment.

Recent advances in reasoning models, such as OpenAI-o1 [16] and DeepSeek-R1 [13], have introduced a paradigm shift in LLM post-training. By leveraging large-scale RL with verifiable rewards, these models exhibit test-time scaling through extended chain-of-thought (CoT) reasoning, achieving state-of-the-art (SOTA) performance on complex tasks like competitive mathematics. Despite this progress, the design of optimal RL algorithms for scalable training remains an open challenge [41, 25]. In the healthcare domain, RL applications using verifiable rewards are still in their early stages, with prior work primarily focused on medical knowledge benchmarks [5, 20, 21].

In this paper, we present a comprehensive exploration of large-scale, reasoning-oriented RL training for automated DRG coding from unstructured clinical notes. In theory, training towards a reasoning model is well-suited for this task: (1) it promotes the development of complex reasoning skills required for accurate code assignment; and (2) more importantly, it generates transparent rationales through CoT reasoning—a key requirement for trust and explainability in real-world clinical applications. Through this work, we aim to further derive insights into applying RL to challenging OOD tasks with off-the-shelf LLMs. Using the Qwen2.5-7B model and GRPO with DRG-rule-based rewards, we systematically investigate the prerequisites for successful RL, the allocation of data between SFT and GRPO under a fixed data budget, and the impact of scaling SFT data. We also explore a series of RL algorithmic enhancements and adaptive learning strategies. Our core contributions are as follows:

1. We introduce DRG-SAPPHIRE, a novel model developed through large-scale RL, achieving SOTA performance in automated DRG coding. Unlike prior methods, DRG-SAPPHIRE generates clinically helpful, physician-validated reasoning, significantly improving explainability.
2. We demonstrate that the performance ceiling of RL in this OOD task is bounded by the model's capabilities before RL training. Specifically, we observe that RL performance increases linearly with the logarithm of the number of SFT examples, suggesting that scaling SFT may be more effective and computationally efficient than scaling RL alone for such tasks.
3. We propose a series of algorithmic enhancements and identify unique challenges in applying RL to DRG coding that distinguish it from mathematical domains, such as a preference for an Answer-First cognitive pattern and sensitivity to KL divergence for stable training.

2 Related Work

Automated DRG Coding. Given their critical role in hospital operations and reimbursement, there is significant interest in automating DRG coding and enabling early DRG prediction [24, 14, 34, 11]. The prior SOTA method, DRG-LLaMA, fine-tunes a LLaMA model as a sequence classifier by replacing its generation head with a classification head [34]. Most existing approaches similarly frame DRG coding as a multilabel classification task, offering limited insight into the rationale behind code assignments. While methods like DRGCoder provide input-level weight visualizations [14], their interpretability remains insufficient for real-world clinical deployment, where transparency and explainability are critical.
Replication Efforts of DeepSeek-R1 Recent studies have actively explored replicating the RL recipes of DeepSeek-R1, particularly in mathematical and coding domains, with varying degrees of success [43, 15, 36].
One line of work has proposed approaches to address biases and improve sample efficiency in the original GRPO algorithm [41, 25, 23]. Another active research area focuses on curriculum and staged learning strategies during reasoning-oriented RL [44, 32, 37, 17, 4].

3 Large-scale RL for Automated DRG Coding

3.1 Problem Formulation

We aim to automate the hierarchical assignment of Medicare Severity Diagnosis-Related Group (MS-DRG) codes using LLMs. The MS-DRG system classifies each hospitalization into a single DRG code based on clinical complexity and resource utilization (see Appendix A.1 for details). Given a hospitalization represented by a set of clinical documents $D$, the DRG coding process applies an extraction function $h$ to identify the principal diagnosis $w_d$ or procedure $w_p$, and the presence of Complications or Comorbidities (CC) or Major Complications or Comorbidities (MCC). A hierarchical mapping function $f$ then determines the final DRG code. Formally, the MS-DRG assignment is defined as:

$$(w_d, w_p, \mathrm{CC}, \mathrm{MCC}) = h(D), \qquad g = f(w_d, w_p, \mathrm{CC}, \mathrm{MCC}),$$

where $g$ is the assigned DRG code. In this paper, we use an LLM to automate this complex process.

3.2 Preliminary: GRPO

Compared to Proximal Policy Optimization [29], GRPO eliminates the value function and estimates the advantage using relative rewards within a group [30]. For each question $q$, GRPO samples a group of outputs $\{o_1, o_2, \dots, o_G\}$ from the old policy $\pi_{\theta_{\mathrm{old}}}$ and then optimizes the target policy $\pi_\theta$. In this paper, we enforce $\pi_{\theta_{\mathrm{old}}} = \pi_\theta$ to ensure strict on-policy learning. Under this setting, we maximize the following objective:

$$\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim P(Q),\ \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(O \mid q)}\ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \left[ \hat{A}_{i,t} - \beta \left( \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})} - \log \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})} - 1 \right) \right], \quad (1)$$

where $\beta$ is the coefficient for the KL divergence penalty, $\pi_{\mathrm{ref}}$ is the reference policy, and $\hat{A}_{i,t}$ is the advantage, computed from the relative rewards $\{r_i\}_{i=1}^{G}$ within each group as:

$$\hat{A}_{i,t} = \frac{r_i - \mathrm{mean}(\{r_i\}_{i=1}^{G})}{\mathrm{std}(\{r_i\}_{i=1}^{G})}. \quad (2)$$

Here, $r_i$ denotes the reward assigned to output $o_i$ for prompt $q$. The gradient of $\mathcal{J}_{\mathrm{GRPO}}(\theta)$ is:

$$\nabla_\theta \mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim P(Q),\ \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(O \mid q)}\ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \left[ \hat{A}_{i,t} + \beta \left( \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})} - 1 \right) \right] \nabla_\theta \log \pi_{\theta}(o_{i,t} \mid q, o_{i,<t}). \quad (3)$$

3.3 Improving GRPO Beyond the Baseline

We propose a set of strategies to address key limitations of GRPO.

Dynamic Resampling for Advantage Preservation Existing RL algorithms suffer from the gradient-diminishing problem. In GRPO, if all completions $\{o_i\}_{i=1}^{G}$ for a prompt $q$ receive the same reward value, the resulting advantage for this group becomes zero. As training progresses, this issue becomes more pronounced due to policy optimization and accompanying entropy collapse [41], as more prompts yield completions with no reward variance, either because all completions are perfectly correct or because all are uniformly incorrect. This leads to a progressive decrease in the learning signal from the reward-based advantage.

To address this, we propose a dynamic resampling strategy (Equation 4). For each prompt $q$, if the sampled completions yield zero reward variance, we resample up to $N_{\max}$ times until nonzero variance is observed. Optionally, we enforce that at least one completion receives a positive reward, guiding gradient updates toward high-reward trajectories:

$$\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{q \sim P(Q),\ \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\mathrm{old}}}(O \mid q)}\ \frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} \left[ \hat{A}_{i,t} - \beta \left( \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})} - \log \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta}(o_{i,t} \mid q, o_{i,<t})} - 1 \right) \right] \quad \text{s.t.} \quad \mathrm{Var}(\{r_i\}_{i=1}^{G}) > 0 \text{ within } N_{\max}, \quad \text{optionally: } |\{o_i \mid r_i > 0\}| > 0. \quad (4)$$
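To make Equations 2 and 4 concrete, the following is a minimal sketch, not the authors' released code, of how the group-relative advantage and the zero-variance resampling check might be implemented. The sampler object, its sample(prompt, G) method, and reward_fn are hypothetical placeholders; the default of 12 attempts mirrors the limit reported in Section B.5.

import torch

def group_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # Equation 2: standardize rewards within the group of G completions.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def sample_with_resampling(sampler, reward_fn, prompt, G=8, n_max=12,
                           require_positive=False):
    # Dynamic resampling of Equation 4 (sketch): redraw a group of G completions
    # until the group rewards have nonzero variance, up to n_max attempts.
    # Optionally also require at least one positive-reward completion.
    for _ in range(n_max):
        completions = sampler.sample(prompt, G)  # hypothetical generation call
        rewards = torch.tensor([reward_fn(prompt, o) for o in completions])
        if rewards.var() > 0 and ((rewards > 0).any() or not require_positive):
            return completions, rewards, group_advantages(rewards)
    # Fall back to the last group; its advantages may be (near) zero.
    return completions, rewards, group_advantages(rewards)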
Our approach differs from the dynamic sampling strategy in DAPO [41], which discards prompts that yield uniformly correct or incorrect completions. Given the data scarcity in clinical domains, we instead maximize the utility of each training example by resampling rather than discarding.

Intervening on Cognitive Behaviors Cognitive behaviors, such as verification and backtracking, are critical for effective reasoning-oriented RL [12]. We explored additional reward functions and a specialized SFT dataset (detailed in Section A.3) to incentivize three cognitive patterns in CoT reasoning, as shown in Figure 2: (1) Answer-First, where the model outputs the DRG code before its CoT reasoning; (2) CoT-First, where the model generates CoT reasoning before the DRG code; and (3) Differential Thinking, where the model evaluates three potential DRG codes before selecting the most appropriate one.

[Figure 2: Examples of Cognitive Behaviors. Three panels show sample model outputs illustrating the Answer-First, CoT-First, and Differential Thinking patterns, each concluding with the same final answer, "OTHER CIRCULATORY SYSTEM DIAGNOSES WITH CC".]

KL Divergence Decay The KL divergence term in the GRPO objective (Equation 1) regularizes the divergence between the target policy $\pi_\theta$ and the reference policy $\pi_{\mathrm{ref}}$. However, this term exacerbates the gradient-diminishing problem discussed in Section 3.3: as training progresses and more prompts yield zero-variance responses, the gradient, per Equation 3, becomes dominated by the KL term, pulling $\pi_\theta$ toward $\pi_{\mathrm{ref}}$. This drives over-regularization toward the reference policy and risks policy degradation. Recent work suggests that removing the KL penalty enhances reasoning capabilities in mathematical domains [41, 25, 15]. Motivated by this, we explored two setups: (1) completely removing the KL divergence term from the objective, and (2) applying a cosine decay schedule to the KL term's coefficient $\beta$, smoothly reducing it to zero during training (see Section A.4 for details).
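For reference, the per-token KL penalty inside Equation 1 is the non-negative estimator ratio minus log-ratio minus one (often called the k3 estimator). Below is a minimal sketch of that term computed from per-token log-probabilities, together with the cosine decay of its coefficient from Section A.4; the variable names are illustrative.

import math
import torch

def kl_penalty(ref_logprobs: torch.Tensor, policy_logprobs: torch.Tensor) -> torch.Tensor:
    # Per-token KL term from Equation 1: r - log r - 1 with r = pi_ref / pi_theta,
    # computed stably in log space. The result is always >= 0.
    log_ratio = ref_logprobs - policy_logprobs  # log(pi_ref / pi_theta)
    return torch.exp(log_ratio) - log_ratio - 1.0

def cosine_beta(beta0: float, step: int, total_steps: int) -> float:
    # Cosine decay of the KL coefficient beta from beta0 to 0 (Section A.4).
    return beta0 * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))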
GRPO Variants In Equation 1, dividing by $|o_i|$ during group-level advantage normalization introduces a length bias, diminishing the influence of longer completions on the policy gradient. To address this, DAPO [41] uses $\sum_{i=1}^{G} |o_i|$ as the denominator, while Dr. GRPO [25] adopts a constant normalization factor. Additionally, Dr. GRPO removes the division by $\mathrm{std}(\{r_i\}_{i=1}^{G})$ in Equation 2 to mitigate question-level difficulty bias. We systematically evaluated these three strategies. Due to the strict on-policy nature of our setting ($\pi_{\theta_{\mathrm{old}}} = \pi_\theta$), we did not explore other modifications, such as clip-higher [41].

Reward Shaping We implemented two straightforward yet robust rule-based reward components: a Format Reward and an Accuracy Reward (detailed in Section A.2). For the Accuracy Reward, we investigated three distinct strategies: Dense Reward, Balanced Reward, and Strict Reward. These reward functions were designed to provide varying levels of reward-signal sparsity, contingent on the correctness of the DRG code, its associated principal diagnosis, and the CC/MCC status.
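A minimal sketch of how such a rule-based accuracy reward might look is given below. The reward values follow the definitions in Section A.2; the three match-checking predicates are hypothetical helpers, and the 0.0 reward for a valid but fully mismatched code under the dense and balanced schemes is an assumption, as that case is not spelled out in Section A.2.

REWARDS = {
    "dense":    {"full": 2.0, "pdx_only": 1.5, "ccmcc_only": 0.5, "invalid": -1.5, "other": 0.0},
    "balanced": {"full": 2.0, "pdx_only": 1.0, "ccmcc_only": 1.0, "invalid": -1.5, "other": 0.0},
    "strict":   {"full": 2.0, "pdx_only": 0.0, "ccmcc_only": 0.0, "invalid": -1.5, "other": 0.0},
}

def accuracy_reward(pred: str, ref: str, scheme: str,
                    is_valid_drg, principal_dx_match, cc_mcc_match) -> float:
    # Rule-based accuracy reward (Section A.2), applied only after the
    # <think>/<answer> format check has passed. The three predicates are
    # hypothetical stand-ins for the paper's DRG-rule matching logic.
    table = REWARDS[scheme]
    if not is_valid_drg(pred):
        return table["invalid"]
    if pred == ref:
        return table["full"]
    if principal_dx_match(pred, ref):
        return table["pdx_only"]
    if cc_mcc_match(pred, ref):
        return table["ccmcc_only"]
    return table["other"]  # valid DRG, no partial match: value assumed, see lead-in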
3.4 Adaptive Learning Strategy

Curriculum Learning We investigate whether a curriculum learning strategy, which organizes training cases by difficulty, improves performance compared to a mixed-difficulty baseline. We evaluated four setups, detailed in Appendix A.6: (1) excluding easy cases, (2) excluding hard cases, (3) excluding both easy and hard cases (i.e., using only medium-difficulty cases), and (4) training on easy cases first, then progressing to hard cases.

Staged Learning Lastly, we explored a staged learning strategy with three training phases of roughly equal length. After each phase, we identified easy and hard cases and evaluated two approaches: (1) additional SFT on hard cases, and (2) additional DPO on hard cases, before advancing to the next stage. As detailed in Appendix A.7, these approaches aim to improve the model's handling of challenging cases through targeted learning.

4 Implementation Details

Dataset We utilized the DRG-LLaMA training and test sets [34], derived from the publicly available MIMIC-IV dataset of real-world medical records [18]. The full training and test sets include 236,192 and 26,244 cases, respectively. Each case uses the "brief hospital course" section of the discharge summary as input, with MS-DRG codes consolidated to version 34.0.

Training Pipeline and Scaling Strategy An overview of the training pipeline is shown in Figure 3. We first sampled a reduced dataset, termed DRG-Small, comprising 20% of the full data (N=46,758). This subset served as the foundation for extensive experiments on methodological variants and SFT-RL data mixtures, as detailed in Sections 5.2 through 5.3. After identifying the optimal configuration, we scaled training to the full dataset to produce the final DRG-SAPPHIRE model.

[Figure 3: Overview of Pipeline. Step 1: bootstrap CoT reasoning for DRG assignment from clinical-note/DRG pairs in MIMIC-IV using Qwen2.5-7B, yielding a cold-start dataset. Step 2: SFT on Qwen2.5-7B using the cold-start data. Step 3: large-scale RL with GRPO and verifiable rewards, producing DRG-SAPPHIRE.]

Construction of SFT Dataset We prompted the Qwen2.5-7B-Instruct model with medical records and ground-truth DRG codes, tasking it to generate reasoning for DRG assignments (prompt provided in Section H). After extensive prompt engineering, manual inspection by a domain expert revealed that the dataset exhibits correct reasoning logic (e.g., analyzing the principal diagnosis first) but frequently contains factual errors (e.g., misclassifying a condition's CC/MCC status). We also included the complete list of original V34.0 MS-DRG codes in a question-answer format within the SFT dataset.

Model and RL Training We selected Qwen2.5-7B-Instruct [39] for the main experiments after evaluating various model sizes. GRPO training was conducted using the TRL package [33] for one epoch across all experiments.

Evaluation Metrics We report model performance on the full test set using Pass@1, Pass@8, and Majority@8 (Maj@8), following prior work in reasoning-oriented RL [30, 42]. Pass@1, reported as the model's accuracy, is the mean accuracy across eight runs. Pass@8 assesses whether the correct DRG code appears among eight generated outputs, while Maj@8 determines whether the most frequent output matches the correct DRG code.
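As a concrete reading of these definitions, the following is a minimal sketch of computing Pass@1, Pass@8, and Maj@8 from eight sampled predictions per case; it assumes the predictions and references are already text-normalized strings, and the function name is illustrative.

from collections import Counter

def drg_metrics(samples: list[list[str]], references: list[str]) -> dict:
    # samples[i] holds the eight normalized DRG predictions for case i;
    # references[i] is the normalized ground-truth code for case i.
    n = len(references)
    # Pass@1: mean per-case accuracy across the eight generations.
    pass1 = sum(sum(p == ref for p in preds) / len(preds)
                for preds, ref in zip(samples, references)) / n
    # Pass@8: the correct code appears among the eight outputs.
    pass8 = sum(ref in preds for preds, ref in zip(samples, references)) / n
    # Maj@8: the most frequent of the eight outputs matches the reference.
    maj8 = sum(Counter(preds).most_common(1)[0][0] == ref
               for preds, ref in zip(samples, references)) / n
    return {"Pass@1": pass1, "Pass@8": pass8, "Maj@8": maj8}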
5 Experiments

5.1 Results of DRG-SAPPHIRE

Our best DRG-SAPPHIRE model was achieved using a 90% SFT and 10% RL data split on the full dataset (see Section 5.2 for the SFT vs. RL ratio experiments), incorporating the optimal GRPO enhancements and adaptive learning strategies (see Section 5.3 for ablation studies).

Comparison with Baselines As shown in Figure 1 A, DRG-SAPPHIRE significantly outperforms proprietary reasoning models, non-reasoning models, and the DeepSeek-distilled Qwen 32B. It achieves new SOTA performance on DRG coding, surpassing the previous best, DRG-LLaMA-7B (54.8% vs. 53.9%). In addition to improved accuracy, DRG-SAPPHIRE provides interpretable reasoning, a compelling advantage over prior models trained purely as classifiers.

Expert Reader Study Results Four physicians in hospital leadership roles, actively engaged in DRG-related initiatives (e.g., reducing geometric mean length of stay), evaluated DRG-SAPPHIRE's reasoning across 30 cases. On the dimensions of Helpfulness and Accuracy, DRG-SAPPHIRE received a median rating of 4 out of 5, suggesting significant potential for real-world applications (Figure 4). Quantitative assessments highlighted the explainability of DRG coding as highly valuable for DRG-related initiatives (see Section D.1 for details), despite occasional factual inaccuracies in the reasoning.

[Figure 4: Expert Reader Study. Review scores from 1 (Very Poor) to 5 (Very Good) on Helpfulness and Accuracy.]

5.2 Optimizing Data Allocation Between SFT and GRPO

[Figure 5: Impact of SFT-GRPO Data Ratios on the DRG-Small Subset. Panels A–E compare SFT-to-RL splits of 5%/95% (DeepSeek-R1-style), 25%/75%, 50%/50%, 75%/25%, and 100%/0%: GRPO consistently improves Pass@1 and Maj@8 across all SFT ratios but reduces Pass@8. Panel F: total training time decreases with higher SFT ratios, as GRPO is more time-consuming.]

Effect of SFT-GRPO Ratios on DRG-Small First, we investigated the impact of varying the allocation of a fixed data budget between SFT and GRPO on the DRG-Small subset (N=46,758).
This contrasts with DeepSeek-R1-style training, where only minimal SFT precedes RL. Across all data splits, GRPO consistently and significantly improved Pass@1 over the SFT baseline by an absolute margin of approximately 10 percentage points (see Figure 5). We observed that this gain is driven by improvements in Maj@8, not Pass@8; in fact, Pass@8 declines with GRPO. This pattern suggests that RL sharpens the model's output distribution toward higher-reward pathways rather than enhancing its underlying reasoning capabilities. Notably, the decline in Pass@8 during training indicates that RL may limit diverse reasoning pathways in our experiments. These findings align with recent studies [42, 30], which question whether RL improves reasoning beyond the base model's capabilities. Furthermore, the ultimate performance ceiling achievable with GRPO appears to be largely determined by the capacity of the initial SFT model; a stronger SFT foundation generally leads to better post-GRPO results. From a computational perspective, scaling SFT before RL is more efficient, as GRPO involves costly completion generation (see Figure 5 F).

[Figure 6: Results on the Full Dataset. A: Accuracy vs. global steps for the two longest GRPO runs (full data, 50% and 75% SFT). B–C: Pass@8 and Maj@8 vs. SFT sample size before RL, for SFT and GRPO. The dashed line marks where 50% of the training data was used for SFT. Best results from vanilla GRPO runs are shown.]

Log-Linear Scaling of GRPO with Increasing SFT Next, we scaled our training pipeline to the full dataset (N=236,192). Based on the results above, we started with an SFT-GRPO data ratio of 50%-50% and progressively increased the SFT ratio under a fixed data budget. Plotting these results alongside the DRG-Small subset revealed that both GRPO and SFT performance scale approximately linearly with the logarithm of the number of SFT examples (Figure 1 A). Although the number of GRPO steps varies in Figure 1 A, the benefit of scaling RL appeared limited in our experiments. Figure 6 A illustrates results from our longest GRPO runs, demonstrating modest benefits beyond 500 global steps. Consistent with earlier findings, GRPO reliably improves Pass@1 and Maj@8 while reducing Pass@8 (Figure 6 B and C). As the number of SFT samples increased, the slope of the GRPO curves converged toward that of SFT across all metrics. Additional results from scaling to the full dataset are detailed in Section C.3.
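The log-linear trend described above can be checked with a simple least-squares fit of accuracy against the logarithm of the SFT sample size. The sketch below uses placeholder arrays; the numbers are illustrative stand-ins, not the paper's measurements.

import numpy as np

# Illustrative (hypothetical) points: SFT sample sizes and post-GRPO Pass@1 (%).
sft_sizes = np.array([2_362, 11_689, 23_379, 46_758, 118_096, 212_572])
pass1 = np.array([38.5, 43.0, 46.5, 47.4, 52.8, 54.6])

# Fit Pass@1 ~ a * log2(N_SFT) + b; the slope is the gain per doubling of SFT data.
slope, intercept = np.polyfit(np.log2(sft_sizes), pass1, deg=1)
print(f"~{slope:.2f} accuracy points per doubling of SFT examples")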
5.3 Ablation Studies on GRPO Enhancements and Adaptive Learning

We present the results of the ablation studies in Table 1 and Figure 12. All ablation studies were conducted on the DRG-Small dataset using DeepSeek-R1-style training, with cold-start SFT on 1% of the training data (N=2,362) before RL.

[Figure 7: Dynamic Resampling. A: Reward standard deviation. B: Training curves. C: Training time (406, 272, and 71 hours for positive resampling, neutral resampling, and vanilla GRPO, respectively). Despite maintaining high reward variance during training (A), dynamic resampling performs worse than vanilla GRPO on the training curves (B) and is significantly more time-consuming to train (C).]

Dynamic Resampling Surprisingly, dynamic resampling, with or without a positive reward constraint, yielded marginally better or even worse performance than vanilla GRPO, despite preserving high reward variance (Figure 7 A). Moreover, dynamic resampling proved computationally inefficient due to the frequent need to regenerate responses (Figure 7 C). We hypothesize that dynamic resampling introduces sampling bias by oversampling prompts with zero reward variance, which may skew the batch toward out-of-distribution responses rarely produced by the current policy. Additionally, this approach may inadvertently over-penalize low-reward outputs newly introduced into the batch, further distorting the learning signal.

Model                                     | DRG              | Principal Diagnosis | CC/MCC
                                          | P@1  P@8  Maj@8  | P@1  P@8  Maj@8     | P@1  P@8  Maj@8
Baseline
  Vanilla GRPO + Dense Reward             | 38.5 48.2 39.3   | 52.5 58.5 53.4      | 47.8 60.0 49.0
Dynamic Resampling
  Neutral Resampling                      | 20.3 41.9 38.1   | 27.0 52.5 50.5      | 25.6 52.6 48.0
  Positive Reward Resampling              | 39.2 44.8 39.6   | 52.9 56.4 53.3      | 48.3 55.6 49.0
Cognitive Behaviors Intervention
  CoT-First                               | 35.5 52.2 37.4   | 50.9 59.6 52.4      | 46.3 66.7 48.4
  Differential Thinking                   | 30.2 47.3 33.9   | 46.7 57.0 50.9      | 40.6 63.0 45.2
GRPO Variants
  DAPO Loss                               | 40.1 48.0 40.6   | 53.8 58.5 54.3      | 49.4 59.1 50.3
  Dr. GRPO Loss                           | 37.5 47.6 38.1   | 50.9 57.2 51.4      | 48.8 60.7 49.8
  Dr. GRPO Advantage                      | 38.5 51.9 39.6   | 53.4 60.5 54.3      | 47.6 63.6 49.1
KL Divergence
  No KL                                   | 39.8 42.4 39.9   | 53.6 55.2 53.7      | 49.1 52.3 49.3
  KL Decay                                | 38.2 42.0 38.3   | 52.2 54.7 52.4      | 48.8 53.7 49.0
Reward Shaping
  Strict Reward                           | 40.1 49.1 40.9   | 52.8 58.1 53.7      | 47.6 59.0 48.8
  Balanced Reward                         | 38.1 51.3 40.0   | 52.1 60.4 53.8      | 48.2 64.0 50.7
Curriculum Learning
  Remove Easy Cases                       | 35.8 51.9 37.6   | 50.3 59.2 51.7      | 46.6 65.8 48.7
  Remove Hard Cases                       | 40.4 46.6 40.7   | 53.2 56.5 53.7      | 49.5 57.2 50.1
  Remove Easy and Hard Cases              | 38.7 48.2 39.4   | 52.9 58.1 53.4      | 48.3 59.9 49.3
  From Easy to Hard                       | 29.4 51.7 32.7   | 43.4 58.6 46.5      | 40.8 68.5 44.3
Staged Learning
  Staged SFT                              | 39.3 49.1 40.0   | 52.9 59.2 53.8      | 46.0 58.6 47.1
  Staged DPO                              | 29.3 46.1 31.2   | 43.8 54.3 45.5      | 43.1 64.2 45.7

Table 1: Ablation Study Results. Within each block (DRG, Principal Diagnosis, CC/MCC), the three columns report Pass@1, Pass@8, and Maj@8. In the original typeset table, rows with a blue background indicate superior Pass@1 performance compared to Vanilla GRPO + Dense Reward, and bold values denote the highest score for each metric.

Intervening on Cognitive Behaviors Our SFT dataset includes diverse reasoning styles, notably both Answer-First and CoT-First patterns. Interestingly, during training, the policy frequently converged toward the Answer-First strategy. To encourage CoT-First behavior, we experimented with an additional rule-based reward, and we adjusted the SFT dataset to explicitly promote Differential Thinking. While both interventions successfully induced the desired cognitive behaviors, they significantly underperformed compared to the naturally converging Answer-First pattern. This finding is surprising, as CoT-First strategies are often effective in complex reasoning tasks [35].
We hypothesize that DRG coding benefits from a direct prediction strategy, where outputting the DRG code first leverages implicit knowledge in the model's latent space, outperforming explicit CoT-grounded reasoning. These findings also align with recent studies [26, 6], which suggest that CoT and extended reasoning may not always be necessary for reasoning models, and that a "no-thinking" pattern can sometimes yield better performance.

KL Divergence In our experiments, removing the KL penalty frequently led to model collapse (see Figure 8 A). This contrasts sharply with findings in mathematical reasoning tasks, where the KL term is less critical, underscoring its importance for cross-domain generalization. However, in cases where training completed successfully without the KL penalty, performance surpassed that of vanilla GRPO (see Table 1), consistent with the analysis in Section 3.3. Additionally, a cosine KL decay schedule appeared beneficial. While it yielded no significant performance gains in small-scale runs, it improved the training curve toward the end, suggesting that a low KL penalty late in training helps prevent over-regularization toward the reference policy (Figure 8 B). Indeed, KL decay proved beneficial when scaling training on the full dataset, as shown in Table 2.

[Figure 8: KL Divergence. A: Examples of training collapse when removing the KL divergence (1.5B and 7B models, with and without KL). B: Training curves for the final steps, comparing vanilla GRPO and KL decay; KL decay appears beneficial late in training.]

GRPO Variants Among the three GRPO variants, the DAPO loss achieved the highest performance, while the Dr. GRPO loss performed the worst (Figure 9). This finding aligns with recent work reporting that Dr. GRPO does not outperform vanilla GRPO [7]. Across all settings, we observed completion-length contraction during training: as accuracy improved, output lengths sharply decreased before stabilizing. This contrasts with trends observed in mathematical reasoning tasks, where longer completions are often associated with better performance.

[Figure 9: GRPO Variants. A: Training curves; the Dr. GRPO loss underperforms the other GRPO variants. B: Completion length; all GRPO variants exhibit similar completion-length contraction.]

Reward Shaping The strict accuracy reward, despite providing the sparsest reward signal, outperformed both the dense and balanced reward variants. Notably, we observed no improvement in principal diagnosis or CC/MCC accuracy under the denser reward schemes. We hypothesize that denser rewards may lead the policy to converge prematurely to local optima, trading off global performance for easier-to-optimize intermediate signals.

Adaptive Learning We observed benefits from removing easy and hard cases during training. Similarly, recent studies suggest that maintaining medium-difficulty cases may be most effective for RL training in the math domain [32, 37, 17, 38]. Staged learning with SFT resulted in only modest performance gains despite the additional compute.
5.4 Prerequisites for Effective GRPO Training

We explored the prerequisites for effective GRPO training, finding that vanilla Qwen2.5 models (base and instruct) failed to produce correct DRG codes with GRPO alone, despite quickly adopting the target reasoning format (Figure 10 A). Post-SFT, all models showed improved RL performance that generally scaled with model size, though gains from 7B to 14B were modest (Figure 10 B). Higher SFT learning rates (up to 4×10⁻⁵) and extended training epochs further enhanced GRPO performance, with diminishing returns at higher learning rates (Figure 10 C). These results align with recent findings [27] emphasizing the importance of aggressive SFT for reasoning-intensive tasks.

[Figure 10: Prerequisites for GRPO Training. A: RL from off-the-shelf models; vanilla Qwen2.5-7B (base and instruct) fails to explore. B: RL from SFT models by size (1.5B, 7B, 14B); GRPO performance increases with model size post-SFT. C: RL from SFT models by learning rate and epochs; higher SFT learning rates boost GRPO performance.]

6 Conclusion

In this work, we used DRG coding as an empirical case study to explore RL for OOD reasoning in LLMs. Our approach, applying GRPO with verifiable rewards, achieved new SOTA performance while offering a key advantage over prior methods: the generation of physician-validated explanations through CoT reasoning. Critically, our findings reveal that RL performance on this OOD task is fundamentally constrained by the base model's capacity prior to RL. We observed a logarithmic scaling relationship between the number of SFT examples and subsequent RL performance. Despite extensive experimentation with RL algorithmic enhancements and adaptive learning strategies, these refinements yielded only modest improvements compared to simply initializing RL from stronger SFT baselines, highlighting a "bitter lesson" in applying RL to tasks that fall outside the pretraining distribution of LLMs.

References

[1] Drg-llama: Tuning llama model to predict diagnosis-related group for hospitalized patients.
[2] MIMIC-IV on PhysioNet.
[3] Responsible use of MIMIC data with online services like GPT.
[4] S. Bae, J. Hong, M. Y. Lee, H. Kim, J. Nam, and D. Kwak. Online difficulty filtering for reasoning oriented reinforcement learning. arXiv preprint arXiv:2504.03380, 2025.
[5] J. Chen, Z. Cai, K. Ji, X. Wang, W. Liu, R. Wang, J. Hou, and B. Wang. Huatuogpt-o1, towards medical complex reasoning with llms. arXiv preprint arXiv:2412.18925, 2024.
[6] Y. Chen, J. Benton, A. Radhakrishnan, J. U. C. Denison, J. Schulman, A. Somani, P. Hase, M. W. F. R. V. Mikulik, S. Bowman, J. L. J. Kaplan, et al. Reasoning models don't always say what they think.
[7] X. Chu, H. Huang, X. Zhang, F. Wei, and Y. Wang. GPG: A simple and strong reinforcement learning baseline for model reasoning. arXiv preprint arXiv:2504.02546, 2025.
[8] CMS. ICD-10-CM/PCS MS-DRG v34.0 definitions manual. https://www.cms.gov/icd10m/version34-fullcode-cms/fullcode_cms/P0001.html, 2016.
[9] H. Dong, M. Falis, W. Whiteley, B. Alex, J. Matterson, S. Ji, J. Chen, and H. Wu. Automated clinical coding: What, why, and where we are? NPJ Digital Medicine, 5(1):159, 2022.
[10] Hugging Face. Open R1: A fully open reproduction of DeepSeek-R1, January 2025.
[11] Y. Feng. Can large language models replace coding specialists? Evaluating GPT performance in medical coding tasks. 2025.
[12] K. Gandhi, A. Chakravarthy, A. Singh, N. Lile, and N. D. Goodman. Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective STaRs. arXiv preprint arXiv:2503.01307, 2025.
[13] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, et al. Deepseek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[14] D. Hajialigol, D. Kaknes, T. Barbour, D. Yao, C. North, J. Sun, D. Liem, and X. Wang. DRGCoder: Explainable clinical coding for the early prediction of diagnostic-related groups. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 373–380, 2023.
[15] J. Hu, Y. Zhang, Q. Han, D. Jiang, X. Zhang, and H.-Y. Shum. Open-Reasoner-Zero: An open source approach to scaling up reinforcement learning on the base model. arXiv preprint arXiv:2503.24290, 2025.
[16] A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.
[17] Y. Ji, S. Zhao, X. Tian, H. Wang, S. Chen, Y. Peng, H. Zhao, and X. Li. How difficulty-aware staged reinforcement learning enhances LLMs' reasoning capabilities: A preliminary experimental study. arXiv preprint arXiv:2504.00829, 2025.
[18] A. E. Johnson, L. Bulgarelli, L. Shen, A. Gayles, A. Shammout, S. Horng, T. J. Pollard, S. Hao, B. Moody, B. Gow, et al. MIMIC-IV, a freely accessible electronic health record dataset. Scientific Data, 10(1):1, 2023.
[19] W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. Gonzalez, H. Zhang, and I. Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 611–626, 2023.
[20] Y. Lai, J. Zhong, M. Li, S. Zhao, and X. Yang. Med-R1: Reinforcement learning for generalizable medical reasoning in vision-language models. arXiv preprint arXiv:2503.13939, 2025.
[21] W. Lan, W. Wang, C. Ji, G. Yang, Y. Zhang, X. Liu, S. Wu, and G. Wang. ClinicalGPT-R1: Pushing reasoning capability of generalist disease diagnosis with large language model. arXiv preprint arXiv:2504.09421, 2025.
[22] R. Li, X. Wang, and H. Yu. Exploring LLM multi-agents for ICD coding. arXiv preprint arXiv:2406.15363, 2024.
[23] Z. Lin, M. Lin, Y. Xie, and R. Ji. CPPO: Accelerating the training of group relative policy optimization-based reasoning models. arXiv preprint arXiv:2503.22342, 2025.
[24] J. Liu, D. Capurro, A. Nguyen, and K. Verspoor. Early prediction of diagnostic-related groups and estimation of hospital cost by processing clinical notes. NPJ Digital Medicine, 4(1):103, 2021.
[25] Z. Liu, C. Chen, W. Li, P. Qi, T. Pang, C. Du, W. S. Lee, and M. Lin. Understanding R1-Zero-like training: A critical perspective, 2025. URL https://arxiv.org/abs/2503.20783.
[26] W. Ma, J. He, C. Snell, T. Griggs, S. Min, and M. Zaharia. Reasoning models can be effective without thinking. arXiv preprint arXiv:2504.09858, 2025.
[27] G. Penedo, L. Tunstall, A. Lozhkov, H. Kydlicek, E. Beeching, L. B. Allal, Q. Gallouédec, L. von Werra, A. P. Lajarín, and N. Habib. Open R1 update 3: Steady progress and a new technical report, 2024. Hugging Face Blog.
[28] K. Quinn. After the revolution: DRGs at age 30. Annals of Internal Medicine, 160(6):426–429, 2014.
[29] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[30] Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, X. Bi, H. Zhang, M. Zhang, Y. Li, Y. Wu, et al. DeepseekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.
[31] A. Soroush, B. S. Glicksberg, E. Zimlichman, Y. Barash, R. Freeman, A. W. Charney, G. N. Nadkarni, and E. Klang. Large language models are poor medical coders—benchmarking of medical code querying. NEJM AI, 1(5):AIdbp2300040, 2024.
[32] K. Team, A. Du, B. Gao, B. Xing, C. Jiang, C. Chen, C. Li, C. Xiao, C. Du, C. Liao, et al. Kimi k1.5: Scaling reinforcement learning with LLMs. arXiv preprint arXiv:2501.12599, 2025.
[33] L. von Werra, Y. Belkada, L. Tunstall, E. Beeching, T. Thrush, N. Lambert, S. Huang, K. Rasul, and Q. Gallouédec. TRL: Transformer reinforcement learning. https://github.com/huggingface/trl, 2020.
[34] H. Wang, C. Gao, C. Dantona, B. Hull, and J. Sun. DRG-LLaMA: Tuning LLaMA model to predict diagnosis-related group for hospitalized patients. npj Digital Medicine, 7(1):16, 2024.
[35] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
[36] T. Xie, Z. Gao, Q. Ren, H. Luo, Y. Hong, B. Dai, J. Zhou, K. Qiu, Z. Wu, and C. Luo. Logic-RL: Unleashing LLM reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2502.14768, 2025.
[37] W. Xiong, J. Yao, Y. Xu, B. Pang, L. Wang, D. Sahoo, J. Li, N. Jiang, T. Zhang, C. Xiong, et al. A minimalist approach to LLM reasoning: From rejection sampling to reinforce. arXiv preprint arXiv:2504.11343, 2025.
[38] J. Yan, Y. Li, Z. Hu, Z. Wang, G. Cui, X. Qu, Y. Cheng, and Y. Zhang. Learning to reason under off-policy guidance. arXiv preprint arXiv:2504.14945, 2025.
[39] A. Yang, B. Yang, B. Zhang, B. Hui, B. Zheng, B. Yu, C. Li, D. Liu, F. Huang, H. Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024.
[40] Z. Yang, S. S. Batra, J. Stremmel, and E. Halperin. Surpassing GPT-4 medical coding with a two-stage approach. arXiv preprint arXiv:2311.13735, 2023.
[41] Q. Yu, Z. Zhang, R. Zhu, Y. Yuan, X. Zuo, Y. Yue, T. Fan, G. Liu, L. Liu, X. Liu, et al. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.
[42] Y. Yue, Z. Chen, R. Lu, A. Zhao, Z. Wang, Y. Yue, S. Song, and G. Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model?, 2025.
[43] W. Zeng, Y. Huang, Q. Liu, W. Liu, K. He, Z. Ma, and J. He. SimpleRL-Zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025.
[44] X. Zhang, J. Wang, Z. Cheng, W. Zhuang, Z. Lin, M. Zhang, S. Wang, Y. Cui, C. Wang, J. Peng, et al. SRPO: A cross-domain implementation of large-scale reinforcement learning on LLM. arXiv preprint arXiv:2504.14286, 2025.

Appendix

Contents

A Additional Methods
  A.1 Problem Definition of MS-DRG Coding
  A.2 Rule-Based Reward Modeling
  A.3 Enforcing Cognitive Behaviors
  A.4 KL Divergence Decay
  A.5 GRPO Variants
  A.6 Curriculum Learning
  A.7 Staged Learning
B Additional Implementation Details
  B.1 SFT Training Details
  B.2 GRPO Training Details
  B.3 Experimental Hyperparameters
  B.4 Evaluation Details
  B.5 Dynamic Resampling Details
C Additional Results
  C.1 Experiments with GRPO Hyperparameters
  C.2 Accuracy with RL Training in Ablation Studies
  C.3 Additional Results from Scaling to the Full Dataset
D Additional Discussion
  D.1 Clinical Applications of Automated DRG Coding with Reasoning
  D.2 Practical Implication of Improved Pass@1 but Not Pass@k
  D.3 DRG vs. ICD Coding
E Limitations and Future Work
F Data Access
G Instruction to Reviewers
H Prompts to LLM
I Example Outputs from DRG-Sapphire
J Example Outputs Demonstrating Different Cognitive Behaviors
A Additional Methods

A.1 Problem Definition of MS-DRG Coding

Under the Medicare Severity DRG (MS-DRG) system, each hospitalization is assigned a single DRG code based on clinical complexity and resource utilization, following rules established by the Centers for Medicare & Medicaid Services (CMS) [8]. Given a hospital stay $D = \{d_1, d_2, \dots, d_n\}$, where each $d_i$ represents a clinical document generated during the hospitalization, the DRG assignment process performed by human coders can be represented mathematically as follows:

1. Extraction of Diagnoses and Procedures. From $D$, extract a set $W = \{w_1, w_2, \dots, w_m\}$, where each $w_i \in W$ corresponds to a distinct medical diagnosis or procedure managed during the stay.

2. Identification of Principal Diagnosis or Procedure. Select a principal diagnosis $w_d \in W$ (for medical DRGs) or a principal procedure $w_p \in W$ (for surgical DRGs), representing the main reason for admission or the primary surgical intervention. Only one of the two, diagnosis or procedure, is designated as principal, depending on the case type.

3. Detection of Complications and Comorbidities. Identify the presence of Complications or Comorbidities (CC) and Major Complications or Comorbidities (MCC) within $W$, forming subsets $\mathrm{CC} \subseteq W$ and $\mathrm{MCC} \subseteq W$ with $\mathrm{CC} \cap \mathrm{MCC} = \emptyset$, which reflect distinct levels of clinical severity and resource impact.

4. Hierarchical Mapping to DRG. The final MS-DRG code $g$ is determined via

$$(w_d, w_p, \mathrm{CC}, \mathrm{MCC}) = h(D), \qquad g = f(w_d, w_p, \mathrm{CC}, \mathrm{MCC}),$$

where $h$ extracts the principal diagnosis or procedure and the CC/MCC from $D$, and $f$ represents the CMS-defined DRG mapping logic. A schematic sketch of this two-stage decomposition is given below.
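The following is a minimal, hypothetical sketch of the $h$/$f$ interface as typed Python. The dataclass fields and the grouper-table lookup are illustrative stand-ins; they are not the CMS grouper logic, which is defined in the official definitions manual [8].

from dataclasses import dataclass
from typing import Optional

@dataclass
class Extraction:
    # Output of h(D): the elements that drive MS-DRG assignment.
    principal_diagnosis: Optional[str]  # w_d, for medical DRGs
    principal_procedure: Optional[str]  # w_p, for surgical DRGs
    has_cc: bool                        # any Complication or Comorbidity
    has_mcc: bool                       # any Major Complication or Comorbidity

def h(documents: list[str]) -> Extraction:
    # Extraction function h: read the clinical documents and identify w_d or w_p
    # plus CC/MCC status. Performed by human coders, or, in this paper, by an LLM.
    raise NotImplementedError

def f(x: Extraction, grouper_table: dict) -> str:
    # Hierarchical mapping f: CMS-defined lookup from (w_d, w_p, CC, MCC) to a
    # single DRG code g. `grouper_table` is a hypothetical stand-in for the
    # CMS definitions manual.
    severity = "MCC" if x.has_mcc else ("CC" if x.has_cc else "NONE")
    key = (x.principal_procedure or x.principal_diagnosis, severity)
    return grouper_table[key]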
A.2 Rule-Based Reward Modeling

We adopted the following two simple yet rigorous rule-based reward components.

Format Reward. The Format Reward enforces a structured response, requiring reasoning content to be enclosed within <think></think> tags and the final answer (DRG code) within <answer></answer> tags. The reward is defined as:

$$S_{\mathrm{format}} = \begin{cases} 0, & \text{if the response format is correct} \\ -2, & \text{otherwise} \end{cases}$$

Accuracy Reward. The Accuracy Reward evaluates the correctness of the DRG code and is applied only if the Format Reward condition is satisfied. We explored three reward-shaping strategies:

(a) Dense Reward

$$S_{\mathrm{dense}} = \begin{cases} 2, & \text{if full match} \\ 1.5, & \text{if principal diagnosis match only} \\ 0.5, & \text{if CC/MCC match only} \\ -1.5, & \text{if invalid DRG} \end{cases}$$

(b) Balanced Reward

$$S_{\mathrm{balanced}} = \begin{cases} 2, & \text{if full match} \\ 1, & \text{if principal diagnosis match only} \\ 1, & \text{if CC/MCC match only} \\ -1.5, & \text{if invalid DRG} \end{cases}$$

(c) Strict Reward

$$S_{\mathrm{strict}} = \begin{cases} 2, & \text{if full match} \\ 0, & \text{if partial or no match but valid DRG} \\ -1.5, & \text{if invalid DRG} \end{cases}$$

A.3 Enforcing Cognitive Behaviors

To incentivize CoT-First cognitive behavior, we introduced an additional format penalty: if the model outputs a DRG code within the first 50 tokens of its reasoning, a penalty score of −0.5 is assigned. To encourage differential thinking, we reconstructed the SFT dataset using the same data. We designed a new prompt for the Qwen2.5-7B-Instruct model to generate three potential DRG codes per case (prompt provided in Section H), each accompanied by reasoning, before selecting the most appropriate DRG code.

A.4 KL Divergence Decay

To gradually relax the regularization imposed by the KL penalty, we apply a cosine decay to the KL coefficient $\beta$, reducing it from its initial value to zero over the course of training. For global step $t$ and total training steps $T$, the decay factor is defined as:

$$\mathrm{decay\_factor}(t) = 0.5 \cdot \left(1 + \cos\frac{\pi t}{T}\right)$$

The decayed coefficient at step $t$ is then:

$$\beta_t = \beta \cdot \mathrm{decay\_factor}(t)$$

This decay schedule promotes stability in the early stages of training while encouraging exploration in later updates.

A.5 GRPO Variants

We implemented the loss functions for the different GRPO variants as follows:

# Vanilla GRPO loss. This is the original implementation in TRL v0.15.1:
# average the per-token loss within each completion, then average across the group.
vanilla_GRPO_loss = ((per_token_loss * completion_mask).sum(dim=1)
                     / completion_mask.sum(dim=1)).mean()

# DAPO loss. This is the default implementation in TRL since v0.16.0:
# normalize by the total number of completion tokens in the batch.
DAPO_GRPO_loss = (per_token_loss * completion_mask).sum() / completion_mask.sum()

# Dr. GRPO loss. Normalize by a constant token budget; we set max_tokens to 1024.
max_tokens = 1024
Dr_GRPO_loss = ((per_token_loss * completion_mask).sum(dim=1) / max_tokens).mean()

For the Dr. GRPO advantage, we modified the advantage computation by removing the denominator in Equation 2.
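In code, the Dr. GRPO advantage change amounts to dropping the standard-deviation denominator from the group-normalized advantage. A minimal sketch, with illustrative variable names:

import torch

def vanilla_advantage(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # Equation 2: center and scale the rewards within the group.
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def dr_grpo_advantage(rewards: torch.Tensor) -> torch.Tensor:
    # Dr. GRPO variant: center only; the std denominator is removed
    # to mitigate question-level difficulty bias.
    return rewards - rewards.mean()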
A.6 Curriculum Learning

Unlike mathematical problems, the difficulty of DRG coding is not easily defined. High prediction accuracy does not necessarily imply that a DRG code is inherently easy. For instance, the frequently occurring code "Septicemia or Severe Sepsis without MV >96 Hours with MCC" may be straightforward in most cases but can pose challenges when the clinical narrative emphasizes a different condition, such as a urinary tract infection. Moreover, no standardized benchmark exists to quantify DRG coding difficulty. To address this, we employed a static online filtering strategy. For each experiment, we first ran the model without filtering to establish a baseline. Easy cases were identified as those with zero reward variance and perfect accuracy scores (2) under both the dense and strict rewards. Hard cases were defined as those with zero reward variance, an accuracy score of −0.5 under the dense reward, and 0 under the strict reward. We then reran the experiment from the SFT model after excluding these filtered cases.

A.7 Staged Learning

For staged learning, we divided the training process into three stages, each with approximately the same number of global steps. After each stage, we identified hard and easy cases using the methodology described in Section A.6. For hard cases, we prompted the current GRPO model checkpoint to generate reasoning given the case and the correct DRG code. We then performed SFT or DPO on this new dataset. For SFT, we used a learning rate of 4×10⁻⁵ for 3 epochs. For DPO, we designated the original model output as the rejected response and trained with a learning rate of 3×10⁻⁶ for 3 epochs.

B Additional Implementation Details

B.1 SFT Training Details

We used the SFT trainer from the TRL library for all SFT runs [33]. Training was conducted on 4 H100 or A100 GPUs, depending on availability, using bf16 precision. We set packing=False and max_seq_length to 12846. A cosine learning rate schedule with a minimum of 10% of the initial rate was applied, along with a warm-up ratio of 0.05. The global batch size was adjusted based on VRAM constraints to roughly match the number of unique cases per step used in GRPO training.

B.2 GRPO Training Details

Our implementation of GRPO was based on the Open R1 framework [10], which leverages vLLM [19] for inference and the GRPO Trainer from the TRL library (v0.15.1) [33] for training. All training was conducted on 3 to 5 H100 or A100 GPUs, depending on availability, using bf16 precision. For all GRPO experiments, we set num_generations to 8, per_device_train_batch_size to 2 or 4, and gradient_accumulation_steps to 32 or 64, ensuring a consistent global batch size of 512 across experiments. Each global step consisted of 64 unique prompts, each with 8 generated completions. We set max_prompt_length to 4096 and max_completion_length to 10240. The temperature of the policy model was set to 1. All other training parameters were kept at their default values, including a KL regularization coefficient of $\beta = 0.04$. All GRPO experiments were run for a single epoch. As we enforced $\pi_{\theta_{\mathrm{old}}} = \pi_\theta$ to ensure strict on-policy learning, this is equivalent to setting num_iterations to 1 in later versions of the TRL library. We adopted the default system prompt from Open R1.
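As a rough illustration of this setup, the key settings above could be expressed with TRL's GRPO configuration roughly as follows. This is a sketch against a v0.15-era TRL API: the field names are taken from that version's documentation, exact names and availability may differ between TRL releases, and the learning-rate settings shown are those reported in Section B.3.

from trl import GRPOConfig, GRPOTrainer  # TRL v0.15-era API (sketch)

config = GRPOConfig(
    output_dir="drg-grpo",
    num_generations=8,              # 8 completions per prompt (group size G)
    per_device_train_batch_size=4,  # 2 or 4 in the paper, depending on hardware
    gradient_accumulation_steps=32, # 32 or 64, for a global batch size of 512
    max_prompt_length=4096,
    max_completion_length=10240,
    temperature=1.0,                # policy sampling temperature
    beta=0.04,                      # KL regularization coefficient
    learning_rate=3e-6,
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.1,
    bf16=True,
    num_train_epochs=1,
)
# trainer = GRPOTrainer(model=..., reward_funcs=[...], args=config, train_dataset=...)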
B.3 Experimental Hyperparameters

SFT For SFT, we experimented with learning rates and training epochs, as detailed in Section 5.4. For all experiments in Section 5.3, we initialized GRPO training with an SFT model trained using a learning rate of 4×10⁻⁵ for 9 epochs. The only exception is the result shown in Figure 8 A, which illustrates training collapse from earlier runs using a learning rate of 3×10⁻⁶. For the experiments in Section 5.2, we used SFT models trained with a learning rate of 4×10⁻⁵ but for 3 epochs, as the SFT data was scaled.

GRPO For GRPO, we experimented with learning rate values and schedulers, as detailed in Section C.1. For all experiments in Sections 5.2 to 5.3, we used a GRPO learning rate of 3×10⁻⁶ with a constant learning rate scheduler and a warmup ratio of 0.1.

B.4 Evaluation Details

We used vLLM [19] for inference during evaluation. All evaluations were conducted on the full test set (N = 26,244). We set the temperature to 0.6, top_p to 0.95, and max_tokens to 4096. To compute Pass@8, we set n to 8 in SamplingParams, generating eight completions per case. Pass@1 is reported as the mean accuracy across these eight generations. For evaluation, we extracted the DRG code enclosed within <answer></answer> tags and computed an exact match against the reference code after text normalization. All training-curve figures in Section 5.3 are smoothed using a moving average with a window of 50 steps.

B.5 Dynamic Resampling Details

For both neutral and positive dynamic resampling, we set the maximum number of regeneration attempts to 12. During regeneration, the model randomly selects a temperature from the set {0.7, 0.8, 0.9, 1.0}.

C Additional Results

C.1 Experiments with GRPO Hyperparameters

We performed a limited hyperparameter search to tune the learning rate and scheduler for GRPO. As detailed in Section 5.3, our best configuration combines the DAPO loss, strict reward enforcement, and KL decay. A learning rate of 3×10⁻⁶ consistently outperformed 1×10⁻⁶, as shown in Figure 11. We also compared constant and decaying learning rate schedules and found comparable overall performance. Notably, a constant learning rate was more effective at the lower learning rate, though this advantage diminished at higher rates. Nevertheless, constant learning rates may help mitigate the gradient-vanishing issue discussed in Section 3.3.

[Figure 11: Training Curves vs. GRPO Learning Rates, comparing 3e-6 and 1e-6 learning rates, each with decay and constant schedules.]

C.2 Accuracy with RL Training in Ablation Studies

We present accuracy results from the various ablation studies in Section 5.3, as shown in Figure 12.

[Figure 12: Accuracy with RL Training in Ablation Studies. Panels: A. Dynamic Resampling; B. Cognitive Behaviors; C. KL Divergence; D. GRPO Variants; E. Reward Shaping; F. Curriculum Learning. The dashed line indicates the baseline performance of vanilla GRPO with dense rewards. Error bars indicate the standard deviation across 8 runs.]
Model                                       | DRG              | Principal Diagnosis | CC/MCC
                                            | P@1  P@8  Maj@8  | P@1  P@8  Maj@8     | P@1  P@8  Maj@8
50% SFT
  SFT                                       | 44.6 75.3 50.9   | 58.1 77.1 63.4      | 51.8 80.7 58.4
  Vanilla GRPO                              | 52.8 64.2 53.9   | 63.9 70.9 64.9      | 59.0 69.6 60.2
  Best Config                               | 53.7 59.1 53.9   | 63.5 66.9 63.9      | 58.8 64.1 59.4
75% SFT
  SFT                                       | 46.5 76.2 52.8   | 59.3 77.6 64.5      | 53.3 80.7 59.6
  Vanilla GRPO                              | 53.5 64.9 54.6   | 64.0 71.4 65.1      | 59.2 69.7 60.5
  Best Config                               | 54.6 60.6 54.9   | 64.4 68.2 64.8      | 59.6 65.3 60.2
  Best Config - KL Decay                    | 54.0 65.3 55.0   | 63.8 71.0 64.9      | 58.7 69.4 60.0
  Best Config + Remove Hard Case            | 54.4 58.1 54.5   | 63.8 66.1 64.0      | 59.1 62.7 59.4
  Best Config + Remove Easy and Hard Case   | 54.7 58.8 54.8   | 64.5 67.1 64.8      | 59.5 63.5 59.9
90% SFT
  SFT*                                      | 10.5 41.4 35.7   | 12.4 47.1 43.6      | 11.7 47.0 41.6
  Vanilla GRPO                              | 54.1 65.2 55.0   | 64.3 71.2 65.2      | 59.8 70.0 60.7
  Best Config                               | 54.6 62.2 54.9   | 64.0 68.9 64.6      | 59.3 66.3 60.0
  Best Config - KL Decay                    | 54.2 66.9 55.4   | 63.9 72.1 65.2      | 59.3 70.8 60.8
  Best Config + Remove Hard Case            | 54.8 60.3 54.9   | 64.4 68.1 64.7      | 59.9 64.9 60.4
  Best Config + Remove Easy and Hard Case   | 54.5 61.4 54.8   | 64.2 68.8 64.9      | 59.0 65.4 59.8
95% SFT
  SFT                                       | 47.0 76.9 53.3   | 59.7 78.5 65.0      | 53.9 81.0 59.9
  Vanilla GRPO                              | 53.5 67.7 55.2   | 64.0 72.6 65.6      | 59.2 72.3 61.0
  Best Config                               | 54.4 64.9 55.1   | 64.2 70.5 65.0      | 59.4 69.4 60.5
  Best Config - KL Decay                    | 53.0 69.4 55.1   | 63.2 73.5 65.1      | 58.4 73.4 60.8
  Best Config + Remove Hard Case            | 54.3 62.8 54.7   | 64.2 69.7 64.9      | 59.5 67.4 60.3
  Best Config + Remove Easy and Hard Case   | 52.9 69.3 54.9   | 63.1 73.0 65.0      | 58.1 73.3 60.5

Table 2: Scaling of GRPO on the Full Dataset. Within each block (DRG, Principal Diagnosis, CC/MCC), the three columns report Pass@1, Pass@8, and Maj@8. All experiments were conducted on the full training set (N=236,192) with varying SFT-to-RL ratios, and the best result from each experiment is reported. The best GRPO configuration consists of the DAPO loss, Strict Reward, and KL decay. In the original typeset table, the row highlighted in blue indicates the best Pass@1 performance, and bold values denote the highest score for each metric. *Checkpoints from the 90% SFT training runs exhibited format-following instabilities, resulting in lower-than-expected scores; despite this unstable SFT baseline, RL training remained robust.

C.3 Additional Results from Scaling to the Full Dataset

We present experimental results on the full dataset with varying SFT-to-RL data splits (Table 2). Our best configuration, combining the DAPO loss, strict reward enforcement, and KL decay (Section 5.3), consistently outperformed vanilla GRPO across all experiments. Curriculum learning, implemented by excluding hard or easy cases, further improved performance. The best overall performance of DRG-SAPPHIRE was achieved with a 90% SFT and 10% RL split using the best GRPO configuration and hard-case exclusion.

Notably, we excluded the 90% SFT-only results and the RL results from the 95% SFT group as outliers in Figure 1 B and Figures 6 B and C. The checkpoints from the 90% SFT runs exhibited format-following instabilities, resulting in lower-than-expected scores. Despite this unstable SFT baseline, RL training remained robust, effectively leveraging the knowledge encoded through SFT and ultimately producing our best overall results. The RL outcomes from the 95% SFT experiments are likely not representative of true RL potential due to insufficient RL training (<250 global steps). Additionally, we did not conduct experiments without KL decay or with curriculum learning for the 50% SFT group, given the limited performance observed with vanilla GRPO in that setting.
D Additional Discussion

D.1 Clinical Applications of Automated DRG Coding with Reasoning

In discussions with domain experts, DRG-Sapphire showed significant potential for real-world clinical applications. Here are two examples:

1. Currently, DRGs are assigned by professional coders and are typically available only after hospital discharge. DRG-Sapphire can provide early DRG predictions to inform hospital operations and financial forecasting.

2. DRG-Sapphire can support DRG-related quality improvement initiatives, such as those aimed at reducing the geometric mean length of stay, a metric directly determined by the DRG. It provides transparent, interpretable explanations of DRG assignments, enabling clinicians to improve their clinical documentation to better reflect patient severity.

D.2 Practical Implication of Improved Pass@1 but Not Pass@k

Our experiments demonstrate that RL improves Pass@1 (i.e., accuracy) but not Pass@k for higher k values, indicating that RL enhances the model's ability to produce the correct DRG code in a single attempt without necessarily improving its broader reasoning capacity. However, this outcome aligns well with the requirements of high-stakes clinical applications like DRG coding, where only the first prediction truly matters, as users typically do not sample multiple outputs. Moreover, selecting the correct answer from multiple candidate responses is challenging: beyond the best-of-N approach, which RL already optimizes by improving Pass@1 through better majority voting (Maj@k), such selection methods are not well established.

D.3 DRG vs. ICD Coding

Although both DRG and International Classification of Diseases (ICD) codes serve clinical and administrative purposes, they differ significantly in classification approach and real-world application. DRG assignment is typically formulated as a multi-class classification task, in which exactly one DRG code is assigned to summarize resource utilization and clinical complexity for an entire hospitalization. In contrast, ICD coding is a multi-label classification problem, as multiple ICD codes, covering both diagnoses and procedures, may be assigned to document a single encounter. Furthermore, the two coding systems exhibit distinct hierarchical structures: DRG assignment explicitly emphasizes identifying a principal diagnosis that primarily drives the hospitalization, along with secondary conditions and procedures that influence clinical complexity and reimbursement [8]. Finally, the utilization contexts for these codes differ significantly; DRGs are directly tied to inpatient reimbursement mechanisms and hospital resource management, whereas ICD codes have broader applications, including both inpatient and outpatient clinical documentation.

E Limitations and Future Work

Our study has several limitations. First, we employed only rule-based rewards for final DRG assignments, without process supervision during the reasoning steps. While it is unclear how best to implement such supervision, more granular and dense reward signals throughout the reasoning process could, in theory, help guide the policy toward more effective exploration. Future work exploring this direction, potentially combining explicit DRG rules with techniques such as process reward modeling, represents an intriguing avenue.
Second, we applied relatively static curriculum learning and case-filtering strategies, which were conducted only once, following the completion of a base run. A dynamic, online, difficulty-based filtering approach, applied at the per-batch level, may be more effective and warrants further investigation. Lastly, our work focused exclusively on the challenging task of DRG coding.
Extending our approach to other medical-domain tasks, or even to diverse OOD tasks across different domains, would be valuable. In particular, it would be compelling to investigate whether scaling RL methods across multiple tasks and domains encourages exploration of more diverse reasoning pathways beyond the base model, rather than merely refining the output distribution toward outcomes that yield higher immediate rewards.

F Data Access

Access to MIMIC-IV can be requested via [2], which requires signing a data use agreement. The training and test datasets used in this study can be obtained by following the instructions in [1]. For experiments involving MIMIC-IV data and proprietary models, we adhered to the guidelines in [3] and used the Azure OpenAI service.

G Instruction to Reviewers

Instructions for physician expert reviewers on scoring the reasoning chains of DRG-Sapphire are provided below.

Instruction to Reviewers

1. You will be provided with a discharge summary from the public MIMIC-IV dataset, along with a corresponding DRG code assignment and its rationale generated by a large language model (LLM).

2. Please note that, similar to existing DRG prediction tools currently in use, the LLM-generated DRG code assignment may be either correct or incorrect.

3. Your task is to rate the LLM output along two dimensions, Helpfulness and Accuracy, using a scale from 1 to 5 (very poor, poor, acceptable, good, or very good), where higher scores indicate better quality.

4. Helpfulness: For this dimension, please answer the question: "Is the LLM's reasoning and explanation helpful to frontline healthcare providers?" Reflect on real-world initiatives you are engaged in that center around DRG optimization (e.g., efforts to reduce geometric mean length of stay). Assess whether the information provided by the LLM would meaningfully assist physicians in such settings, addressing questions commonly raised in practice. Rubric:
• Score of 1 (very poor): The content is not helpful; for example, it may be too generic, lack necessary detail, or be overly vague.
• Score of 3 (acceptable): The content is sufficiently helpful and acceptable for use in real-world clinical settings.
• Score of 5 (very helpful): The content is highly helpful and could positively impact real-world DRG-related initiatives.

5. Accuracy: For this dimension, please answer the question: "Does the information provided by the LLM accurately reflect MS-DRG assignment rules?" Base your evaluation on your best knowledge and understanding of the MS-DRG system. Rubric:
• Score of 1 (very poor): The information is substantially inaccurate.
• Score of 3 (acceptable): The information is accurate enough to support decision-making by frontline healthcare providers.
• Score of 5 (very accurate): The information is highly accurate and consistent with MS-DRG assignment rules.

H Prompts to LLM

Prompt for Qwen2.5-7B-Instruct for SFT cold-start dataset generation.

Generate Reasoning for DRG Assignment

MS-DRG (Medicare Severity Diagnosis-Related Groups) is a system used by the U.S. Centers for Medicare & Medicaid Services to classify hospital inpatient stays for payment purposes. It groups patients with similar clinical conditions and resource needs into categories to determine reimbursement amounts. Each MS-DRG is assigned based on the patient's principal diagnosis, secondary diagnoses, procedures performed, age, discharge status, and other factors.
The goal is to ensure fair and consistent hospital reimbursement based on the severity of the illness and the complexity of care required. CC and MCC in MS-DRG:
• CC (Complication or Comorbidity): A secondary diagnosis that increases the complexity of care and resource utilization.
• MCC (Major Complication or Comorbidity): A more severe secondary condition that has a significant impact on resource use and hospital reimbursement.
• MCCs have a greater effect on the DRG weight than CCs due to increased patient-care complexity.

What is the most likely Medicare Severity Diagnosis Related Group (MS-DRG) based on the following discharge summary of a hospitalization? Provide the complete text description of the DRG code without including the numerical code. [Discharge Summary] The answer is [DRG Code]. Explain the reason why the DRG code is assigned. In your reasoning step, assume you don't know the right DRG code yet.

For SFT dataset generation with differential thinking, we added the following content to the prompt above.

Additional Prompt to Generate Differential Thinking

First, analyze the case details, including the principal diagnosis, secondary diagnoses, procedures performed, and any other relevant factors that may influence DRG assignment. Next, list three possible DRG codes that you believe are the most likely. Enclose each DRG code in square brackets, like this: [Heart failure without MCC]. One of the three DRG codes should be [DRG Code]. Then, provide a detailed analysis for each of the three DRG codes. For each one, explain:
• Why this DRG code might be appropriate?
• Why it might not be appropriate?
Finally, based on your analysis, identify the most likely DRG code; it should be [DRG Code].

I Example Outputs from DRG-Sapphire

Example One

<think> Based on the provided discharge summary, the most likely Medicare Severity Diagnosis-Related Group (MS-DRG) is "ESOPHAGITIS GASTROENTERITIS AND MISCELLANEOUS DIGESTIVE DISORDERS WITHOUT MCC." Let's break down the reasoning step-by-step:

1. Principal Diagnosis and Conditions:
• The principal diagnosis is colitis, which is a digestive disorder.
• There are no complications or comorbidities (CCs or MCCs) that significantly impact the complexity of care or resource utilization more than the primary diagnosis of colitis.

2. Secondary Conditions and CCs/MCCs:
• Atypical Carcinoid Tumor of the Lung: A significant condition, but not the primary reason for the hospitalization; it does not elevate the complexity of care related to colitis.
• Metastatic Pancreatic Neuroendocrine Tumor: Also significant, but not directly impacting the care for colitis.
• Perforated Diverticulitis: Required prior hospitalization but is not contributing to the current hospital stay.
• Transaminitis: Mild and resolved quickly; not related to the colitis.
• Hypothyroidism: Chronic and well-managed; no impact on the complexity of care.
• Ischemia: A possible cause of colitis, but not confirmed and does not significantly impact complexity.

3. Complications or Comorbidities (CCs):
• The atypical carcinoid tumor, metastatic pancreatic neuroendocrine tumor, and ischemia are significant but not classified as CCs that elevate complexity for colitis.
• The perforated diverticulitis is historical and not relevant to the current hospital stay.

4. Major Complication or Comorbidity (MCC):
• No significant conditions qualify as an MCC