text string | source string |
|---|---|
Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015. Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, and Sai Qian Zhang. Parameter-efficient fine-tuning for large models: A comprehensive survey. Transactions on Machine Learning Re... | https://arxiv.org/abs/2505.21895v1 |
The Thirteenth International Conference on Learning Representations , 2025. Soroush Abbasi Koohpayegani, Navaneet K L, Parsa Nooralinejad, Soheil Kolouri, and Hamed Pirsiavash. NOLA: Compressing lora using linear combination of random basis. In The Twelfth International Conference on Learning Representations , 2024. Da... | https://arxiv.org/abs/2505.21895v1 |
Proceedings of the 38th International Conference on Machine Learning , pages 8748–8763. PMLR, 2021. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV , 2016. Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, M... | https://arxiv.org/abs/2505.21895v1 |
Herglotz et al. (2022, 2024), and has occasionally been applied for other modalities such as Point Cloud Wang et al. (2021a,b); Herglotz et al. (2024); Barman et al. (2022) or Neural Radiance Field compression Ji et al. (2025). The metric involves evaluating two comparison codecs at two rate and performance positions. ... | https://arxiv.org/abs/2505.21895v1 |
explicit distributional schemes is that they are easy to standardise within a new data-type, fast to compute, and require no additional codebook for dequantization. The disadvantage is that they perform poorly in the presence of outlier values or non-Gaussian distributions Ashkboos et al. (2024); Liu et al. (2025); Ger... | https://arxiv.org/abs/2505.21895v1 |
in the low-bit region. Consistent with theory, this gap narrows at higher bit-rates, with equivalent performance at 8 bits. Some individual evaluations at 5 bits show slightly improved performance despite lower quantization error, owing to the downstream classification evaluation. C Additional Stable Diffusion Results C.... | https://arxiv.org/abs/2505.21895v1 |
E Additional LLAMA-3 Results E.1 Non-Quantized Base Model Table 6: LLAMA-3-8B performance on Commonsense Reasoning, evaluated for ranks (1, 2, 4, 8, 16) and PTQ (2, 3, 5). Memory in KB. Training conducted with a non-quantized base model; PTQ applied to the model adapters. Results are reported as (correct / total) for each task. ... | https://arxiv.org/abs/2505.21895v1 |
.058 0.068 0.213 0.049 0.405 0.165 4521 LoRA 2 0.773 0.900 0.624 0.753 0.774 0.841 0.695 0.659 0.752 4316 SineLoRA 2 0.782 0.900 0.618 0.779 0.792 0.843 0.714 0.680 0.764 4422 DoRA 3 0.352 0.489 0.509 0.222 0.360 0.417 0.320 0.502 0.396 6344 LoRA 3 0.784 0.901 0.623 0.761 0.812 0.845 0.713 0.681 0... | https://arxiv.org/abs/2505.21895v1 |
0.622 0.524 0.718 0.816 0.612 0.627 0.687 1107 QSineLoRA 2 0.743 0.896 0.622 0.701 0.770 0.828 0.688 0.642 0.736 1126 QDoRA 3 0.225 0.343 0.589 0.025 0.286 0.018 0.188 0.470 0.268 1854 QLoRA 3 0.744 0.894 0.622 0.541 0.756 0.834 0.681 0.648 0.715 1547 QSineLoRA 3 0.754 0.896 0.654 0.646 0.774 0.826 0... | https://arxiv.org/abs/2505.21895v1 |
arXiv:2505.21898v1 [cs.CL] 28 May 2025. Co-Saving: Resource Aware Multi-Agent Collaboration for Software Development. Rennai Qiu⋆†, Chen Qian♣†, Ran Li⋆, Yufan Dang⋆, Weize Chen⋆, Cheng Yang♠, Yingli Zhang♢, Ye Tian♡, Xuantang Xiong♡, Lei Han♡, Zhiyuan Liu⋆B, Maosong Sun⋆B. ⋆Tsinghua University, ♣Shanghai Jiao Tong University, ♠Beijing Universit... | https://arxiv.org/abs/2505.21898v1 |
consumption and excessive time usage, which directly reduces system efficiency. As the scale of tasks expands and the number of participating agents increases, the frequency and complexity of agent interactions grow correspondingly, exacerbating operational overhead. Thus, effectively managing and reducing the ... | https://arxiv.org/abs/2505.21898v1 |
and quantitative evaluation of the shortcut mechanism. To enable a more rigorous representation and analysis of the multi-agent collaboration process, we abstract each complete task execution as a directed graph. During the interaction, an instructor issues a series of instructions (I = {i1, i2, ···, in}), and an assist... | https://arxiv.org/abs/2505.21898v1 |
facilitates between two solutions—specifically, the transition from one node to another in the solution graph. For a given solution denoted by $n_j$ located at a specific node, we define its score as follows: $w(n_j) = \mathrm{sim}(n_j, \mathrm{task}) \times \mathrm{sim}(n_j, s_{|N|}) \times [\![s_j]\!]$ (4). Here, $s_{|N|}$ denotes the solution at the final node in the graph, repr... | https://arxiv.org/abs/2505.21898v1 |
our method, we select a diverse set of representative LLM-driven software engineering methods and pure LLMs to facilitate a comprehensive multidimensional comparison: •GPT-3.5-Turbo [24], GPT-4 [25], and LLaMA 3 70B [26] are widely adopted foundation models that serve as baselines for pure LLM performance, covering ... | https://arxiv.org/abs/2505.21898v1 |
practical proxy. A higher value indicates greater code detail. •Quality : A comprehensive metric obtained by integrating completeness, executability, consistency, and granularity. Specifically, it is defined as the product of these four metrics, serving as an overall indicator of code quality. •Budgeted Completion Rate... | https://arxiv.org/abs/2505.21898v1 |
through role-based coordination to perform multi- step reasoning, but still struggles to generate logically coherent code for complex tasks, leading to a relatively lower Executability score. For the Completeness metric, ChatDev slightly outperforms Co-Saving. We hypothesize that this advantage stems from Co-Saving’s r... | https://arxiv.org/abs/2505.21898v1 |
presented in Figure 3. The inclusion of the Co-Saving algorithm results in a significant reduction in the number of reasoning iterations required for task execution. Additionally, both total execution time and token consumption are notably decreased. These findings demonstrate that Co-Saving effectively streamlines the... | https://arxiv.org/abs/2505.21898v1 |
empowered by large-scale pretraining and parameter-rich architectures, have achieved remarkable advancements in this area. With the rapid development of LLMs, there is increasing interest in building autonomous agents [ 36,15,5,13, 37,4,11] that leverage LLMs for domain-specific tasks. These agents combine LLMs’ reason... | https://arxiv.org/abs/2505.21898v1 |
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F... | https://arxiv.org/abs/2505.21898v1 |
Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An Open-Ended Embodied Agent with Large Language Models. In Intrinsically-Motivated and Open-Ended Learning Workshop @NeurIPS2023, 2023. [16] Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu... | https://arxiv.org/abs/2505.21898v1 |
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa... | https://arxiv.org/abs/2505.21898v1 |
Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zha... | https://arxiv.org/abs/2505.21898v1 |
Enables Expert-level Prompt Optimization. In The Twelfth International Conference on Learning Representations (ICLR) , 2024. [38] Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. Large Language Models as Tool Makers. In The Twelfth International Conference on Learning Representations (ICLR) , 2024. [39]... | https://arxiv.org/abs/2505.21898v1 |
InThe Thirteenth International Conference on Learning Representations , 2025. [52] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-Instruct: Aligning Language Models with Self-Generated Instructions. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Oka... | https://arxiv.org/abs/2505.21898v1 |
Ruidi Chang, Shichao Pei, Nitesh V. Chawla, Olaf Wiest, and Xiangliang Zhang. Large language model based multi-agents: A survey of progress and challenges. 2024. [65] Zhexuan Wang, Yutong Wang, Xuebo Liu, Liang Ding, Miao Zhang, Jie Liu, and Min Zhang. Agentdropout: Dynamic agent elimination for token-efficient and hi... | https://arxiv.org/abs/2505.21898v1 |
CAST: Contrastive Adaptation and Distillation for Semi-Supervised Instance Segmentation. Pardis Taghavi, Texas A&M University, ptgh@tamu.edu; Tian Liu, Texas A&M University, ltmask@tamu.edu; Renjie Li, Texas A&M University, renjie@tamu.edu; Reza Langari, Texas A&M University, rlangari@tamu.edu; Zhengzhong Tu, Texas A&M University, tzz@t... | https://arxiv.org/abs/2505.21904v1 |
30] distill a VFM by matching its output on an unlabeled transfer set, and SAM-CLIP [31] fuses CLIP and SAM. However, neither method targets per-pixel instance masks nor exploits dense self-supervision from the unlabeled pool. Pure semi-supervised instance segmentation methods, such as [17, 3], train teachers from scratch... | https://arxiv.org/abs/2505.21904v1 |
This approach has proven effective across vision tasks, improving image classification performance [35] and boosting object detection accuracy when annotation budgets are tight [21]. To counteract error accumulation from noisy pseudo-labels, [29] uses an exponential moving average of label predictions, and [6] employs cur... | https://arxiv.org/abs/2505.21904v1 |
into a lightweight student under a unified loss that harmonizes ground truth, pseudo-label, and contrastive terms, guided by our debiased sampling. 3. Student Refinement. Fine-tune the student on labeled data to remove residual pseudo-label bias. Sec. C.2 formalizes our instance-aware pixel-wise contrastive loss, whic... | https://arxiv.org/abs/2505.21904v1 |
instance is at least $p > 0.5$, where $p$ can be estimated empirically (see Sec. D.3). Proposition C.1 (Expected Margin Growth). Under Assumption C.1, one gradient update on $L_{\mathrm{pxl}}$ increases the expected inter-instance margin $\Delta_{\mathrm{emp}}$ by $\varepsilon = \Theta(\sqrt{\lambda_{\mathrm{pxl}}}) > 0$. This expectation holds even when pseudo-labels are imperfect, provided negativ... | https://arxiv.org/abs/2505.21904v1 |
mm-Grounding-DINO [45]. For the student, we pair a DINOv2-S encoder [22] with a DPT-S decoder head [24], followed by a lightweight transformer decoder module in the spirit of Mask2Former [11]. Our choice of the DINOv2+DPT backbone is motivated by the recent successes of “Depth Anything V2” in monocular depth est... | https://arxiv.org/abs/2505.21904v1 |
Student Distillation Supervised (baseline) Labeled only 21.1 38.7 13.9 24.2 PAIS [17] Labeled + Unlabeled 22.9 44.9 10.3 18.3 Guided dist. [3] Labeled + Unlabeled 30.8 52.9 14.2 23.8 + unlabeled KD [30] Unlabeled only 24.4 45.6 5.1 9.3 + labeled+unlabeled KD (ours) Labeled + Unlabeled 30.7 54.9 14.4 25.2 ... | https://arxiv.org/abs/2505.21904v1 |
and class predictions. The fusion strategy achieves the best results, with 32.2 mask AP and 56.5 AP50. Hyperparameter Sensitivity. We evaluate CAST’s sensitivity to three key hyperparameters on Cityscapes: contrastive weight $\lambda_{\mathrm{pxl}}$, negatives per anchor K, and temperature T, by measuring (a) Negative Sampling Strategies... | https://arxiv.org/abs/2505.21904v1 |
further guarantees that our negative sampling scheme provably increases inter-instance margins under mild assumptions. Looking forward, streamlining CAST into a single unified objective, extending its evaluation to diverse domains, and integrating uncertainty quantification will be critical steps toward safe, equit... | https://arxiv.org/abs/2505.21904v1 |
are strong semi-supervised learners. Advances in neural information processing systems , 33:22243–22255, 2020. [9]Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In Proceedings of the IEEE/CVF international conference on computer vision , pages 9640–9649, 20... | https://arxiv.org/abs/2505.21904v1 |
the IEEE/CVF international conference on computer vision , pages 12179–12188, 2021. [25] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, et al. Sam 2: Segment anything in images and videos. arXiv preprint arXiv:2408.00714 ... | https://arxiv.org/abs/2505.21904v1 |
Conference on Computer Vision and Pattern Recognition , pages 15952–15962, 2024. [39] Chuanguang Yang, Xinqiang Yu, Han Yang, Zhulin An, Chengqing Yu, Libo Huang, and Yongjun Xu. Multi-teacher knowledge distillation with reinforcement learning for visual recognition. arXiv preprint arXiv:2502.18510 , 2025. [40] Lihe Ya... | https://arxiv.org/abs/2505.21904v1 |
3.1 Proof Sketch. Let $z_a$, $z^+$, and $\{z^-_r\}_{r=1}^{R}$ be the unit-norm embeddings of an anchor pixel, its positive, and $R$ negatives. Define $s^+ = \langle z_a, z^+ \rangle$, $s^-_r = \langle z_a, z^-_r \rangle$, and the pixel-wise contrastive loss $\ell(z_a) = -\log \frac{\exp(s^+)}{\exp(s^+) + \sum_{r=1}^{R} \exp(s^-_r)}$. Let $Z = \exp(s^+) + \sum_{r=1}^{R} \exp(s^-_r)$ and $\alpha_r = \frac{\exp(s^-_r)}{Z}$. A straightforward gradient computat... | https://arxiv.org/abs/2505.21904v1 |
Vision-Language-Action Model with Open-World Embodied Reasoning from Pretrained Knowledge. Zhongyi Zhou1,2∗, Yichen Zhu1∗†, Junjie Wen1, Chaomin Shen2, Yi Xu1. 1Midea Group, 2East China Normal University. chatvla-2.github.io [Figure labels: Large Language Model, Image Tokens, Language Instruction, Action Expert, LLM Head, Actions, Reasoning, Dynamic MoE, RobotM...] | https://arxiv.org/abs/2505.21906v1 |
pre-trained knowledge from VLMs, VLAs significantly enhance robotic learning. This allows robots to better understand and interact with the world while improving their ability to perform complex physical tasks. Intuitively, pre-training a VLA model starts from a powerful pre-trained VLM, such as PaliGemma [5] or Qwe... | https://arxiv.org/abs/2505.21906v1 |
research indicates that even large language models frequently produce outputs inconsistent with their thinking process. By ensuring that the action outputs through VLA models reliably follow their reasoning processes, we can substantially enhance their ability to generalize effectively across diverse and previously uns... | https://arxiv.org/abs/2505.21906v1 |
backbone, current VLA approaches fail to effectively utilize the pretrained knowledge from these VLMs, limiting robots’ capabilities for open-world manipulation. Consequently, this significantly undermines the rationale behind employing pretrained VLMs within large-scale models. In this paper, we introduce ChatVLA-2, a... | https://arxiv.org/abs/2505.21906v1 |
reasoning, allowing our approach to effectively harness the VLM’s pre-trained knowledge and enabling the VLA model to generalize across diverse scenes. 3.2 Model Architecture. Dynamic mixture-of-experts. Typically, VLA models utilize a dense vision-language backbone as their foundational architecture. Prior research [7]... | https://arxiv.org/abs/2505.21906v1 |
effectively injecting reasoning context into the model. Importantly, we incorporate this mechanism exclusively into the latter half of the layers, rather than uniformly across all layers. This design choice aligns with findings from prior studies, such as PointVLA [63] and GR00T N1 [64], which suggest that modifications to ... | https://arxiv.org/abs/2505.21906v1 |
open-world environments, the reasoning required may not be presented in the training data. Thus, it becomes particularly crucial to strengthen the connection between reasoning and action, ensuring that actions accurately follow and execute the reasoning outcomes for generalizable robot control. Specifically, we freeze ... | https://arxiv.org/abs/2505.21906v1 |
equipment at a frequency of 50 Hz. Experimental results. The experimental results are presented in Table 1. We compare our method against several state-of-the-art models, including Octo [ 68], Diffusion Policy [ 30], OpenVLA [ 8], GR00T N1 [ 64], DexVLA [ 2], ChatVLA [ 7], and π0[1]. We first examine the in-domain perf... | https://arxiv.org/abs/2505.21906v1 |
proposed method performing comparably to DexVLA and π0. While ChatVLA was capable of recognizing novel objects in the open-world setting, its performance remained much lower than our method’s 0.94. For action execution, models other than our method and π0 exhibited near-random success rates in this setting. Even ChatVLA... | https://arxiv.org/abs/2505.21906v1 |
two-stage training strategy designed explicitly to enable VLA models to act effectively in open-world scenarios and consistently follow generated reasoning. Table 4 presents the ablation study isolating the effects of Stage 1 and Stage 2 on model performance in the math matching game. When Stage 2 was excluded, the mod... | https://arxiv.org/abs/2505.21906v1 |
2025. [8]Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, et al. Openvla: An open-source vision-language-action model. [9]Yanjie Ze, Gu Zhang, Kangning Zhang, Chenyuan Hu, Muhan Wang, and Huazhe Xu. 3d diffusion policy: ... | https://arxiv.org/abs/2505.21906v1 |
et al. Rt-2: Vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818 , 2023. [24] Xinghang Li, Minghuan Liu, Hanbo Zhang, Cunjun Yu, Jie Xu, Hongtao Wu, Chilam Cheang, Ya Jing, Weinan Zhang, Huaping Liu, et al. Vision-language foundation models as effective robot imitato... | https://arxiv.org/abs/2505.21906v1 |
Fabian Wenzel, and Rudolf Lioutikov. Multimodal diffusion transformer: Learning versatile behavior from multimodal goals. 2024. [39] Tony Z Zhao, Jonathan Tompson, Danny Driess, Pete Florence, Seyed Kamyar Seyed Ghasemipour, Chelsea Finn, and Ayzaan Wahid. Aloha unleashed: A simple recipe for robot dexterity. In 8th An... | https://arxiv.org/abs/2505.21906v1 |
video-based vision-language-action model for unifying embodied navigation tasks. arXiv preprint arXiv:2412.06224 , 2024. [53] Yuhui Chen, Shuai Tian, Shugao Liu, Yingting Zhou, Haoran Li, and Dongbin Zhao. Conrft: A reinforced fine-tuning method for vla models via consistency policy. arXiv preprint arXiv:2502.05450 , 2... | https://arxiv.org/abs/2505.21906v1 |
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 6700–6709, 2019. [68] Octo Model Team, Dibya Ghosh, Homer Walke, Karl Pertsch, Kevin Black, Oier Mees, Sudeep Dasari, Joey Hejna, Charles Xu, Jianlan Luo, Tobias Kreiman, You Liang Tan, Lawrence Yunliang Chen, Pannag Sanketi, Q... | https://arxiv.org/abs/2505.21906v1 |
each example to a maximum of 5 dialogue turns. If an instance originally contains more than 5 turns, we retain the first turn and randomly sample four additional turns from the remainder. For the TextVQA dataset, we specifically select samples that do not contain numeric OCR tokens or mathematical operators, as our goa... | https://arxiv.org/abs/2505.21906v1 |
arXiv:2505.21907v1 [cs.AI] 28 May 2025. MODELING AND OPTIMIZING USER PREFERENCES IN AI COPILOTS: A COMPREHENSIVE SURVEY AND TAXONOMY. Saleh Afzoon1, Zahra Jahanandish2, Phuong Thao Huynh1, Amin Beheshti1, Usman Naseem1. 1School of Computing, Macquarie University, Sydney, Australia; 2Department of Computer Enginee... | https://arxiv.org/abs/2505.21907v1 |
modeled, and refined in these systems. In doing so, we offer a comprehensive view of the emerging design space for adaptive, user-centered AI copilots. Research related to this survey broadly falls into two categories: (i) AI systems and intelligent assistants, and (ii) preference optimization techniques. While each ha... | https://arxiv.org/abs/2505.21907v1 |
Backgrounds. To establish a clear foundation for defining the new generation of AI-powered systems—referred to as AI copilots—we analyze key concepts and terminology from existing literature. This section provides a precise understanding of how AI copilots differ from prior intelligent systems in function, autonomy, int... | https://arxiv.org/abs/2505.21907v1 |
Digital Assistants (DAs): their ability to merge the simplicity of intuitive natural language dialogue with personalized, context-specific assistance. This study focuses on categorizing the different forms of the Smart Personal Assistant (SPA), which is essentially a personalized Digital Assistant. Thus, the classifica... | https://arxiv.org/abs/2505.21907v1 |
falling into either Task-oriented or Non-task-oriented (open-ended or simple chit-chat) categories [ 21,23]. Task-oriented chatbots generally consist of domain-specific hand-crafted rules with limited and focused conversation context analysis and are typically utilized in devices for convenience. Non-task-oriented or i... | https://arxiv.org/abs/2505.21907v1 |
of expert systems with the adaptability of data-driven models, adeptly handling quantitative and qualitative data to address complexities in uncertain environments [30]. It manages uncertainties—like fuzziness, randomness, and ignorance—through belief degrees and employs the Evidence Reasoning (ER) algorithm for seam... | https://arxiv.org/abs/2505.21907v1 |
Mechanisms, Use Cases, and Limitations. Table columns: Source Category; Core Mechanisms; Use Cases; Limitations. Row 1, Explicit Feedback: core mechanisms are pairwise comparisons (e.g., Chatbot Arena) [35], choice prompts [36], and satisfaction vs. engagement signals [37]; use cases are direct preference labeling, evaluation benchmarking, and task-specific tuning; limitations are high annotation cost and user fatig... | https://arxiv.org/abs/2505.21907v1 |
of both explicit and implicit feedback. For instance, topical user profiling integrates explicit declarations of interest with implicit behavioural signals (e.g., hashtag usage), significantly enhancing the robustness and reliability of personalized recommendations [ 42,41]. Despite their clarity, explicit methods inhe... | https://arxiv.org/abs/2505.21907v1 |
user simulation techniques employing LLMs themselves as realistic user models have emerged as crucial tools for scalable and systematic evaluation of personalized response generation. Generative user simulations [ 47] have been shown to effectively emulate dynamic shifts in user preferences over time, enabling controll... | https://arxiv.org/abs/2505.21907v1 |
user intent. 4.2.2 Real-Time Persona Extraction and Preference Adaptation. In contrast to predefined user profiles developed before interaction, the during-conversation phase focuses on dynamically identifying and adapting user preferences in real time. In early approaches, systems were only capable of reactive adap... | https://arxiv.org/abs/2505.21907v1 |
a result, preference detection techniques have evolved to include a post-conversation phase, where feedback from past interactions is utilized to optimize future responses. Initial methods in this space were built around lightweight feedback integration, where human judgments were used to revise system outputs. To stru... | https://arxiv.org/abs/2505.21907v1 |
adaptation to a reflective, feedback-driven process. By incorporating signals from completed dialogues, AI systems have been enabled to refine their personalization strategies over time, leading to more adaptive, consistent, and user-aligned behavior in future interactions. Table 3: Summary of Preference Detection Tech... | https://arxiv.org/abs/2505.21907v1 |
model. In CKE (Cross-Graph Knowledge Exchange) [ 72], a hybrid structured prompt is constructed by first generating dialogue user graphs for both conversation participants and then performing cross-graph knowledge aggregation. Discrete and continuous representations are fused to form a prompt that encodes persona, dial... | https://arxiv.org/abs/2505.21907v1 |
Architecture-Level Personalization Despite the increased flexibility offered by fine-tuning methods, they remain bounded by the representational capacity of the underlying architecture. When personalization demands richer structural modeling, such as dynamically encoding speaker roles, incorporating graph-based knowled... | https://arxiv.org/abs/2505.21907v1 |
produce instruction-following behavior by training a model on labeled examples, it is inherently limited by the static nature of its training data, which does not reflect user preferences that arise during real-world deployment [ 90]. In contrast, Reinforcement Learning from Human Feedback (RLHF) augments this approach... | https://arxiv.org/abs/2505.21907v1 |
has extended this paradigm to constrained optimization settings to address reward overoptimization and instability in alignment objectives [ 102], and hierarchical actor-critic architectures have been proposed to manage multi-turn decision-making in complex language tasks [ 103]. To ensure the policy does not drift too... | https://arxiv.org/abs/2505.21907v1 |
$-\alpha \cdot \frac{\log p(c_w, y_w \mid x)}{\lvert c_w \rvert + \lvert y_w \rvert}$; issue: poor generalization; remedy: compression + coverage regularization. β-DPO [116]: $\beta \cdot [\log p(y_w \mid x) - \log p(y_l \mid x)]$ with adaptive $\beta$; issue: static optimization sharpness; remedy: adaptive temperature. DPOC [63]: $\beta \cdot [\log p(y_w \mid x) - \log p(y_l \mid x)] - [P(r_{cho}, r_{crt}) + P(r_{crt}, r_{rej})]$; issue: preference misranking; remedy: criterion-based penalty terms. Table 6: Com... | https://arxiv.org/abs/2505.21907v1 |
study and development. Building on this, we offered a phase-based view of preference optimization, structured around how user preferences are identified, interpreted, and used to drive personalized interaction throughout the lifecycle of user engagement. By synthesizing methods for detecting preferences, generating con... | https://arxiv.org/abs/2505.21907v1 |
health support. Nature Machine Intelligence , 5(1):46–57, 2023. 14 [14] Stefan Wellsandt, Karl Hribernik, and Klaus-Dieter Thoben. Anatomy of a digital assistant. In Advances in Production Management Systems. Artificial Intelligence for Sustainable and Resilient Production Systems: IFIP WG 5.7 International Conference,... | https://arxiv.org/abs/2505.21907v1 |
Ming Y Lu, Bowen Chen, Drew FK Williamson, Richard J Chen, Melissa Zhao, Aaron K Chow, Kenji Ikemura, Ahrong Kim, Dimitra Pouli, Ankush Patel, et al. A multimodal generative ai copilot for human pathology. Nature , 634(8033):466–473, 2024. [32] Hussein Mozannar, Gagan Bansal, Adam Fourney, and Eric Horvitz. When to sho... | https://arxiv.org/abs/2505.21907v1 |
preference inference using language models and probabilistic reasoning. arXiv preprint , 2023. [47] Se eun Yoon, Zhankui He, Jessica Maria Echterhoff, and Julian McAuley. Evaluating large language models as generative user simulators for conversational recommendation. In Proceedings of the 2024 Conference of the North ... | https://arxiv.org/abs/2505.21907v1 |
signatures of subjective interest. Proceedings of the National Academy of Sciences , 120(12), 2023. [65] Nathan Lee, Arun Suggala, et al. Active preference learning for large language models. arXiv preprint arXiv:2310.XXXX , 2023. [66] Weiyan Xu, Abigail See, et al. When to show a suggestion? integrating human feedback... | https://arxiv.org/abs/2505.21907v1 |
Huang He, Fan Wang, Hua Wu, and Haifeng Wang. Plato: Pre-trained dialogue generation model with discrete latent variable. arXiv preprint arXiv:1910.07931 , 2019. [84] Yuwei Wu, Xuezhe Ma, and Diyi Yang. Personalized response generation via generative split memory network. InProceedings of the 2021 Conference of the Nor... | https://arxiv.org/abs/2505.21907v1 |
Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. Advances in neural information processing systems , 30, 2017. [99] Shun Zhang, Zhenfang Chen, Sunli Chen, Yikang Shen, Zhiqing Sun, and Chuang Gan. Improving reinforcement learning from human feedback with efficient reward model e... | https://arxiv.org/abs/2505.21907v1 |
Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason Weston. Iterative reasoning preference optimization. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems , volume 37, pages 116617–116637. Curran ... | https://arxiv.org/abs/2505.21907v1 |
arXiv:2505.21908v1 [cs.LG] 28 May 2025. Reinforcement Learning for Out-of-Distribution Reasoning in LLMs: An Empirical Study on Diagnosis-Related Group Coding. Hanyin Wang1,2, Zhenbang Wu2, Gururaj Kolar3, Hariprasad Korsapati1, Brian Bartlett1, Bryan Hull4, Jimeng Sun2. 1Mayo Clinic Health System, 2University of Illinois Urbana-Champa... | https://arxiv.org/abs/2505.21908v1 |
DRG codes; (2) advanced clinical reasoning required to link diagnoses with hospital resource use and disease severity; and (3) strict hierarchical rules governing DRG assignment. Recent advances in reasoning models, such as OpenAI-o1 [16] and DeepSeek-R1 [13], have introduced a paradigm shift in LLM post-training. ... | https://arxiv.org/abs/2505.21908v1 |
varying degrees of success [ 43,15,36]. One line of work has proposed approaches to address biases and improve sample efficiency in the original GRPO algorithm [ 41,25,23]. Another active research area focuses on curriculum and staged learning strategies during reasoning-oriented RL [44, 32, 37, 17, 4]. 3 Large-scale R... | https://arxiv.org/abs/2505.21908v1 |
41], which discards prompts that yield uniformly correct or incorrect completions. Given the data scarcity in clinical domains, we instead maximize the utility of each training example by resampling rather than discarding. Intervening on Cognitive Behaviors. Cognitive behaviors, such as verification and backtracking, a... | https://arxiv.org/abs/2505.21908v1 |
Variants. In Equation 1, dividing by $|o_i|$ during group-level advantage normalization introduces a length bias, diminishing the influence of longer completions on the policy gradient. To address this, DAPO [41] uses $\sum_{i=1}^{G} |o_i|$ as the denominator, while Dr. GRPO [25] adopts a constant normalization factor. Additionally, Dr... | https://arxiv.org/abs/2505.21908v1 |
prompt engineering, manual inspection by a domain expert revealed that the dataset exhibits correct reasoning logic (e.g., analyzing the principal diagnosis first) but frequently contains factual errors (e.g., misclassifying a condition’s CC/MCC status). We also included the complete list of original V34.0 MS-DRG codes in a ... | https://arxiv.org/abs/2505.21908v1 |
and GRPO on the DRG-Small subset (N=46,758). This contrasts with Deepseek-R1-style training, where only minimal SFT precedes RL. Across all data splits, GRPO consistently and significantly improved Pass@1 over the SFT baseline by an absolute margin of approximately 10 percentage points (see Figure 5). We observed that ... | https://arxiv.org/abs/2505.21908v1 |
[Figure 7 panels: (B) Training Curve, reward score over training steps for Positive Resampling, Neutral Resampling, and Vanilla GRPO; (C) Training Time in hours: 406, 272, and 71 for the three methods.] Figure 7: Dynamic Resampling. Despite maintaining a high reward variance during training (A), dynamic resampling perfo... | https://arxiv.org/abs/2505.21908v1 |
direct prediction strategy, where outputting the DRG code first leverages implicit knowledge in the model’s latent space, outperforming explicit CoT-grounded reasoning. These findings also align with recent studies [ 26,6], which suggest that CoT and extended reasoning may not always be necessary for reasoning models, ... | https://arxiv.org/abs/2505.21908v1 |
5.4 Prerequisites for Effective GRPO Training We explored prerequisites for effective GRPO training, finding that vanilla Qwen2.5 models (base and instruct) failed to produce correct DRG codes with GRPO alone, despite quickly adopting the target reasoning format (Figure 10 A). Post-SFT, all models showed improved RL pe... | https://arxiv.org/abs/2505.21908v1 |
https://www.cms.gov/icd10m/version34-fullcode-cms/fullcode_cms/P0001.html, 2016. [9] H. Dong, M. Falis, W. Whiteley, B. Alex, J. Matterson, S. Ji, J. Chen, and H. Wu. Automated clinical coding: what, why, and where we are? NPJ Digital Medicine, 5(1):159, 2022. [10] Hugging Face. Open R1: A fully open reproduction of deep...
4(1):103, 2021. [25] Z. Liu, C. Chen, W. Li, P. Qi, T. Pang, C. Du, W. S. Lee, and M. Lin. Understanding R1-Zero-like training: A critical perspective, 2025. URL https://arxiv.org/abs/2503.20783. [26] W. Ma, J. He, C. Snell, T. Griggs, S. Min, and M. Zaharia. Reasoning models can be effective without thinking. arXiv ...
arXiv:2311.13735, 2023. [41] Q. Yu, Z. Zhang, R. Zhu, Y. Yuan, X. Zuo, Y. Yue, T. Fan, G. Liu, L. Liu, X. Liu, et al. DAPO: An open-source LLM reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025. [42] Y. Yue, Z. Chen, R. Lu, A. Zhao, Z. Wang, Y. Yue, S. Song, and G. Huang. Does reinforce...
B.5 Dynamic Resampling Details
C Additional Results
C.1 Experiments with GRPO Hyperparameters
C.2 Accuracy with RL Training in Ablation Studies ...
requiring reasoning content to be enclosed within <think></think> tags and the final answer (DRG code) within <answer></answer> tags. The reward is defined as:

$$S_{\text{format}} = \begin{cases} 0, & \text{if the response format is correct} \\ -2, & \text{otherwise} \end{cases}$$

Accuracy Reward. The Accuracy Reward evaluates the correctness of the DRG code, and is applied only i...
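The format reward above can be sketched as a small checker. The regular expression below is an assumption about how strictly the tag layout is enforced (e.g., whether surrounding text is tolerated); the paper's actual verifier may differ.

```python
import re

# Accepts exactly one <think>...</think> block followed by one
# <answer>...</answer> block, with only whitespace between and after.
THINK_ANSWER = re.compile(
    r"^<think>.*?</think>\s*<answer>.*?</answer>\s*$", re.DOTALL
)

def format_reward(response: str) -> float:
    """S_format: 0 if the response wraps its reasoning in <think></think>
    and the final DRG code in <answer></answer>, else -2."""
    return 0.0 if THINK_ANSWER.match(response) else -2.0
```

A response such as `<think>...analysis...</think><answer>DRG 470</answer>` scores 0, while any malformed layout scores -2.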
were defined as those with zero reward variance, an accuracy score of −0.5 under the dense reward, and 0 under the strict reward. We then reran the experiment from the SFT model after excluding these filtered cases. A.7 Staged Learning For staged learning, we divided the training process into three stages, each with a...
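The hard-case criterion above can be expressed as a small predicate. This is a sketch of our reading of the filter; the function name and input shapes are illustrative, not the paper's implementation.

```python
import statistics

def is_hard_case(dense_rewards, strict_rewards):
    """Hard-case filter sketch: a training example is 'hard' when its group
    of sampled completions shows zero reward variance, with every completion
    scoring -0.5 under the dense accuracy reward and 0 under the strict one.

    dense_rewards / strict_rewards: per-completion accuracy scores for one
    example's sampled group (illustrative representation).
    """
    return (statistics.pvariance(dense_rewards) == 0.0
            and all(r == -0.5 for r in dense_rewards)
            and all(r == 0.0 for r in strict_rewards))
```

Examples passing this predicate contribute no gradient signal under GRPO (zero within-group variance), which is why excluding them is plausible.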
Section C.1. For all experiments in Sections 5.2 to 5.3, we used a GRPO learning rate of $3\times10^{-6}$ with a constant learning rate scheduler and a warmup ratio of 0.1. B.4 Evaluation Details We used vLLM [19] for inference during evaluation. All evaluations were conducted on the full test set (N = 26,244). We set the tempe...
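For reference, the Pass@1 metric reported throughout can be computed as a simple exact-match rate over the test set. This reflects our reading of the metric (one prediction per case, exact DRG-code match); the helper name is illustrative.

```python
def pass_at_1(predictions, gold):
    """Pass@1 sketch: fraction of test cases whose single predicted DRG code
    exactly matches the gold label."""
    assert len(predictions) == len(gold), "one prediction per test case"
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)
```

With the full test set (N = 26,244), a 10-percentage-point absolute gain corresponds to roughly 2,600 additional correctly coded discharges.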
63.9 58.8 64.1 59.4
75% SFT:
SFT: 46.5 76.2 52.8 59.3 77.6 64.5 53.3 80.7 59.6
Vanilla GRPO: 53.5 64.9 54.6 64.0 71.4 65.1 59.2 69.7 60.5
Best Config: 54.6 60.6 54.9 64.4 68.2 64.8 59.6 65.3 60.2
Best Config - KL Decay: 54.0 65.3 55.0 63.8 71.0 64.9 58.7 69.4 60.0
Best Config + Remove Hard Case: 54.4 58.1 54.5 63.8 66.1 64.0...
experiments without KL decay or with curriculum learning for the 50% SFT group, given the limited performance observed with vanilla GRPO in that setting. D Additional Discussion D.1 Clinical Applications of Automated DRG Coding with Reasoning In discussions with domain experts, DRG-Sapphire shows significant potential ... | https://arxiv.org/abs/2505.21908v1 |
task of DRG coding. Extending our approach to other medical-domain tasks, or even diverse OOD tasks across different domains, would be valuable. In particular, it would be compelling to investigate whether scaling RL methods across multiple tasks and domains encourages exploration of more diverse reasoning pathways bey... | https://arxiv.org/abs/2505.21908v1 |
diagnoses, procedures performed, age, discharge status, and other factors. The goal is to ensure fair and consistent hospital reimbursement based on the severity of the illness and the complexity of care required. CC and MCC in MS-DRG: • CC (Complication or Comorbidity): A secondary diagnosis that increases the comple...