providing robust support for system optimization and reliable deployment [207].

C. Chapter Summary

This chapter provides a comprehensive summary of the key research challenges and future directions for LAMs and Agentic AI in future intelligent communication systems. It systematically reviews the core limitations hinder...
https://arxiv.org/abs/2505.22311v1
in Proceedings of NAACL-HLT, vol. 1, 2019, p. 2. [3] A. Radford, “Improving language understanding by generative pre-training,” 2018. [4] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., “Language models are unsupervised multitask learners,” OpenAI Blog, vol. 1, no. 8, p. 9, 2019. [5] T. B. Bro...
https://arxiv.org/abs/2505.22311v1
C. Chen, H. Wu, D. Yuan, L. Jiang, D. Wu et al., “Large language model (LLM) for telecommunications: A comprehensive survey on principles, key techniques, and opportunities,” arXiv preprint arXiv:2405.10825, 2024. [21] G. O. Boateng, H. Sami, A. Alagha, H. Elmekki, A. Hammoud, R. Mizouni, A. Mourad, H. Otrok, J. Ben...
https://arxiv.org/abs/2505.22311v1
preprint arXiv:2309.11811, 2023. [34] J. Zhang, J. Li, Z. Wang, L. Shi, S. Jin, W. Chen, and H. V. Poor, “Decision transformers for wireless communications: A new paradigm of resource management,” IEEE Wireless Communications, 2025. [35] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthine...
https://arxiv.org/abs/2505.22311v1
audio semantic communication,” in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Seoul, Korea: IEEE, April 2024, pp. 13136–13140. [50] Y. Zeng, X. He, X. Chen, H. Tong, Z. Yang, Y. Guo, and J. Hao, “DMCE: Diffusion model channel enhancer for multi-user semanti...
https://arxiv.org/abs/2505.22311v1
Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale et al., “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288, 2023. [64] A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Ya...
https://arxiv.org/abs/2505.22311v1
Hu, T. Zhang, F. Wu et al., “Instruction tuning for large language models: A survey,” arXiv preprint arXiv:2308.10792, 2023. [79] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou et al., “Chain-of-thought prompting elicits reasoning in large language models,” Advances in Neural Informatio...
https://arxiv.org/abs/2505.22311v1
arXiv preprint arXiv:2407.09424, 2024. [93] R. Nikbakht, M. Benzaghta, and G. Geraci, “TSpec-LLM: An open-source dataset for LLM understanding of 3GPP specifications,” arXiv preprint arXiv:2406.01768, 2024. [94] F. Jiang, W. Zhu, L. Dong, K. Wang, K. Yang, C. Pan, and O. A. Dobre, “CommGPT: A graph and retrieval-aug...
https://arxiv.org/abs/2505.22311v1
CIRP, vol. 99, pp. 650–655, 2021. [107] Y. Han, C. Liu, and P. Wang, “A comprehensive survey on vector database: Storage and retrieval technique, challenge,” arXiv preprint arXiv:2310.11703, 2023. [108] K. Ueki, “Survey of visual-semantic embedding methods for zero-shot image retrieval,” in 2021 20th IEEE Internati...
https://arxiv.org/abs/2505.22311v1
solutions,” arXiv preprint arXiv:2503.13819, 2025. [125] B. Xiao, B. Kantarci, J. Kang, D. Niyato, and M. Guizani, “Efficient prompting for LLM-based generative Internet of Things,” IEEE Internet of Things Journal, 2024. [126] Y. Otoum, A. Asad, and A. Nayak, “LLM-based threat detection and prevention framework fo...
https://arxiv.org/abs/2505.22311v1
Workshops (GC Wkshps). Kuala Lumpur, Malaysia: IEEE, December 2023, pp. 2049–2054. [141] V. Komanduri, S. Estropia, S. Alessio, G. Yerdelen, T. Ferreira, G. P. Roldan, Z. Dong, and R. Rojas-Cessa, “Optimizing LLM prompts for automation of network management: A user’s perspective,” in 2025 International Conference on ...
https://arxiv.org/abs/2505.22311v1
A survey,” Journal of Electronic Science and Technology, p. 100301, 2025. [156] W. Lee and J. Park, “LLM-empowered resource allocation in wireless communications systems,” arXiv preprint arXiv:2408.02944, 2024. [157] H. Du, G. Liu, Y. Lin, D. Niyato, J. Kang, Z. Xiong, and D. I. Kim, “Mixture of experts for intellig...
https://arxiv.org/abs/2505.22311v1
and alignment,” IEEE Wireless Communications, vol. 31, no. 6, pp. 63–71, 2024. [171] W. Yang, Z. Xiong, Y. Yuan, W. Jiang, T. Q. S. Quek, and M. Debbah, “Agent-driven generative semantic communication with cross-modality and prediction,” IEEE Transactions on Wireless Communications, vol. 24, no. 3, pp. 2233–2248, 20...
https://arxiv.org/abs/2505.22311v1
UAV mission planning via multi-agent ReAct and vision-language reasoning,” arXiv preprint arXiv:2505.07236, 2025. [186] B. Liu, S. Mazumder, E. Robertson, and S. Grigsby, “AI autonomy: Self-initiated open-world continual learning and adaptation,” AI Magazine, vol. 44, no. 2, pp. 185–199, 2023. [187] D. Yu, X. Zhang...
https://arxiv.org/abs/2505.22311v1
. Song, S. Qi, R. Rui, and W. Zhang, “AgentNet: Decentralized evolutionary coordination for LLM-based multi-agent systems,” arXiv preprint arXiv:2504.00587, 2025. [202] A. Saleh, S. Tarkoma, P. K. Donta, N. H. Motlagh, S. Dustdar, S. Pirttikangas, and L. Lovén, “UserCentrix: An agentic memory-augmented AI framewor...
https://arxiv.org/abs/2505.22311v1
arXiv:2505.22312v1 [cs.LG] 28 May 2025

Skywork Open Reasoner 1 Technical Report

Jujie He∗,†, Jiacai Liu∗, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou

Skywork AI, Kunl...
https://arxiv.org/abs/2505.22312v1
3.2.5 Adaptive Entropy Control
3.2.6 No KL Loss
4 Empirical Studies on Mitigating Policy Entropy Collapse
4.1 Ablation Setup ...
https://arxiv.org/abs/2505.22312v1
8 Experiments
8.1 Training and Evaluation Details
8.2 Evaluation Results of Skywork-OR1 models
9 Conclusion

1 Intro...
https://arxiv.org/abs/2505.22312v1
sized models, scoring 69.8 on AIME24, 52.3 on AIME25, and 43.6 on LiveCodeBench. We conducted exhaustive ablation experiments to validate the effectiveness of the core components in the training pipeline. Balancing exploration and exploitation is crucial in RL training [22]. We conducted a comprehensive study on prema...
https://arxiv.org/abs/2505.22312v1
test performance. Nevertheless, entropy still converges faster than in on-policy training (Section 4.5).

Organization. In Section 2, we introduce the preliminaries of several important policy optimization methods in RL. Section 3 elaborates on our training pipeline, including comprehensive ablation studies that validate...
https://arxiv.org/abs/2505.22312v1
generally uses GAE [19] to estimate the token-level advantage A^{π_k}(s_t^i, a_t^i).

Group Relative Policy Optimization (GRPO). Suppose M i.i.d. responses y_{i,1}, ..., y_{i,M} are sampled for each prompt x_i. GRPO [21] estimates the token-level advantage using the group-normalized rewards and introduces an additional length normali...
https://arxiv.org/abs/2505.22312v1
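The excerpt above describes GRPO's group-normalized advantage. As a minimal illustrative sketch (not the paper's code; the function and variable names are hypothetical):

    import numpy as np

    def grpo_token_advantages(rewards):
        # Hypothetical sketch of GRPO's advantage estimate: each of the M
        # responses sampled for one prompt gets (r - mean) / std as its
        # advantage, broadcast to every token of that response.
        r = np.asarray(rewards, dtype=np.float64)
        return (r - r.mean()) / (r.std() + 1e-8)  # epsilon guards degenerate groups

    # Example: binary accuracy rewards for M = 4 responses to one prompt.
    print(grpo_token_advantages([1.0, 0.0, 0.0, 1.0]))  # approx. [1, -1, -1, 1]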
process into multiple stages. We found that multi-stage training significantly reduces computational costs while preserving scalability, as supported by the evidence presented in Section 3.2.2.

2. Advantage Mask for Truncated Responses. To address potential noise in training signals when outcomes cannot be derived from ...
https://arxiv.org/abs/2505.22312v1
Loss. We found that including a KL loss term hinders performance gains, particularly in the later stages of multi-stage training. Therefore, we omit the KL loss from our training recipe. See Section 3.2.6 for further discussion.

3.2 Effectiveness of MAGIC Components

In this section, we present results from extensive ...
https://arxiv.org/abs/2505.22312v1
described in Section 6 for all subsequent exploration experiments.

3.2.2 Multi-Stage Training

One of the major challenges in optimizing long Chain-of-Thought (CoT) models with RL is managing excessively long outputs, which can lead to slow convergence and high training variance. Inspired by DeepScaleR [17], we incorpo...
https://arxiv.org/abs/2505.22312v1
T. When response lengths exceed T, the outcomes cannot be derived, and accuracy rewards are set to 0, resulting in negative advantages for these truncated responses, which may introduce bias. To mitigate this issue, we investigated several advantage mask strategies aimed at reducing the influence of truncated responses...
https://arxiv.org/abs/2505.22312v1
responses to avoid erroneously receiving a zero accuracy reward – i.e., increasing p^π_{non-trunc}(x) – we explored various advantage mask strategies. These strategies were designed to mitigate the impact of noisy training signals introduced by truncated samples. We conducted ablation experiments using DeepSeek-R1-Distill-...
https://arxiv.org/abs/2505.22312v1
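As a concrete illustration of the advantage-mask idea discussed in the two excerpts above, here is a minimal sketch with assumed names (the paper's Adv-Mask-Before/After variants differ in where the mask is applied):

    import numpy as np

    def mask_truncated_advantages(advantages, truncated):
        # Zero out the (typically negative) advantages of responses that hit
        # the generation limit, so their noisy signal contributes no gradient.
        adv = np.asarray(advantages, dtype=np.float64)
        keep = ~np.asarray(truncated, dtype=bool)
        return adv * keep

    # Example: the third response was truncated and is excluded from the update.
    print(mask_truncated_advantages([1.2, -0.8, -1.0], [False, False, True]))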
responses are typically not truncated (e.g., 32K) – shows no improvement. This may be attributed to the smaller effective training batch size caused by the increased clip ratio under the Adv-Mask-Before strategy. The behavior of Adv-Mask-After serves as an intermediate point between Adv-Mask-Before and No-Adv-Mask. Adv...
https://arxiv.org/abs/2505.22312v1
[Table 3 fragment: shared hyperparameters in Ablation Experiments 4, including KL loss, batch sizes 64/32/16, Stage I 16K context, and target entropy 0.2.]

In our experiments, we identified an additional entropy-related phenomenon: when a low temperature is used (e.g., 0.6), the model either begins with extremely low entropy or its entropy quickly collapses to near z...
https://arxiv.org/abs/2505.22312v1
the original policy loss defined in (3.1), β is the KL coefficient. We first run a stage 1 experiment with β = 1e-3 based on DeepSeek-R1-Distill-Qwen-7B (reference policy). Then, in stage 2, we conducted ablations based on the stage 1 checkpoint, comparing β = 1e-3 with β = 0. The other hyperparameters are reported in Table 4...
https://arxiv.org/abs/2505.22312v1
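A minimal sketch of the KL-regularized objective referenced above, assuming a simple sample-based KL estimate (the estimator choice and names are assumptions; setting β = 0 recovers the no-KL recipe the excerpt adopts):

    import torch

    def total_loss(pg_loss, policy_logprobs, ref_logprobs, beta=1e-3):
        # Sample-based estimate of KL(pi || pi_ref) from per-token
        # log-probabilities; beta is the KL coefficient discussed above.
        kl = (policy_logprobs - ref_logprobs).mean()
        return pg_loss + beta * kl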
size – affect the model's policy entropy during RL training.

• Policy update. We also investigate how different components of the policy update influence entropy. In this section, we focus primarily on the number of stochastic gradient descent (SGD) steps per training step and the use of additional entropy control metho...
https://arxiv.org/abs/2505.22312v1
5e-3, and found that the entropy loss with the higher coefficient α_k, i.e., 5e-3, more effectively prevents entropy collapse and achieves higher test performance. [Figure caption: Left: Accuracy curves on test benchmarks during RL training. Right: Entropy of generated responses during RL training.]

4.2 Premature Entropy Collapse

General...
https://arxiv.org/abs/2505.22312v1
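To make the entropy-loss comparison above concrete, here is a sketch of an entropy bonus with coefficient α_k (illustrative only; the exact loss form in the paper may differ):

    import torch
    import torch.nn.functional as F

    def entropy_loss(logits, alpha=5e-3):
        # Mean per-token entropy of the policy distribution, returned with a
        # negative sign so that minimizing the total loss increases entropy.
        logp = F.log_softmax(logits, dim=-1)
        entropy = -(logp.exp() * logp).sum(dim=-1).mean()
        return -alpha * entropy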
On-policy update applies a single SGD step to the entire rollout batch, whereas off-policy update implements multiple SGD steps through rollout batch decomposition and reuse. The rollout batch is partitioned into D_R/D_T mini-batches, with each mini-batch undergoing an independent SGD step. Then, one can iterate over the r...
https://arxiv.org/abs/2505.22312v1
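The on-policy vs. off-policy distinction above can be sketched as a simple mini-batch schedule over a rollout batch of size D_R with training batch size D_T (a hypothetical helper; one split recovers the on-policy update):

    def minibatch_slices(d_r, d_t):
        # Partition a rollout batch of size D_R into D_R / D_T mini-batches,
        # each receiving one independent SGD step.
        assert d_r % d_t == 0, "D_T must divide D_R"
        return [(i * d_t, (i + 1) * d_t) for i in range(d_r // d_t)]

    print(minibatch_slices(64, 64))  # on-policy: one step over the full batch
    print(minibatch_slices(64, 16))  # off-policy: four smaller SGD steps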
due to both the smaller mini-batch size – leading to greater gradient variance – and the presence of off-policy data. To better understand which factor contributes more significantly, we conducted the following ablation experiments.

Ablation Experiments 7: On-policy vs. Off-policy with the Same SGD Data Size D_T. Cons...
https://arxiv.org/abs/2505.22312v1
coefficient can mitigate entropy collapse and improve test performance. However, we later observed that the effectiveness of entropy regularization is highly sensitive to both the choice of coefficient and the characteristics of the training data, making it difficult to select an optimal coefficient in advance. This mo...
https://arxiv.org/abs/2505.22312v1
in Figure 21. As previously analyzed, increasing N_SGD accelerates policy convergence and leads to degraded test performance. As shown in Figure 21, applying adaptive entropy control successfully prevents entropy collapse and results in higher test performance. However, it is worth noting that, although the coefficient is adjusted...
https://arxiv.org/abs/2505.22312v1
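A toy sketch of the adaptive entropy control motivated above: nudge the coefficient toward keeping measured entropy near a target (the update rule and constants here are assumptions, not the paper's exact schedule; the 0.2 target echoes the hyperparameter fragment earlier):

    def adapt_entropy_coeff(coeff, measured_entropy, target=0.2, delta=1e-3):
        # Increase the entropy-loss coefficient when entropy falls below the
        # target (to fight collapse), and decay it toward zero otherwise.
        if measured_entropy < target:
            return coeff + delta
        return max(coeff - delta, 0.0)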
ablation experiments as supporting evidence. The training process of online RL algorithms can generally be divided into two distinct phases: data rollout and policy update (which includes both forward and backward passes). Let t_R, t_T, and t_O denote the time spent on rollout, policy update, and other operations (e.g., rewa...
https://arxiv.org/abs/2505.22312v1
(..., D_T, N_reuse) | total t_total | rollout t_R | policy update t_T | others t_O | t_R/t_total | t_T/t_total
(1, 64, 64, 1) | 116 | 90 | 8 | 18 | 77.6% | 6.9%
(2, 64, 32, 1) | 114 | 87 | 10 | 17 | 76.3% | 8.7%
(4, 64, 16, 1) | 118 | 90 | 12 | 16 | 76.3% | 10.2%

Table 8: Detailed time usage for three experiments from Ablation Experiments 6 over 1000 training steps. All the experiments utilized ...
https://arxiv.org/abs/2505.22312v1
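The fractions in the reconstructed Table 8 above follow directly from the decomposition t_total = t_R + t_T + t_O; a quick check against the reported timings:

    # Recompute the reported fractions from the raw timings in Table 8.
    rows = {
        "(1, 64, 64, 1)": (116, 90, 8, 18),
        "(2, 64, 32, 1)": (114, 87, 10, 17),
        "(4, 64, 16, 1)": (118, 90, 12, 16),
    }
    for cfg, (t_total, t_r, t_t, t_o) in rows.items():
        assert t_total == t_r + t_t + t_o
        print(cfg, f"t_R/t_total = {t_r / t_total:.1%}, t_T/t_total = {t_t / t_total:.1%}")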
training resources can lead to better test performance with similar training time consumption.

Larger Group Size, Better Test Performance. To investigate how the group size affects the training dynamics, we conducted the following ablation experiments.

Ablation Experiments 12: The Impact of Group Size (gs). Consider th...
https://arxiv.org/abs/2505.22312v1
that contain external URLs or potential figures in the problem statement.

• We then perform cross-dataset deduplication to eliminate potentially duplicated problems from similar sources and decontaminate against AIME24 and AIME25 problems, following DeepScaleR's deduplication scheme. This process yields approximately 105K math p...
https://arxiv.org/abs/2505.22312v1
composed of 5 identical squares. The number of triangles that can be formed using the 12 points in the figure as vertices is.

10. (12 points) The figure is composed of 5 identical squares. The number of triangles that can be formed using the 12 points in the figure as vertices is. (NuminaMath-1.5, Olympiads)

Interesting...
https://arxiv.org/abs/2505.22312v1
in a text solution is correct using the following steps:

• Extract the answer that appears after the reasoning process.
• Use Math-Verify's parser to parse the answer and obtain its string representation.
• If the string representation directly matches the gold answer, return True; otherwise, fall back to Math-Verify's...
https://arxiv.org/abs/2505.22312v1
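The checking procedure above can be sketched with the Math-Verify package cited in these excerpts (huggingface/Math-Verify); this assumes its public parse/verify API, and the answer-extraction step is simplified:

    from math_verify import parse, verify

    def check_answer(extracted_answer: str, gold_answer: str) -> bool:
        # Parse both sides, try a direct string match first, then fall back
        # to Math-Verify's semantic verification, mirroring the steps above.
        pred = parse(extracted_answer)
        gold = parse(gold_answer)
        return str(pred) == str(gold) or verify(gold, pred)

    print(check_answer("$\\frac{1}{2}$", "0.5"))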
3.2.6. Please refer to Section 3.1 for more details on the policy update procedure. All experiments use multi-stage training. We report the detailed configuration for each training stage in Table 10, Table 11, and Table 12. The released checkpoints correspond to step 2160 for Skywork-OR1-Math-7B, step 1320 for Skywork-...
https://arxiv.org/abs/2505.22312v1
the reasoning capabilities of long CoT models. Building upon the DeepSeek-R1-Distill model series, our RL approach achieves significant performance improvements on various mathematical and coding benchmarks. The Skywork-OR1-32B model outperforms both DeepSeek-R1 and Qwen3-32B on AIME24 and AIME25, while delivering comp...
https://arxiv.org/abs/2505.22312v1
robust mathematical expression evaluation system. https://github.com/huggingface/Math-Verify, 2025. Version 0.6.1. [12] Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu-Hong Hoi. CodeRL: Mastering code generation through pretrained models and deep reinforcement learning. In Advances in Neura...
https://arxiv.org/abs/2505.22312v1
Zhenyu Duan, Qi An, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, Haosheng Zou, Yongchao Deng, Shousheng Jia, and Xiangzheng Zhang. Light-R1: Surpassing R1-Distill from scratch with $1000 through curriculum SFT & DPO, 2025. [30] Yunhui Xia, Wei Shen, Yan Wang, Jason Klein Liu, Huifeng Sun, Siyue Wu, Jian Hu, and Xiaolong...
https://arxiv.org/abs/2505.22312v1
Advancing Multimodal Reasoning via Reinforcement Learning with Cold Start

Lai Wei1,3, Yuting Li1, Kaipeng Zheng1,2, Chen Wang3, Yue Wang3, Linghe Kong1, Lichao Sun4, Weiran Huang1,2∗

1School of Computer Science, Shanghai Jiao Tong University; 2Shanghai Innovation Institute; 3Zhongguancun Academy; 4Lehigh University

[Teaser figure residue: benchmark names MathVision, MathVerse...]
https://arxiv.org/abs/2505.22334v1
improved reasoning capabilities. Our analysis reveals that, while RL may increase the frequency of such a reflective pattern, it may be more of a mirage than an indicator of genuine improvements in reasoning capability. This finding challenges the prevailing assumptions about emergent reasoning in MLLMs and sugg...
https://arxiv.org/abs/2505.22334v1
insights, we present the first comprehensive study of the impact of supervised fine-tuning as a cold start for reinforcement learning in the multimodal domain, exploring diverse SFT strategies to enhance reasoning abilities.

• Empirical results demonstrate that an SFT-based cold start provides a robust foundation for RL...
https://arxiv.org/abs/2505.22334v1
Specifically, the accuracy is 20.17 for Qwen2.5-VL, but only 18.98 for VLAA-Thinker and 18.68 for MM-EUREKA in these “aha moment” cases. These findings suggest that the so-called “aha moment” may be more of a mirage than a reliable indicator of genuine reasoning ability. While GRPO can increase the frequency of such pa...
https://arxiv.org/abs/2505.22334v1
reflection-based CoT settings.

Reflection-CoT (v1): In the rejection sampling process, we obtain both correct and incorrect responses, denoted as y+ and y−. We explore a simple two-step reasoning pattern inspired by prior work [9]:

[Figure residue: pipeline diagram showing Teacher MLLM, Distillation, SFT Dataset, Supervised Fine-Tuning (Cold Start), and Seed Dataset...]
https://arxiv.org/abs/2505.22334v1
69.20 | 65.29 | 55.72
Skywork R1V [36] | 38B | 39.31 | 40.91 | 67.50 | 60.06 | 51.95
Open-Source 7B Multi-Modal Models
Qwen2.5-VL [2] | 7B | 24.87 | 43.83 | 66.30 | 62.87 | 49.47
MM-Eureka [34] | 7B | 28.36 | 49.52 | 70.40 | 66.03 | 53.58
OpenVLThinker [10] | 7B | 25.30 | 42.79 | 64.10 | 63.91 | 49.03
ThinkLite-VL [50] | 7B | 27.71 | 47.41 | 69.00 | 67.99 | 53.03
VLAA-Thinker [4] | 7...
https://arxiv.org/abs/2505.22334v1
multimodal mathematical reasoning benchmarks: MathVision [48], MathVista [31], MathVerse [64], and We-Math [37]. These benchmarks offer comprehensive evaluations with diverse problem types, including geometry, charts, and tables, featuring multi-subject math problems and meticulously categorized visual math challen...
https://arxiv.org/abs/2505.22334v1
+ RL | 25.10 | 39.95 | 66.50 | 63.62 | 48.79
+ SFT and RL | 27.04 | 44.03 | 68.20 | 64.08 | 50.84
Qwen2.5-VL-7B [2] | 24.87 | 43.83 | 66.30 | 62.87 | 49.47
+ SFT | 28.62 | 46.90 | 71.20 | 67.76 | 53.62
+ RL | 29.80 | 49.29 | 73.50 | 67.82 | 55.10
+ SFT and RL | 30.26 | 48.58 | 73.40 | 70.40 | 55.66

Table 4: Ablation study on different cold-start strategies for Qwen2.5-VL-3B. We...
https://arxiv.org/abs/2505.22334v1
[Figure 4: Comparison of model performance when trained on data with “aha moment” patterns (Reflection-CoT v2) versus randomly selected data; panels show Accuracy (%) on MathVision, MathVerse, MathVista, We-Math, and Average for three series: Base Model, Data with Aha Moment, Randomly Selected Data.] 32B...
https://arxiv.org/abs/2505.22334v1
reasoning ability. Even when trained exclusively on incorrect data, the model outperforms the base model after the cold start phase and continues to benefit from subsequent RL. These results suggest that the structural patterns present in Chain-of-Thought reasoning can be learned independently of solution correctness, ...
https://arxiv.org/abs/2505.22334v1
Haibin Wang, Yaliang Li, Xiuyu Sun, Yichen Qian, Bolin Ding, and Jingren Zhou. Text-to-SQL empowered by large language models: A benchmark evaluation. arXiv preprint arXiv:2308.15363, 2023. [13] Google. Introducing Gemini 2.0: our new AI model for the agentic era, 2024. URL https://blog.google/technology/google-deepm...
https://arxiv.org/abs/2505.22334v1
geometry problem solving with formal language and symbolic reasoning. arXiv preprint arXiv:2105.04165, 2021. [28] Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, and Song-Chun Zhu. IconQA: A new benchmark for abstract diagram understanding and visual language reasoning. arXiv p...
https://arxiv.org/abs/2505.22334v1
2025. Accessed: 2025-02-15. [42] Wenhao Shi, Zhiqiang Hu, Yi Bin, Junhua Liu, Yang Yang, See-Kiong Ng, Lidong Bing, and Roy Ka-Wei Lee. Math-LLaVA: Bootstrapping mathematical reasoning for multimodal large language models. arXiv preprint arXiv:2406.17294, 2024. [43] Qwen Team. QVQ: To see the world with wisdom, Decemb...
https://arxiv.org/abs/2505.22334v1
Li Shen, et al. Mulberry: Empowering MLLM with o1-like reasoning and reflection via collective Monte Carlo tree search. arXiv preprint arXiv:2412.18319, 2024. [59] Zheng Yaowei, Lu Junting, Wang Shenzhi, Feng Zhangchi, Kuang Dongdong, and Xiong Yuwen. EasyR1: An efficient, scalable, multi-modality RL training framewor...
https://arxiv.org/abs/2505.22334v1
• Logical flaws or inconsistencies
• Missing perspectives or alternatives
• Areas for improvement
• Suggestions for a better version
• Steering toward the given answer

The critical comments should:
• Be specific and actionable
• Reference particular parts of the draft
• Suggest concrete improvements
• Consider differen...
https://arxiv.org/abs/2505.22334v1
time using 8 A800 GPUs for training Qwen2.5-VL-7B with GRPO is around 12 hours.

C Limitations

While our study provides comprehensive insights into the interplay between supervised fine-tuning (SFT) and reinforcement learning (RL) for multimodal reasoning, several limitations remain. Firstly, we primarily evaluate models at...
https://arxiv.org/abs/2505.22334v1
B and E.
• ∠BCD = 2×∠BAD.
2. Use the property of cyclic quadrilaterals: In a cyclic quadrilateral, the sum of the opposite angles is 180°. Therefore, ∠BAD + ∠BCD = 180°.
3. Express ∠BCD in terms of ∠BAD: Given ∠BCD = 2×∠BAD, substitute this into the equation:
∠BAD + 2×∠BAD = 180°,
3×∠BAD = 180°,
∠BAD = 60°,
∠BCD = 2×60° = ...
https://arxiv.org/abs/2505.22334v1
of 30.
• We need to find the length of the altitude h of this triangle.
2. Find the Side Length of the Triangle:
• An equilateral triangle has all three sides of equal length.
• Let the side length of the triangle be s.
• The perimeter of the triangle is given by: Perimeter = s + s + s = 3s.
• We are given that the perimete...
https://arxiv.org/abs/2505.22334v1
arXiv:2505.22338v1 [cs.CL] 28 May 2025

TEXT2GRAD: REINFORCEMENT LEARNING FROM NATURAL LANGUAGE FEEDBACK

Hanyang Wang1,∗, Lu Wang2, Chaoyun Zhang2, Tianjun Mao3, Si Qin2, Qingwei Lin2, Saravan Rajmohan2, Dongmei Zhang2
1University of Chicago; 2Microsoft; 3Fudan University
∗Work done during an internship at Microsoft
Correspo...
https://arxiv.org/abs/2505.22338v1
these methods have led to impressive improvements in model helpfulness and safety, they also introduce limitations. By reducing rich, contextual critiques to a single scalar reward, RLHF discards token-level information about what was right or wrong, and where. This makes credit assignment imprecise, slows convergence,...
https://arxiv.org/abs/2505.22338v1
code generation, and question answering benchmarks. TEXT2GRAD demonstrates that natural language feedback, when properly aligned and grounded, can serve as a direct training signal rather than just auxiliary guidance, opening a new path for building language models that learn from human-like supervision.

2 Related Work...
https://arxiv.org/abs/2505.22338v1
from text to gradient signals, facilitating generalization across tasks.

3.2 Overview of TEXT2GRAD

The central objective of TEXT2GRAD is to construct a Natural Language Gradient that directly drives policy updates. This requires solving two key challenges: (1) converting free-form textual critiques—which humans can ea...
https://arxiv.org/abs/2505.22338v1
0, otherwise.

To reduce labeling cost while retaining informativeness, we adopt a class-prioritized strategy: only positive and negative spans are explicitly labeled, while neutral spans are left unannotated and default to δ_t = 0. This yields a token-level reward vector δ = (δ_1, . . . , δ_T), which supports token-wise a...
https://arxiv.org/abs/2505.22338v1
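A minimal sketch of the class-prioritized token reward vector δ described above (hypothetical helper; spans map token-index ranges to labels, and neutral tokens default to 0 as stated):

    def token_rewards(num_tokens, labeled_spans):
        # labeled_spans: {(start, end): "positive" | "negative"} over token indices.
        delta = [0.0] * num_tokens
        for (start, end), label in labeled_spans.items():
            value = 1.0 if label == "positive" else -1.0
            for t in range(start, end):
                delta[t] = value
        return delta

    # Example: tokens 3-5 praised, token 8 flagged as problematic.
    print(token_rewards(10, {(3, 6): "positive", (8, 9): "negative"}))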
settings is given by ΔA^A_t − ΔA^B_t = Σ_{k=t}^{T−1} (γλ)^{k−t} Δδ_k, which amplifies early feedback differences. For typical settings where γλ ≈ 0.95, a token-level reward at step k = T−20 is weighted nearly 0.95^{−20} ≈ 2.8 times more than it would be under end-of-sequence supervision—showing that natural language-guided token-level feedback i...
https://arxiv.org/abs/2505.22338v1
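For reference, the ≈2.8 amplification factor quoted above follows from the geometric weighting; a worked instance in LaTeX, using the values stated in the text:

    \[
    (\gamma\lambda)^{-(T-k)} = 0.95^{-20} = e^{-20\ln 0.95} \approx e^{1.026} \approx 2.79,
    \]

so a token-level reward placed 20 steps before the end of the sequence carries roughly 2.8 times the weight of the same signal delivered only at the sequence end.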
particularly strong span-level grounding in the code domain (KodCode win rate: 72%). These results confirm that natural language feedback—when structured through reasoning—enables more accurate and discriminative reward modeling, forming the basis for token-level learning in TEXT2GRAD. The pairwise-comparison prompt is...
https://arxiv.org/abs/2505.22338v1
within 75% of training steps compared to PPO’s 97%. These results highlight the efficiency of token-level feedback in accelerating training and guiding interpretable policy updates.

4.4 KodCode [28]: Code Generation

We evaluate TEXT2GRAD on the KodCode dataset [28], which focuses on generating correct and efficient Python sol...
https://arxiv.org/abs/2505.22338v1
and preference satisfaction. On ARC-Challenge, TEXT2GRAD shows improved reasoning (+3.9 vs. base, +1.7 vs. PPO), while MT-Bench results highlight better multi-turn dialogue performance.

Table 4: Performance comparison across benchmarks for Llama3-8B-Instruct trained with UltraFeedback.
Model | AlpacaEval 2.0 | ARC-C | MTBen...
https://arxiv.org/abs/2505.22338v1
Wu, Shilin He, Si Qin, Minghua Ma, Xiaoting Qin, Yu Kang, Yuyi Liang, Xiaoyu Gou, et al. AllHands: Ask me anything on large-scale verbatim feedback via large language models. arXiv preprint arXiv:2403.15157, 2024. [2] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario...
https://arxiv.org/abs/2505.22338v1
Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. [17] Chaoyun Zhang, Shilin He, Jiaxu Qian, Bowen Li, Liqun Li, Si Qin, Yu Kang, Minghua Ma, Guyue Liu, Qingwei ...
https://arxiv.org/abs/2505.22338v1
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented RLHF. arXiv preprint arXiv:2309.14525, 2023. [33] Chenghua Huang, Zhizhen Fan, Lu Wang, Fangkai Yang, Pu Zhao, Zeqi Lin, ...
https://arxiv.org/abs/2505.22338v1
provide dense, token-level feedback rather than sparse, end-of-sequence rewards. Intuitively, localized reward signals allow the policy to attribute credit or blame more precisely to specific parts of the output. In this section, we formalize this intuition and show how token-level rewards lead to sharper and more disc...
https://arxiv.org/abs/2505.22338v1
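To ground the formalization promised above, here is a compact sketch of GAE computed over dense, token-level pseudo-rewards (standard backward recursion; names are illustrative, values in the example are toy data):

    def gae_advantages(delta_rewards, values, gamma=0.99, lam=0.95):
        # Backward GAE recursion; with dense token rewards, every step t
        # receives a direct reward term instead of only the final token.
        T = len(delta_rewards)
        advantages = [0.0] * T
        running = 0.0
        for t in reversed(range(T)):
            next_value = values[t + 1] if t + 1 < T else 0.0
            td_error = delta_rewards[t] + gamma * next_value - values[t]
            running = td_error + gamma * lam * running
            advantages[t] = running
        return advantages

    # Dense signal at token 1 vs. a single end-of-sequence reward.
    print(gae_advantages([0.0, 1.0, 0.0, 0.0], [0.0] * 4))
    print(gae_advantages([0.0, 0.0, 0.0, 1.0], [0.0] * 4))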
details, as explained in ’textual_feedback’. Empty if none apply.
- **poor_spans**: 0-2 phrases from the summary that noticeably harm its quality due to inaccuracy, redundancy, poor wording, or being less important and replaceable with more critical content, as explained in ’textual_feedback’. Empty if none...
https://arxiv.org/abs/2505.22338v1
<PoorSpans>
  <Description> Phrases from the response that best highlight its weaknesses (e.g., inaccurate, irrelevant, vague). Select only the most essential and impactful phrases, directly tied to ’textual_feedback’. </Description>
</PoorSpans>
<Requirement> Spans must be exact quotes from the r...
https://arxiv.org/abs/2505.22338v1
problematic code segments FROM THE SUBMITTED SOLUTION that cause failures. Must be exact quotes. Leave empty [] if none found."],
  "improvement_code": ["Extract ONLY the working but improvable code segments FROM THE SUBMITTED SOLUTION. Must be exact quotes. Leave empty [] if none needed."]
}
Note ...
https://arxiv.org/abs/2505.22338v1
span-level optimization guided by natural language feedback.

Table 8: Hyperparameters for NL-Gradient PPO Training on SLF5K
Hyperparameter | Value
Base Policy Model | Llama 3.1 8B Instruct
Base Reward Model | Llama 3.1 8B Instruct
Learning Rate | 1×10−6
LR Scheduler | Linear
Batch Size | 12
Mini-batch Size | 1
PPO Epochs | 4
Gradient ...
https://arxiv.org/abs/2505.22338v1
between 0-10>,
},
"{score_key_2}": {
  "strengths": ["specific strength 1", "specific strength 2", ...],
  "weaknesses": ["specific weakness 1", "specific weakness 2", ...],
  "score": <overall score between 0-10>,
  "accuracy": <score between 0-10>,
  "completeness": <score between 0-10>,...
https://arxiv.org/abs/2505.22338v1
has more precise span selections while avoiding unnecessary or overly broad spans"
}

Guidelines for span evaluation:
1. Each problematic span should capture ONLY the specific problematic text, nothing more
2. General or overly broad selections are penalized
3. Spans should not include irrelevant surroun...
https://arxiv.org/abs/2505.22338v1
def is_prime(num: int) -> bool:
    # Trial-division primality check.
    if num < 2:
        return False
    for i in range(2, int(num ** 0.5) + 1):
        if num % i == 0:
            return False
    return True

def prime_fib(n: int) -> int:
    # Return the n-th Fibonacci number that is prime.
    a, b = 0, 1
    count = 0
    while True:
        a, b = b, a + b
        if is_prime(b):
            count += 1
            if count == n:
                return b

F.3 File Name Validation
Problem Statement
def f...
https://arxiv.org/abs/2505.22338v1
['txt', 'exe', 'dll']:
        return 'No'

    # Check if the file name contains exactly one dot
    if file_name.count('.') != 1:
        return 'No'

    # If all checks pass, return 'Yes'
    return 'Yes'

Our Approach
def file_name_check(file_name):
    """
    Checks if a file name is valid according to the given...
https://arxiv.org/abs/2505.22338v1
on UltraFeedback dataset

Figure 5: Comparative analysis of training dynamics between TEXT2GRAD and standard PPO. The results demonstrate that TEXT2GRAD (red line) achieves more stable and consistent learning progress, while standard PPO (blue line) shows significant volatility and unstable oscillations throughout the tr...
https://arxiv.org/abs/2505.22338v1
Reward Mapping
13: Initialize token-level pseudo-rewards δ = (δ_1, . . . , δ_T) with zeros.
14: for all labeled spans s_k in A(y) do
15:   Let ℓ_k = A(y)[s_k].
16:   if ℓ_k = positive then
17:     for all token indices t such that y_t ∈ s_k do
18:       δ_t ← +1.
19:     end for
20:   else if ℓ_k = negative then
21:     for all token indices t such that y_t ∈ s_k do
22:       δ_t ← −1.
23...
https://arxiv.org/abs/2505.22338v1
Empowering Intelligent Low-altitude Economy with Large AI Model Deployment

Zhonghao Lyu, Member, IEEE, Yulan Gao, Member, IEEE, Junting Chen, Member, IEEE, Hongyang Du, Member, IEEE, Jie Xu, Fellow, IEEE, Kaibin Huang, Fellow, IEEE, and Dong In Kim, Life Fellow, IEEE

Abstract—Low-altitude economy (LAE) represe...
https://arxiv.org/abs/2505.22343v1
aerial agents (IAAs) and ground nodes. This makes direct deployment of full-scale LAIMs infeasible. To tackle this issue, recent efforts have explored scalable LAIM deployment strategies, including efficient pre-training and fine-tuning [6], joint task-offloading and resource management [7], and lightweight model inf...
https://arxiv.org/abs/2505.22343v1
adaptive LAIM-empowered intelligence across LAE services.

A. Low-altitude System Architecture with LAIM Integration

The ground layer comprises heterogeneous terrestrial nodes, such as sensors, mobile user equipment, base stations (BSs), and edge servers, which are responsible for ubiquitous data collection and prelimi...
https://arxiv.org/abs/2505.22343v1
for precise wireless channel measurement and modeling, enabling predictive channel estimation, 3D beamforming, and interference mitigation, thereby providing a resilient communication backbone for LAE networks.

2) Sensing-centric Applications: Sensing-centric scenarios focus on air-ground collaborative and high-resolut...
https://arxiv.org/abs/2505.22343v1
adaptive 3D beamforming, allowing IAAs and edge nodes to sustain high-quality communication links and perform high-precision sensing collaboratively, even in highly dynamic or low-visibility environments.

2) ICC for LAIM-empowered LAE: ICC focuses on the co-design of communication and computation to ensure efficient t...
https://arxiv.org/abs/2505.22343v1
resource allocation, and IAA positioning to accelerate the convergence speed.

• ISCC for efficient LAIM inference in LAE: ISCC also supports efficient inference by enabling flexible partitioning and execution of LAIMs across ground, aerial, and cloud nodes based on real-time QoS requirements, which is commonly referre...
https://arxiv.org/abs/2505.22343v1
begins with an offline initialization phase grounded in a digital twin or virtual world model. At this stage, IAA positioning, resource allocation, cooperation strategies, and task scheduling are jointly optimized under task-oriented objectives and QoS constraints. This optimization problem can be solved via a range ...
https://arxiv.org/abs/2505.22343v1
of 10 m/s and executing a line-by-line scanning trajectory. We scan an area of approximately 634×301 meters, with a spatial resolution of 1 meter between samples. RSRP values are sampled at 10 Hz from the PBCH-XSS RSRP metric, while RTK-based positioning ensures centimeter-level localization accuracy. The collected dat...
https://arxiv.org/abs/2505.22343v1
execution pipeline, and real-world case studies, this work investigated efficient LAIM deployment in dynamic and resource-constrained low-altitude environments. It enables a new air-ground intelligence paradigm for LAE, where real-time perception, high-quality communication, adaptive decision-making, and continual LAI...
https://arxiv.org/abs/2505.22343v1
ensure secure and reliable operation against jamming attacks, where the primary and secondary control links should be designed in a layered and integrated manner under the proposed hierarchical network infrastructure and task-oriented ISCC design paradigm.

6) Standardization Alignment: As the LAE increasingly relies on ...
https://arxiv.org/abs/2505.22343v1
“Joint maneuver and beamforming design for UAV-enabled integrated sensing and communication,” IEEE Trans. Wireless Commun., vol. 22, no. 4, pp. 2424–2440, Apr. 2023. [13] W. Liu and J. Chen, “UAV-aided radio map construction exploiting environment semantics,” IEEE Trans. Wireless Commun., vol. 22, no. 9, pp. 6341–6...
https://arxiv.org/abs/2505.22343v1
arXiv:2505.22349v1 [cs.DB] 28 May 2025

ChatPD: An LLM-driven Paper-Dataset Networking System

Anjie Xu, Key Lab of High Confidence Software Technologies (Peking University), Ministry of Education, Beijing, China; School of Computer Science, Peking University, Beijing, China; anjiexu@stu.pku.edu.cn
Ruiqing Ding, Key Laboratory o...
https://arxiv.org/abs/2505.22349v1
not be aware of datasets from other disciplines that could be beneficial to their research, which makes the process of finding a suitable dataset challenging. Despite the maturity of information retrieval technologies in the text domain, numerous challenges persist in the realm of data retrieval [19, 26]. These challen...
https://arxiv.org/abs/2505.22349v1
paper-dataset network, which currently includes 60,126 papers, 4,224 dataset entities, and 137,004 paper-dataset usage records. Notably, the network constructed by ChatPD includes 444 new datasets not covered in PwC, demonstrating the superiority of its automated dataset collection strategy over the manual annotation...
https://arxiv.org/abs/2505.22349v1