text string | source string |
|---|---|
we specifically selected CE, CU, and PQ as they effectively reflect the overall quality and naturalness of our dataset. (We exclude PC from our analysis as it primarily measures the number of audio components, which is less relevant for our test samples where each audio clip contains only single-speaker utterances.) ... | https://arxiv.org/abs/2505.22029v1 |
[11] F1-score 0.77 0.64 0.62 0.62 0.54 0.56 Yolo-Stutter [14] Recall - 0.82 0.72 0.72 - 0.89. Table 4: Token Error Rate and Token Distance on SEP-28k. Model, Metrics, Word Level (Ins, Rep, Pau), Phoneme Level (Pau, Rep, Pro). Ours (3×4000 samples): TER (%, ↓) 24.90 22.32 16.27 7.12 11.68 11.05; TD (↓) 1.17 0.27 1.59 1.06 1.50 1.38. Ours (3*1... | https://arxiv.org/abs/2505.22029v1 |
Word-level, Libri = LibriTTS, Acc = Accuracy, Pre = Precision, Rec = Recall, wF1 = weighted F1 score computed based on disfluency type frequencies). 4.3.3. Scaling Law. Our scaling experiments reveal that the model’s performance, as measured by F1 scores, reaches a substantial level with a dataset size of 3×4000 sampl... | https://arxiv.org/abs/2505.22029v1 |
detect and pass,” arXiv preprint arXiv:2202.05396, 2022. [11] D. Wagner, S. P. Bayerl, I. Baumann, K. Riedhammer, E. Nöth, and T. Bocklet, “Large language models for dysfluency detection in stuttered speech,” Interspeech, 2024. [12] J. Lian, C. Feng, N. Farooqi, S. Li, A. Kashyap, C. J. Cho, P. Wu, R. Netzorg, T. L... | https://arxiv.org/abs/2505.22029v1 |
, pp. 271–350, 2019. [25] L. Wagner, B. Thallinger, and M. Zusag, “Crisperwhisper: Accurate timestamps on verbatim speech transcriptions,” Interspeech, 2024. [26] A. Tjandra, Y.-C. Wu, B. Guo, J. Hoffman, B. Ellis, A. Vyas, B. Shi, S. Chen, M. Le, N. Zacharov et al., “Meta audiobox aesthetics: Unified automatic ... | https://arxiv.org/abs/2505.22029v1 |
arXiv:2505.22038v1 [cs.CV] 28 May 2025. Balanced Token Pruning: Accelerating Vision Language Models Beyond Local Optimization. Kaiyuan Li1∗, Xiaoyue Chen1∗, Chen Gao2, Yong Li2, Xinlei Chen1. 1Tsinghua Shenzhen International Graduate School, 2Tsinghua University. {likaiyua23,chenxiao24}@mails.tsinghua.edu.cn, {chgao96,liyong0... | https://arxiv.org/abs/2505.22038v1 |
on the outputs of subsequent pruning layers (global). We begin by visualizing the spatial distribution of image tokens that receive higher attention from text tokens across different layers. As shown in Figure 1, we observe that the image tokens attended by text tokens vary across different layers. This indicates th... | https://arxiv.org/abs/2505.22038v1 |
visual tokens grows. Using a multi-resolution encoding strategy, LLaVA-NeXT can generate up to 2,880 tokens per image. 2.2 Visual Token Pruning. Early efforts to reduce visual token redundancy primarily focus on attention-based pruning [4, 13, 39]. For example, FastV [5] prunes visual tokens with low attention scores a... | https://arxiv.org/abs/2505.22038v1 |
maximizes the objective function $L_{div}$: $L_{div} = \max F(P_{div})$ (4). 4 Methodology. 4.1 Limitations of existing methods. Attention-based methods pursue local optima. We analyze the impact of pruning image tokens on the subsequent text and response tokens. From Equations 1 and 2, we can see that pruning image tokens at the $l$-th layer... | https://arxiv.org/abs/2505.22038v1 |
can get the optimal pruned token set $P^*_l$ based on attention. However, since the attention distribution varies across input samples and $P_{l_3} \subseteq P_{l_2} \subseteq P_{l_1}$, it is difficult to [figure residue: pipeline overview showing pruning stage selection and balanced token pruning with a calibration image set, image-token processing across decoder layers 1-5, and pruning layers selecting image tokens by impo...] | https://arxiv.org/abs/2505.22038v1 |
the selection by first retaining tokens from earlier positions, followed by selecting additional tokens from later positions: $I_{pre} = I_{k'}[I_{k'} < \frac{N}{2}]$ (8), $I_{post} = I_{k'}[I_{k'} \ge \frac{N}{2}][{:}\,k - |I_{pre}|]$ (9), $I_k = \mathrm{Concat}(I_{pre}, I_{post})$ (10). Through the rebalancing operation, we are able to preserve the attention objective while selecting more... | https://arxiv.org/abs/2505.22038v1 |
attention mechanisms; DivPrune [1], which filters tokens based on visual diversity; and VTW [19], which discards all image tokens at a specific transformer layer determined by validation performance. Benchmarks and evaluation. We conduct comprehensive experiments on standard visual understanding tasks using models of... | https://arxiv.org/abs/2505.22038v1 |
all models and benchmarks, thus avoiding separate calibration for each benchmark. We gradually reduce the number of image tokens at each stage. In the early layers, we use a larger λ value to focus more on global information, while in the deeper layers, we use a smaller λ to emphasize local details. More implementa... | https://arxiv.org/abs/2505.22038v1 |
diversity set. Since we compute attention only between the final token and the image tokens, the added attention complexity is O(n). For the selection of the diversity set, our proposed spatial initialization strategy and progressive weight decay allow us to select only a small number of additional tokens. In this sect... | https://arxiv.org/abs/2505.22038v1 |
tend to shift attention disproportionately toward later tokens, leading to suboptimal token selection. On the other hand, omitting the spatial initialization module causes a marked increase in inference latency, in some cases even surpassing that of the original unpruned model. This suggests that while pruning reduce... | https://arxiv.org/abs/2505.22038v1 |
European Conference on Computer Vision , pages 19–35. Springer, 2024. [6]Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/... | https://arxiv.org/abs/2505.22038v1 |
Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, January 2024. [22] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning, 2023. [23] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Congh... | https://arxiv.org/abs/2505.22038v1 |
multimodal models for integrated capabilities. In International conference on machine learning . PMLR, 2024. [38] Shaolei Zhang, Qingkai Fang, Zhe Yang, and Yang Feng. Llava-mini: Efficient image and video large multimodal models with one vision token. arXiv preprint arXiv:2501.03895 , 2025. [39] Shaolei Zhang, Qingkai... | https://arxiv.org/abs/2505.22038v1 |
arXiv:2505.22042v1 [cs.LG] 28 May 2025. Estimating the Effects of Sample Training Orders for Large Language Models without Retraining. Hao Yang1, Haoxuan Li2, Mengyue Yang3, Xu Chen1∗, Mingming Gong4. 1Gaoling School of Artificial Intelligence, Renmin University of China; 2Center for Data Science, Peking University; 3School of Engin... | https://arxiv.org/abs/2505.22042v1 |
the target sample order can be arbitrary in an extremely large space, identifying a common basis to effectively bridge the reference and target performances becomes a non-trivial challenge. Then, even if we can successfully identify a common basis for relating different sample orders, efficiently storing this basis... | https://arxiv.org/abs/2505.22042v1 |
$B_{l_t}$ is the $(t+1)$-th training batch, our problem aims to efficiently derive the model parameters $\{\gamma_t\}_{t=0}^{T}$, where $\gamma_{t+1}$ is the model parameter after training on batch $B_{l_t}$, and we set $\gamma_0 = \theta_0$. (Note that the reference sample order can be arbitrary or chosen based on user preference.) [figure residue: parameter-update trajectory $\theta_{t+1} = \theta_t - \eta\,\Gamma(\theta_t, B_{l_t})$ ...] | https://arxiv.org/abs/2505.22042v1 |
$m_t$ and $v_t$ are the first and second momentum statistics, respectively. $\beta_1$ and $\beta_2$ are the smoothing coefficients that control the decay rate of past gradients. $\epsilon$ is a small constant that keeps the denominator away from zero. Similar to the above updating rule, we have $\gamma_{t+1} - \gamma_t = -\eta\,\Gamma(\gamma_t, B_{l_t})$ for $0 \le t \le T-1$. To compute $\gamma_{t+1}$, we ... | https://arxiv.org/abs/2505.22042v1 |
(JL) theorem [51] to efficiently reduce their dimensionality. To illustrate this process, consider storing a 2-dimensional matrix $M \in \mathbb{R}^{d_1 \times d_2}$. We first generate a random matrix $A \in \mathbb{R}^{d_2 \times k}$ that follows a Gaussian ... Algorithm 1: FUT Framework for Deriving $\{\gamma_t\}_{t=0}^{T}$ with First-order Taylor Expansion. Require: Initialized model par... | https://arxiv.org/abs/2505.22042v1 |
final model parameters estimated using our FUT framework, and the training order $l_t$ is induced by $\pi$. The performance metric $R$ is implemented using Perplexity (PPL) [24], and $\Pi$ denotes the space of all possible permutation functions. Our solution based on FUT. Since objective (9) is non-differentiable, we design a Genetic... | https://arxiv.org/abs/2505.22042v1 |
of 2048 and consists of 10 stacked transformer layers with 10 attention heads. We choose this relatively compact architecture because our main experiments involve repeated LLM training to validate that the proposed FUT framework can accurately estimate model parameters under various training orders. In the appendix, we sca... | https://arxiv.org/abs/2505.22042v1 |
of sample orders increases. We compare different methods with various $T$'s. The results are presented in Figure 2, where the solid bars represent our method and the dashed bars represent Retraining. We observe that as the total number of orders increases, our method progressively achieves higher time efficiency per ord... | https://arxiv.org/abs/2505.22042v1 |
Figure 3: Memorization effects. Heatmaps in (a) and (b) are estimated by our FUT and FUT++ methods, respectively; the heatmap in (c) represents the true memorization effect obtained by retraining. ... perplexity as the evaluation metric and measure different models by vary... | https://arxiv.org/abs/2505.22042v1 |
batches is set as 8 (i.e., $T = 8$). We visualize the value of $M_{i,j}$ in Section 4.2 based on perplexity by setting different $(i, j)$ pairs. Results. The results are presented in Figures 3, 4, and 5. We can see: compared to the true memorization effect (in Figure 3(c)), where we retrain the LLM to compute $M_{i,j}$, FUT and FUT++ in... | https://arxiv.org/abs/2505.22042v1 |
Adam via Taylor expansion and employing random projection for efficient parameter estimation, our framework enables accurate performance prediction under arbitrary sample orders. We demonstrate the utility of this framework in two key research problems of LLMs: training curriculum design, and memorization & generalizat... | https://arxiv.org/abs/2505.22042v1 |
neural network training. arXiv preprint arXiv:2002.10365 , 2020. [16] Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM journal on optimization , 23(4):2341–2368, 2013. [17] Henok Ghebrechristos and Gita Alaghband. Deep curriculum learning optimization.... | https://arxiv.org/abs/2505.22042v1 |
zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications. IEEE Signal Processing Magazine , 37(5):43–54, 2020. [34] Zeyu Liu, Yizhong Wang, Jungo Kasai, Hannaneh Hajishirzi, and Noah A Smith. Probing across time: What does roberta know and when? In Findings of t... | https://arxiv.org/abs/2505.22042v1 |
The Johnson-Lindenstrauss transform: an empirical study. In 2011 Proceedings of the Thirteenth Workshop on Algorithm Engineering and Experiments (ALENEX), pages 164–173. SIAM, 2011. [52] Xin Wang, Yuwei Zhou, Hong Chen, and Wenwu Zhu. Curriculum learning for multimedia in the era of large language models. In Proceedin... | https://arxiv.org/abs/2505.22042v1 |
Algorithm for Training Curriculum Design in FUT Framework; B Experimental Details; B.1 General Capability; B.2 Training Curriculum Design for LLMs; C Additional Experimental Results ... | https://arxiv.org/abs/2505.22042v1 |
for each $\nabla^2\Gamma(\theta_t, B_{l_t})$ term, we can also expand it as: $\nabla^2\Gamma(\theta_t, B_{l_t}) = \frac{\partial^2 \Gamma(\theta_t, B_{l_t})}{\partial \theta^2} = \frac{1}{(\sqrt{v_t}+\epsilon)^2}\left[\frac{\partial^2 m_t}{\partial \theta^2}(\sqrt{v_t}+\epsilon) - \frac{\partial^2 \sqrt{v_t}}{\partial \theta^2}\,m_t - 2\,\frac{\partial \sqrt{v_t}}{\partial \theta}\frac{\partial m_t}{\partial \theta} + 2\left(\frac{\partial \sqrt{v_t}}{\partial \theta}\right)^2\frac{m_t}{\sqrt{v_t}+\epsilon}\right]$ (14), where $\frac{\partial^2 m_t}{\partial \theta^2} = \frac{\beta_1 \cdot \frac{\partial^2 m_{t-1}}{\partial \theta^2} + (1-\beta_1)\,\nabla^3_\theta L(B_{l_t};\theta_t)}{1-\beta_1^t}$, $\frac{\partial^2 v_t}{\partial \theta^2} = \beta_2 \cdot \frac{\partial^2 v_{t-1}}{\partial \theta^2} + 2(1-\beta_2)\left[\nabla^2_\theta L(B_{l_t};\theta_t) \cdot \nabla^2_\theta L(B_{l_t};\theta_t) + \nabla_\theta L(B_{l_t};\theta_t) \cdot \nabla^3_\theta\right.$... | https://arxiv.org/abs/2505.22042v1 |
$K$, mutation probability $p_m$. Ensure: optimal sample order $\pi^{GA*}$. 1: Initialize permutation space $S_T = \{\pi \mid \pi \text{ is a permutation of } \{1, \ldots, T\}\}$. 2: Randomly sample $N$ permutations as the initial population: $POP = \{\pi_i\}_{i=1}^{N} \subset S_T$. 3: for $k = 1$ to $K$ do. 4: for all $\pi_i \in POP$ do. 5: Compute $\gamma^{\pi_i}_T$ using FUT with sample order $\pi_i$. 6: Evaluate fitness ri=R(γπ... | https://arxiv.org/abs/2505.22042v1 |
Experimental Details B.1 General Capability In this section, we introduce more details for the experiments to test the general capability of our FUT framework in Section 5.1. B.1.1 Base Model We conduct all of our experiments on a language model that follows the LLaMA architecture [ 49], but with a reduced number of pa... | https://arxiv.org/abs/2505.22042v1 |
$\mathrm{PPL} = \left(\prod_{t=1}^{N} P(x_t \mid x_{<t})\right)^{-\frac{1}{N}} = \exp\left(-\frac{1}{N}\sum_{t=1}^{N} \log P(x_t \mid x_{<t})\right)$ (16). This is equivalent to the exponential of the average cross-entropy loss. Thus, for a given validation set $D_{val}$ and final model parameters $\theta_T$, we compute: Table 3: Genetic Algorithm hyperparameters used in our framework. Hyperparameter, Notation, Description, Scope ... | https://arxiv.org/abs/2505.22042v1 |
design choices focus on maintaining a balance between exploration and exploitation: a moderately sized population ensures sufficient diversity, while elitist selection preserves high-quality solutions across generations. The complete set of hyperparameters and their configurations is summarized in Table 3. ... | https://arxiv.org/abs/2505.22042v1 |
step. Results. As shown in Figure 7, both FUT and FUT++ generate accurate perplexity estimates across different training stages. While FUT performs well in general, FUT++ shows higher fidelity, especially as the number of batches increases. This is most evident in the $T = 32$ case, where FUT++ remains close to the true ... | https://arxiv.org/abs/2505.22042v1 |
FUT framework is a performance estimation tool—not an optimizer—that precomputes all necessary update terms using Taylor expansions. This enables efficient, deterministic evaluation of arbitrary curricula without retraining, making FUT quite suitable for analyzing training dynamics and guiding curriculum design. E Broa... | https://arxiv.org/abs/2505.22042v1 |
Preprint. Under review. Reinforced Reasoning for Embodied Planning. Di Wu (Tongji University, diwu7012@gmail.com), Jiaxin Fan* (Tongji University, 2253538@tongji.edu.cn), Junzhe Zang* (Tongji University, 2250724@tongji.edu.cn), Guanbo Wang (Tsinghua University, wanggb23@mails.tsinghua.edu.cn), Wei Yin (Bank of Communications, yinw_8@bankcom... | https://arxiv.org/abs/2505.22050v1 |
paradigms that explicitly strengthen a model’s reasoning capacity via reward-guided optimization, and have achieved promising results in math and code problems. Extensions of this paradigm into multimodal contexts have begun to emerge [47], tackling tasks such as visual mathematics and diagram-based reasoning [59, 3... | https://arxiv.org/abs/2505.22050v1 |
generate plans from textual and visual observations, typically relying on carefully crafted prompts [39, 34, 17, 20, 42, 13] or auxiliary tools [34, 6, 41] to provide necessary planning cues. While simple and data-efficient, such methods often struggle with spatial grounding and temporal coherence in visually rich environmen... | https://arxiv.org/abs/2505.22050v1 |
[Figure 2 labels: rule-based reward model, training, results on Seen (ALFRED) and Unseen (Habitat), Ours, SFT] Figure 2: Overview of our proposed framework. We adopt a two-stage training paradigm consisting of supervised fine-tuning (SFT) followed by reinforcement fine-tuning (RFT) to enhance multi-step planning capabilities of the vision-languag... | https://arxiv.org/abs/2505.22050v1 |
the reasoning generalization needed for unseen scenarios. Recent work such as DeepSeek-R1 [14] shows that reinforcement learning (RL) with rule-based rewards can effectively enhance reasoning by optimizing for quality over imitation, improving both task success and generalization, especially important in embodied co... | https://arxiv.org/abs/2505.22050v1 |
corresponding ground-truth action. Once a mismatch is encountered, the comparison stops. Let $n$ denote the number of consecutively matched steps, i.e., the prefix length such that $a_i = a^*_i$ for all $i \in [1, n]$. The accuracy reward is defined as: $R_{accuracy} = R(n; k)$ (10), where $R(n; k)$ denotes the multi-step reward allocation curve ... | https://arxiv.org/abs/2505.22050v1 |
the group contains a balanced mix of good and poor responses. Accepted samples are buffered into a memory set $B$ of size $N_B$. Once the buffer is filled, we perform $K_2$ steps of GRPO optimization on the collected data, after which the buffer is cleared and the process repeats. This filtering mechanism significantly improves l... | https://arxiv.org/abs/2505.22050v1 |
Model, Params., EB-Habitat (Unseen): Avg / Base / Common / Complex / Visual / Spatial / Long. Closed-Source MLLMs: Claude-3.5-Sonnet: 67.7 / 96 / 68 / 74 / 74 / 40 / 54; Gemini-2.0-flash: 34.3 / 76 / 30 / 30 / 30 / 26 / 14; GPT-4o: 54.0 / 82 / 34 / 62 / 58 / 32 / 56; GPT-4o-mini: 32.3 / 68 / 38 / 28 / 28 / 22 / 10. Open-Source General MLLMs: Llama-3.2-11B (11B): 23.3 / 62 / 16... | https://arxiv.org/abs/2505.22050v1 |
Despite overall improvement on other categories, the performance gain in Long-Horizon tasks is marginal, highlighting the need for future research on planning depth and temporal reasoning. 4.2.2 Out-of-Domain Results To evaluate generalization, we tested our models in the EB-Habitat environment, which differs from ALFR... | https://arxiv.org/abs/2505.22050v1 |
strong performance and generalization in simulated benchmarks, it has not yet been deployed on real-world robotic platforms. Extending this framework to physical agents and integrating it with low-level control systems is an important step toward realizing embodied intelligence in practical applications. 6 Conclusion I... | https://arxiv.org/abs/2505.22050v1 |
Emerging Topics in Computational Intelligence , 6(2):230–244, 2022. [12] Hugging Face. Open r1: A fully open reproduction of deepseek-r1, January 2025. URL: https://github.com/huggingface/open-r1 . [13] Xian Fu, Min Zhang, Peilong Han, Hao Zhang, Lei Shi, Hongyao Tang, et al. What can vlms do for zero-shot embodied tas... | https://arxiv.org/abs/2505.22050v1 |
Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Tiancheng Han, Botian Shi, Wenhai Wang, Junjun He, et al. Mm-eureka: Exploring the frontiers of multimodal reasoning with rule-based reinforcement learning. arXiv preprint arXiv:2503.07365 , 2025. [28] Chancharik Mitra, Brandon Huang, Trevor Darrell, and Roei Herzig... | https://arxiv.org/abs/2505.22050v1 |
Kaelbling, and Michael Katz. Generalized planning in PDDL domains with pretrained large language models. 38(18):20256–20264. URL: https://ojs.aaai.org/index.php/AAAI/article/view/30006, doi:10.1609/aaai.v38i18.30006. [42] Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter... | https://arxiv.org/abs/2505.22050v1 |
Levine. Robotic control via embodied chain-of-thought reasoning. In 8th Annual Conference on Robot Learning , 2024. [57] Jianke Zhang, Yanjiang Guo, Xiaoyu Chen, Yen-Jen Wang, Yucheng Hu, Chengming Shi, and Jianyu Chen. Hirt: Enhancing robotic control with hierarchical robot transformers. arXiv preprint arXiv:2410.0527... | https://arxiv.org/abs/2505.22050v1 |
Training Results. We record the final metrics and loss curve from the supervised fine-tuning process, as shown in Figure 8. The table summarizes key training statistics after 3 epochs of full-parameter tuning. SFT dataset example { "messages": [ { "role": "user", "content": {EB-ALFRED prompt} ... | https://arxiv.org/abs/2505.22050v1 |
43,898 samples, each formatted to include a natural language instruction, a visual observation, and a ground-truth action sequence used for reward computation. We provide a full example of a training sample from the RFT dataset for reference in Figure B.2. C.2 Training Hyperparameters. We implement reinforcement fine-tuni... | https://arxiv.org/abs/2505.22050v1 |
These improvements provide a more realistic and flexible environment for assessing embodied planning capabilities. EB-Habitat. EB-Habitat extends the Language Rearrangement benchmark [35], based on the Habitat 2.0 simulator. It focuses on five high-level skills: navigation, pick, place, open, and close. Unlike ALFRED... | https://arxiv.org/abs/2505.22050v1 |
across diverse real-world challenges. In our setup, we use EB-ALFRED for in-domain training and evaluation, while EB-Habitat serves as an out-of-domain testbed to examine generalization performance. D.2 Detailed Introduction to Baselines To comprehensively evaluate our proposed method, we compare it against a diverse s... | https://arxiv.org/abs/2505.22050v1 |
and Environment Steps (ES). Progress Rate (PR) quantifies the degree to which the agent completes the task, measured as the proportion of goal conditions satisfied by the final environment state. This metric provides a finer-grained signal than binary success, especially for partially completed tasks. Environment Step... | https://arxiv.org/abs/2505.22050v1 |
10.24. Tapa: 0 0 0 0 0 0 0 0 0 0 0 0 0 0. Open-Source Embodied + Reasoning MLLMs: Ours (Base): 19.05 12.58 44.5 10.64 6.5 14.9 17 11.12 6.4 14.12 28.8 11.74 11.1 12.94; Ours (SFT only): 20.05 12.40 38.75 10.62 7 12.3 19.5 12.76 16 11.24 34.6 15.26 4.5 12.26; Ours (SFT+RFT): 27.18 13.31 58.75 8.72 15 14.98 23 13.3 20 13.36 37 13... | https://arxiv.org/abs/2505.22050v1 |
a Desk\"}, {\"action_id\": 133, \"action_name\": \"put down the object in hand\"} ] }" "reasoning": "{ \"visual_state_description\": \"The image shows a room with a bed, a dresser, and a safe. The bed has a blanket and a teddy bear on it. The dresser has a few items on it, including a book. The safe is closed and ... | https://arxiv.org/abs/2505.22050v1 |
the TV stand. Since the right counter is already visited, the next step is to identify the mug and pick it up. \", \"language_plan \": \"The task is to find a mug on the right counter and move it to the TV stand. The right counter has been navigated to, and the mug needs to be picked up and placed on the TV stand. \", ... | https://arxiv.org/abs/2505.22050v1 |
is visible in the scene.", "reasoning_and_reflection": "The task requires examining a TV remote under the light of a tall lamp. However, there is no TV remote visible in the image. The lamp is present, but the TV remote is not. This suggests that the TV remote might be out of view or not in the current scene.", "langua... | https://arxiv.org/abs/2505.22050v1 |
to pick up the ball and then place it on the sofa.", "language_plan": "The plan is to first navigate to the ball, pick it up, and then place it on the sofa.", "executable_plan": [{"action_id": 8, "action_name": "navigate to the ball"}, {"action_id": 16, "action_name": "pick up the ball"}, {"action_id": 12, "action_name... | https://arxiv.org/abs/2505.22050v1 |
The next step is to navigate to the sofa and place the ball there. The previous attempts to pick up the ball were successful, but the robot needs to move to the sofa to complete the task.", "language_plan": "The robot needs to navigate to the sofa and place the ball there.", "executable_plan": [{"action_id": 12, "actio... | https://arxiv.org/abs/2505.22050v1 |
floor. The ball is not visible in the image.", "reasoning_and_reflection": "The previous attempts to pick up the ball were unsuccessful due to the robot being too far away from the ball. The ball is likely located on the floor, but the robot has not yet navigated to the correct position to pick it up. The current plan ... | https://arxiv.org/abs/2505.22050v1 |
close. Only valid if the receptacle is open and the robot is close to the receptacle. * Turn on: Parameterized by the name of the object to turn on. Only valid if the object is turned off and the robot is close to the object. * Turn off: Parameterized by the name of the object to turn off. Only valid if the object is t... | https://arxiv.org/abs/2505.22050v1 |
to open. Only valid if the receptacle is closed and the robot is close to the receptacle. * Close: Parameterized by the name of the receptacle to close. Only valid if the receptacle is open and the robot is close to the receptacle. ## The available action id (0 ~ 69) and action names are:{HABITAT ACTION LIST} ## Task E... | https://arxiv.org/abs/2505.22050v1 |
* CleanObject: Parameterized by the name of the object to clean. Requires the robot to be near a water source and the object supports cleaning. * HeatObject: Parameterized by the name of the object to heat. Requires the robot to be holding the object and near a heating appliance such as a microwave or stove. ## The ava... | https://arxiv.org/abs/2505.22050v1 |
a Knife, action id 35: find a Tomato, action id 36: find a ButterKnife, action id 37: find a Dresser, action id 38: find a Microwave, action id 39: find a CounterTop, action id 40: find a GarbageCan, action id 41: find a WateringCan, action id 42: find a Vase, action id 43: find a ArmChair, action id 44: find a Safe, a... | https://arxiv.org/abs/2505.22050v1 |
navigate to the right counter in the kitchen, action id 11: navigate to the left counter in the kitchen, action id 12: navigate to the sofa, action id 13: navigate to the refrigerator, action id 14: navigate to the left drawer of the kitchen counter, action id 15: navigate to the right drawer of the kitchen counter, ac... | https://arxiv.org/abs/2505.22050v1 |
9: goto bread, action id 10: goto butterknife, action id 11: goto cabinet, action id 12: goto candle, action id 13: goto cart, action id 14: goto cellphone, action id 15: goto cloth, action id 16: goto coffeemachine, action id 17: goto coffeetable, action id 18: goto countertop, action id 19: goto creditcard, action id... | https://arxiv.org/abs/2505.22050v1 |
arXiv:2505.22067v1 [cs.CV] 28 May 2025. From Failures to Fixes: LLM-Driven Scenario Repair for Self-Evolving Autonomous Driving. Xinyu Xia1, Xingjun Ma2, Yunfeng Hu1, Ting Qu1, Hong Chen3, Xun Gong1,†. 1Jilin University, 2Fudan University, 3Tongji University. †Corresponding author. Abstract: Ensuring robust and generalizable auto... | https://arxiv.org/abs/2505.22067v1 |
we propose SERA, an inno- [Figure 1 panel labels: pre-evaluation (t=6s to t=10s), failure analysis, failure-aware scenario recommendation, scenario repair, self-evolving] Figure 1. Conceptual illustration of SERA. The system performs pre-evaluation to detect failure cases, leverages failure-aware scenario recommendation to retrieve vuln... | https://arxiv.org/abs/2505.22067v1 |
rare, unexpected failures [11]. To enhance coverage, recent works introduced automated methods, leveraging simulation-based complexity assessment [13], genetic algorithms [44], and adversarial reinforcement learning [5]. However, these techniques typically require explicitly defined seed scenarios or optimizati... | https://arxiv.org/abs/2505.22067v1 |
textual scenarios; o: agent observation (sensor input at a time step); a: agent action (trajectory output or control command); πθ: autonomous driving policy parameterized by θ; τ: pre-evaluation driving route; T: set of all pre-evaluation routes; ℓ(τ, πθ): performance log collected on route τ; L: set of all performance logs; p: individu... | https://arxiv.org/abs/2505.22067v1 |
trians, aggressive cut-ins). These descriptions are constructed to reflect both explicit and latent factors affecting autonomous driving behavior, ensuring that the retrieval process can match subtle failure patterns revealed during pre-evaluation. 3.4. Failure-Aware Scenario Recommendation. Conventional methods typ... | https://arxiv.org/abs/2505.22067v1 |
scenario candidates $C$ and the extracted failure patterns $P$. Specifically, the reflection process leverages the reasoning capabilities of LLMs to assess the coverage adequacy of $C$ and diagnose any critical failure aspects that remain insufficiently addressed. Based on this analysis, the reflection module outputs a set ... | https://arxiv.org/abs/2505.22067v1 |
images and ego-vehicle states to jointly predict trajectories and control commands. Table 2. Overall performance comparison of baseline models with and without SERA. Columns: Method, Input, Driving Score ↑, Success Rate (%) ↑, Efficiency ↑, Comfortness ↑. AD-MLP [40], Ego State: 7.83, 0.00, 44.89, 26.36; AD-MLP + SERA, Ego State: 8.28 (+5.... | https://arxiv.org/abs/2505.22067v1 |
Rate, benefits from a +8.33% improvement in Comfortness, highlighting that scenario repair through SERA improves not just success likelihood but also the qualitative aspects of driving behavior. This reflects the framework’s ability to enrich long-tail conditions impacting ride quality. For AD-MLP, although Succes... | https://arxiv.org/abs/2505.22067v1 |
6.85% under random sampling. Initial Recommendation improves upon this baseline, reaching 33.25 and 7.52% respectively, demonstrating that targeted retrieval based on failure patterns is beneficial. Notably, Full SERA yields the best performance across all models. With reflection-enhanced refinement, UniAD further ... | https://arxiv.org/abs/2505.22067v1 |
earlier. 5. Conclusion. In this paper, we proposed SERA, a self-evolving scenario repair framework that systematically enhances autonomous driving systems by addressing failure cases through LLM-driven efficient scenario recommendation. Unlike traditional retraining or static scenario generation approaches, SERA le... | https://arxiv.org/abs/2505.22067v1 |
[12] Mathias Gehrig, Willem Aarents, Daniel Gehrig, and Davide Scaramuzza. 2021. DSEC: A stereo event camera dataset for driving scenarios. IEEE Robotics and Automation Letters 6, 3 (2021), 4947–4954. [13] Zahra Ghodsi, Siva Kumar Sastry Hari, Iuri Frosio, Timothy Tsai, Alejandro Troccoli, Stephen W Keckler, Si... | https://arxiv.org/abs/2505.22067v1 |
2021. Explanations in autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems 23, 8 (2021), 10142–10162. [27] Sagar Pathrudkar, Saadhana Venkataraman, Deepika Kanade, Aswin Ajayan, Palash Gupta, Shehzaman Khatib, Vijaya Sarathi Indla, and Saikat Mukherjee. 2023. SAFR-AV: Safety Analys... | https://arxiv.org/abs/2505.22067v1 |
(2025), 121101. [39] Pengfei Yao, Yinglong Zhu, Huikun Bi, Tianlu Mao, and Zhaoqi Wang. 2024. TrajCLIP: Pedestrian trajectory prediction method using contrastive learning and idempotent networks. Advances in Neural Information Processing Systems 37 (2024), 77023–77037. [40] Jiang-Tian Zhai, Ze Feng, Jinhao D... | https://arxiv.org/abs/2505.22067v1 |
arXiv:2505.22068v1 [cs.CL] 28 May 2025. Beyond path selection: Better LLMs for Scientific Information Extraction with MimicSFT and Relevance and Rule-induced (R2) GRPO. Ran Li, HKUST, Hong Kong SAR, China, rlibb@connect.ust.hk; Shimin Di, SEU, Jiangsu, China, shimin.di@seu.edu.cn; Yuchen Liu, HKUST(GZ), Guangzhou, China, yliu356@connec... | https://arxiv.org/abs/2505.22068v1 |
and relational inference, highlighting a misalignment between their training objectives and IE’s dual requirements. We argue that IE’s hybrid nature, with both knowledge memorization and contextual rule reasoning, makes it an ideal lens to study what RLVR truly learns. [6] states that SFT is better at memorization and RL is g... | https://arxiv.org/abs/2505.22068v1 |
but not necessary for other types of tasks. On the other hand, some argue that SFT can also help to gain reasoning capacity [20]. So is there a clear gap between SFT and RLVR, and what does RLVR really learn? Our work investigates this question within the less explored domain of Information Extraction, where both knowledge... | https://arxiv.org/abs/2505.22068v1 |
In terms of Equation 11, $o = y$, $D = D_{SFT}$, and $GC(x, y, t, \pi_{ref}) = 1$ for all tokens. To improve generalization, we decompose IE into distinct sub-tasks (NER only, RE with Gold Entities, RE only, End-to-End IE) and employ a multi-task learning approach: $L_{MT\text{-}SFT}(\theta) = -\sum_{k=1}^{K}\sum_{(x,y)\in D_{SFT,T_k}} \log P(y \mid x, T_k; \theta)$ (3), where $T_k$ indicate... | https://arxiv.org/abs/2505.22068v1 |
generation problem into a more tractable form by decomposing it into stages. For MimicSFT with a single reasoning level $z_1$:

$$P(y \mid x; \theta) \approx \sum_{z_1 \in Z_1} P(y \mid z_1, x; \theta)\, P(z_1 \mid x; \theta) \tag{7}$$

where $Z_1$ is the space of valid reasoning templates. This decomposition allows the model to first focus on generating valid reasoning ($z_1$) that satisfies
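Eq. (7) is a standard latent-variable marginalization over reasoning templates. A toy numeric sketch (template names and probabilities are invented for illustration):

```python
# Marginalize over reasoning templates z1 (Eq. 7):
#   P(y|x) ≈ sum_{z1} P(y | z1, x) * P(z1 | x)
p_z_given_x = {"template_A": 0.6, "template_B": 0.4}   # P(z1 | x)
p_y_given_zx = {"template_A": 0.9, "template_B": 0.5}  # P(y | z1, x)

p_y_given_x = sum(p_z_given_x[z] * p_y_given_zx[z] for z in p_z_given_x)
# 0.6 * 0.9 + 0.4 * 0.5 = 0.74
```

The sketch makes the two-stage reading explicit: a template is chosen first, and the answer probability is averaged over template choices weighted by how likely each template is given the input.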
F1 score as the reward signal. R2GRPO: Our proposed reinforcement learning framework, R2GRPO (Relevance and Rule-Induction Group Relative Policy Optimization), incorporates a composite reward function as detailed in Section 3.3. The overall prompt can be seen in Appendix A.4. The system prompt for R2GRPO training
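Since Section 3.3 is not reproduced here, the following is only a hedged sketch of what a composite reward of this shape could look like: a weighted sum of set-level F1 for entities and relations plus a reasoning-format bonus. The weights, the helper `f1`, and the `reasoning_ok` flag are all assumptions, not the paper's definition:

```python
def f1(pred, gold):
    """Micro F1 between predicted and gold sets of items."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def composite_reward(pred_ents, gold_ents, pred_rels, gold_rels,
                     reasoning_ok, w=(1.0, 1.0, 0.5)):
    """Assumed weighted sum: NER F1 + relation F1 + reasoning-format bonus."""
    return (w[0] * f1(pred_ents, gold_ents)
            + w[1] * f1(pred_rels, gold_rels)
            + w[2] * float(reasoning_ok))
```

A scalar reward of this form is what GRPO-style training consumes when computing group-relative advantages over sampled completions.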
General-purpose LLMs are evaluated in a zero-shot setting. Dataset: We conduct our experiments primarily on the SciER dataset and its OOD datasets [41]. SciER is a benchmark for information extraction in the scientific domain. It contains 24k entities and 12k relations over 106 scientific publications. It features diverse entity
2. RLVR and SFT both Enhance Reasoning Capacity: From Figure 2 and Figure 3, both RLVR-based (GRPO, R2GRPO) and SFT-based models (SFT, MimicSFT) significantly outperform the base Qwen2.5-7B-Instruct model across all K values. This contradicts the hypothesis that RLVR merely optimizes path selection without improving the underlying reasoning capacity.
Figure 4 shows that our models consistently outperform baselines across different temperature settings, with optimal performance at lower temperatures (<0.6). This indicates that SciIE benefits from more deterministic generation, since the task requires precise entity-boundary detection and relation inference based on
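The temperature effect described above comes from temperature-scaled softmax sampling: dividing logits by a temperature below 1 sharpens the distribution, concentrating mass on the top token and making generation more deterministic. A minimal sketch (the logits are toy values):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Temperature-scaled softmax; lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
low_t = softmax_with_temperature(logits, 0.3)   # near-deterministic
high_t = softmax_with_temperature(logits, 1.5)  # flatter, more diverse
```

At low temperature the argmax token dominates, which matches the observation that precise boundary and relation decisions are hurt by sampling diversity.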
2025.
[7] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53, 2024.
[8] John Dagdelen, Alexander Dunn, Sanghoon Lee, Nicholas Wal
, 2024.
[23] Suphakit Niwattanakul, Jatsada Singthongchai, Ekkachai Naenudorn, and Supachanun Wanapu. Using of Jaccard coefficient for keywords similarity. In Proceedings of the International MultiConference of Engineers and Computer Scientists, volume 1, pages 380–384, 2013.
[24] Long Ouyang, Jeffrey Wu, Xu Jiang, Di
extraction with ChatGPT. arXiv preprint arXiv:2304.05454, 2023.
[39] Yang Yue, Zhiqi Chen, Rui Lu, Andrew Zhao, Zhaokai Wang, Shiji Song, and Gao Huang. Does reinforcement learning really incentivize reasoning capacity in LLMs beyond the base model? arXiv preprint arXiv:2504.13837, 2025.
[40] Bowen Zhang and Harold S
Figure 7: R2GRPO training details vs. steps (panels: (a) completion length, (b) NER reward, (c) relation reward, (d) reasoning reward, (e) total reward, (f) standard deviation).

NER Background: Extract specific entities from the following sentence. The entities to be identified are: 'Dataset', 'Task', and 'Method'.

### Entity Definitions
that one entity (e.g., a Method) is utilized for achieving or performing another entity (e.g., a Task). This relationship is highly flexible.
- 'Compare-With': This relationship is used when one entity is compared with another to highlight differences, similarities, or both.

### Notes:
- Determine the 'Relationship' th
arXiv:2505.22074v1 [cs.LG] 28 May 2025

The Resurrection of the ReLU

Coşku Can Horuz¹*, Geoffrey Kasenbacher¹,²*, Saya Higuchi¹, Sebastian Kairat¹, Jendrik Stoltz¹, Moritz Pesl¹, Bernhard A. Moser³,⁴, Christoph Linse⁵, Thomas Martinetz⁵, Sebastian Otte¹
¹Institute of Robotics and Cognitive Systems, University of Lübeck
²Mercedes-Benz