Müller et al., 2023; Germano et al., 2023; Müller et al., 2024; Kitamura et al., 2024) provide sublinear regret guarantees for stochastic constraints but struggle to generalize to such adversarial cases. The adversarial setting is inherently more challenging due to the dynamic and unpredictable nature of constraint...
https://arxiv.org/abs/2505.21841v1
Primal-Dual (OMDPD) algorithm that ensures optimal regret and strong constraint violation bounds with respect to the number of episodes K, regardless of whether the reward and cost functions are generated stochastically or adversarially. Our contributions are summarized as follows: • We present the first work add...
$O(\sqrt{K})$ | N/A | $O(\sqrt{K})$ | ✓ | No
(Müller et al., 2023): $O(\sqrt{K})$ | N/A | $O(\sqrt{K})$ | ✓ | Yes
(Stradi et al., 2024c): $\tilde{O}(\sqrt{K})$ | N/A | $\tilde{O}(\sqrt{K})$ | ✓ | No
(Müller et al., 2024): $\tilde{O}(K^{0.93})$ | N/A | $\tilde{O}(K^{0.93})$ | ✓ | No
(Kitamura et al., 2024): $\tilde{O}(K^{6/7})$ | N/A | $\tilde{O}(K^{6/7})$ | ✓ | No
OMDPD: $\tilde{O}(\sqrt{K})$ | $\tilde{O}(\sqrt{K})$ | $\tilde{O}(\sqrt{K})$ | ✗ | No
Table 1. Comparison between OMDPD and existing related work. We...
(6) Alternatively, the online optimization problem (3) can also be represented using the notion of occupancy measure (Altman, 1999) $\{q^{\pi}_h(s, a; p)\}_{h=1}^{H}$ under a policy $\pi$ and transition kernel $p$. For every $s \in S$, $a \in A$, we have the occupancy measure defined as: $q^{\pi}_h(s, a, s') = \Pr(s_{h+1} = s', s_h = s, a_h = a \mid p, \pi, s_1)$, $q^{\pi}_h(s, a) = \sum_{s' \in S} q^{\pi}...$
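The forward recursion behind this definition can be sketched numerically; the shapes and variable names below are illustrative, not the paper's notation:

```python
import numpy as np

def occupancy_measures(pi, p, s1):
    """Forward recursion for the occupancy measures q^pi_h(s, a): the
    probability of visiting (s, a) at step h under policy pi and
    transition kernel p, starting from state s1.
    Shapes (illustrative): pi is (H, S, A), p is (H, S, A, S)."""
    H, S, A = pi.shape
    q = np.zeros((H, S, A))
    mu = np.zeros(S)
    mu[s1] = 1.0                                 # state distribution at step h
    for h in range(H):
        q[h] = mu[:, None] * pi[h]               # q_h(s, a) = mu_h(s) pi_h(a|s)
        mu = np.einsum("sa,sat->t", q[h], p[h])  # propagate one step forward
    return q

rng = np.random.default_rng(0)
H, S, A = 4, 3, 2
pi = rng.dirichlet(np.ones(A), size=(H, S))      # random policy
p = rng.dirichlet(np.ones(S), size=(H, S, A))    # random transition kernel
q = occupancy_measures(pi, p, s1=0)              # each q[h] sums to 1
```

Each layer $q_h$ is a probability distribution over state-action pairs, which is what lets value functions be written as linear functions of the occupancy measure.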
4.2. Surrogate Objective Function. The objective of online CMDP learning is twofold: (1) to control the constraint violations over time, and (2) to maximize cumulative reward. Thus, after constructing the feasible candidate set $Q_k$ for the policy, our algorithm aims to solve the following optimization problem at each epis...
tighter bound by incorporating historical gradients and occupancy measures. The full algorithm is presented in Algorithm 1. 5. Main Result. We first provide the main theoretical results of OMDPD. Theorem 5.1. Choose $\alpha = \frac{1}{2(1+\sqrt{L_\delta})SAH}$, $\beta = \frac{SAH}{8\sqrt{C}\sqrt{6SAHK}}$, and denote $C = \sup_{q_1, q_2 \in Q} D(q_1, q_2)$, where $L_\delta$ is defined in Appendix B.1....
section, we show the theoretical analysis of Algorithm 1. We first introduce the following facts for the CMDPs considered in this paper. Fact 5.4. For any $q_1, q_2 \in Q_k$, $\forall k \in [K]$, we have $\|q_1 - q_2\| \le \sqrt{SAH}$. Fact 5.5. For any $\tilde{r}_k$, $d_k$ or $\tilde{d}_k$, the reward/cost value function in terms of $q \in Q_k$ is convex and Lipschitz continuous such tha...
foundational inequality: $\Phi(\lambda_K) + \alpha \sum_{k=1}^{K} (\tilde{r}_k^\top q^* - \tilde{r}_k^\top q_k) \le \sum_{k=1}^{K} (f_k(q_k) - f_k(q^*))$. Here, for simplicity, we momentarily ignore the factor of $SAH$ to discuss how the $O(1)$ result in Remark 5.2 can be achieved. First, choosing the function $\Phi(x) = \exp(\beta x) - 1$ ensures that $\Phi'(\lambda_K)$ can be combined with $\Phi(\lambda_K)$ in the foundational i...
bound the optimization error. 5.4. Upper Bound of Optimization Error. Regret Analysis. We first focus on bounding the optimization error associated with regret. The following lemma establishes that the cumulative variation of gradients between consecutive episodes under OMDPD is bounded, enabling adaptive regret-vio...
process unfolds over a fixed horizon of $H = 5$ steps. At each time step, the agent receives a reward $r \in [0,1]^{H \times S \times A}$ sampled uniformly from the unit interval. In the stochastic setting, the cost $c \in [-1,1]^{H \times S \times A}$ is also drawn uniformly and held fixed a...
bounds for reinforcement learning. In International Conference on Machine Learning, pp. 263–272. PMLR, 2017. Bai, Q., Bedi, A. S., Agarwal, M., Koppel, A., and Aggarwal, V. Achieving zero constraint violation for constrained reinforcement learning via primal-dual approach. In AAAI Conf. Artificial Intelligence,...
violation for constrained MDPs. In Advances in Neural Information Processing Systems (NeurIPS), volume 34, 2021a. Liu, T., Zhou, R., Kalathil, D., Kumar, P., and Tian, C. Learning policies with zero or bounded constraint violation for constrained MDPs. Advances in Neural Information Processing Systems, 34:17183–...
Descent and thus cannot attain a tighter bound even when the reward is fixed. Meanwhile, Lekeufack & Jordan (2024) proposed an algorithm based on optimistic online mirror descent, attaining comparable regret and violation bounds to ours. B. Optimistic Estimates Related Lemmas. B.1. Proof of Lemma 5.6. Lemma 5.6. With prob...
equal transformation: $\langle q_k - q^*, \nabla_k \rangle = \underbrace{\langle q_k - \hat{q}_k, \nabla_k - \nabla_{k-1} \rangle}_{\text{term 1}} + \underbrace{\langle q_k - \hat{q}_k, \nabla_{k-1} \rangle}_{\text{term 2}} + \underbrace{\langle \hat{q}_k - q^*, \nabla_k \rangle}_{\text{term 3}}$ (29). We can directly upper bound term 1: $\langle q_k - \hat{q}_k, \nabla_k - \nabla_{k-1} \rangle \le \|q_k - \hat{q}_k\|_2 \|\nabla_k - \nabla_{k-1}\|_2$. And any update of the form $a^* = \arg\min_{a \in A} \eta \langle a, x \rangle + D(a, c)$ satisfies, for any...
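As an illustration of the optimistic mirror-descent update referenced above, here is a minimal sketch on the probability simplex, using the previous gradient as the prediction. The step size and the loss sequence are arbitrary choices of this sketch, and the paper's update operates on occupancy measures rather than a bare simplex:

```python
import math

def omd_step(weights, grad, eta):
    """Entropic mirror-descent step on the simplex (exponentiated gradient),
    i.e. argmin_a eta*<a, grad> + KL(a, weights)."""
    new = [w * math.exp(-eta * g) for w, g in zip(weights, grad)]
    z = sum(new)
    return [w / z for w in new]

def optimistic_omd(grads, n, eta=1.0):
    """Play an extrapolated iterate using the previous gradient as the
    prediction, then update the base iterate with the observed gradient."""
    q_hat = [1.0 / n] * n
    prev = [0.0] * n
    played = []
    for g in grads:
        q = omd_step(q_hat, prev, eta)   # optimistic step with prediction
        played.append(q)
        q_hat = omd_step(q_hat, g, eta)  # base update with observed gradient
        prev = g
    return played

grads = [[0.9, 0.1, 0.5]] * 200          # a slowly varying (here constant) loss
plays = optimistic_omd(grads, n=3)
loss = sum(sum(gi * qi for gi, qi in zip(gr, q)) for gr, q in zip(grads, plays))
regret = loss - 200 * 0.1                # compare against the best fixed point
```

When consecutive gradients vary little, the prediction is accurate and the term-1 contribution above is small, which is exactly what drives the adaptive bound.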
Constraints. Proof: To prove (33), it is sufficient to show that $\sum_{k=1}^{K} |V^{\pi_k}(\tilde{\ell}_k, \tilde{p}_k) - V^{\pi_k}(\bar{\ell}, p)| \le \tilde{O}(\sqrt{NSAH^3K} + S^2AH^3)$ for $\ell = r, d$. The left-hand side of the above inequality can be decomposed as $\sum_{k=1}^{K} |V^{\pi_k}(\tilde{\ell}_k, \tilde{p}_k) - V^{\pi_k}(\bar{\ell}, p)| \le \underbrace{\sum_{k=1}^{K} |V^{\pi_k}(\tilde{\ell}_k, \tilde{p}_k) - V^{\pi_k}(\tilde{\ell}_k, p)|}_{\text{Term 1}} + \sum_{k=...}$
$\ldots = 2\alpha(1+\sqrt{L_\delta})\sqrt{6SAHK}\,(1+\Phi'(\lambda_K)) = \frac{\sqrt{6SAHK}\,(1+\Phi'(\lambda_K))}{SAH}$, where the last equality holds by choosing $\alpha = \frac{1}{2(1+\sqrt{L_\delta})SAH}$. C.4. Proof of Lemma 5.11. Lemma 5.11. Based on Lemmas 5.8 and 5.9, the following upper bound holds: $\sum_{k=1}^{K} [\tilde{r}_k^\top q^* - \tilde{r}_k^\top q_k] \le 2(1+\sqrt{L_\delta})(SAH + 4\sqrt{C}\sqrt{6SAHK})$. Proof: Based on Lemmas 5.8 and 5.9, we have the followin...
$4\sqrt{C}\sqrt{6SAHK}$, which matches the choice of $\beta$ when we prove the regret bound; in that case we choose $\beta \le \frac{SAH}{4\sqrt{C}\sqrt{6SAHK}}$. Here, we let $\beta = \frac{SAH}{8\sqrt{C}\sqrt{6SAHK}}$ and we have: $\exp(\beta\lambda_K) \le \frac{\alpha(1+\sqrt{L_\delta})SAHK + 4\sqrt{C}\frac{\sqrt{6SAHK}}{SAH} + 1}{1 - \beta \cdot 4\sqrt{C}\frac{\sqrt{6SAHK}}{SAH}} = \frac{\alpha(1+\sqrt{L_\delta})SAHK + 4\sqrt{C}\frac{\sqrt{6SAHK}}{SAH} + 1}{1/2} = 2\alpha(1+\sqrt{L_\delta})SAHK + 8\sqrt{C}\frac{\sqrt{6SAHK}}{SAH} + 2 = K + 8\sqrt{C}\,\sqrt{6SA\ldots}$
5.10 for the adversarial case and with Lemma 5.13, the following bound can be obtained: $\mathrm{Violation}(K) = \sum_{k=1}^{K} [V^{\pi_k}(d_k, p)]_+ \le \underbrace{\sum_{k=1}^{K} [V^{\pi_k}(d_k, p) - V^{\pi_k}(d_k, \tilde{p}_k)]_+}_{\text{Lemma 5.10}} + \underbrace{\sum_{k=1}^{K} [V^{\pi_k}(d_k, \tilde{p}_k)]_+}_{\text{Lemma 5.13}} \le \tilde{O}(\sqrt{NSAH^3K} + S^2AH^3) + 16(1+\sqrt{L_\delta})\sqrt{C}\sqrt{6SAHK} \ln\big(K + 8\sqrt{C}\,\sqrt{6SAH\ldots}\big)$
$\ldots - V^{\pi_k}_{h+1}(\cdot\,; \ell_k, \tilde{p}_k)\big) \,\big|\, s_1, \pi_k, p \Big]\Big|$ (Term (II)), where we use the short-hand notation $p_h(\cdot \mid s, a) V^{\pi}(\cdot\,; \ell_k, p) = \sum_{s' \in S} p_h(s' \mid s, a) V^{\pi}(s'; \ell_k, p)$. Under the good event $G$, we have $|(p_h - \tilde{p}^k_h)(s' \mid s, a)| \le C_1 \sqrt{\frac{p_h(s' \mid s, a)\, L^p_\delta}{n^{k-1}_h(\ldots)}}$
arXiv:2505.21847v1 [cs.CV] 28 May 2025. RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers. Xuwei Xu, Yang Li, Yudong Chen, Jiajun Liu, Sen Wang. Abstract: We reveal that feedforward network (FFN) layers, rather than attention layers, are the primary contri...
https://arxiv.org/abs/2505.21847v1
[Figure: throughput (images/second, 100–2000) vs. top-1 accuracy (%, 76–86): DeiT-Base (17.6 GMACs, 81.8% Acc.), RePa-DeiT-Base (9.9 GMACs, 81.3% Acc.), DeiT-Small (4.3 GMACs, 79.8% Acc.), RePa-DeiT-Small (3.2 GMACs, ...]
the model size grows, as shown in Figure 3. These observations reflect the urgent demand for techniques to optimize FFN layers, especially for large-scale ViTs. To facilitate structural reparameterization for FFN layers, in this work we propose an innovative channel idle mechanism. Specifically, in each FFN layer, o...
architectures that combine self-attentions with computationally efficient convolutions (Graham et al., 2021; Mehta & Rastegari, 2022a; Chen et al., 2022a; Li et al., 2022; Cai et al., 2023; Vasu et al., 2023a; Zhang et al., 2023; Shaker et al., 2023) are introduced to reduce the computationally expensive self-attent...
large ViTs. [Figure: per-component latency (ms, 0–4) breakdown into Patch Embedding, MHSA, FFN, and Reparameterized FFN for DeiT-Small, DeiT-Base, Swin-Small, Swin-Base, ViT-Large and their RePa- counterparts.] F...
further reparameterize the weights as $\widetilde{W} = \widetilde{W}^{In}[:, \mu C{+}1:\rho C]\, \widetilde{W}^{Out}[\mu C{+}1:\rho C, :] + I$. (5) By substituting Equation 5 into Equation 4, we obtain the updating function for the FFN layer during the testing stage with three reparameterized weights as $Z = \mathrm{Act}(Y \widetilde{W}^{In}[:, 1:\mu C])\, \widetilde{W}^{Out}[1:\mu C, :] + Y \widetilde{W}$. (6) As Figure 1(c) shows, after reparam...
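A quick numerical check of the channel idle reparameterization in Equations 5 and 6, under simplifying assumptions (a ReLU activation, an identity skip connection, no normalization folding, and $\rho C$ equal to the full hidden width; all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, muC = 8, 32, 8          # embed dim, hidden channels, active channels
Y = rng.normal(size=(4, D))   # token features
W_in = rng.normal(size=(D, C))
W_out = rng.normal(size=(C, D))
act = lambda x: np.maximum(x, 0.0)  # ReLU, for illustration

# Training-time channel-idle FFN: only the first muC hidden channels pass
# through the activation; the remaining channels stay linear (idle).
H = Y @ W_in
Z_train = np.concatenate([act(H[:, :muC]), H[:, muC:]], axis=1) @ W_out + Y

# Test-time reparameterization (Eq. 5): fold the linear idle branch and the
# identity skip connection into a single dense weight matrix.
W_tilde = W_in[:, muC:] @ W_out[muC:, :] + np.eye(D)
# Eq. 6: one narrow nonlinear branch plus one dense linear branch.
Z_test = act(Y @ W_in[:, :muC]) @ W_out[:muC, :] + Y @ W_tilde

assert np.allclose(Z_train, Z_test)  # the two forms are equivalent
```

The nonlinear branch is the only part that cannot be folded, which is why idling most channels shrinks the test-time FFN.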
reparameterized, whereas in our approach, one branch is linear while the other is nonlinear. 4. Experiments. 4.1. Datasets, Training and Evaluation Settings. We mainly train and test RePaViTs for the image classification task on the widely recognized ImageNet-1k (Deng et al., 2009) dataset, following the data augme...
methods for efficient ViTs. "-" indicates that the statistic is either missing or irreproducible. Our method demonstrates significantly higher speed-ups compared to pruning methods while achieving competitive or even higher top-1 accuracies across various ViT backbones. Backbone | Method | #MParam.↓ | Compl. (GMACs)↓ | Speed im...
Sensitivity of channel idle ratio θ. The performance of RePaViT on plain (DeiT (Touvron et al., 2021)) and hierarchical (Swin (Liu et al., 2021)) ViTs with various θ is reported. θ=* represents the vanilla backbone. θ=1.00 implies the nonlinear activation being removed from the model. The results show a significant acc...
with network pruning methods, RePaViT yields greater acceleration and smaller performance gaps on larger models even with the same channel idle ratio θ. This underscores the important practical value of RePaViT on large foundation models for vision tasks. 4.4. Comparison Against State-of-the-Art Methods. Table 3 compares...
training with fewer parameters for ViTs, which aligns with the findings in Vasu et al. (2023a;b). Meanwhile, train-time overparameterization also helps to stabilize the training process for large models. For instance, when trained with the reparameterized structure, RePa-DeiT-Base, RePa-ViT-Large, RePa-LV-ViT-S and RePa-...
There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. Acknowledgement. This research was partially supported by the Australian Government through the Australian Research Council's Industrial Transformation Training Centre for Information Resilience (CIRES...
recognition at scale. In ICLR, 2021. Fayyaz, M., Koohpayegani, S. A., Jafari, F. R., Sengupta, S., Joze, H. R. V., Sommerlade, E., Pirsiavash, H., and Gall, J. Adaptive token sampling for efficient visi...
dense object detection. In ICCV, 2017. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, 2021. Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., et al. Swin tran...
Sablayrolles, A., and Jégou, H. Training data-efficient image transformers & distillation through attention. In ICML, 2021. Vasu, P. K. A., Gabriel, J., Zhu, J., Tuzel, O., and Ranjan, A. FastViT: A fast hybrid vision transformer using structural reparameterization. In ICCV, 2023a. Vasu, P. K. A., Gabriel, J., Zhu...
reparameterization lightweight network for video action recognition. In ICASSP, 2023. Zong, Z., Li, K., Song, G., Wang, Y., Qiao, Y., Leng, B., and Liu, Y. Self-slimmed vision transformer. In ECC...
with a negligible 0.3% accuracy drop. On the larger CLIP-ViT-B/16 model, our method improves inference speed by 24.7% while achieving a 0.8% gain in zero-shot classification top-1 accuracy. These results demonstrate the effectiveness of RePaViT in enhancing the ...
Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer Presentations. Bo Tang* (tangbo@mail.ustc.edu.cn, AIDS and SIAR, University of Science and Technology of China, Suzhou, China), Junyi Zhu* (junyizhu.ai@gmail.com, ESAT-PSI, KU Leuven, Leuven, Belgium), Chenyang Xi, Yunhang Ge (firstname.lastname@iaar.ac.c...
https://arxiv.org/abs/2505.21849v1
that closely resemble human communication. Modern large language models (LLMs) have demonstrated human-level performance in tasks such as reading comprehension and reasoning within specific contexts [21, 42, 62, 75]. Their vast parameters also enable the encoding of extensive knowledge [14, 59]. Despite these strengths...
Fig. 1. Our contributions can be summarized as follows: (1) We systematically decompose this domain into specific subproblems and provide detailed descriptions of our solutions, including prompt design, data preparation, and model training, to facilitate future research and applications. (2) We introduce novel approaches...
the quality of the generated response [41]. Reference filtering techniques aim to eliminate irrelevant or noisy retrieved documents, ensuring only pertinent information is considered for generation [20, 46]. Context selection focuses on identifying the most relevant portions of the retrieved context while discarding l...
the network to correctly predict the positive sample among the negatives using a cross-entropy loss over the scored candidates. [Figure: system pipeline: Query → User Intent Understanding (QDG) → Multi-Source Retrieval → Passage Pool → Passage Deduplication & Selection → Passage Reranking → Timeline Visualization → Extract ...]
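The contrastive training objective described above can be sketched as follows; the function name and score values are illustrative, not the system's actual implementation:

```python
import math

def contrastive_ce_loss(pos_score: float, neg_scores: list[float]) -> float:
    """Cross-entropy over scored candidates: the positive passage must
    out-score the negatives under a softmax over all candidates."""
    scores = [pos_score] + neg_scores
    m = max(scores)                      # stabilize the softmax numerically
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - pos_score             # -log softmax(positive)

# A well-separated positive yields a near-zero loss; an uninformative
# scorer over n candidates yields log(n).
loss = contrastive_ce_loss(5.0, [-1.0, 0.5, -2.0])
```

Minimizing this loss pushes the scorer to rank the annotated positive passage above sampled negatives.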
rewriting, a single query is transformed into a QDG. A specific example is provided at the bottom left of Fig. 3. In the QDG, nodes represent sub-queries, while directed edges indicate dependencies. Given a query, we use a fine-tuned generative LLM (Qwen2.5-72B) to construct the corresponding QDG by defining nodes ...
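Processing sub-queries so that parents are answered before children reduces to a topological traversal of the QDG; a minimal sketch (the node texts and helper name are illustrative):

```python
from collections import deque

def qdg_order(nodes, parent_child):
    """Topologically order sub-queries in a query dependency graph (QDG)
    so every parent is answered before its children (Kahn's algorithm)."""
    children = {n: [] for n in nodes}
    indeg = {n: 0 for n in nodes}
    for parent, child in parent_child:
        children[parent].append(child)
        indeg[child] += 1
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for c in children[n]:
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    if len(order) != len(nodes):
        raise ValueError("dependency cycle in QDG")
    return order

# Per the dependency principle: the iPhone model must be resolved before
# a question that depends on it can be answered.
nodes = ["What is the latest iPhone model?", "What is its price?"]
order = qdg_order(nodes, [(nodes[0], nodes[1])])
```

Independent sub-queries have no edges and can be retrieved and answered in parallel.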
to their corresponding sub-queries in the QDG. A fine-tuned LLM then generates responses for each sub-query following the dependency structure, ensuring that parent nodes are processed before their child nodes. Notably, retrieved passages vary in relevance and often contain duplicate information, which can distract t...
aiding users in result verification and fostering confidence in synthesized outputs is crucial. To address these challenges, we incorporate timeline visualizations, textual-visual choreography, and built-in citations, as discussed below, to optimize the reading experience. 3.5.1 Built-In Citation. A straightforward a...
according to their timestamps. 3.5.3 Textual-Visual Choreography. A picture is worth a thousand words. As illustrated in the bottom right of Fig. 3, Xinyu integrates relevant images into textual responses to enhance information assimilation. These images are extracted from retrieved documents. To ensure quality and r...
8.621
KIMI [4]: 9.840, 9.515, 8.529, 8.224, 8.966, 8.155, 9.223, 9.709, 6.796, 8.773
Metaso [49]: 9.760, 8.941, 8.515, 7.408, 8.403, 5.689, 9.383, 9.689, 4.759, 8.061
ChatGLM [16]: 9.810, 9.420, 8.949, 9.124, 8.346, 6.168, 9.533, 9.726, 5.047, 8.458
Baichuan [1]: 9.660, 9.596, 6.486, 7.612, 8.220, 8.252, 9.223, 9.612, 6.117, 8.309
Tongyi [7]: 9.803, 9.009, 7.58...
generated answer relies on retrieved information and (2) the placement of citations. Some existing systems, such as Perplexity AI, often position citations at the end of a paragraph, making it difficult for users to trace specific claims, especially when multiple citations correspond to different parts of a paragraph...
average scores. Representation Enhancement. We further conduct an ablation study on built-in citation, timeline visualization, and textual-visual choreography to assess their impact on clarity and comprehensiveness based on human evaluation. As shown in Fig. 5, removing any of these modules significantly reduces the ...
Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901. [15] Harrison Chase. 2022. LangChain. https://github.com/lang...
Query expansion by prompting large language models. arXiv preprint arXiv:2305.03653 (2023). [31] Jiaming Ji, Mickel Liu, Josef Dai, Xuehai Pan, Chi Zhang, Ce Bian, Boyuan Chen, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2024. Beavertails: Towards improved safety alignment of llm via a human-preference dataset. Advance...
He, Hai Zhao, and Nan Duan. 2023. Query Rewriting in Retrieval-Augmented Large Language Models. In The 2023 Conference on Empirical Methods in Natural Language Processing. https://openreview.net/forum?id=gXq1cwkUZc [45] Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023. Query Rewriting in Retrieval-...
Language Models with Iterative Retrieval-Generation Synergy. In Findings of the Association for Computational Linguistics: EMNLP 2023, Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 9248–9274. doi:10.18653/v1/2023.findings-emnlp.620 [61] Statista. 2023. http...
Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun'ichi Tsujii (Eds.). Association for Computational Linguistics, Brussels, Belgium, 2369–2380. doi:10.18653/v1/D18-1259 [77] Ori Yoran, Tomer Wolfson, Ori Ram, and ...
ensuring compatibility with irregular HTML structures. Text Processing: Extracted text blocks are separated by spaces or line breaks to improve readability. Redundant whitespace and ... Table 8: Multi-faceted evalua...
(11) creative content generation. To construct a training dataset, we collect a set of seed queries based on open-source datasets: Do-Not-Answer [70], BeaverTails [31] and Safety-Prompts [63]. Additionally, we generate synthetic queries to compensate for the class imbalance in the collected dataset. Then...
"child": "How long did this natural disaster last?"} ``` - Dependency principles: 1) If sub-queries are **independent**, 'parent_child 'remains an empty list. 2) If the **child question cannot be answered without the parent**, it is a dependent relationship. - Example: "What is the latest iPhone model" is the parent no...
theme and its variations. - **Case Study**: Explain a theory or concept through specific cases. - **Hierarchical Structure**: Arrange information by importance or sequence. - **Issue and Counterarguments**: Present an issue with supporting and opposing views. [Language Requirements]: (1) Use concise and clear language....
are as follows: [1] {Retrieved Document} [2] {Retrieved Document} [3] ... When making your determination, ensure that the selected reference document matches as much key information from the excerpted sentence as possible. The higher the degree of key information overlap, the more likely the reference document is the s...
performance, 10 is the maximum.
Model: Conciseness | Numerical Precision | Relevance | Factuality | Timeliness | Comprehensiveness | Clarity | Coherence | Insightfulness | Average
Perplexity AI: 9.913 | 9.607 | 9.740 | 9.727 | 8.120 | 8.280 | 9.887 | 9.853 | 6.613 | 9.082
Tiangong AI [6]: 9.819 | 9.188 | 9.570 | 9.738 | 7.758 | 7.517 | 9.839 | 9.799 | 6.161 | 8.821
Ernie Bot...
arXiv:2505.21850v1 [cs.CV] 28 May 2025. Beyond Perception: Evaluating Abstract Visual Reasoning through Multi-Stage Task. Yanbei Jiang1, Yihao Ding1,2, Chao Lei1, Jiayang Ao1, Jey Han Lau1, Krista A. Ehinger1. 1The University of Melbourne, 2University of Sydney. yanbeij@student.unimelb.edu.au, jeyhan.lau@gmail.com, kehinger@unim...
https://arxiv.org/abs/2505.21850v1
is RAVEN (Raven, 2003; Zhang et al., 2019), as shown in the left part of Figure 1. The solver needs to select the correct panel from an answer set to complete a 3×3 problem matrix by deducing the visual rules governing the grid's arrangement. For instance, by analyzing the colors of each panel, one might observe the ...
the model to combine current information with outputs from previous stages. To assess the correctness of intermediate steps, we introduce a novel metric, MSEval, which provides a more fine-grained assessment of the model's reasoning process for the logical chain task. MSEval uses the correct answer probabilities at ea...
(Logical Chain) | 0.56K | 3.92K | Abstract shapes | Template & Neural | MCQA | ✓ | ✓
Table 1: Comparison of various VQA datasets. Template: generated using predefined rules, Manual: written by humans, Neural: generated using large language models, Template & Neural: generated using predefined rules and rewritten by large languag...
e) Two Row Rule Deduction (2R): The puzzle image contains the first two rows, each with three panels, denoted as $I = (\{p_{1,1}, p_{1,2}, p_{1,3}\}, \{p_{2,1}, p_{2,2}, p_{2,3}\})$. The task is to find a rule that applies to both rows. f) RAVEN puzzle (Final): The original puzzle from the RAVEN dataset. Formally, given a puzzle image $I$ (which con...
the answer space. Subtasks Creation: To create the Direct Answer subtask, we first sample XML files from RAVEN, and for each XML file, we generate one question for each template. During question formation, placeholders (e.g., <X1>, <X2>) are replaced with randomly selected values consistent with the va...
[Figure: Our MultiStAR dataset generation pipeline.] ...process, we introduce a new metric, MSEval. As the example illustrated in Figure 4 shows, the score for the 1R node is designed to aggregate from all its related nodes, which include three 1P nodes, two 2P nodes, and the 1R node itself. This aggregation captures the interconnectivit...
we apply a log to the expression. The final MSEval score for stage $t$, instance $i$ is computed as: $\mathrm{MSEval}^{(i)}_t = \log \prod_{j \in D_t \cup \{t\}} \big(\exp\big(\tfrac{p^{(i)}_j}{\epsilon^{(i)}_j}\big)\big)^{\mathrm{NCMI}(i,j,t)} = \sum_{j \in D_t \cup \{t\}} \mathrm{NCMI}(i, j, t) \cdot \tfrac{p^{(i)}_j}{\epsilon^{(i)}_j}$ (8), $\mathrm{MSEval}^{(i)}_t = \sum_{j \in D_t \cup \{t\}} w^{(i)}_j \cdot \tfrac{p^{(i)}_j}{\epsilon^{(i)}_j}$ (9). As MSEval relies on access to the logits of the model's final layer, which c...
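Equation 9 can be sketched directly; here we assume the weights $w_j$ are the NCMI terms normalized over the dependency set and that $\epsilon_j$ is the per-stage random-chance probability (both assumptions of this sketch):

```python
def mseval(probs, baselines, ncmi):
    """MSEval for one stage (Eq. 9): a weighted sum of the ratios between the
    model's correct-answer probability p_j and a per-stage baseline eps_j,
    with weights w_j obtained by normalizing the NCMI terms over D_t + {t}."""
    total = sum(ncmi.values())
    weights = {j: v / total for j, v in ncmi.items()}
    return sum(weights[j] * probs[j] / baselines[j] for j in ncmi)

# A model at exactly chance level on every contributing stage scores 1.0,
# matching the random-baseline rows reported for the metric.
score = mseval({"1P": 0.25, "2P": 0.25, "1R": 0.25},
               {"1P": 0.25, "2P": 0.25, "1R": 0.25},
               {"1P": 0.4, "2P": 0.3, "1R": 0.3})
```

Scores above 1.0 indicate better-than-chance probability mass on the correct answers along the whole dependency chain, not just at the final stage.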
training data or additional new functionalities. Interestingly, as the questions become increasingly complex and require deeper reasoning, a noticeable decline in performance is observed across all models, gradually approaching the random baseline. While models demonstrate strong performance on basic perception...
reasoning models struggle to solve RAVEN puzzles (Ahrabian et al., 2024; Gendron et al., 2024). However, MSEval ...
Metric / Prior / 1P / 2P / 1R / 2R / Final
GPT-4o, Acc: w/o 73.8, 39.1, 34.7, 28.9, 15.7; w 73.8, 43.9, 41.8, 50.6, 10.0
Gemini, Acc: w/o 75.5, 61.6, 49.6, 44.6, 5.7; w 75.5, 64.4, 52.6, 57.1, 18.6
Idefics2 (8B), Acc: w/o 57.8, 39.6, 34.4, 35.1, 25.7* ...
models. Given the ground truth for intermediate steps, how does it influence the final results? Table 4 highlights that the ground-truth priors generally demonstrate a positive impact. For example, the 1R stage benefits significantly from the insights about each panel and intra-panel comparisons. The 2R stage also sees su...
further details and examples of the conversion methods).
Prior / 1P / 2P / 1R / 2R / Final
GPT-4o: Vanilla 73.8, 43.9, 41.8, 50.6, 10.0; Struct. 82.2, 64.4, 47.8, 50.9, 8.6; Doc. 80.8, 44.8, 31.1, 24.9, 10.0
Gemini: Vanilla 75.5, 64.4, 52.6, 57.1, 18.6; Struct. 70.6, 66.4, 52.9, 57.8, 17.1; Doc. 69.6, 51.0, 36.7, 33.1, 14.3
Qwen2: Vanilla 74.1, 57.8, 47.3, 54.2, 65....
defined object attributes, such as the XML files provided by RAVEN. This limits our expansion to the RAVEN dataset, as most datasets lack such metadata. Expanding these methods to other datasets will require machine learning approaches, such as automatic object boundary detection, which could eliminate the need for met...
, pages 585–601. Springer. Difei Gao, Ruiping Wang, Shiguang Shan, and Xilin Chen. 2022. CRIC: A VQA dataset for compositional reasoning on vision and commonsense. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):5561–5578. Gael Gendron, Qiming Bao, Michael Witbrock, and Gillian Dobbie. 2024. La...
Processing Systems, 33:16468–16480. OpenAI. 2024. https://openai.com/index/hello-gpt-4o/. Jean Raven. 2003. Raven progressive matrices. In Handbook of Nonverbal Assessment, pages 223–237. Springer. Tanik Saikh, Tirthankar Ghosal, Amish Mittal, Asif Ekbal, and Pushpak Bhattacharyya. 2022. ScienceQA: A novel resour...
without prior information. Incorporating prior information from earlier stages significantly increases the maximum prompt length to 261.2 tokens, posing a challenge for MLLMs to parse effectively. A.2 Generation Template. Table 7 outlines the question templates used for the Direct Answer Task. Notably, it is impractical...
others. Prompt Details: The RAVEN dataset includes various puzzle settings, such as Left-Right, Up-Down, and In-Out, where rules are applied separately to distinct parts of the panels (Figure 9). To address these settings, when we decompose the problem into subproblems, we treat each part independently. For insta...
for brevity. After the prior information is transformed, the prompt is structured as follows: [Extra Setting Info] Below is the information generated from the previous steps, please be aware that it may or may not contain errors: [[Prior Info 1], [Prior Info 2], ...] Question: [question] Please select one of the follow...
tested them on GPT-4o, manually reviewing the output explanations. As shown in Table 12, interestingly, as the questions become more difficult, particularly at the final step, the model increasingly fails to distinguish unanswerable questions, resulting in higher rates of hallucination. Furthermore, under Setting...
relevant and appropriately challenging for its designated stage. The metrics used to evaluate performance in Part A included correctness, clarity, and content validity, with positive rates for each metric provided in Table 13. The positive rate is the proportion of questions answered by "Yes". The results indicate that...
observed with Qwen2-VL-72B, which may already perform well on RAVEN. Incorporating information from earlier stages might introduce misleading details, leading to a significant performance drop. A.8 Additional Details about MSEval. A.8.1 Algorithm Pseudo Code. Algorithm 1 shows the detailed pseudocode for our propose...
of Document structured prompts.
<!DOCTYPE html>
<html>
<body>
<h1> In this visual puzzle, you are given two panels. Each panel is divided into two sections by a vertical line, separating the <strong>left</strong> side from the <strong>right</strong> side, with objects possibly present in both sections. Belo...
Gaussian smoothing. Figure 20: The Two-Panels Comparison accuracy trend for the Direct Answer task as model sizes gradually increase. The trend line is derived using Gaussian smoothing. Algorithm 1 Overall Workflow. Input: Logical Chain $D_t$. Define: $S_t = \{t\} \cup D_t$; model logits $Z = \{z^{(i)}_j \mid j \in S_t,\ i = 1, \ldots, N\}$; all po...
to the object on the <X2>? | In panel 1, does the object on the top-left the same, smaller or larger in size compared to the object on the bottom-right? | Size | Not_Equal(X, X2) | ["The same", "Smaller", "Larger"]
In panel <P>, does the object on the <X> the same, darker or brighter in color compared to the object on the ...
2? If the colors within either panel are already different from each other, select 'Not Comparable.' | Color | Not_Equal(P, P2), Same_Row(P, P2) | ["The same", "Darker", "Brighter", "Not comparable"]
Is the position of all the objects in panel <P> the same as the objects in panel <P2>? | Is the position of all the objects i...
of the inner part, bottom-left of the inner part, bottom-right of the inner part
Panel (<P>): 0, 1, 2, 3, 4, 5, 6, 7
Shape (<S>): triangle, square, pentagon, hexagon, circle
Rules, Number Rule: The number of objects gradually decreases by 1; The number of objects remains constant; The number of objects gradually increases ...
in the panel?
2P: Is the position of all the objects in the left panel the same as the objects in the right panel?
1R: Examine the three panels in the image from left to right and identify the rule that governs the position of the objects.
2R: Examine the three panels in the first row, then the three panels in the seco...
20.18, 26.36, 5.71
MSEval: w/o 2.006, 0.794, 0.965, 0.953, 1.290; w 2.006, 1.082, 0.812, 0.985, 1.013
Llava-13b Acc: w/o 30.91, 31.55, 23.09, 22.36, 15.71; w 30.91, 31.09, 22.36, 21.64, 15.71
Llava-13b MSEval: w/o 1.059, 0.997, 0.941, 0.910, 1.002; w 1.059, 1.009, 0.944, 0.929, 0.975
Random Acc: -, 31.1, 31.7, 25.0, 25.0, 12.5; Random MSEval: -, 1.00, 1.00, 1.00, 1.00, 1.00
Table 10: ...
Final
GPT-4o Acc: Vanilla 73.8, 43.9, 41.8, 50.6, 10.0; Struct. 82.2, 64.4, 47.8, 50.9, 8.6; Doc. 80.8, 44.8, 31.1, 24.9, 10.0
Gemini Acc: Vanilla 75.5, 64.4, 52.6, 57.1, 18.6; Struct. 70.6, 66.4, 52.9, 57.8, 17.1; Doc. 69.6, 51.0, 36.7, 33.1, 14.3
Qwen2-VL (72B) Acc: Vanilla 74.1, 57.8, 47.3, 54.2, 65.7*; Struct. 77.2, 67.7, 55.1, 53.6, 61.4; Doc. 76.5, 63.1, 50.2...
of all the objects in the left panel the same as, darker or brighter than the objects in the right panel? If the colors within either panel are already different from each other, select 'Not Comparable.'
Dependent Stage Choice: ['A: Not comparable', 'B: Darker', 'C: Brighter', 'D: The same']
Dependent Stage Ground T...
']
Dependent Stage Ground Truth: D
Dependent Stage Logits: {'A': 21.625, 'B': 20.875, 'C': 20.0, 'D': 22.25}
Dependent Stage Generated Answer: D
---------------------------------
Dependent Stage Name: two_panels_2_3_left
Dependent Stage Question: Consider only the left part of the two panels in the image. Is the shape...
within either panel are already different from each other, select 'Not Comparable.' (Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle)
Dependent Stage Choice: ['A: Not comparable', 'B: Fewer', 'C: The same', 'D: More']
Dependent Stage Ground Truth: D
Dependent Stage ...
the image. Is the size of all the objects in the left panel the same as, smaller or larger than the objects in the right panel? If the sizes within either panel are already different from each other, select 'Not Comparable.'
Dependent Stage Choice: ['A: Not comparable', 'B: Smaller', 'C: Larger', 'D: The same']
Depend...
Consider only the right part of the two panels in the image. Does the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? If the shapes within either panel are already different from each other, select 'Not Comparable.' (Note: The edge number increase...
arXiv:2505.21851v1 [cs.RO] 28 May 2025. Streaming Flow Policy: Simplifying diffusion/flow-matching policies by treating action trajectories as flow trajectories. Website: https://streaming-flow-policy.github.io. Sunshine Jiang (MIT), Xiaolin Fang (MIT), Nicholas Roy (MIT), Tomás Lozano-Pérez (MIT), Leslie Kaelbling (MIT), Siddharth Ancha...
https://arxiv.org/abs/2505.21851v1
we iteratively integrate the learned velocity field to generate an action trajectory. Sampled trajectories (shown in red) cover both behavior modes in the training data. (b) We find that constructing conditional flows that stabilize around demonstration trajectories reduces distribution shift and improves imitation lear...
$O$, $H$
$\xi$: Action trajectory (chunk), where time is rescaled from $[0, T_{pred}]$ to $[0, 1]$; type $[0,1] \to A$
$\dot\xi$: Time derivative of the action trajectory, $\dot\xi(t) = \frac{d}{dt}\xi(t)$; type $[0,1] \to TA$
$p_D(h, \xi)$: Distribution of observation histories and future action chunks. The training set is assumed to be sampled from this distribution; type $\Delta(H \times ([0,1] \to A))$
$v_\theta(a, t \mid h)$: Learn...
can only be represented in the joint distribution, it can learn local constraints such as joint constraints and convex velocity constraints; see Sec. 9 for more details. In practice, we find that streaming flow policy performs comparably to diffusion policy while being significantly faster. 2 Background and problem fo...
train a neural network velocity field. In particular, we construct a velocity field $v_\xi(a, t)$ and an initial distribution $p^0_\xi(a)$ such that the induced marginal probability distributions $p_\xi(a \mid t)$ form a thin Gaussian "tube" around $\xi$. By "Gaussian tube", we mean that $p_\xi(a \mid t)$ is a narrow Gaussian distribution centered at $\xi(t)$ f...
between a candidate velocity field $v(a, t \mid h)$ and the analytically constructed conditional velocity field $v_\xi(a, t)$ as target. The expectation is over histories and trajectories under the probability distribution $p_D(h, \xi)$, time $t$ sampled uniformly from $[0, 1]$, and action $a$ sampled from the constructed conditional flow k...
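A minimal sketch of such an analytically constructed conditional velocity field, assuming the simple stabilizing form $v_\xi(a, t) = \dot\xi(t) + k(\xi(t) - a)$ with an illustrative gain $k$ (the paper's exact construction may differ):

```python
import math

def xi(t):          # a 1-D demonstration trajectory, for illustration
    return math.sin(2 * math.pi * t)

def xi_dot(t):      # its time derivative
    return 2 * math.pi * math.cos(2 * math.pi * t)

def v_cond(a, t, k=20.0):
    """Conditional velocity field: follow the demonstration's velocity plus
    a stabilizing pull toward xi(t). The tracking error a - xi(t) then
    decays exponentially at rate k along the flow."""
    return xi_dot(t) + k * (xi(t) - a)

# Euler-integrate the velocity field over t in [0, 1], starting from a
# perturbed initial action; the flow trajectory converges onto xi.
dt, a = 1e-3, xi(0.0) + 0.5
for i in range(1000):
    t = i * dt
    a += v_cond(a, t) * dt
```

Because the flow contracts toward the demonstration, small integration or approximation errors are corrected rather than accumulated, which is the stabilization property the paragraph describes.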
action chunk, we integrate the velocity field over $t \in [0, T_{chunk}/T_{pred}]$, producing $T_{chunk}/(T_{pred}\,\Delta t)$ many actions. The action chunk is computed and executed open-loop, i.e., the neural network $v_\theta$ inputs the same observation history $h_{chunk}$ for all integration steps. Importantly, we are able to stream and execute actions on the r...