Müller et al., 2023; Germano et al., 2023; Müller et al., 2024; Kitamura et al., 2024) provide sublinear regret guarantees for stochastic constraints but struggle to generalize to such adversarial cases. The adversarial setting is inherently more challenging due to the dynamic and unpredictable nature of the constraints, compounded by the requirement that error cancellation in constraint violations is not allowed. Adversarial CMDPs are thus crucial for handling dynamic environments, ensuring robust and safe decision-making in situations where conventional stochastic models fall short.

Constraint violation is the standard quantity used to theoretically evaluate the safety of a safe RL algorithm. One commonly used notion evaluates the policies in the average sense and therefore allows error cancellation: as defined by (Efroni et al., 2020), it sums positive (unsafe) and negative (safe) constraint violations and only requires the total to be sublinear during learning. In this paper we consider a stronger notion of constraint violation, focusing exclusively on the sum of positive errors. To illustrate, consider a cost function $d(\pi_k)$ that equals $-1$ when the policy $\pi_k$ used in episode $k$ is safe, and $1$ when it is unsafe. If half of the policies over $K$ episodes are safe and the other half are unsafe, the weak constraint violation, which permits cancellation, results in $[\sum_{k=1}^K d(\pi_k)]^+ = 0$, where $[\cdot]^+ = \max\{\cdot, 0\}$. However, under the strong constraint violation, which disallows cancellation, the total violation becomes $\sum_{k=1}^K [d(\pi_k)]^+ = K/2$. Clearly, a weak sublinear constraint violation does not ensure relatively safe policies during learning.

In this work, we aim to address two fundamental research questions:

RQ1: Can we design a unified algorithm that achieves the optimal order of regret and hard constraint violation in unknown CMDPs with both stochastic and adversarial costs under minimal assumptions?

RQ2: What are the bottlenecks to further improving the bound?

CMDPs with cumulative constraints that allow cancellation have been extensively studied under both model-free (Wei et al., 2022a;b; 2023; Ghosh et al., 2022; Bai et al., 2022) and model-based approaches (Ding et al., 2021; Liu et al., 2021a; Bura et al., 2021; Singh et al., 2020; Chen et al., 2022; Efroni et al., 2020). The studies by (Qiu et al., 2020; Stradi et al., 2024a) focus on CMDPs with only an adversarial reward function. Recent work (Germano et al., 2023; Stradi et al., 2024c) considers online learning in CMDPs under strong constraint violation for the long-term average cost, and sublinear regret and violation results are well established there. However, we note that in our scenario the constraints are sufficiently strict, particularly in adversarial settings, that an on-average safe policy fails to guarantee safety in each individual episode. Consequently, focusing on the long-term average constraint alone is less meaningful under these conditions. Other works such as (Ding & Lavaei, 2022; Wei et al., 2023) consider scenarios where rewards, costs, and transition kernels are non-stationary, assuming bounded total variation. However, none of the works mentioned above applies to settings with adversarial costs, and they only address weak constraint violations.

To address the aforementioned challenges, we propose the Optimistic Mirror Descent
Primal-Dual (OMDPD) algorithm, which ensures optimal regret and strong constraint violation bounds with respect to the number of episodes $K$, regardless of whether the reward and cost functions are generated stochastically or adversarially. Our contributions are summarized as follows:

• We present the first work addressing online CMDPs with anytime adversarial constraints. Our work advances the theoretical understanding of CMDPs under unknown adversarial cost functions by proposing a novel unified algorithm, OMDPD, capable of handling both stochastic and adversarial rewards/costs without relying on Slater's condition. OMDPD achieves $\tilde{O}(\sqrt{K})$ regret and $\tilde{O}(\sqrt{K})$ strong constraint violation when rewards and costs are either stochastic or adversarial, both of which are optimal with respect to the total number of learning episodes $K$.

• It is well known that one of the bottlenecks preventing online CMDP algorithms from achieving better bounds is the estimation error of the rewards/costs and transition kernels. We further show that if a perfect simulator (generative model) is available, so that the reward and transition kernels can be estimated accurately (the cost function remains unknown and may be adversarial), our regret bound can be improved to $O(1)$ when the (unknown) reward function is fixed.

2. More Related Work

Müller et al. (2023) propose an augmented Lagrangian method for CMDPs with strong constraint violations, under the requirement of a strictly safe policy that is known in advance. Stradi et al. (2024c) propose a primal-dual algorithm (CPD-PO), building on the policy optimization framework of (Luo et al., 2021), which achieves $\tilde{O}(\sqrt{K})$ regret. However, neither of these works addresses the adversarial cost setting. In addition, Stradi et al. (2024b) consider the adversarial reward setting but still assume stochastic constraints, requiring strong assumptions such as access to a strictly feasible policy and knowledge of its associated cost. Clearly, Stradi et al. (2024b) also cannot be applied to adversarial constraint scenarios. Additional studies by (Müller et al., 2024) and (Kitamura et al., 2024) focus on last-iterate convergence under stochastic constraints, achieving rates of $\tilde{O}(K^{0.93})$ and $\tilde{O}(K^{6/7})$, respectively. These results crucially rely on a stationary setting. A detailed comparison of the theoretical results between our algorithm and the most closely related existing studies is summarized in Table 1.

Algorithm | Regret | Adversarial Violation | Stochastic Violation | Slater's Condition | Known Safe Policy
(Efroni et al., 2020) | $O(\sqrt{K})$ | N/A | $O(\sqrt{K})$ | ✓ | No
(Müller et al., 2023)* | $O(\sqrt{K})$ | N/A | $O(\sqrt{K})$ | ✓ | Yes
(Stradi et al., 2024c)* | $\tilde{O}(\sqrt{K})$ | N/A | $\tilde{O}(\sqrt{K})$ | ✓ | No
(Müller et al., 2024)* | $\tilde{O}(K^{0.93})$ | N/A | $\tilde{O}(K^{0.93})$ | ✓ | No
(Kitamura et al., 2024)* | $\tilde{O}(K^{6/7})$ | N/A | $\tilde{O}(K^{6/7})$ | ✓ | No
OMDPD (this work) | $\tilde{O}(\sqrt{K})$ | $\tilde{O}(\sqrt{K})$ | $\tilde{O}(\sqrt{K})$ | ✗ | No

Table 1. Comparison between OMDPD and existing related work. We omit the dependence on the sizes of the state and action spaces and the number of steps in the CMDP. *: Considers the stronger notion of constraint violation, which disallows cancellation; (Efroni et al., 2020) only consider the weaker version. (Müller et al., 2024) need access to a strictly feasible policy. More discussion can be found in Section 2.

3. Preliminaries

Notation. For any $n \in \mathbb{N}$, we use the shorthand $[n]$ for the set of integers $\{1, \ldots, n\}$. For $x \in \mathbb{R}$, we define $[x]^+ := \max\{0, x\}$, the positive truncation of $x$. Throughout the paper, $\|\cdot\|$ denotes the Euclidean norm. Additionally, for a given 1-strongly convex function $U$, we define the Bregman divergence between two points as $D(a, b) = U(a) - U(b) - \langle \nabla U(b), a - b \rangle$.

We consider a finite-horizon episodic CMDP, defined as a tuple $M = (\mu, S, A, H, \{P_h\}_{h=1}^H, \{r_k\}_{k=1}^K, \{d_k\}_{k=1}^K)$, where $\mu$ is the initial state distribution and $S$ and $A$ are the state and action spaces. We assume that both the state space and the action space are finite,
with cardinalities $|S| = S$ and $|A| = A$. In online learning over finite-horizon episodic CMDPs, each episode $k \in [K]$ has $H$ steps, and at each step $h \in [H]$ we use $P_h(s' \mid s, a): S \times A \times S \to [0, 1]$ to denote the transition kernel from the state-action pair $(s, a)$ to a next state $s'$ at step $h$. Without loss of generality, we assume that the reward function $\{r_k\}_{k=1}^K$ is a sequence of vectors, one per episode $k \in [K]$; in particular, $r_k = (r_{k,1}, \ldots, r_{k,H})$, where $r_{k,h}: S \times A \to [0, 1]$ for all $h \in [H]$, $k \in [K]$. Similarly, the cost function $d_{k,h}$ at step $h$ in episode $k$ is $d_{k,h}: S \times A \to [-1, 1]$; both rewards and costs are bounded for any $h \in [H]$, $k \in [K]$. The transition kernels, reward functions, and cost functions are unknown. In this paper, we consider stochastic rewards, where $r_k$ is a random variable distributed according to a distribution $R$ for every $k \in [K]$, together with two different types of cost functions, the stochastic constraint and the adversarial constraint:

• Stochastic cost: $d_k$ is a random variable distributed according to a fixed probability distribution $D$ for every $k \in [K]$.
• Adversarial cost: the $d_k$ are adversarially selected and unknown.

In online CMDPs, the agent interacts with the CMDP by executing a policy $\pi = \{\pi_1, \pi_2, \ldots, \pi_H\}$, where $\pi_h(\cdot \mid s) \in \Delta(A)$ and $\Delta(\cdot)$ is a probability simplex; we denote by $\pi_h(\cdot \mid s)$ the action distribution at state $s \in S$. Whenever the agent takes an action $a$ in state $s$ at step $h$ of episode $k$, it observes a reward $r_{k,h}(s, a)$ sampled from a fixed distribution, and a cost $d_{k,h}(s, a)$ sampled either from a fixed distribution in the stochastic setting or chosen by an adversary in the adversarial setting. The value functions for the reward and the cost under a policy $\pi$ and transition kernel $p$ are defined as:
$$V^{\pi}(r_k, p) := \mathbb{E}\Big[\sum_{h=1}^H r_{k,h}(s_h, a_h) \,\Big|\, s_1, \pi, p\Big] \quad (1)$$
$$V^{\pi}(d_k, p) := \mathbb{E}\Big[\sum_{h=1}^H d_{k,h}(s_h, a_h) \,\Big|\, s_1, \pi, p\Big] \quad (2)$$
In the following, we denote by $\Pi$ the set of all possible policies the agent can choose from. We are interested in solving the following optimization problem:
$$\pi^* \in \arg\max_{\pi \in \Pi} V^{\pi}(\bar{r}, p) \quad \text{s.t.} \quad V^{\pi}(\bar{d}, p) \le 0 \ \text{(stochastic cost)}, \qquad V^{\pi}(d_k, p) \le 0, \ \forall k \in [K] \ \text{(adversarial cost)}, \quad (3)$$
where $\bar{r} := \mathbb{E}_{r \sim R}[r]$ and $\bar{d} := \mathbb{E}_{d \sim D}[d]$. The solution of this offline optimization problem (3) is used as the baseline against which the performance of the online algorithm is evaluated. The goal of the online CMDP problem is to learn a policy that minimizes the cumulative regret and the strong cumulative constraint violation after $K$ episodes, defined as:
$$\mathrm{Regret}(K) = \sum_{k=1}^K \big[V^{\pi^*}(\bar{r}, p) - V^{\pi_k}(\bar{r}, p)\big] \quad (4)$$
$$\mathrm{Violation}(K) = \sum_{k=1}^K \big[V^{\pi_k}(\bar{d}, p)\big]^+ \quad \text{(stochastic cost)} \quad (5)$$
$$\mathrm{Violation}(K) = \sum_{k=1}^K \big[V^{\pi_k}(d_k, p)\big]^+ \quad \text{(adversarial cost)} \quad (6)$$
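To make the difference between the weak notion (which permits cancellation) and the strong violation in Eqs. (5)-(6) concrete, the following minimal Python sketch reproduces the introductory example with half safe and half unsafe episodes; the per-episode values are synthetic and purely illustrative, not the output of any algorithm in this paper.

```python
import numpy as np

# Compare the weak violation, which lets safe episodes cancel unsafe ones,
# with the strong violation of Eqs. (5)-(6), which keeps only positive parts.

rng = np.random.default_rng(0)
K = 1000
# Per-episode constraint values V^{pi_k}(d, p): -1 means safe, +1 means unsafe.
episode_violations = rng.permutation(
    np.r_[np.full(K // 2, -1.0), np.full(K // 2, 1.0)]
)

weak_violation = max(episode_violations.sum(), 0.0)           # [sum_k d(pi_k)]^+
strong_violation = np.maximum(episode_violations, 0.0).sum()  # sum_k [d(pi_k)]^+

print(f"weak (cancellation allowed): {weak_violation:.1f}")    # 0.0
print(f"strong (no cancellation):    {strong_violation:.1f}")  # K/2 = 500.0
```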
Alternatively, the online optimization problem (3) can also be represented using the notion of occupancy measures (Altman, 1999) $\{q^{\pi}_h(s, a; p)\}_{h=1}^H$ under a policy $\pi$ and transition kernel $p$. For every $s \in S$, $a \in A$, the occupancy measure is defined as:
$$q^{\pi}_h(s, a, s') = \Pr(s_{h+1} = s', s_h = s, a_h = a \mid p, \pi, s_1), \qquad q^{\pi}_h(s, a) = \sum_{s' \in S} q^{\pi}_h(s, a, s'). \quad (7)$$
It is well known that the CMDP problem can be formulated as a linear program (Altman, 1999); the optimal occupancy measure can then be obtained by solving:
$$\max_{q \in Q} \ \bar{r}^{\top} q \quad (8)$$
$$\text{s.t.} \ \bar{d}^{\top} q \le 0 \ \text{(stochastic cost)} \quad (9)$$
$$\text{s.t.} \ d_k^{\top} q \le 0, \ \forall k \in [K] \ \text{(adversarial cost)} \quad (10)$$
where $q \in [0, 1]^{SAH}$ is the occupancy measure vector with values defined in Eq. (7), and $Q$ is the set of all valid occupancy measures. With a slight abuse of notation, $\bar{r} \in [0, 1]^{SAH}$ and $\bar{d}$ (respectively $d_k$) $\in [-1, 1]^{SAH}$ denote the reward and cost vectors. Conversely, for any $q$, the corresponding policy can be reconstructed as:
$$\pi^{q}_h(a \mid s) = \frac{q_h(s, a)}{\sum_{a'} q_h(s, a')}. \quad (11)$$
Given the notation above, we denote the optimal solution of the optimization problem (3) by $q^*$, which also serves as the baseline. In this paper, following standard assumptions, we consider the bandit feedback setting, in which the learner observes only the rewards and costs of the chosen actions in the stochastic cost setting. In the adversarial cost setting, however, the full cost vector $d_k$ is revealed after episode $k$, while the reward remains bandit feedback throughout.

4. Main Algorithm

In this section, we introduce our main algorithm and the design choices behind it that ensure the optimal order of the regret and violation bounds.

4.1. Optimistic Estimates

To encourage exploration of the unknown CMDP, we first use the principle of optimistic estimation (Auer et al., 2008). Let $n_h^{k-1}(s, a) = \sum_{k'=1}^{k-1} \mathbb{1}\{s_h^{k'} = s, a_h^{k'} = a\}$ denote the number of times the state-action pair $(s, a)$ has been visited at step $h$ before episode $k$. Here, $(s_h^{k'}, a_h^{k'})$ denotes the state-action pair visited at step $h$ in episode $k'$, and $\mathbb{1}\{\cdot\}$ is the indicator function. The empirical transition kernels, rewards, and costs are then calculated as follows:
$$\hat{p}_h^{k-1}(s' \mid s, a) := \frac{\sum_{k'=1}^{k-1} \mathbb{1}\{s_h^{k'} = s, a_h^{k'} = a, s_{h+1}^{k'} = s'\}}{n_h^{k-1}(s, a) \vee 1}, \quad (12)$$
$$\hat{r}_h^{k-1}(s, a) := \frac{\sum_{k'=1}^{k-1} R_h^{k'}(s, a)\, \mathbb{1}\{s_h^{k'} = s, a_h^{k'} = a\}}{n_h^{k-1}(s, a) \vee 1}, \quad (13)$$
$$\hat{d}_h^{k-1}(s, a) := \frac{\sum_{k'=1}^{k-1} D_h^{k'}(s, a)\, \mathbb{1}\{s_h^{k'} = s, a_h^{k'} = a\}}{n_h^{k-1}(s, a) \vee 1}, \quad (14)$$
where $a \vee b := \max\{a, b\}$. Note that Eq. (14) is only used in the stochastic cost case. We then define the optimistic rewards, costs, and the confidence set of transition kernels $B_{k,h}(s, a)$ as
$$\tilde{r}_{k,h}(s, a) := \hat{r}_h^{k-1}(s, a) + \beta^r_{k,h}(s, a), \quad (15)$$
$$\tilde{d}_{k,h}(s, a) := \hat{d}_h^{k-1}(s, a) - \beta^d_{k,h}(s, a), \quad (16)$$
$$B_{k,h}(s, a) := \big\{\tilde{p}_h(\cdot \mid s, a) \in \Delta(S) \ \big|\ \forall s' \in S: |\tilde{p}_h(s' \mid s, a) - \hat{p}_h^{k-1}(s' \mid s, a)| \le \beta^p_{k,h}(s, a, s')\big\}, \qquad B_k := \big\{\tilde{p} \ \big|\ \forall s, a, h: \tilde{p}_h(\cdot \mid s, a) \in B_{k,h}(s, a)\big\}, \quad (17)$$
where $\beta^p_{k,h}(s, a, s') > 0$ is a UCB-type bonus denoting the confidence threshold for the transitions, defined in Appendix B.1. We can thus construct a candidate set for selecting the policy at each episode $k$:
$$Q_k := \{q^{\pi}(p) \in \mathbb{R}^{SAH} \mid \pi \in \Pi, \ p \in B_k\}, \quad (18)$$
where $q^{\pi}(p) \in \mathbb{R}^{SAH}$ denotes the stacked occupancy measure vector under transition kernel $p$ and policy $\pi$, and $\Pi$ is the set of all feasible policies.
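As an illustration of how the quantities in Eqs. (11)-(16) can be computed in practice, the sketch below implements the count-based empirical estimates, the optimistic bonuses, and the occupancy-measure-to-policy map. The array shapes, the single bonus constant `L_delta`, and the uniform fallback at unvisited states are simplifying assumptions of ours; they do not reproduce the exact constants of Appendix B.1.

```python
import numpy as np

def optimistic_estimates(counts, reward_sums, cost_sums, L_delta):
    """Sketch of Eqs. (12)-(16); counts, reward_sums, cost_sums have shape (H, S, A)."""
    n = np.maximum(counts, 1)            # n ∨ 1
    r_hat = reward_sums / n              # empirical reward, Eq. (13)
    d_hat = cost_sums / n                # empirical cost, Eq. (14) (stochastic costs only)
    bonus = np.sqrt(L_delta / n)         # beta^r = beta^d (illustrative bonus)
    r_tilde = r_hat + bonus              # optimistic reward, Eq. (15)
    d_tilde = d_hat - bonus              # optimistic (lower) cost, Eq. (16)
    return r_tilde, d_tilde

def policy_from_occupancy(q):
    """Eq. (11): q has shape (H, S, A); returns pi[h, s, a] = q_h(s,a) / sum_a' q_h(s,a')."""
    denom = q.sum(axis=2, keepdims=True)
    uniform = np.full_like(q, 1.0 / q.shape[2])   # fallback at states with zero occupancy
    return np.where(denom > 0, q / np.maximum(denom, 1e-12), uniform)
```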
4.2. Surrogate Objective Function

The objective of online CMDP learning is twofold: (1) to control the constraint violations over time, and (2) to maximize the cumulative reward. Thus, after constructing the feasible candidate set $Q_k$ for the policy, our algorithm aims to solve the following optimization problem at each episode:
$$\max_{q \in Q_k, \ d_k^{\top} q \le 0} \ r_k^{\top} q. \quad (19)$$
Inspired by (Sinha & Vaze, 2024; Guo et al., 2022), we consider the following surrogate objective function with an exponential potential (Lyapunov) function $\Phi(x) := \exp(\beta x) - 1$, for some constant $\beta > 0$:
$$f_k(q) = \alpha\big(-\tilde{r}_k^{\top} q + \Phi'(\lambda_k)[\tilde{d}_k^{\top} q]^+\big) - \tfrac{1}{2}\|q - q_k\|^2. \quad (20)$$
Using the predictions (estimates) of the reward and cost functions together with optimistic online mirror descent, we can achieve a tighter bound. The dual variable $\lambda_k$ tracks the cumulative constraint violations during learning. Through an analysis of the drift term $\Phi(\lambda_k) - \Phi(\lambda_{k-1})$, the algorithm adaptively regulates long-term violation behavior: exponential growth in $\Phi(\lambda_k)$ dynamically amplifies the constraint penalty in high-violation regimes, while the bounded drift guarantees that violations remain controllable. Together, these components enforce safe exploration. Specifically, we define the dual variable update as
$$\lambda_k = \lambda_{k-1} + \alpha[\tilde{d}_k^{\top} q_k]^+, \quad (21)$$
where $\tilde{d}_k$ is the estimated constraint vector (Eq. 16) in the stochastic cost case and is replaced by $d_k$ when the cost is adversarial. The operator $[\cdot]^+$ keeps only the positive part of the violation, which is what allows the hard constraint to be controlled efficiently. We then employ the Lyapunov function $\Phi(\lambda_k)$ to track the evolution of these violations. We will show later that the one-step Lyapunov drift can be bounded by:
$$\Phi(\lambda_k) - \Phi(\lambda_{k-1}) \le \Phi'(\lambda_k) \cdot \alpha[\tilde{d}_k^{\top} q_k]^+. \quad (22)$$
Then, using the drift-plus-penalty framework (Neely, 2010), we minimize the surrogate cost functions $\{f_k(q)\}_{k=1}^K$, which combine the drift upper bound (Eq. 22) with the (negated) reward. More precisely, by selecting $q$ to minimize $f_k(q)$ within the feasible set $Q_k$ and summing over the $K$ episodes, we obtain
$$\Phi(\lambda_K) + \alpha \sum_{k=1}^K \big(\tilde{r}_k^{\top} q^* - \tilde{r}_k^{\top} q_k\big) \le \underbrace{\sum_{k=1}^K \big(f_k(q_k) - f_k(q^*)\big)}_{(\mathrm{I})}. \quad (23)$$
We refer to the term $(\mathrm{I})$ as $\mathrm{Regret}_{\mathrm{alg}}$. Hence, bounding this algorithmic regret is crucial for bounding both the cumulative reward and the constraint violation.

4.3. Optimistic OMD

To obtain a tight bound on term $(\mathrm{I})$, we adopt the Optimistic Online Mirror Descent (OMD) algorithm to dynamically control the constraints and adapt to the evolving environment. The algorithm alternates between two phases in each iteration. In the optimistic phase, the algorithm constructs an anticipated occupancy measure $\hat{q}_{k+1}$ by solving the following regularized optimization problem:
$$\hat{q}_{k+1} = \arg\min_{q \in Q_k} \ \eta_k \langle q, \nabla f_k(q_k)\rangle + D(q, \hat{q}_k),$$
where $\eta_k$ is the learning rate and $D(q, \hat{q}_k)$ is the Bregman divergence, which ensures smooth updates. This step predicts the next occupancy measure by incorporating the gradient of the current potential function and the regularization. In the refinement phase, the algorithm updates its policy by leveraging the predicted gradient $\nabla \hat{f}_{k+1}(\hat{q}_{k+1})$. Following the setup in (Rakhlin & Sridharan, 2013), we assume $\hat{f}_{k+1} = f_k$. The subsequent occupancy measure $q_{k+1}$ is obtained by solving:
$$q_{k+1} = \arg\min_{q \in Q_k} \ \eta_{k+1} \langle q, \nabla \hat{f}_{k+1}(\hat{q}_{k+1})\rangle + D(q, \hat{q}_{k+1}).$$
After obtaining $q_{k+1}$, we construct $\pi_{k+1}$ via Eq. (11), execute the policy, and obtain new estimates of the reward function, the transition kernels, and, in the stochastic setting, the cost function.
The optimistic update is critical for enabling a tighter bound by incorporating historical gradients and occupancy measures. The full algorithm is presented in Algorithm 1.
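As a complementary illustration of Algorithm 1, the sketch below puts the pieces of Sections 4.2-4.3 together in simplified form: the decision variable lives on a plain probability simplex rather than the occupancy-measure set $Q_k$, the Bregman divergence is Euclidean (so each mirror step becomes a projected gradient step), the learning rate is a fixed constant, and the confidence-set update is omitted. All helper names are ours; this is a sketch under those assumptions, not a faithful implementation of the full algorithm.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def omdpd_sketch(r_tilde, d_tilde, alpha, beta, eta=0.1):
    """r_tilde, d_tilde: (K, n) arrays of estimated reward / cost vectors."""
    K, n = r_tilde.shape
    q = np.full(n, 1.0 / n)       # current iterate q_k
    q_hat = q.copy()              # anticipated iterate \hat{q}_k
    lam = 0.0                     # dual variable lambda_k
    for k in range(K):
        phi_prime = beta * np.exp(beta * lam)                 # Phi'(lambda_k)
        # Subgradient of f_k at q_k (the -(q - q_k) part vanishes at q = q_k).
        grad = alpha * (-r_tilde[k] + phi_prime * d_tilde[k] * (d_tilde[k] @ q > 0))
        # Optimistic phase: anticipated point built from the current gradient.
        q_hat = project_simplex(q_hat - eta * grad)
        # Refinement phase: reuse the same gradient as the prediction f_hat_{k+1} = f_k.
        q = project_simplex(q_hat - eta * grad)
        # Dual update (Eq. 21): accumulate only the positive part of the violation.
        lam += alpha * max(float(d_tilde[k] @ q), 0.0)
    return q, lam
```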
5. Main Result

We first provide the main theoretical results for OMDPD.

Theorem 5.1. Choose $\alpha = \frac{1}{2(1+\sqrt{L_\delta})SAH}$, $\beta = \frac{SAH}{8\sqrt{C}\sqrt{6SAHK}}$, and denote $C = \sup_{q_1, q_2 \in Q} D(q_1, q_2)$, where $L_\delta$ is defined in Appendix B.1. Let $\nabla_k = \nabla f_k(q_k)$, $\nabla_{k-1} = \nabla f_{k-1}(q_{k-1})$, and consider
$$\eta_k = \sqrt{C}\min\Bigg\{\frac{1}{\sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2} + \sqrt{\sum_{i=1}^{k-2}\|\nabla_i - \nabla_{i-1}\|_2^2}}, \ 1\Bigg\}.$$
Then, with probability at least $1 - 2\delta$, OMDPD achieves:
$$\mathrm{Regret}(K) \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3 + \sqrt{C}\sqrt{SAHK} + SAH\big),$$
$$\mathrm{Violation}(K) \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3 + \sqrt{C}\sqrt{SAHK}\big) \quad \text{(both settings)}.$$

Algorithm 1 OMDPD
Input: $q_1, \hat{q}_1 \in Q_1$, $\tilde{r}_1 = \tilde{d}_1 = \lambda_1 = 0$, learning rate $\eta_k$.
Parameters: $\Phi(x) = \exp(\beta x) - 1$, $\alpha = \frac{1}{2(1+\sqrt{L_\delta})SAH}$, $L_\delta$ defined in Appendix B.1.
Define the function $f_k$ by $f_k(q) = \alpha(-\tilde{r}_k^{\top} q + \Phi'(\lambda_k)[\tilde{d}_k^{\top} q]^+) - \frac{1}{2}\|q - q_k\|^2$.
for $k = 1$ to $K$ do
  Construct the optimistic occupancy measure $\hat{q}_{k+1} = \arg\min_{q \in Q_k} \eta_k\langle q, \nabla f_k(q_k)\rangle + D(q, \hat{q}_k)$.
  Assume $\hat{f}_{k+1} = f_k$; compute $\eta_{k+1}$ and update $q_{k+1} = \arg\min_{q \in Q_k} \eta_{k+1}\langle q, \nabla\hat{f}_{k+1}(\hat{q}_{k+1})\rangle + D(q, \hat{q}_{k+1})$.
  Construct $\pi_{k+1}$ from $q_{k+1}$, execute the policy, and obtain the estimates $\tilde{r}_{k+1}, \tilde{d}_{k+1}$ via Eqs. (15)-(16); $d_{k+1}$ is revealed to the agent in the adversarial case.
  Update $\lambda_{k+1} = \lambda_k + \alpha[\tilde{d}_{k+1}^{\top} q_{k+1}]^+$ (stochastic case) or $\lambda_{k+1} = \lambda_k + \alpha[d_{k+1}^{\top} q_{k+1}]^+$ (adversarial case).
  Update the set $Q_{k+1}$ via Eqs. (17)-(18).
end for
Return: $\pi_{K+1}$

Theorem 5.1 establishes the optimal $\tilde{O}(\sqrt{K})$ regret and constraint violation bounds under minimal assumptions. This is the first result with optimal order in terms of the total number of episodes $K$ for online CMDPs with anytime adversarial constraints. Our results do not rely on Slater's condition, a common assumption requiring the existence of a strictly feasible solution. In the adversarial setting the adversary can make the slackness arbitrarily small, so bounds that depend on it can become extremely large. Removing this restrictive assumption is a key theoretical contribution, as it aligns the algorithmic framework more closely with practical settings where Slater's condition cannot be ensured a priori.

Remark 5.2. Our approach depends on the cumulative variation of consecutive gradients, which is often very small when the reward is fixed and known or can be accurately estimated. Specifically, by choosing $\beta \le \frac{2SAH}{3.5\sqrt{C}\sqrt{2SAHK}}$ and setting $\alpha, \eta_k$ as in Theorem 5.1, we can ensure that $\sum_{k=1}^K [\tilde{r}_k^{\top} q^* - \tilde{r}_k^{\top} q_k]$ is bounded by $O(1)$. Consequently, in a CMDP where a generative model or perfect simulator is available, so that the transition kernels and reward functions can be estimated accurately (which eliminates the estimation step of Section 4.1 and hence the error term $\tilde{O}(\sqrt{NSAH^3K})$), this term is replaced by a constant independent of the number of episodes $K$. Thus, the regret is bounded as $O(1)$. The detailed proof is deferred to Appendix D.2.

Remark 5.3 (The Constant Bound $C$). The constant $C$ depends on the divergence used. If $D$ is chosen as the KL divergence, it does not admit a uniform upper bound over the simplex, as $\mathrm{KL}(q\|q')$ may go to infinity. However, a smoothing trick can be applied to keep all updated distributions bounded away from the boundary of the simplex. Such smoothing ensures a bound on $C$ that is independent of the time horizon $K$. This technique is standard in online convex optimization under entropy regularization; more details can be found in (Wei et al., 2020).
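The smoothing trick mentioned in Remark 5.3 can be illustrated in a few lines of Python: mixing each iterate with the uniform distribution keeps every coordinate at least $\gamma/n$, so the KL divergence between any two smoothed iterates is at most $\log(n/\gamma)$, a bound independent of $K$. The mixing weight $\gamma$ below is an arbitrary illustrative choice, not a constant from the paper.

```python
import numpy as np

def smooth(q, gamma=1e-3):
    """Mix q with the uniform distribution so every entry is at least gamma / n."""
    n = q.size
    return (1.0 - gamma) * q + gamma / n

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

n = 50
rng = np.random.default_rng(1)
p_s = smooth(rng.dirichlet(np.full(n, 0.1)))   # nearly sparse distribution, smoothed
q_s = smooth(rng.dirichlet(np.full(n, 0.1)))
# After smoothing, KL(p || q) <= log(n / gamma), independently of the horizon.
print(kl(p_s, q_s), np.log(n / 1e-3))
```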
5.1. Sketch of the Theoretical Analysis

In this section, we present the theoretical analysis of Algorithm 1. We first introduce the following facts for the CMDPs considered in this paper.

Fact 5.4. For any $q_1, q_2 \in Q_k$ and any $k \in [K]$, we have $\|q_1 - q_2\| \le \sqrt{SAH}$.

Fact 5.5. For any $\tilde{r}_k$, $d_k$, or $\tilde{d}_k$, the reward/cost value function in terms of $q \in Q_k$ is convex and Lipschitz continuous, i.e., $|\tilde{r}_k^{\top} q_1 - \tilde{r}_k^{\top} q_2| \le (1+\sqrt{L_\delta})\sqrt{SAH}\,\|q_1 - q_2\|$, $|d_k^{\top} q_1 - d_k^{\top} q_2| \le \sqrt{SAH}\,\|q_1 - q_2\|$, and $|\tilde{d}_k^{\top} q_1 - \tilde{d}_k^{\top} q_2| \le (1+\sqrt{L_\delta})\sqrt{SAH}\,\|q_1 - q_2\|$ for all $q_1, q_2 \in Q_k$ and all $k$, where $L_\delta$ is the logarithmic term defined in Appendix B.1.

Now we introduce the good event, which captures the confidence of the current estimates and will be used to prove that the policies used by OMDPD are comparable to the optimal solution. Our goal is to show that, with high probability, the true transition kernel lies in our confidence set, so that the optimal solution remains feasible under the current estimates. We first show that the good event happens with high probability; the detailed proof is deferred to Appendix B.1 due to the page limit.

Lemma 5.6. With probability at least $1 - \delta$, the good event $G$ holds, i.e., $\Pr[G] \ge 1 - \delta$, where $\delta \in (0, 1)$.

Essentially, the good event states that our estimate of the CMDP model is close to the true underlying model with high probability under UCB-type exploration. Next, we show that, conditioned on the good event, the optimal solution of the CMDP problem in Eq. (3) is a feasible policy for the set $Q_k$ in every episode $k \in [K]$, which makes it possible to bound the regret and violation. The detailed proof can be found in Appendix B.1.

Lemma 5.7. Conditioned on the good event $G$, the optimal policy $\pi^*$ is a feasible solution in every episode $k \in [K]$, i.e.,
$$\pi^* \in \big\{\pi : \tilde{d}_k^{\top} q^{\pi}(p') \le 0, \ p' \in B_k\big\}.$$
Therefore $\pi^*$ is a feasible solution for every episode $k \in [K]$, where $q^{\pi^*}$ is the occupancy measure under the optimal policy $\pi^*$; we write $q^*$ for $q^{\pi^*}$ for simplicity.

Lemma 5.7 ensures that the optimal solution $q^*$ is feasible with respect to the confidence set at every episode $k$, which makes it comparable to the policy chosen by OMDPD.

Upper Bound on $\mathrm{Regret}_{\mathrm{alg}}$ (Eq. (23)). Based on the good event $G$, we first present the upper bound on $\mathrm{Regret}_{\mathrm{alg}}$ obtained by using optimistic online mirror descent to select the policies in Algorithm 1.

Lemma 5.8. Let $C = \sup_{q_1, q_2 \in Q} D(q_1, q_2)$, $\nabla_k = \nabla f_k(q_k)$, $\nabla_{k-1} = \nabla f_{k-1}(q_{k-1})$, and define the learning rate as
$$\eta_k = \sqrt{C}\min\Bigg\{\frac{1}{\sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2} + \sqrt{\sum_{i=1}^{k-2}\|\nabla_i - \nabla_{i-1}\|_2^2}}, \ 1\Bigg\}.$$
Then the algorithmic regret is bounded as
$$\mathrm{Regret}_{\mathrm{alg}} \le 3.5\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K \|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg).$$

This lemma shows that the upper bound depends on the cumulative one-step variation of the gradients of the surrogate objective over the $K$ episodes, which can in turn be bounded by $\tilde{O}(\sqrt{K})$. The following lemma specifies how the term $\sqrt{\sum_{k=1}^K \|\nabla_k - \nabla_{k-1}\|_2^2}$ can be bounded.

Lemma 5.9. Let $\nabla_k = \nabla f_k(q_k)$ denote the subgradient of the surrogate objective function $f_k$ (Eq. (20)) evaluated at $q_k$. Under OMDPD, the cumulative variation of consecutive gradients is bounded as:
$$\sqrt{\sum_{k=1}^K \|\nabla_k - \nabla_{k-1}\|_2^2} \le \underbrace{\sqrt{\sum_{k=1}^K \|\tilde{r}_k - \tilde{r}_{k-1}\|^2}}_{(\mathrm{i})} + \underbrace{\sqrt{\sum_{k=1}^K \|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2}}_{(\mathrm{ii})} + \underbrace{\sqrt{2\sum_{k=1}^K \|\tilde{r}_k - \tilde{r}_{k-1}\|\,\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|}}_{(\mathrm{iii})} \le \frac{\sqrt{6SAHK}\,(1 + \Phi'(\lambda_K))}{SAH}.$$

This lemma establishes that the aggregated variation of the gradients under OMDPD is bounded by $\tilde{O}(\sqrt{K})(1 + \Phi'(\lambda_K))$, up to polynomial factors in $S$, $A$, and $H$.
We also recall the foundational inequality $\Phi(\lambda_K) + \alpha\sum_{k=1}^K(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k) \le \sum_{k=1}^K(f_k(q_k) - f_k(q^*))$. Here, for simplicity, we momentarily ignore the factor of $SAH$ to discuss how the $O(1)$ result in Remark 5.2 can be achieved. First, choosing the function $\Phi(x) = \exp(\beta x) - 1$ ensures that $\Phi'(\lambda_K)$ can be combined with $\Phi(\lambda_K)$ in the foundational inequality. Using Lemma 5.9, we obtain $\Phi(\lambda_K) + \alpha\sum_{k=1}^K(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k) \le \sqrt{K} + \Phi'(\lambda_K)\sqrt{K}$. Rearranging gives $\alpha\sum_{k=1}^K(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k) \le \exp(\beta\lambda_K)(\beta\sqrt{K} - 1) + 1 + \sqrt{K}$. Choosing $\beta$ as in Theorem 5.1 so that $\beta\sqrt{K} - 1 \le 0$, it follows that $\sum_{k=1}^K(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k) \le \sqrt{K}$, which is an $O(\sqrt{K})$ bound. Now, when the reward is fixed, terms $(\mathrm{i})$ and $(\mathrm{iii})$ vanish (because $\tilde{r}_k = \tilde{r}_{k-1}$), which yields $\alpha\sum_{k=1}^K(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k) \le \exp(\beta\lambda_K)(\beta\sqrt{K} - 1) + 1$, so the $\sqrt{K}$ factor disappears and a suitable $\beta$ yields an $O(1)$ bound. The details showing $\sum_{k=1}^K[\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k] \le O(1)$ are deferred to Appendix D.2. Consequently, using Lemmas 5.8 and 5.9, we can now prove Theorem 5.1.

5.2. Proof of the Main Theorem

To prove the main theorem, we first note that the regret can be decomposed as:
$$\mathrm{Regret}(K) = \underbrace{\sum_{k=1}^K \big[V^{\pi_k}(\tilde{r}_k, \tilde{p}_k) - V^{\pi_k}(\bar{r}, p)\big]}_{\text{Estimation Error}} + \underbrace{\sum_{k=1}^K \big[V^{\pi^*}(\bar{r}, p) - V^{\pi_k}(\tilde{r}_k, \tilde{p}_k)\big]}_{\text{Optimization Error}}. \quad (24)$$
Similarly, the violation in the stochastic setting can be bounded as:
$$\mathrm{Violation}(K) \le \underbrace{\sum_{k=1}^K \big[V^{\pi_k}(\bar{d}, p) - V^{\pi_k}(\tilde{d}_k, \tilde{p}_k)\big]^+}_{\text{Estimation Error}} + \underbrace{\sum_{k=1}^K \big[V^{\pi_k}(\tilde{d}_k, \tilde{p}_k)\big]^+}_{\text{Optimization Error}}. \quad (25)$$
In the adversarial constraint setting, we do not explicitly estimate the constraint $d$, so the only source of estimation error is the unknown transition kernel. Overall, the decomposition separates the regret and the violation into two distinct components: (i) the estimation error, which arises from inaccuracies in the estimated model parameters, and (ii) the optimization error, which is controlled by the online learning algorithm. In the following, we analyze and bound each term individually. To better illustrate the analysis of the main theorem, we provide a proof roadmap in Figure 1, which establishes the inequality $\Phi(\lambda_K) + \alpha\sum_{k=1}^K(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k) \le \mathrm{Regret}_{\mathrm{alg}}$, along with the resulting regret and violation bounds.

5.3. Upper Bound on the Estimation Error

In the following lemma, we provide an upper bound on the estimation errors for both the regret and the violation.

Lemma 5.10. Let $\tilde{p}_k$ denote the transition kernel in the candidate set $B_k$, and let $\tilde{r}_k$ and $\tilde{d}_k$ be the estimates used by OMDPD. Then, conditioned on the good event $G$, the estimation errors in the stochastic cost case are bounded as follows:
$$\sum_{k=1}^K \big[V^{\pi_k}(\tilde{r}_k, \tilde{p}_k) - V^{\pi_k}(\bar{r}, p)\big] \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big),$$
$$\sum_{k=1}^K \big[V^{\pi_k}(\bar{d}, p) - V^{\pi_k}(\tilde{d}_k, \tilde{p}_k)\big]^+ \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big).$$
The estimation error in the adversarial cost case is bounded as follows:
$$\sum_{k=1}^K \big[V^{\pi_k}(d_k, p) - V^{\pi_k}(d_k, \tilde{p}_k)\big]^+ \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big).$$

The above lemma bounds the error incurred by using the estimated transition kernels, rewards, and costs during learning, and improves upon Lemma 29 of (Efroni et al., 2020) by a factor of $\tilde{O}(\sqrt{H})$. This improvement is achieved by leveraging a Bellman-type law of total variance to control the expected sum of value estimates when bounding the error from estimating the transition kernel (Azar et al., 2017; Chen & Luo, 2021). The detailed proof is deferred to Appendix C.2.
Next, we bound the optimization error.

5.4. Upper Bound on the Optimization Error

Figure 1. Proof Roadmap of Theorem 5.1

Regret Analysis. We first focus on bounding the optimization error associated with the regret. Lemma 5.9 established that the cumulative variation of gradients between consecutive episodes under OMDPD is bounded, which enables adaptive regret-violation guarantees. To further relate the regret optimization error to Algorithm 1, consider:
$$\sum_{k=1}^K \big[V^{\pi^*}(\bar{r}, p) - V^{\pi_k}(\tilde{r}_k, \tilde{p}_k)\big] = \sum_{k=1}^K \big[\mathbb{E}[\bar{r}^{\top}q^*] - \mathbb{E}[\tilde{r}_k^{\top}q_k]\big] = \underbrace{\sum_{k=1}^K \big[\mathbb{E}[\bar{r}^{\top}q^*] - \mathbb{E}[\tilde{r}_k^{\top}q^*]\big]}_{\text{Term 1}} + \underbrace{\sum_{k=1}^K \big[\mathbb{E}[\tilde{r}_k^{\top}q^*] - \mathbb{E}[\tilde{r}_k^{\top}q_k]\big]}_{\text{Term 2}}. \quad (26)$$
Notably, Term 2 corresponds to the regret-violation relationship in Eq. (23), providing a critical link to our theoretical analysis. Consequently, we first derive an upper bound for Term 2 in the regret decomposition.

Lemma 5.11. Based on Lemmas 5.8 and 5.9, the following upper bound holds:
$$\sum_{k=1}^K \big[\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big] \le 2(1+\sqrt{L_\delta})\big(SAH + 4\sqrt{C}\sqrt{6SAHK}\big).$$

To complete the upper bound on the optimization error in the regret, we now consider Term 1. The following lemma provides the required result.

Lemma 5.12. Under the stochastic reward setting, with probability at least $1 - 2\delta$, we have:
$$\sum_{k=1}^K \big[\bar{r}^{\top}q^* - \tilde{r}_k^{\top}q^*\big] \le SAH\sqrt{\tfrac{K-1}{2}\ln\tfrac{2}{\delta}} + SAH.$$

By combining Term 1 and Term 2, we obtain the bound on $\sum_{k=1}^K[V^{\pi^*}(\bar{r}, p) - V^{\pi_k}(\tilde{r}_k, \tilde{p}_k)]$. Finally, by incorporating the estimation error bounds from Lemma 5.10, we establish the complete regret bound, combining both the estimation and optimization errors. The detailed proofs of Lemmas 5.9, 5.11, and 5.12 can be found in Appendices C.3, C.4, and C.5.

Violation Analysis. We now analyze the sublinear violation guarantees in the stochastic and adversarial settings.

Stochastic Setting. As discussed earlier, in the stochastic setting the overall violation decomposes into an estimation part and an optimization part, with Lemma 5.10 addressing the estimation error. We now turn to bounding the optimization error. The following lemma provides the critical result needed for this analysis.

Lemma 5.13. Based on Lemmas 5.8 and 5.9, the following upper bound holds:
$$\sum_{k=1}^K \big[d_k^{\top}q_k\big]^+ \le 16(1+\sqrt{L_\delta})\sqrt{C}\sqrt{6SAHK}\,\ln\Big(K + 8\sqrt{C}\,\tfrac{\sqrt{6SAHK}}{SAH} + 2\Big),$$
where $d_k$ stands for $\tilde{d}_k$ in the stochastic setting and for the adversarial cost $d_k$ in the adversarial setting.

Thus, by combining the estimation bound from Lemma 5.10 with the violation analysis from Lemma 5.13, we obtain the complete violation bound for the stochastic setting.

Adversarial Setting. As defined in Eq. (6), the estimation error in the adversarial setting differs slightly from the stochastic setting: there is no estimation error associated with the constraints, and the only source of estimation error is the transition kernel. Consequently, based on Lemma 5.10 for the adversarial case and Lemma 5.13, we obtain the complete violation bound for the adversarial setting. A detailed proof of Lemma 5.13 can be found in Appendix C.6.

6. Simulation

We evaluate our algorithm in a synthetic finite-horizon CMDP environment constructed to assess performance under both stochastic and adversarial cost settings. The CMDP consists of a state space $S = \{0, 1, 2, 3, 4\}$ with five discrete states and an action space $A = \{0, 1, 2\}$ with three available actions.
The decision process unfolds over a fixed horizon of $H = 5$ steps. At each time step, the agent receives a reward $r \in [0, 1]^{H \times S \times A}$ sampled uniformly from the unit interval. In the stochastic setting, the cost $c \in [-1, 1]^{H \times S \times A}$ is also drawn uniformly and held fixed across episodes. In contrast, the adversarial setting introduces a discrete cost perturbation mechanism: in each episode, the cost is independently sampled from the finite set $\{-1.0, -0.6, -0.2, 0.0, 0.2, 0.6, 1.0\}$, simulating abrupt shifts in the constraint feedback. The transition dynamics are time-dependent: each transition distribution $P_{h,s,a}$ is independently sampled from a Dirichlet distribution with concentration parameter $\alpha = 0.5$. A smaller concentration parameter such as 0.5 encourages sparsity in the resulting probability vectors, meaning that the sampled distributions are likely to concentrate their mass on a small subset of next states. This induces partially deterministic behavior while still preserving stochasticity across transitions. The initial state is sampled uniformly, ensuring that each trajectory starts from a randomly selected state. Throughout all experiments, the cumulative cost constraint threshold is 0. This controlled CMDP environment enables us to evaluate our algorithm under both stochastic and adversarial constraint settings. We plot the cumulative constraint violation across learning episodes ($K = 3000$); both the stochastic and adversarial curves clearly demonstrate the algorithm's ability to ensure sublinear violation growth. In particular, the observed trend aligns with the theoretical $O(\sqrt{K})$ rate, highlighting the algorithm's robustness in maintaining feasibility over time.

Figure 2. Cumulative Violation over Learning Episodes
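For concreteness, the following sketch shows one way a synthetic CMDP of this kind could be generated and rolled out; the variable names, random seed, and rollout helper are ours and are only meant to mirror the stated specification (uniform rewards, Dirichlet(0.5) transitions, discrete adversarial cost levels, threshold 0), not to reproduce the authors' exact experimental code.

```python
import numpy as np

S, A, H = 5, 3, 5
ADV_COST_LEVELS = np.array([-1.0, -0.6, -0.2, 0.0, 0.2, 0.6, 1.0])
rng = np.random.default_rng(0)

reward = rng.uniform(0.0, 1.0, size=(H, S, A))        # fixed uniform reward table
stoch_cost = rng.uniform(-1.0, 1.0, size=(H, S, A))    # fixed cost (stochastic setting)
# Time-dependent transitions: P[h, s, a] is a distribution over next states.
P = rng.dirichlet(np.full(S, 0.5), size=(H, S, A))     # shape (H, S, A, S)

def sample_episode(policy, adversarial=False):
    """Roll out one episode; policy[h, s] is a distribution over actions."""
    cost = rng.choice(ADV_COST_LEVELS, size=(H, S, A)) if adversarial else stoch_cost
    s = rng.integers(S)                                 # uniform initial state
    total_r, total_c = 0.0, 0.0
    for h in range(H):
        a = rng.choice(A, p=policy[h, s])
        total_r += reward[h, s, a]
        total_c += cost[h, s, a]
        s = rng.choice(S, p=P[h, s, a])
    return total_r, total_c

# Example rollout with a uniform policy; the constraint threshold in Section 6 is 0.
uniform_policy = np.full((H, S, A), 1.0 / A)
print(sample_episode(uniform_policy, adversarial=True))
```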
7. Conclusion

In this work, we addressed the challenge of online safe reinforcement learning in dynamic environments with adversarial constraints by proposing the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm. Our approach is the first to provide optimal guarantees in terms of both regret and strong constraint violation under anytime adversarial cost functions, without requiring Slater's condition or the existence of a strictly known safe policy. OMDPD achieves regret and violation bounds of $\tilde{O}(\sqrt{K})$, which are optimal with respect to the number of learning episodes $K$. We also demonstrated that access to accurate estimates of rewards and transitions can further improve these performance guarantees. Our work advances the theoretical understanding of CMDPs and provides a robust solution for safe decision-making in adversarial and non-stationary environments. Future research directions include extending our framework to multi-agent settings and investigating scenarios with partial observability.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grants (No. RS-2024-00350703 and No. RS-2024-00410082).

References

Achiam, J., Held, D., Tamar, A., and Abbeel, P. Constrained policy optimization. In Int. Conf. Machine Learning (ICML), volume 70, pp. 22-31. JMLR, 2017.

Altman, E. Constrained Markov decision processes, volume 7. CRC Press, 1999.

Auer, P., Jaksch, T., and Ortner, R. Near-optimal regret bounds for reinforcement learning. NeurIPS, 21, 2008.

Azar, M. G., Osband, I., and Munos, R. Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning, pp. 263-272. PMLR, 2017.
Bai, Q., Bedi, A. S., Agarwal, M., Koppel, A., and Aggarwal, V. Achieving zero constraint violation for constrained reinforcement learning via primal-dual approach. In AAAI Conf. Artificial Intelligence, volume 36, pp. 3682-3689, 2022.

Bura, A., HasanzadeZonuzy, A., Kalathil, D., Shakkottai, S., and Chamberland, J.-F. Safe exploration for constrained reinforcement learning with provable guarantees. arXiv preprint arXiv:2112.00885, 2021.

Chen, L. and Luo, H. Finding the stochastic shortest path with low regret: The adversarial cost and unknown transition case, 2021. URL https://arxiv.org/abs/2102.05284.

Chen, L., Jain, R., and Luo, H. Learning infinite-horizon average-reward Markov decision process with constraints. In Int. Conf. Machine Learning (ICML), pp. 3246-3270. PMLR, 2022.

Chow, Y., Ghavamzadeh, M., Janson, L., and Pavone, M. Risk-constrained reinforcement learning with percentile risk criteria. The Journal of Machine Learning Research, 18(1):6070-6120, 2017.

Dann, C., Lattimore, T., and Brunskill, E. Unifying PAC and regret: Uniform PAC bounds for episodic reinforcement learning. Advances in Neural Information Processing Systems, 30, 2017.

Ding, D., Wei, X., Yang, Z., Wang, Z., and Jovanovic, M. Provably efficient safe exploration via primal-dual policy optimization. In Int. Conf. Artificial Intelligence and Statistics (AISTATS), volume 130, pp. 3304-3312. PMLR, 2021.

Ding, Y. and Lavaei, J. Provably efficient primal-dual reinforcement learning for CMDPs with non-stationary objectives and constraints. arXiv preprint arXiv:2201.11965, 2022.

Efroni, Y., Mannor, S., and Pirotta, M. Exploration-exploitation in constrained MDPs. arXiv preprint arXiv:2003.02189, 2020.

Germano, J., Stradi, F. E., Genalti, G., Castiglioni, M., Marchesi, A., and Gatti, N. A best-of-both-worlds algorithm for constrained MDPs with long-term constraints. arXiv preprint arXiv:2304.14326, 2023.

Ghosh, A., Zhou, X., and Shroff, N. Provably efficient model-free constrained RL with linear function approximation. In NeurIPS, 2022.

Guo, H., Liu, X., Wei, H., and Ying, L. Online convex optimization with hard constraints: Towards the best of two worlds and beyond. In Advances Neural Information Processing Systems (NeurIPS), 2022.

Isele, D., Nakhaei, A., and Fujimura, K. Safe reinforcement learning on autonomous vehicles. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1-6. IEEE, 2018.

Jin, C., Jin, T., Luo, H., Sra, S., and Yu, T. Learning adversarial Markov decision processes with bandit feedback and unknown transition. In International Conference on Machine Learning, pp. 4860-4869. PMLR, 2020.

Kirschner, J., Lattimore, T., Vernade, C., and Szepesvári, C. Asymptotically optimal information-directed sampling. arXiv preprint arXiv:2011.05944, 2021.

Kitamura, T., Kozuno, T., Kato, M., Ichihara, Y., Nishimori, S., Sannai, A., Sonoda, S., Kumagai, W., and Matsuo, Y. A policy gradient primal-dual algorithm for constrained MDPs with uniform PAC guarantees. arXiv preprint arXiv:2401.17780, 2024.

Lekeufack, J. and Jordan, M. I. An optimistic algorithm for online convex optimization with adversarial constraints. arXiv preprint arXiv:2412.08060, 2024.

Liu, T., Zhou, R., Kalathil, D., Kumar, P., and Tian, C. Learning policies with zero or bounded constraint violation for constrained MDPs. In Advances Neural Information Processing Systems (NeurIPS), volume 34, 2021a.
Liu, T., Zhou, R., Kalathil, D., Kumar, P., and Tian, C. Learning policies with zero or bounded constraint violation for constrained MDPs. Advances in Neural Information Processing Systems, 34:17183-17193, 2021b.

Luo, H., Wei, C.-Y., and Lee, C.-W. Policy optimization in adversarial MDPs: Improved exploration via dilated bonuses. Advances in Neural Information Processing Systems, 34:22931-22942, 2021.

Müller, A., Alatur, P., Ramponi, G., and He, N. Cancellation-free regret bounds for Lagrangian approaches in constrained Markov decision processes. arXiv preprint arXiv:2306.07001, 2023.

Müller, A., Alatur, P., Cevher, V., Ramponi, G., and He, N. Truly no-regret learning in constrained MDPs. arXiv preprint arXiv:2402.15776, 2024.

Neely, M. J. Stochastic network optimization with application to communication and queueing systems. Synthesis Lectures on Communication Networks, 3(1):1-211, 2010.

Qiu, S., Wei, X., Yang, Z., Ye, J., and Wang, Z. Upper confidence primal-dual reinforcement learning for CMDP with adversarial loss. In Advances Neural Information Processing Systems (NeurIPS), volume 33, pp. 15277-15287. Curran Associates, Inc., 2020.

Rakhlin, S. and Sridharan, K. Optimization, learning, and games with predictable sequences. Advances in Neural Information Processing Systems, 26, 2013.

Singh, R., Gupta, A., and Shroff, N. B. Learning in Markov decision processes under constraints. arXiv preprint arXiv:2002.12435, 2020.

Sinha, A. and Vaze, R. Optimal algorithms for online convex optimization with adversarial constraints, 2024. URL https://arxiv.org/abs/2310.18955.

Stradi, F. E., Castiglioni, M., Marchesi, A., and Gatti, N. Learning adversarial MDPs with stochastic hard constraints. arXiv preprint arXiv:2403.03672, 2024a.

Stradi, F. E., Castiglioni, M., Marchesi, A., and Gatti, N. Learning adversarial MDPs with stochastic hard constraints. arXiv preprint arXiv:2403.03672, 2024b.

Stradi, F. E., Castiglioni, M., Marchesi, A., and Gatti, N. Optimal strong regret and violation in constrained MDPs via policy optimization. arXiv preprint arXiv:2410.02275, 2024c.

Wei, H., Liu, X., and Ying, L. Triple-Q: a model-free algorithm for constrained reinforcement learning with sublinear regret and zero constraint violation. In Int. Conf. Artificial Intelligence and Statistics (AISTATS), 2022a.

Wei, H., Liu, X., and Ying, L. A provably-efficient model-free algorithm for infinite-horizon average-reward constrained Markov decision processes. In AAAI Conf. Artificial Intelligence, February 2022b.

Wei, H., Ghosh, A., Shroff, N., Ying, L., and Zhou, X. Provably efficient model-free algorithms for non-stationary CMDPs. In Int. Conf. Artificial Intelligence and Statistics (AISTATS), pp. 6527-6570. PMLR, 2023.

Wei, X., Yu, H., and Neely, M. J. Online primal-dual mirror descent under stochastic constraints. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 4(2):1-36, 2020.

Yinka-Banjo, C. and Ugot, O.-A. A review of generative adversarial networks and its application in cybersecurity. Artificial Intelligence Review, 53:1721-1736, 2020.

A. Missing Related Work

Online Constrained Optimization.
Recently, (Sinha & Vaze, 2024) achieved sublinear regret in an adversarial violation setting. However, their approach relies on Online Gradient Descent and thus cannot attain a tighter bound even when the reward is fixed.
Meanwhile, Lekeufack & Jordan (2024) proposed an algorithm based on optimistic online mirror descent, attaining regret and violation bounds comparable to ours.

B. Optimistic Estimates: Related Lemmas

B.1. Proof of Lemma 5.6

Lemma 5.6. With probability at least $1 - \delta$, $\Pr[G] \ge 1 - \delta$, where $G$ is the good event defined in Eq. (27) for the stochastic constraint setting and in Eq. (28) for the adversarial setting, and $\delta \in (0, 1)$.

Proof: Define the following failure events, which represent the situations in which the transitions and observations are far from our current optimistic estimates:
$$F^p_k = \big\{\exists s, a, s', h: |p_h(s' \mid s, a) - \hat{p}_h^{k-1}(s' \mid s, a)| > \beta^p_{k,h}(s, a, s')\big\},$$
$$F^N = \Bigg\{\sum_{k=1}^K \sum_{(s,a,h)} \frac{q^{\pi_k}_h(s, a \mid p)}{n_h^{k-1}(s, a) \vee 1} > 4HSA + 2HSA\ln K + 4\ln\frac{2HK}{\delta'} \ \ \text{or}\ \ \sum_{k=1}^K \sum_{(s,a,h)} \frac{q^{\pi_k}_h(s, a \mid p)}{\sqrt{n_h^{k-1}(s, a) \vee 1}} > 6HSA + 2H\sqrt{SAK} + 2HSA\ln K + 5\ln\frac{2HK}{\delta'}\Bigg\},$$
$$F^r_k = \big\{\exists s, a, h: |\bar{r}_h(s, a) - \hat{r}^k_h(s, a)| > \beta^r_{k,h}(s, a)\big\},$$
$$F^d_k = \big\{\exists s, a, h: |\bar{d}_h(s, a) - \hat{d}^k_h(s, a)| > \beta^d_{k,h}(s, a)\big\} \quad \text{(stochastic setting)}.$$
We define $\beta^p_{k,h}$ as
$$\beta^p_{k,h}(s, a, s') := 2\sqrt{\frac{\hat{p}^{k-1}_h(s' \mid s, a)\big(1 - \hat{p}^{k-1}_h(s' \mid s, a)\big)L^p_\delta}{n^{k-1}_h(s, a) \vee 1}} + \frac{14}{3}\cdot\frac{L^p_\delta}{n^{k-1}_h(s, a) \vee 1}, \qquad \beta^r_{k,h}(s, a) := \beta^d_{k,h}(s, a) := \sqrt{\frac{L_\delta}{n^{k-1}_h(s, a) \vee 1}},$$
where we set $L_\delta = \ln\frac{12SAHK}{\delta}$, $L^p_\delta = \ln\frac{6SAHK}{\delta}$, and $\delta' = \frac{\delta}{3}$. Then set $F^p := \bigcup_{k=1}^K F^p_k$, $F^r := \bigcup_{k=1}^K F^r_k$, $F^d := \bigcup_{k=1}^K F^d_k$. The good event is then defined as the complement
$$G := \overline{F^N \cup F^p \cup F^r \cup F^d}. \quad (27)$$
Because we do not estimate the constraint $d$ in the adversarial setting, the good event there is defined as
$$G := \overline{F^N \cup F^p \cup F^r}. \quad (28)$$
Finally, it is easy to show that the good event $G$ happens with probability at least $1 - \delta$. Specifically, $\Pr[F^p \cup F^r \cup F^d] \le \frac{2}{3}\delta$, where the detailed proof can be found in (Efroni et al., 2020) (Appendix A.1). Furthermore, by Lemma E.2, $\Pr[F^N] \le \delta' = \frac{1}{3}\delta$. We can then prove that $\Pr[G] \ge 1 - \delta$ by a union bound.

B.2. Proof of Lemma 5.7

Lemma 5.7. Conditioned on the good event $G$, the optimal policy $\pi^*$, induced by the occupancy measure $q^{\pi^*}$ that solves the CMDP problem (3), is a feasible policy for every episode $k \in [K]$, i.e.,
$$\pi^* \in \big\{\pi \in \Delta^S_A : \tilde{d}_k^{\top} q^{\pi}(p') \le 0, \ p' \in B_k\big\}.$$

Proof: For the stochastic cost case, the good event $G$ implies that $|\bar{d}_h(s, a) - \hat{d}_{k,h}(s, a)| \le \beta^d_{k,h}(s, a)$ for all $(s, a, h, k) \in S \times A \times [H] \times [K]$. By the definition of the optimistic cost $\tilde{d}_k$, we have $\tilde{d}_{k,h}(s, a) \le \bar{d}_h(s, a)$, and therefore $\tilde{d}_k^{\top} q^* \le \bar{d}^{\top} q^*$. Since $\pi^*$ is a feasible solution of (3), we have $\tilde{d}_k^{\top} q^* \le \bar{d}^{\top} q^* \le 0$. For the adversarial cost case, we take $\tilde{d}_k = d_k$. Furthermore, conditioned on the good event $G$, the true transition kernel satisfies $p \in B_k$. Therefore $q^*$ satisfies $q^* \in \{q : \tilde{d}_k^{\top} q(p') \le 0, \ p' \in B_k\}$.

C. Key Lemma Proofs for Theorem 5.1

C.1. Proof of Lemma 5.8

Lemma 5.8. Let $C = \sup_{q_1, q_2 \in Q} D(q_1, q_2)$, $\nabla_k = \nabla f_k(q_k)$, $\nabla_{k-1} = \nabla f_{k-1}(q_{k-1})$, and define the learning rate as
$$\eta_k = \sqrt{C}\min\Bigg\{\frac{1}{\sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2} + \sqrt{\sum_{i=1}^{k-2}\|\nabla_i - \nabla_{i-1}\|_2^2}}, \ 1\Bigg\}.$$
Then the regret is bounded as
$$\mathrm{Regret}_{\mathrm{alg}} \le 3.5\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K \|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg).$$

Proof: The surrogate function used by the optimistic OMD updates in Algorithm 1 is
$$f_k(q) = \alpha\big(-\tilde{r}_k^{\top} q + \Phi'(\lambda_k)[\tilde{d}_k^{\top} q]^+\big) - \tfrac{1}{2}\|q - q_k\|^2,$$
and $f_k(q)$ is 1-strongly convex. Applying optimistic OMD and the convexity of $f_k(q)$, we have
$$f_k(q_k) - f_k(q^*) \le \langle\nabla f_k(q_k), q_k - q^*\rangle.$$
For ease of notation, write $\nabla_k = \nabla f_k(q_k)$ and $\nabla_{k-1} = \nabla f_{k-1}(q_{k-1})$. We can then rewrite $\langle q_k - q^*, \nabla_k\rangle$ via the following decomposition:
$$\langle q_k - q^*, \nabla_k\rangle = \underbrace{\langle q_k - \hat{q}_k, \nabla_k - \nabla_{k-1}\rangle}_{\text{term 1}} + \underbrace{\langle q_k - \hat{q}_k, \nabla_{k-1}\rangle}_{\text{term 2}} + \underbrace{\langle\hat{q}_k - q^*, \nabla_k\rangle}_{\text{term 3}}. \quad (29)$$
Term 1 is bounded directly:
$$\langle q_k - \hat{q}_k, \nabla_k - \nabla_{k-1}\rangle \le \|q_k - \hat{q}_k\|_2\,\|\nabla_k - \nabla_{k-1}\|_2.$$
Moreover, any update of the form $a^* = \arg\min_{a \in \mathcal{A}} \eta\langle a, x\rangle + D(a, c)$ satisfies, for any $d \in \mathcal{A}$,
$$\langle a^* - d, x\rangle \le \frac{1}{\eta}\big(D(d, c) - D(d, a^*) - D(a^*, c)\big).$$
In our case, taking $a^* = q_k$, $d = \hat{q}_k$, $c = \hat{q}_{k-1}$, $x = \nabla_{k-1}$, $\eta = \eta_k$, we obtain the bound for term 2:
$$\langle q_k - \hat{q}_k, \nabla_{k-1}\rangle \le \frac{1}{\eta_k}\big(D(\hat{q}_k, \hat{q}_{k-1}) - D(\hat{q}_k, q_k) - D(q_k, \hat{q}_{k-1})\big). \quad (30)$$
Taking $a^* = \hat{q}_k$, $d = q^*$, $c = \hat{q}_{k-1}$, $x = \nabla_k$, $\eta = \eta_k$, we obtain the bound for term 3:
$$\langle\hat{q}_k - q^*, \nabla_k\rangle \le \frac{1}{\eta_k}\big(D(q^*, \hat{q}_{k-1}) - D(q^*, \hat{q}_k) - D(\hat{q}_k, \hat{q}_{k-1})\big). \quad (31)$$
Combining these upper bounds gives
$$\langle q_k - q^*, \nabla_k\rangle \le \|q_k - \hat{q}_k\|_2\,\|\nabla_k - \nabla_{k-1}\|_2 + \frac{1}{\eta_k}\big[D(q^*, \hat{q}_{k-1}) - D(q^*, \hat{q}_k) - D(\hat{q}_k, q_k) - D(q_k, \hat{q}_{k-1})\big]. \quad (32)$$
Because $U$ is a 1-strongly convex function, $D(q_1, q_2) \ge \frac{1}{2}\|q_1 - q_2\|_2^2$; hence
$$\langle q_k - q^*, \nabla_k\rangle \le \|q_k - \hat{q}_k\|_2\,\|\nabla_k - \nabla_{k-1}\|_2 + \frac{1}{\eta_k}\Big[D(q^*, \hat{q}_{k-1}) - D(q^*, \hat{q}_k) - \tfrac{1}{2}\|\hat{q}_k - q_k\|_2^2 - \tfrac{1}{2}\|q_k - \hat{q}_{k-1}\|_2^2\Big].$$
Summing over $k$,
$$\sum_{k=1}^K\langle q_k - q^*, \nabla_k\rangle \le \sum_{k=1}^K\|q_k - \hat{q}_k\|_2\,\|\nabla_k - \nabla_{k-1}\|_2 + \frac{1}{\eta_1}D(q^*, \hat{q}_0) + \sum_{k=2}^K D(q^*, \hat{q}_{k-1})\Big(\frac{1}{\eta_k} - \frac{1}{\eta_{k-1}}\Big) - \sum_{k=1}^K\frac{1}{2\eta_k}\big(\|q_k - \hat{q}_k\|_2^2 + \|q_k - \hat{q}_{k-1}\|_2^2\big).$$
With $C = \sup_{q_1, q_2 \in Q}D(q_1, q_2)$, we obtain
$$\sum_{k=1}^K\langle q_k - q^*, \nabla_k\rangle \le \underbrace{\Big(\frac{1}{\eta_1} + \frac{1}{\eta_K}\Big)C}_{\text{term (a)}} + \underbrace{\sum_{k=1}^K\|q_k - \hat{q}_k\|_2\,\|\nabla_k - \nabla_{k-1}\|_2}_{\text{term (b)}} - \underbrace{\sum_{k=1}^K\frac{1}{2\eta_k}\big(\|q_k - \hat{q}_k\|_2^2 + \|q_k - \hat{q}_{k-1}\|_2^2\big)}_{\text{term (c)}}.$$
We also use the fact that
$$\|q_k - \hat{q}_k\|_2\,\|\nabla_k - \nabla_{k-1}\|_2 = \inf_{\rho>0}\Big\{\frac{\rho}{2}\|\nabla_k - \nabla_{k-1}\|_2^2 + \frac{1}{2\rho}\|q_k - \hat{q}_k\|_2^2\Big\};$$
setting $\rho = \eta_{k+1}$ gives the bound for term (b):
$$\|q_k - \hat{q}_k\|_2\,\|\nabla_k - \nabla_{k-1}\|_2 \le \frac{\eta_{k+1}}{2}\|\nabla_k - \nabla_{k-1}\|_2^2 + \frac{1}{2\eta_{k+1}}\|q_k - \hat{q}_k\|_2^2.$$
Next, with the learning rate
$$\eta_k = \sqrt{C}\min\Bigg\{\frac{1}{\sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2} + \sqrt{\sum_{i=1}^{k-2}\|\nabla_i - \nabla_{i-1}\|_2^2}}, \ 1\Bigg\},$$
we have
$$\eta_k \ge \sqrt{C}\min\Bigg\{\frac{1}{2\sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2}}, \ 1\Bigg\}, \qquad \frac{1}{\eta_k} \le \frac{1}{\sqrt{C}}\max\Bigg\{2\sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2}, \ 1\Bigg\}.$$
Hence, for term (a), $\big(\frac{1}{\eta_1} + \frac{1}{\eta_K}\big)C \le \sqrt{C}\big(2\sqrt{\sum_{k=1}^{K-1}\|\nabla_k - \nabla_{k-1}\|_2^2} + 2\big)$. We therefore get
$$\sum_{k=1}^K\langle q_k - q^*, \nabla_k\rangle \le \sqrt{C}\Bigg(2\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 2\Bigg) + \sum_{k=1}^K\frac{\eta_{k+1}}{2}\|\nabla_k - \nabla_{k-1}\|_2^2 + \sum_{k=1}^K\frac{1}{2\eta_{k+1}}\|q_k - \hat{q}_k\|_2^2 - \sum_{k=1}^K\frac{1}{2\eta_k}\|q_k - \hat{q}_k\|_2^2 - \sum_{k=1}^K\frac{1}{2\eta_k}\|q_k - \hat{q}_{k-1}\|_2^2,$$
and, dropping the last (non-positive) term,
$$\sum_{k=1}^K\langle q_k - q^*, \nabla_k\rangle \le \sqrt{C}\Bigg(2\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 2\Bigg) + \sum_{k=1}^K\frac{\eta_{k+1}}{2}\|\nabla_k - \nabla_{k-1}\|_2^2 + \sum_{k=1}^K\frac{1}{2\eta_{k+1}}\|q_k - \hat{q}_k\|_2^2 - \sum_{k=1}^K\frac{1}{2\eta_k}\|q_k - \hat{q}_k\|_2^2.$$
We first deal with the last two terms:
$$\sum_{k=1}^K\frac{1}{2\eta_{k+1}}\|q_k - \hat{q}_k\|_2^2 - \sum_{k=1}^K\frac{1}{2\eta_k}\|q_k - \hat{q}_k\|_2^2 \le \frac{C}{2}\sum_{k=1}^K\Big(\frac{1}{\eta_{k+1}} - \frac{1}{\eta_k}\Big) \le \frac{C}{2\eta_{K+1}}.$$
We now get
$$\sum_{k=1}^K\langle q_k - q^*, \nabla_k\rangle \le \sqrt{C}\Bigg(2\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 2\Bigg) + \sum_{k=1}^K\frac{\eta_{k+1}}{2}\|\nabla_k - \nabla_{k-1}\|_2^2 + \frac{C}{2\eta_{K+1}} \le 3\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg) + \sum_{k=1}^K\frac{\eta_{k+1}}{2}\|\nabla_k - \nabla_{k-1}\|_2^2.$$
Notice that
$$\eta_{k+1} = \sqrt{C}\min\Bigg\{\frac{1}{\sqrt{\sum_{i=1}^{k}\|\nabla_i - \nabla_{i-1}\|_2^2} + \sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2}}, \ 1\Bigg\} = \sqrt{C}\min\Bigg\{\frac{\sqrt{\sum_{i=1}^{k}\|\nabla_i - \nabla_{i-1}\|_2^2} - \sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2}}{\|\nabla_k - \nabla_{k-1}\|_2^2}, \ 1\Bigg\}.$$
Thus,
$$\sum_{k=1}^K\frac{\eta_{k+1}}{2}\|\nabla_k - \nabla_{k-1}\|_2^2 \le \frac{\sqrt{C}}{2}\sum_{k=1}^K\Bigg(\sqrt{\sum_{i=1}^{k}\|\nabla_i - \nabla_{i-1}\|_2^2} - \sqrt{\sum_{i=1}^{k-1}\|\nabla_i - \nabla_{i-1}\|_2^2}\Bigg) \le \frac{\sqrt{C}}{2}\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2}.$$
Putting these together,
$$\sum_{k=1}^K\langle q_k - q^*, \nabla_k\rangle \le 3\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg) + \frac{\sqrt{C}}{2}\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} \le 3.5\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg).$$
Finally,
$$\sum_{k=1}^K\big(f_k(q_k) - f_k(q^*)\big) \le \sum_{k=1}^K\langle\nabla f_k(q_k), q_k - q^*\rangle = \sum_{k=1}^K\langle q_k - q^*, \nabla_k\rangle \le 3.5\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg).$$

C.2. Proof of Lemma 5.10

Lemma 5.10. Let $\tilde{p}_k$ denote the transition kernel in the candidate set $B_k$, and let $\tilde{r}_k$ and $\tilde{d}_k$ be the estimates used by OMDPD.
Then, conditioned on the good event $G$, the estimation errors in the stochastic cost case can be bounded as follows:
$$\sum_{k=1}^K\big[V^{\pi_k}(\tilde{r}_k, \tilde{p}_k) - V^{\pi_k}(\bar{r}, p)\big] \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big), \qquad \sum_{k=1}^K\big[V^{\pi_k}(\bar{d}, p) - V^{\pi_k}(\tilde{d}_k, \tilde{p}_k)\big]^+ \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big). \quad (33)$$
The estimation error in the adversarial cost case can be bounded as follows:
$$\sum_{k=1}^K\big[V^{\pi_k}(d_k, p) - V^{\pi_k}(d_k, \tilde{p}_k)\big]^+ \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big). \quad (34)$$
Proof: To prove (33), it suffices to show that
$$\sum_{k=1}^K\big|V^{\pi_k}(\tilde{\ell}_k, \tilde{p}_k) - V^{\pi_k}(\bar{\ell}, p)\big| \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big)$$
for $\ell = r, d$. The left-hand side of this inequality can be decomposed as
$$\sum_{k=1}^K\big|V^{\pi_k}(\tilde{\ell}_k, \tilde{p}_k) - V^{\pi_k}(\bar{\ell}, p)\big| \le \underbrace{\sum_{k=1}^K\big|V^{\pi_k}(\tilde{\ell}_k, \tilde{p}_k) - V^{\pi_k}(\tilde{\ell}_k, p)\big|}_{\text{Term 1}} + \underbrace{\sum_{k=1}^K\big|V^{\pi_k}(\tilde{\ell}_k, p) - V^{\pi_k}(\bar{\ell}, p)\big|}_{\text{Term 2}}.$$
Note that $\beta^{\ell}_{k,h}(s, a) = \sqrt{L_\delta/(n^{k-1}_h(s, a)\vee 1)} \le \sqrt{L_\delta}$. Hence the estimated function $\tilde{\ell}_k$ satisfies $\tilde{\ell}_{k,h}(s, a) \in [-1 - \sqrt{L_\delta}, \ 1 + \sqrt{L_\delta}]$ for all $(s, a, h, k) \in S \times A \times [H] \times [K]$. By Lemma E.6 with $C = 1 + \sqrt{L_\delta}$, Term 1 is bounded as
$$\text{Term 1} \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big).$$
To bound Term 2, by Lemma E.1 we can write it as
$$\text{Term 2} = \sum_{k=1}^K\mathbb{E}\Bigg[\sum_{h=1}^H\big|\bar{\ell}_h(s_h, a_h) - \tilde{\ell}_{k,h}(s_h, a_h)\big| \,\Big|\, s_1, \pi_k, p\Bigg].$$
Furthermore, conditioned on the good event $G$, it follows that
$$\big|\bar{\ell}_h(s, a) - \tilde{\ell}_{k,h}(s, a)\big| \le \big|\bar{\ell}_h(s, a) - \hat{\ell}^{k-1}_h(s, a)\big| + \big|\hat{\ell}^{k-1}_h(s, a) - \tilde{\ell}_{k,h}(s, a)\big| \le 2\sqrt{\frac{L_\delta}{n^{k-1}_h(s, a)\vee 1}}.$$
Applying this, Term 2 can be bounded as
$$\text{Term 2} \le \sum_{k=1}^K\mathbb{E}\Bigg[\sum_{h=1}^H 2\sqrt{\frac{L_\delta}{n^{k-1}_h(s, a)\vee 1}} \,\Big|\, s_1, \pi_k, p\Bigg] \le \tilde{O}\big(H\sqrt{SAK} + HSA\big),$$
where the last inequality follows from Lemma E.2. Finally, we have
$$\sum_{k=1}^K\big|V^{\pi_k}(\tilde{\ell}_k, \tilde{p}_k) - V^{\pi_k}(\bar{\ell}, p)\big| \le \text{Term 1} + \text{Term 2} \le \tilde{O}\big(\sqrt{NSAH^3K} + S^2AH^3\big),$$
as required. Next, (34) is a direct consequence of Lemma E.6 with $C = 1$.

C.3. Proof of Lemma 5.9

Lemma 5.9. Let $\nabla_k = \nabla f_k(q_k)$ denote the subgradient of the surrogate objective function $f_k$ evaluated at $q_k$. Under OMDPD, the cumulative variation of consecutive gradients is bounded as:
$$\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} \le \frac{\sqrt{6SAHK}\,(1 + \Phi'(\lambda_K))}{SAH}.$$

Proof: In the algorithm, we have
$$\lambda_{k+1} = \lambda_k + \alpha[\tilde{d}_{k+1}^{\top}q_{k+1}]^+, \qquad \Phi(x) = \exp(\beta x) - 1, \qquad f_k(q) = \alpha\big(-\tilde{r}_k^{\top}q + \Phi'(\lambda_k)[\tilde{d}_k^{\top}q]^+\big) - \tfrac{1}{2}\|q - q_k\|^2.$$
By convexity, we have
$$\Phi(\lambda_k) \le \Phi(\lambda_{k-1}) + \Phi'(\lambda_k)(\lambda_k - \lambda_{k-1}) = \Phi(\lambda_{k-1}) + \Phi'(\lambda_k)\cdot\alpha[\tilde{d}_k^{\top}q_k]^+.$$
This drift analysis gives
$$\Phi(\lambda_k) - \Phi(\lambda_{k-1}) \le \Phi'(\lambda_k)\cdot\alpha[\tilde{d}_k^{\top}q_k]^+. \quad (35)$$
We also know that
$$f_k(q_k) = \alpha\big(-\tilde{r}_k^{\top}q_k + \Phi'(\lambda_k)[\tilde{d}_k^{\top}q_k]^+\big) - \tfrac{1}{2}\|q_k - q_k\|^2, \quad \text{i.e.,} \quad f_k(q_k) + \alpha\tilde{r}_k^{\top}q_k = \Phi'(\lambda_k)\cdot\alpha[\tilde{d}_k^{\top}q_k]^+. \quad (36)$$
Combining (35) and (36),
$$\Phi(\lambda_k) - \Phi(\lambda_{k-1}) \le f_k(q_k) + \alpha\tilde{r}_k^{\top}q_k.$$
Also, since $[\tilde{d}_k^{\top}q^*]^+ = 0$ by Lemma 5.7, we have
$$f_k(q^*) = \alpha\big(-\tilde{r}_k^{\top}q^* + \Phi'(\lambda_k)[\tilde{d}_k^{\top}q^*]^+\big) - \tfrac{1}{2}\|q^* - q_k\|^2 = -\alpha\tilde{r}_k^{\top}q^* - \tfrac{1}{2}\|q^* - q_k\|^2.$$
Combining the relations above, we have
$$\Phi(\lambda_k) - \Phi(\lambda_{k-1}) + \alpha\big(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big) \le f_k(q_k) - f_k(q^*).$$
Taking the summation over $K$,
$$\Phi(\lambda_K) + \alpha\sum_{k=1}^K\big(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big) \le \sum_{k=1}^K\big(f_k(q_k) - f_k(q^*)\big).$$
Based on Lemma 5.8, we have
$$\sum_{k=1}^K\big(f_k(q_k) - f_k(q^*)\big) \le 3.5\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg).$$
Now we analyze the gradient-variation term. We have
$$\nabla_k = \nabla f_k(q_k) = \alpha\big(-\tilde{r}_k + \Phi'(\lambda_k)\tilde{d}_k\big), \qquad \nabla_{k-1} = \nabla f_{k-1}(q_{k-1}) = \alpha\big(-\tilde{r}_{k-1} + \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\big).$$
Thus,
$$\|\nabla_k - \nabla_{k-1}\|^2 = \alpha^2\|-(\tilde{r}_k - \tilde{r}_{k-1}) + \Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2 \le \alpha^2\|\tilde{r}_k - \tilde{r}_{k-1}\|^2 + \alpha^2\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2 + 2\alpha^2\|\tilde{r}_k - \tilde{r}_{k-1}\|\,\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|,$$
and
$$\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2 = \|\tilde{d}_k\big(\Phi'(\lambda_k) - \Phi'(\lambda_{k-1})\big) + \Phi'(\lambda_{k-1})\big(\tilde{d}_k - \tilde{d}_{k-1}\big)\|^2 \le 2(1+\sqrt{L_\delta})^2SAH\,\big|\Phi'(\lambda_k) - \Phi'(\lambda_{k-1})\big|^2 + 2(1+\sqrt{L_\delta})^2SAH\,\big|\Phi'(\lambda_{k-1})\big|^2.$$
We first deal with $|\Phi'(\lambda_{k-1})|^2$. Since $\Phi(x) = \exp(\beta x) - 1$, we have $\Phi'(\lambda_k) = \beta\exp(\beta\lambda_k) > 0$ for all $k \in [K]$. Because $\exp(x)$ is increasing and the $\lambda_k$ are non-decreasing, we have $\Phi'(\lambda_1) \le \Phi'(\lambda_2) \le \cdots \le \Phi'(\lambda_K)$.
Thus, for all $k \in [K]$ we obtain
$$|\Phi'(\lambda_{k-1})|^2 \le |\Phi'(\lambda_K)|^2.$$
For $|\Phi'(\lambda_k) - \Phi'(\lambda_{k-1})|^2$, we use $(a - b)^2 \le a^2 + b^2$ (valid for $a, b \ge 0$) and get
$$|\Phi'(\lambda_k) - \Phi'(\lambda_{k-1})|^2 \le |\Phi'(\lambda_k)|^2 + |\Phi'(\lambda_{k-1})|^2 \le |\Phi'(\lambda_K)|^2 + |\Phi'(\lambda_K)|^2 = 2|\Phi'(\lambda_K)|^2.$$
Therefore, we have the upper bound
$$\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2 \le 2(1+\sqrt{L_\delta})^2SAH\cdot 2|\Phi'(\lambda_K)|^2 + 2(1+\sqrt{L_\delta})^2SAH\cdot|\Phi'(\lambda_K)|^2 \le 6(1+\sqrt{L_\delta})^2SAH\,|\Phi'(\lambda_K)|^2.$$
Therefore,
$$\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} \le \alpha\sqrt{\sum_{k=1}^K\|\tilde{r}_k - \tilde{r}_{k-1}\|^2 + \sum_{k=1}^K\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2 + 2\sum_{k=1}^K\|\tilde{r}_k - \tilde{r}_{k-1}\|\,\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|} \le \underbrace{\alpha\sqrt{\sum_{k=1}^K\|\tilde{r}_k - \tilde{r}_{k-1}\|^2}}_{\text{diff 1}} + \underbrace{\alpha\sqrt{\sum_{k=1}^K\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2}}_{\text{diff 2}} + \underbrace{\alpha\sqrt{2\sum_{k=1}^K\|\tilde{r}_k - \tilde{r}_{k-1}\|\,\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|}}_{\text{diff 3}}.$$
For diff 1:
$$\alpha\sqrt{\sum_{k=1}^K\|\tilde{r}_k - \tilde{r}_{k-1}\|^2} \le \alpha\sqrt{(1+\sqrt{L_\delta})^2SAHK}.$$
For diff 2:
$$\alpha\sqrt{\sum_{k=1}^K\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|^2} \le \alpha\sqrt{\sum_{k=1}^K 6(1+\sqrt{L_\delta})^2SAH\,|\Phi'(\lambda_K)|^2} = \alpha\Phi'(\lambda_K)\sqrt{6(1+\sqrt{L_\delta})^2SAHK}.$$
For diff 3 (temporarily dropping the factor $\alpha$):
$$\sqrt{2\sum_{k=1}^K\|\tilde{r}_k - \tilde{r}_{k-1}\|\,\|\Phi'(\lambda_k)\tilde{d}_k - \Phi'(\lambda_{k-1})\tilde{d}_{k-1}\|} \le \sqrt{2\sum_{k=1}^K\sqrt{(1+\sqrt{L_\delta})SAH}\,\sqrt{6(1+\sqrt{L_\delta})^2SAH}\,\Phi'(\lambda_K)} = \sqrt{\sum_{k=1}^K SAH\sqrt{24(1+\sqrt{L_\delta})^3}\,\Phi'(\lambda_K)} \le \sqrt{\sum_{k=1}^K SAH\sqrt{36(1+\sqrt{L_\delta})^4}\,\Phi'(\lambda_K)} = \sqrt{\sum_{k=1}^K 6(1+\sqrt{L_\delta})^2SAH\,\Phi'(\lambda_K)} = \sqrt{\Phi'(\lambda_K)}\sqrt{6(1+\sqrt{L_\delta})^2SAHK} \le \big(\Phi'(\lambda_K) + 1\big)\sqrt{6(1+\sqrt{L_\delta})^2SAHK},$$
where the last inequality holds because $\sqrt{a} \le a + 1$ for all $a > 0$. Therefore, we obtain the upper bound
$$\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} \le \text{diff 1} + \text{diff 2} + \text{diff 3} \le \alpha\sqrt{(1+\sqrt{L_\delta})^2SAHK} + \alpha\Phi'(\lambda_K)\sqrt{6(1+\sqrt{L_\delta})^2SAHK} + \alpha\big(\Phi'(\lambda_K) + 1\big)\sqrt{6(1+\sqrt{L_\delta})^2SAHK} \le 2\alpha\sqrt{6(1+\sqrt{L_\delta})^2SAHK} + 2\alpha\Phi'(\lambda_K)\sqrt{6(1+\sqrt{L_\delta})^2SAHK}.$$
The right-hand side equals $2\alpha(1+\sqrt{L_\delta})\sqrt{6SAHK}\,(1+\Phi'(\lambda_K)) = \frac{\sqrt{6SAHK}\,(1+\Phi'(\lambda_K))}{SAH}$, where the last equality holds by choosing $\alpha = \frac{1}{2(1+\sqrt{L_\delta})SAH}$. This completes the proof.

C.4. Proof of Lemma 5.11

Lemma 5.11. Based on Lemmas 5.8 and 5.9, the following upper bound holds:
$$\sum_{k=1}^K\big[\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big] \le 2(1+\sqrt{L_\delta})\big(SAH + 4\sqrt{C}\sqrt{6SAHK}\big).$$

Proof: Based on Lemmas 5.8 and 5.9, we have the relation
$$\Phi(\lambda_K) + \alpha\sum_{k=1}^K\big(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big) \le \sum_{k=1}^K\big(f_k(q_k) - f_k(q^*)\big) \le 3.5\sqrt{C}\Bigg(\sqrt{\sum_{k=1}^K\|\nabla_k - \nabla_{k-1}\|_2^2} + 1\Bigg) \le 3.5\sqrt{C}\cdot\frac{\sqrt{6SAHK}\,(1+\Phi'(\lambda_K))}{SAH}.$$
Now we have
$$\exp(\beta\lambda_K) - 1 + \alpha\sum_{k=1}^K\big[\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big] \le 4\sqrt{C}\,\frac{\sqrt{6SAHK}}{SAH} + 4\sqrt{C}\,\frac{\beta\exp(\beta\lambda_K)\sqrt{6SAHK}}{SAH},$$
$$\alpha\sum_{k=1}^K\big[\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big] \le \exp(\beta\lambda_K)\Big(\beta\cdot\frac{4\sqrt{C}\sqrt{6SAHK}}{SAH} - 1\Big) + 4\sqrt{C}\,\frac{\sqrt{6SAHK}}{SAH} + 1,$$
$$\sum_{k=1}^K\big[\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big] \le \exp(\beta\lambda_K)\Big(\beta\cdot\frac{4\sqrt{C}\sqrt{6SAHK}}{\alpha SAH} - \frac{1}{\alpha}\Big) + 4\sqrt{C}\,\frac{\sqrt{6SAHK}}{\alpha SAH} + \frac{1}{\alpha} = 2(1+\sqrt{L_\delta})SAH + 8(1+\sqrt{L_\delta})\sqrt{C}\sqrt{6SAHK} = 2(1+\sqrt{L_\delta})\big(SAH + 4\sqrt{C}\sqrt{6SAHK}\big),$$
where the last equality is obtained by taking $\alpha = \frac{1}{2(1+\sqrt{L_\delta})SAH}$ and $\beta \le \frac{SAH}{4\sqrt{C}\sqrt{6SAHK}}$.

C.5. Proof of Lemma 5.12

Lemma 5.12. Under the stochastic reward setting, with probability at least $1 - 2\delta$, we have:
$$\sum_{k=1}^K\big[\bar{r}^{\top}q^* - \tilde{r}_k^{\top}q^*\big] \le SAH\sqrt{\tfrac{K-1}{2}\ln\tfrac{2}{\delta}} + SAH.$$

Proof: Based on the optimistic estimates, we know that $\tilde{r}_k = \hat{r}_{k-1} + \beta^r_{k-1}(s, a)$. Then we have the relation
$$\bar{r}^{\top}q^* - \tilde{r}_k^{\top}q^* = \bar{r}^{\top}q^* - \hat{r}_{k-1}^{\top}q^* - \beta^r_{k-1}(s, a)^{\top}q^* \le \bar{r}^{\top}q^* - \hat{r}_{k-1}^{\top}q^*,$$
where the inequality holds because $\beta^r_{k-1}(s, a)$ and $q^*$ are non-negative. Thus we directly obtain
$$\sum_{k=1}^K\big[\bar{r}^{\top}q^* - \tilde{r}_k^{\top}q^*\big] \le \sum_{k=1}^K\big[\bar{r}^{\top}q^* - \hat{r}_{k-1}^{\top}q^*\big].$$
By standard norm inequalities, for each episode $k$ we have
$$\big|\bar{r}^{\top}q^* - \hat{r}_{k-1}^{\top}q^*\big| \le \|\bar{r} - \hat{r}_{k-1}\|_\infty\cdot\|q^*\|_1,$$
and from the definition of the reward $r$ and Fact 5.4 we know that $\|\bar{r} - \hat{r}_{k-1}\|_\infty \le 1$ and $\|q^*\|_1 \le SAH$. Thus
$$\big|\bar{r}^{\top}q^* - \hat{r}_{k-1}^{\top}q^*\big| \le SAH.$$
If the rewards are stochastic, then under Lemma 5.6 and by the Azuma-Hoeffding inequality we have
$$\Pr\Bigg[\Big|\sum_{k=1}^{K-1}\bar{r}^{\top}q^* - \sum_{k=1}^{K-1}\hat{r}_{k-1}^{\top}q^*\Big| \ge M\Bigg] \le \delta = 2\exp\Big(\frac{-2M^2}{(K-1)(SAH)^2}\Big).$$
Thus, by setting
$$M = SAH\sqrt{\tfrac{K-1}{2}\ln\tfrac{2}{\delta}},$$
we have that when the rewards are stochastic, with probability at least $1 - 2\delta$,
$$\Big|\sum_{k=1}^{K-1}\bar{r}^{\top}q^* - \sum_{k=1}^{K-1}\hat{r}_{k-1}^{\top}q^*\Big| \le SAH\sqrt{\tfrac{K-1}{2}\ln\tfrac{2}{\delta}}.$$
Then, by the triangle inequality, we obtain
$$\sum_{k=1}^K\big[\bar{r}^{\top}q^* - \tilde{r}_k^{\top}q^*\big] \le \sum_{k=1}^{K-1}\bar{r}^{\top}q^* - \sum_{k=1}^{K-1}\hat{r}_{k-1}^{\top}q^* + \bar{r}^{\top}q^* \le \Big|\sum_{k=1}^{K-1}\bar{r}^{\top}q^* - \sum_{k=1}^{K-1}\hat{r}_{k-1}^{\top}q^*\Big| + \big|\bar{r}^{\top}q^*\big| \le SAH\sqrt{\tfrac{K-1}{2}\ln\tfrac{2}{\delta}} + SAH.$$

C.6. Proof of Lemma 5.13

Lemma 5.13. Based on Lemmas 5.8 and 5.9, the following upper bound holds:
$$\sum_{k=1}^K\big[d_k^{\top}q_k\big]^+ \le 16(1+\sqrt{L_\delta})\sqrt{C}\sqrt{6SAHK}\,\ln\Big(K + 8\sqrt{C}\,\frac{\sqrt{6SAHK}}{SAH} + 2\Big),$$
where $d_k$ stands for $\tilde{d}_k$ in the stochastic setting and for the adversarial cost $d_k$ in the adversarial setting.

Proof: We first treat the stochastic setting, i.e., $d_k = \tilde{d}_k$. From the elementary facts $-|\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k| \le \tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k \le |\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k|$ and $|\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k| \le \|\tilde{r}_k\|_2\|q^* - q_k\|_2 \le (1+\sqrt{L_\delta})SAH$, we have
$$\sum_{k=1}^K\big(\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big) \ge -\sum_{k=1}^K\big|\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big| \ge -(1+\sqrt{L_\delta})SAHK.$$
Therefore,
$$\Phi(\lambda_K) + \alpha\sum_{k=1}^K\big[\tilde{r}_k^{\top}q^* - \tilde{r}_k^{\top}q_k\big] \le 3.5\sqrt{C}\cdot\frac{\sqrt{6SAHK}\,(1+\Phi'(\lambda_K))}{SAH},$$
$$\Phi(\lambda_K) - \alpha(1+\sqrt{L_\delta})SAHK \le 3.5\sqrt{C}\cdot\frac{\sqrt{6SAHK}\,(1+\Phi'(\lambda_K))}{SAH},$$
$$\exp(\beta\lambda_K) - 1 - \alpha(1+\sqrt{L_\delta})SAHK \le 4\sqrt{C}\,\frac{\sqrt{6SAHK}}{SAH} + 4\sqrt{C}\,\frac{\beta\exp(\beta\lambda_K)\sqrt{6SAHK}}{SAH},$$
$$\exp(\beta\lambda_K)\Big(1 - \beta\cdot\frac{4\sqrt{C}\sqrt{6SAHK}}{SAH}\Big) \le \alpha(1+\sqrt{L_\delta})SAHK + 4\sqrt{C}\,\frac{\sqrt{6SAHK}}{SAH} + 1,$$
$$\exp(\beta\lambda_K) \le \frac{\alpha(1+\sqrt{L_\delta})SAHK + 4\sqrt{C}\,\frac{\sqrt{6SAHK}}{SAH} + 1}{1 - \beta\cdot\frac{4\sqrt{C}\sqrt{6SAHK}}{SAH}},$$
where the last step requires choosing $\beta$ such that $1 - \beta\cdot\frac{4\sqrt{C}\sqrt{6SAHK}}{SAH} > 0$, i.e., $\beta < \frac{SAH}{4\sqrt{C}\sqrt{6SAHK}}$,
4? C? 6SAHK which the choosing of βmatch when we prove the regret bound in that case we choose βďSAH 4? C? 6SAHK. Here, we let β“SAH 8? C? 6SAHKand we have: exppβλKqďαpp1`?LδqSAHKq`4? C´? 6SAHK SAH¯ `1 ´ 1´β¨4? C´? 6SAHK SAH¯¯ “αpp1`?LδqSAHKq`4? C´? 6SAHK SAH¯ `1 1 2 “2αpp1`a LδqSAHKq`8? C˜? 6SAHK SAH¸ `2 “K`8? C˜? 6SAHK SAH¸ `2 where the last inequality holds for take α“1 2p1`?LδqSAH. Then, recall the definition λk“λk´1`αr˜dJ kqks`, so λK“αřK k“1r˜dJ kqks`. Thus, take the log operation and we have: βλKďln˜ K`8? C˜? 6SAHK SAH¸ `2¸ λKď1 βln˜ K`8? C˜? 6SAHK SAH¸ `2¸ αKÿ k“1r˜dJ kqks`ď1 βln˜ K`8? C˜? 6SAHK SAH¸ `2¸ Kÿ k“1r˜dJ kqks`ď1 αβln˜ K`8? C˜? 6SAHK SAH¸ `2¸ “16p1`? Lδq? C? 6SAHK ln˜ K`8? C˜? 6SAHK SAH¸ `2¸ 21 An Optimistic Algorithm for online CMDPS with Anytime Adversarial Constraints The proof of dk“dkis almost same with the situation that dk“˜dk. Hence, we can directly have: Kÿ k“1rdJ kqks`ď16p1`? Lδq? C? 6SAHK ln˜ K`8? C˜? 6SAHK SAH¸ `2¸ D. Main Theoretical Analysis D.1. Proof of Theorem 5.1 Proof: In this section, our proof is based on the roadmap described in Figure 3. The context is divided into Regret and Violation parts, respectively. Figure 3. Proof Roadmap of Theorem 5.1 D.1.1. R EGRET BOUND PROOF Recall the definition of Regret: RegretpKq“Kÿ k“1” Vπ˚p¯r, pq´Vπkp¯r, pqı “Kÿ k“1rVπkp˜rk,˜pkq´Vπkp¯r, pqs loooooooooooooooooomoooooooooooooooooon Estimation Error`Kÿ k“1” Vπ˚p¯r, pq´Vπkp˜rk,˜pkqı loooooooooooooooooomoooooooooooooooooon Optimization Error We can bound the “Estimation Error” term by using Lemma 5.10. Now, let’s go through the details of the “Optimization Error” term. We will first decompose it as follows: Kÿ k“1” Vπ˚p¯r, pq´Vπkp˜rk,˜pkqı “Kÿ k“1“ Er¯rJq˚s´Er˜rJ kqks‰ “Kÿ k“1“ Er¯rJq˚s´Er˜rJ kq˚s‰ looooooooooooooomooooooooooooooon Term 1`Kÿ k“1“ Er˜rJ kq˚s´Er˜rJ kqks‰ looooooooooooooomooooooooooooooon Term 2. Thus, it’s clear that we can use Lemma 5.11 and Lemma 5.12 to bound these two terms, respectively. Therefore, the Regret is bounded in the following inequality: RegretpKq“Kÿ k“1” Vπ˚p¯r, pq´Vπkp¯r, pqı “Kÿ k“1rVπkp˜rk,˜pkq´Vπkp¯r, pqs`Kÿ k“1” Vπ˚p¯r, pq´Vπkp˜rk,˜pkqı “Kÿ k“1rVπkp˜rk,˜pkq´Vπkp¯r, pqs loooooooooooooooooomoooooooooooooooooon Lemma 5.10`Kÿ k“1“ Er¯rJq˚s´Er˜rJ kq˚s‰ looooooooooooooomooooooooooooooon Lemma 5.12`Kÿ k“1“ Er˜rJ kq˚s´Er˜rJ kqks‰ looooooooooooooomooooooooooooooon Lemma 5.11 ď˜O`? NSAH3K`S2AH3˘ `SAHc pK´1q 2lnp2 δq`SAH`2p1`a LδqpSAH`4? C? 6SAHKq 22 An Optimistic Algorithm for online CMDPS with Anytime Adversarial Constraints ď˜O`? NSAH3K`S2AH3˘ `SAHc pK´1q 2lnp2 δq`3p1`a LδqpSAH`4? C? 6SAHKq D.1.2. V IOLATION BOUND PROOF Stochastic setting. Recall the definition of stochastic Violation: ViolationpKq“Kÿ k“1“ Vπkp¯d, pq‰`“Kÿ k“1” Vπkp¯d, pq´Vπkp˜dk,˜pkqı` looooooooooooooooooomooooooooooooooooooon Estimation Error`Kÿ k“1” Vπkp˜dk,˜pkqı` looooooooooomooooooooooon Optimization Error Similarly, we first use Lemma 5.10 to bound “Estimation Error” term under the stochastic setting. Next, we will deal with the optimization error term by Lemma 5.13: Kÿ k“1” Vπkp˜dk,˜pkqı` “Kÿ k“1rEr˜dJ kqkss`ď16p1`? Lδq? C? 6SAHK ln˜ K`8? C˜? 6SAHK SAH¸ `2¸ Hence, the whole stochastic Violation is bounded as: ViolationpKq“Kÿ k“1“ Vπkp¯d, pq‰`“Kÿ k“1” Vπkp¯d, pq´Vπkp˜dk,˜pkqı` looooooooooooooooooomooooooooooooooooooon Lemma 5.10`Kÿ k“1” Vπkp˜dk,˜pkqı` looooooooooomooooooooooon Lemma 5.13 ď˜O`? NSAH3K`S2AH3˘ `16p1`? Lδq? C? 6SAHK ln˜ K`8? C˜? 6SAHK SAH¸ `2¸ Adversarial setting. 
When we deal with the adversarial constraint, by definition:
$$\mathrm{Violation}(K) = \sum_{k=1}^{K}\left[V^{\pi_k}(d_k, p)\right]^{+} = \underbrace{\sum_{k=1}^{K}\left[V^{\pi_k}(d_k, p) - V^{\pi_k}(d_k, \tilde{p}_k)\right]^{+}}_{\text{Estimation Error}} + \underbrace{\sum_{k=1}^{K}\left[V^{\pi_k}(d_k, \tilde{p}_k)\right]^{+}}_{\text{Optimization Error}}$$
In this situation, we proved an additional estimation error bound in Lemma
5.10 with adversarial case and with Lemma 5.13, the following bound can be obtained: ViolationpKq“Kÿ k“1rVπkpdk, pqs`“Kÿ k“1rVπkpdk, pq´Vπkpdk,˜pkqs` loooooooooooooooooooomoooooooooooooooooooon Lemma 5.10`Kÿ k“1rVπkpdk,˜pkqs` loooooooooomoooooooooon Lemma 5.13 ď˜O`? NSAH3K`S2AH3˘ `16p1`? Lδq? C? 6SAHK ln˜ K`8? C˜? 6SAHK SAH¸ `2¸ D.2. Proof of Remark 5.2 If we fix the reward and constraint, where ˜rk“˜rk´1,˜dk“˜dk´1, then we have the following relationship adapted from Lemma 5.9: gffeKÿ k“1}∇k´∇k´1}2ďαgffeKÿ k“1Φ1pλkq˜dk´Φ1pλk´1q˜dk´1 ďαp1`? LδqΦ1pλKq? 2SAHK Tighter Bound Analysis. Similar to proof in Appendix C.4 we can obtain: ΦpλKq`αKÿ k“1r˜rJ kq˚´˜rJ kqksď3.5? C˜ Φ1pλKq? 2SAHK 2SAH¸ 23 An Optimistic Algorithm for online CMDPS with Anytime Adversarial Constraints exppβλKq´1`αKÿ k“1r˜rJ kq˚´˜rJ kqksďβexppβλKq¨3.5? C˜? 2SAHK 2SAH¸ αKÿ k“1r˜rJ kq˚´˜rJ kqksďexppβλKq˜ β¨3.5? C˜? 2SAHK 2SAH¸ ´1¸ `1 Kÿ k“1r˜rJ kq˚´˜rJ kqksďexppβλKq¨ ˝β¨3.5? C´? 2SAHK 2SAH¯ α´1 α˛ ‚`1 α ď2p1`? LδqSAH where the last equality obtained by α“1 2p1`? LδqSAH, βď2SAH 3.5? C? 2SAHK. Now, we clearly prove a Op1qbound for řK k“1r˜rJ kq˚´˜rJ kqks. E. Useful Lemmas Lemma E.1 (Lemma E.15 of (Dann et al., 2017)) .Consider two MDPs M1“pS,A,tp1 huH h“1,tr1 huH h“1qandM2“ pS,A,tp2 huH h“1,tr2 huH h“1q. For any policy πands, h, the following relation holds. Vπ hps;r1, p1q´Vπ hps;r2, p2q “E«Hÿ h1“hr1 hpsh, ahq´r2 hpsh, ahq`pp1 h´p2 hqp¨|sh, ahqVπ h`1p¨;r1, p1q|sh“s, π, p2ff (37) wherepp1 h´p2 hqp¨|sh, ahqVπ h`1p¨;r1, p1q“ř sPSpp1 h´p2 hqps1|sh, ahqVπ h`1ps1;r1, p1q. Lemma E.2 (Lemma D.5 of (Liu et al., 2021b)) .With probability at least 1´δ, Kÿ k“1Hÿ h“1ÿ ps,aqqπk hps, aqb nk´1 hps, aq_1ď6HSA`2H? SAK`2HSA lnK`5 ln2HK δ Kÿ k“1Hÿ h“1ÿ ps,aqqπk hps, aq nk´1 hps, aq_1ď4HSA`2HSA lnK`4 ln2HK δ(38) where qπk hps, aq“Prpsh“s, ah“a|s1, πk, pq. Lemma E.3 (Lemma 8 of (Jin et al., 2020)) .Conditioned on the good event G, for allps, a, h, s1, kqPSˆAˆrHsˆSˆrKs, there exists constants C1, C2ą0for which we have for all ˜pkPBkthat |pph´˜pk hqps1|s, aq|ďC1d phps, aqLp δ nk´1 hps, aq_1`C2Lp δ nk´1 hps, aq_1. The following lemma is Lemma 10 of (Chen & Luo, 2021) with a boundedness constant Cfor the reward function rk. Lemma E.4 (Lemma 10 of (Chen & Luo, 2021)) .Letrkbe an arbitrary function such that rk,hps, aqPr´ C, Csfor all ps, a, h, kqPSˆAˆrHsˆrKs. If the true transition kernel psatisfies pPBk, then for any ˜pkPBkwe have Kÿ k“1ˇˇˇˇˇE«Hÿ h“1pph´˜pk hqp¨|sh, ahqpVπk h`1p¨;rk, pq´Vπk h`1p¨;rk,˜pkqq|s1, πk, pffˇˇˇˇˇ“˜OpCH3S2Aq. Lemma E.5 (Lemma 4 of (Chen & Luo, 2021)) .For any reward function r, policy π, transition kernel p, Var˜Hÿ h“1rhpsh, ahq|s1, π, p¸ ěE«Hÿ h“1Vhpsh, ah;π, pq|s1, π, pff . (39) Lemma E.6 (Estimation Error for p).Let˜pkdenote the transition kernel in the candidate set Bk, and let ℓkbe an arbitrary function with ℓk,hps, aqPr´ C, Csfor allps, a, h, kqPSˆAˆrHsˆrKsand some Cą0. Then, conditioned on the good event G, the estimation error on the transition kernel can be bounded as follows: Kÿ k“1|Vπkpℓk, pq´Vπkpℓk,˜pkq|ď ˜OpC? NSAH3K`CS2AH3q. 24 An Optimistic Algorithm for online CMDPS with Anytime Adversarial Constraints Proof. By Lemma E.1, the desired term can be rewritten as Kÿ k“1|Vπkpℓk, pq´Vπkpℓk,˜pkq|“Kÿ k“1ˇˇˇˇˇE«Hÿ h“1pph´˜pk hqp¨|sh, ahqVπk h`1p¨;ℓk,˜pkq|s1, πk, pffˇˇˇˇˇ ďKÿ k“1ˇˇˇˇˇE«Hÿ h“1pph´˜pk hqp¨|sh, ahqVπk h`1p¨;ℓk, pq|s1, πk, pffˇˇˇˇˇlooooooooooooooooooooooooooooooooooooomooooooooooooooooooooooooooooooooooooon Term (I) `Kÿ k“1ˇˇˇˇˇE«Hÿ h“1pph´˜pk hqp¨|sh, ahqpVπk h`1p¨;ℓk,
pq´Vπk h`1p¨;ℓk,˜pkqq|s1, πk, pffˇˇˇˇˇloooooooooooooooooooooooooooooooooooooooooooooooooomoooooooooooooooooooooooooooooooooooooooooooooooooon Term (II) where we use the short-hand notation php¨|s, aqVπp¨;ℓk, pq“ř s1PSphps1|s, aqVπps1;ℓk, pq. Under the good event G, we have |pph´˜pk hqps1|s, aq|ďC1d phps, aqLp δ nk´1 hps, aq_1`C2Lp δ nk´1 hps, aq_1(40) due to Lemma E.3. Applying this, Term (I) can be written as Term (I)“Kÿ k“1ˇˇˇˇˇE«Hÿ h“1pph´˜pk hqp¨|sh, ahqVπk h`1p¨;ℓk, pq|s1, πk, pffˇˇˇˇˇ “Kÿ k“1ˇˇˇˇˇE«Hÿ h“1ÿ s1pph´˜pk hqps1|sh, ahqVπk h`1ps1;ℓk, pq|s1, πk, pffˇˇˇˇˇ “Kÿ k“1ˇˇˇˇˇE«Hÿ h“1ÿ s1pph´˜pk hqps1|sh, ahqpVπk h`1ps1;ℓk, pq´Es2„php¨|sh,ahqVπk h`1ps2;ℓk, pqq|s1, πk, pffˇˇˇˇˇ ďKÿ k“1E«Hÿ h“1ÿ s1ˇˇpph´˜pk hqps1|sh, ahqˇˇˇˇVπk h`1ps1;ℓk, pq´Es2„php¨|sh,ahqVπk h`1ps2;ℓk, pqˇˇ|s1, πk, pff ďKÿ k“1E«Hÿ h“1ÿ s1C1d phps1|sh, ahqLp δ nk´1 hpsh, ahq_1ˇˇVπk h`1ps1;ℓk, pq´Es2„php¨|sh,ahqVπk h`1ps2;ℓk, pqˇˇ|s1, πk, pff loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooomoooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooon Term (I-a) `Kÿ k“1E«Hÿ h“1ÿ s1C2Lp δ nk´1 hpsh, ahq_1ˇˇVπk h`1ps1;ℓk, pq´Es2„php¨|sh,ahqVπk h`1ps2;ℓk, pqˇˇ|s1, πk, pff looooooooooooooooooooooooooooooooooooooooooooooooooooooooooooomooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooon Term (I-b) where the third equality follows fromř s1pph´˜pk hqps1|s, aqEs2„php¨|sh,ahqVπk h`1ps2;ℓk, pq“0, and the last inequality is due to (40). Note that the expectations can be expressed by occupancy measures. Furthermore, in Term (I-a), we can replace 25 An Optimistic Algorithm for online CMDPS with Anytime Adversarial Constraints ř s1withř s1:phps1|s,aqą0. Then this can be rewritten as Term (I-a) “C1b Lp δKÿ k“1E» –Hÿ h“1ÿ s1:phps1|s,aqą0d phps1|sh, ahq nk´1 hpsh, ahq_1ˇˇVπk h`1ps1;ℓk, pq´Es2„php¨|sh,ahqVπk h`1ps2;ℓk, pqˇˇ|s1, πk, pfi fl “C1b Lp δKÿ k“1E» –Hÿ h“1ÿ s1:phps1|s,aqą0d phps1|sh, ahqpVπk h`1ps;ℓk, pq´Es2„php¨|sh,ahqVπk h`1ps2;ℓk, pqq2 nk´1 hpsh, ahq_1|s1, πk, pfi fl “C1b Lp δKÿ k“1ÿ ps,a,hqqπk hps, a;pqÿ s1:phps1|s,aqą0d phps1|s, aqpVπk h`1ps;ℓk, pq´Es2„php¨|s,aqVπk h`1ps2;ℓk, pqq2 nk´1 hps, aq_1 ďC1b Lp δgffeKÿ k“1ÿ ps,a,hqqπk hps, a;pqÿ s1:phps1|s,aqą0phps1|s, aqpVπk h`1ps;ℓk, pq´Es2„php¨|s,aqVπk h`1ps2;ℓk, pqq2 ˆgffeKÿ k“1ÿ ps,a,hqÿ s1:phps1|s,aqą0qπk hps, a;pq nk´1 hps, aq_1 where the inequality follows from the Cauchuy-Schwarz inequality. Here, ÿ s1:phps1|s,aqą0phps1|s, aqpVπk h`1ps;ℓk, pq´Es2„php¨|s,aqVπk h`1ps2;ℓk, pqq2“Vhps, a;πk, pq. Then, Term (I-a) is upper bounded as Term (I-a)ďC1b Lp δgffeKÿ k“1ÿ ps,a,hqqπk hps, a;pqVhps, a;πk, pqˆgffeKÿ k“1ÿ ps,a,hqÿ s1:phps1|s,aqą0qπk hps, a;pq nk´1 hps, aq_1 ďC1b Lp δgffeKÿ k“1E«Hÿ h“1Vhpsh, ah;πk, pq|s1, πk, pff ˆgffeNKÿ k“1ÿ ps,a,hqqπk hps, a;pq nk´1 hps, aq_1. By Lemma E.5, we have E”řH h“1Vhpsh, ah;πk, pq|s1, πk, pı ďVar´řH h“1ℓk,hpsh, ahq|s1, πk, p¯ . Since řH h“1ℓk,hpsh, ahqPr´ CH, CHsalmost surely, we have Var´řH h“1ℓk,hpsh, ahq|s1, πk, p¯ ďC2H2. Furthermore, we can bound the later term with Lemma E.2. Then it follows that Term (I-a)“˜OpC? KH2ˆ? NSAHq“˜OpC? NSAH3Kq. To bound Term (I-b), under the good event G, we haveˇˇVπk h`1ps1;ℓk, pq´Es2„php¨|sh,ahqVπk h`1ps2;ℓk, pqˇˇď2CH. By Lemma E.2, Term (I-b)ď2CHSKÿ k“1Hÿ h“1E« C2Lp δ nk´1 hpsh, ahq_1|s1, πk, pff “˜O` CH2S2A˘ . Then we have Term (I)“Term (I-a)`Term (I-b)“˜O´ C? NSAH3K`CH2S2A¯ . By Lemma E.4, we can bound Term (II) as Term (II)“˜OpCH3S2Aq. Finally, we have Kÿ k“1|Vπkpℓk, pq´Vπkpℓk,˜pkq|ď Term (I)`Term (II)“˜OpC? NSAH3K`CH3S2Aq. 26
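As a quick numeric sanity check of the constants used throughout Appendix C (our own check with arbitrary placeholder values for S, A, H, K, C, and L_δ; it verifies only the algebra, not the lemmas themselves): with α = 1/(2(1+√L_δ)SAH) and β = SAH/(8√C√(6SAHK)), the factor 1/(αβ) appearing at the end of the proof of Lemma 5.13 should equal 16(1+√L_δ)√C√(6SAHK).

```python
import math

# Placeholder values only; this checks the identity
# 1/(alpha*beta) = 16*(1+sqrt(L_delta))*sqrt(C)*sqrt(6*S*A*H*K).
S, A, H, K = 10, 5, 20, 1000
C_const, L_delta = 2.0, 3.0   # C is the OMD constant, L_delta the log term; values are arbitrary here

alpha = 1.0 / (2.0 * (1.0 + math.sqrt(L_delta)) * S * A * H)
beta = (S * A * H) / (8.0 * math.sqrt(C_const) * math.sqrt(6.0 * S * A * H * K))

lhs = 1.0 / (alpha * beta)
rhs = 16.0 * (1.0 + math.sqrt(L_delta)) * math.sqrt(C_const) * math.sqrt(6.0 * S * A * H * K)
print(math.isclose(lhs, rhs))  # True: the leading constant of the violation bound matches
```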
arXiv:2505.21847v1 [cs.CV] 28 May 2025RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers Xuwei Xu1 2Yang Li3Yudong Chen1Jiajun Liu3 1Sen Wang1 2 Abstract We reveal that feedforward network (FFN) layers, rather than attention layers, are the primary con- tributors to Vision Transformer (ViT) inference latency, with their impact signifying as model size increases. This finding highlights a critical opportunity for optimizing the efficiency of large- scale ViTs by focusing on FFN layers. In this work, we propose a novel channel idle mecha- nism that facilitates post-training structural repa- rameterization for efficient FFN layers during testing. Specifically, a set of feature channels remains idle and bypasses the nonlinear activa- tion function in each FFN layer, thereby form- ing a linear pathway that enables structural repa- rameterization during inference. This mecha- nism results in a family of RePa rameterizable Vision Transformers (RePaViTs), which achieve remarkable latency reductions with acceptable sacrifices (sometimes gains) in accuracy across various ViTs. The effectiveness of our method scale consistently with model sizes, demonstrat- ing greater speed improvements and progressively narrowing accuracy gaps or even higher accu- racies on larger models. In particular, RePa- ViT-Large and RePa-ViT-Huge enjoy 66.8% and 68.7% speed-ups with +1.7% and+1.1% higher top-1 accuracies under the same training strat- egy, respectively. RePaViT is the first to employ structural reparameterization on FFN layers to ex- pedite ViTs to our best knowledge, and we believe that it represents an auspicious direction for effi- cient ViTs. Source code is available at https: //github.com/Ackesnal/RePaViT . 1School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia.2ARC Train- ing Centre for Information Resilience (CIRES), The University of Queensland, Brisbane, Australia.3DATA61, CSIRO, Pul- lenvale, Brisbane, Australia.. Correspondence to: Jiajun Liu <ryan.liu@data61.csiro.au>, Sen Wang <sen.wang@uq.edu.au>. Proceedings of the 42ndInternational Conference on Machine Learning , Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s). (a) ViT BlockAttention+𝐍𝐍× Vanilla FFN LayerNormLinear 1Activation LayerNorm Patch Embed.Linear 2+ (b) RePa ViT BlockAttention+𝐍𝐍× Channel Idle FFN BatchNormLinear 1Act. LayerNorm Patch Embed.Linear 2+ BatchNorm (c) Reparameterized RePa ViT BlockAttention+𝐍𝐍× RePa FFN Act. LayerNorm Patch Embed.+ RePa Linear 1RePa Linear 2 RePa Linear 3Figure 1. RePaViT architecture. (a) represents the vanilla ViT block. (b) illustrates our channel idle mechanism for FFN layers during training, where only a subset of channels are activated while the rest bridge a linear pathway. (c) shows the reparameterized RePaViT block during testing, where the number of parameters and computational complexity are significantly reduced. 1. Introduction Vision Transformer (ViT) (Dosovitskiy et al., 2021) and its advanced variants (Touvron et al., 2021; Liu et al., 2021; Ryoo et al., 2021; Yu et al., 2022c; Liu et al., 2022; De- hghani et al., 2023) have achieved outstanding performance in various computer vision tasks. However, the high compu- tational cost and memory demand of ViTs hinder their wide deployment in real-world scenarios, especially in computing resource-constrained environments. 
To improve efficiency for ViTs, several techniques have been developed, such as token pruning (Rao et al., 2021; Liang et al., 2021; Kong et al., 2022a;b; Fayyaz et al., 2022) and
token merging (Bolya et al., 2023; Zong et al., 2022; Marin et al., 2023; Xu et al., 2024b; Kim et al., 2024) methods that gradually reduce the number of image tokens as the layer goes deep; hybrid architectures (Mehta & Rastegari, 2022a; Chen et al., 2022a; Maaz et al., 2022; Li et al., 2022; Zhang et al., 2023) that embed efficient convolutional neural networks (CNNs) into ViTs; and network pruning (Yu et al., 2022b;a; Yu & Xiang, 2023; Zhang et al., 2024; He & Zhou, 2024) methods that remove less important parameters while preserving performance. Meanwhile, knowledge distillation methods (Touvron et al., 2021; Hao et al., 2022; Wu et al., 2022; Chen et al., 2022b) are introduced to further optimize efficient ViTs’ performance.
Figure 2 (top-1 accuracy vs. throughput in images/second; model sizes 20M/50M/100M). Performance comparison of RePaViTs and their vanilla backbones. RePaViTs (red circled) consistently achieve greater accelerations and smaller accuracy gaps when model sizes increase, showing the potential effectiveness in expediting large-scale ViTs. It is also worth noting that RePa-ViT-Large not only improves inference speed by more than 50% but also raises accuracy by 1.7%.
Despite growing interest in efficient ViTs, existing approaches often overlook structural reparameterization (Ding et al., 2019; 2021b; Zhu et al., 2023), a powerful network simplification technique widely used in CNNs. Structural reparameterization enables networks to adopt different structures during training and inference by merging multi-branch convolutions or adjacent BatchNorm (Ioffe & Szegedy, 2015) and convolution via linear algebra operations. This process allows a complex architecture during training to be compressed into a simpler structure for inference, thereby improving efficiency. Some recent research (Vasu et al., 2023a; Guo et al., 2024) has investigated structural reparameterization for ViTs by integrating elements from CNNs into ViTs and subsequently reparameterizing only these CNN components. However, little attention has been given to directly applying structural reparameterization to the intrinsic architecture of ViTs, particularly to their fundamental building blocks.
Among these building blocks, feedforward network (FFN) layers represent a promising yet underexplored target for applying structural reparameterization. A typical FFN layer consists of two consecutive linear projections with a nonlinear activation function in between (i.e., Figure 1(a)). The two linear projections can be potentially merged via structural reparameterization to reduce complexity during testing. Notably, reducing FFN complexity is particularly critical for improving the efficiency of ViTs. Despite their straightforward structure, FFN layers account for more than 60% of the total computational complexity in ViT models (Li et al., 2022; Mehta & Rastegari, 2022b).
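As a rough back-of-the-envelope illustration of this share (our own estimate with assumed DeiT-Base-like shapes, not the paper's profiling), the sketch below tallies multiply–accumulate operations for one encoder block, ignoring the patch embedding, normalizations, and classifier head:

```python
# Approximate per-block MAC counts for a plain ViT encoder block (our own estimate).
def block_macs(N, C, rho=4):
    qkv = 3 * N * C * C          # Q, K, V projections
    attn = 2 * N * N * C         # QK^T scores plus attention-weighted sum of V
    proj = N * C * C             # attention output projection
    mhsa = qkv + attn + proj
    ffn = 2 * rho * N * C * C    # two FFN projections: C -> rho*C and rho*C -> C
    return mhsa, ffn

mhsa, ffn = block_macs(N=197, C=768)                          # DeiT-Base-like shapes (assumed)
print(f"FFN share of block MACs: {ffn / (mhsa + ffn):.1%}")   # roughly 64% for these shapes
```

For these assumed shapes the FFN share comes out around 64%, consistent with the more-than-60% figure cited above; because the FFN term grows quadratically in C while the attention matmuls grow with N, the share increases further as the channel width scales up.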
Furthermore, we observe that FFN layers contribute a substantial portion of the total latency in ViTs, with this contribution scaling up as
the model size grows, as shown in Figure 3. These observations reflect the urgent demand for techniques to optimize FFN layers, especially for large-scale ViTs. To facilitate structural reparameterization for FFN layers, in this work, we propose an innovative channel idle mech- anism. Specifically, in each FFN layer, only a small sub- set of feature channels undergo the activation function to provide necessary nonlinearity while the rest channels re- main idle, as shown in Figure 1(b). Consequently, these idle channels bridge a linear pathway through the activation function, enabling structural reparameterization during infer- ence. Moreover, inspired by Yao et al. (2021), we substitute the LayerNorm (Lei Ba et al., 2016) with BatchNorm (Ioffe & Szegedy, 2015) and add another BatchNorm before the second linear projection. These BatchNorms can be repa- rameterized into their adjacent linear projection weights, which allows further reparameterization of the shortcut. With the proposed channel idle mechanism, a family of RePa rameterizable Vision Transformers (RePaViTs) are developed, whose FFN layers can be reparameterized to condensed structures during inference as Figure 1(c) shows. Extensive experiments on various ViTs have validated the effectiveness of our method, demonstrating its potential to enhance the applicablity of ViTs in resource-constrained environments. Moreover, as Figure 2 illustrates, the ex- 2 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers perimental results further indicate that our method delivers more significant acceleration and narrower performance disparity as the model complexity increases. In particular, RePaViT accelerates ViT-Large and ViT-Huge models by ~68% speed gain while even improving accuracy by 1~2% compared to their vanilla versions. This also demonstrates a transformative contribution, as many practical large-scale foundation models for computer vision tasks utilize ViTs as their backbones, such as CLIP (Radford et al., 2021; Cherti et al., 2023) and SAM (Kirillov et al., 2023). Moreover, our RePaViT achieves better trade-offs between speed improve- ment and accuracy compared to state-of-the-art network pruning methods. To our best knowledge, RePaViT is the first method that successfully applies structural reparameterization on FFN layers for efficient ViTs, and achieves significant acceler- ation while having positive gains in accuracy instead of accuracy drops on large and huge ViTs with the same train- ing strategies. 2. Related Work 2.1. Efficient Vision Transformer Methods Vision Transformer (ViT) (Dosovitskiy et al., 2021) adapts the Transformer (Vaswani et al., 2017) architecture for com- puter vision, achieving success on various computer vision tasks. However, ViT suffers a substantial computational complexity. To alleviate the computational burden, several techniques that focus on structural design for efficient ViTs have been proposed. Spatial-wise token reduction methods are developed to identify less important tokens and subse- quently prune (Rao et al., 2021; Liang et al., 2021; Kong et al., 2022a; Fayyaz et al., 2022; Xu et al., 2022; Meng et al., 2022; Tang et al., 2022; Xu et al., 2023) or merge (Bolya et al., 2023; Zong et al., 2022; Marin et al., 2023; Xu et al., 2024b; Kim et al., 2024) them during inference. As a result, the number of tokens participating in the self-attention com- putation is reduced. Meanwhile, hybrid
architectures that combine self-attentions with computationally efficient con- volutions (Graham et al., 2021; Mehta & Rastegari, 2022a; Chen et al., 2022a; Li et al., 2022; Cai et al., 2023; Vasu et al., 2023a; Zhang et al., 2023; Shaker et al., 2023) are introduced to reduce the computationally expensive self- attention operations while introducing regional biases into ViTs. In addition to hybrid ViTs, MetaFormer (Yu et al., 2022c) figures out that ViTs benefit from their architectural design, which consists of one token mixer layer and one multi-layer perception layer, and the token mixer can be replaced by more efficient operations, such as average pool- ing (Yu et al., 2022c) or linear projection (Tolstikhin et al., 2021). However, these approaches overlook the structural reparameterization method, which can effectively compress a network that contains consecutive linear transformations,such as FFN layers in ViTs. Our work is the first to apply structural reparameterization on FFN layers for ViTs. 2.2. Structural Reparameterization Structural reparameterization is an effective network sim- plification technique that is typically employed in multi- branch CNNs (Ding et al., 2019; Guo et al., 2020; Ding et al., 2021a;b). It converts an over-parameterized network block into a compressed structure during testing, thereby reducing the model complexity and increasing the speed for the inference stage. For instance, after reparameterizing its multi-branch convolutions and shortcuts into a single branch, RepVGG-B0 (Ding et al., 2021b) achieves 71% speed-up with no accuracy loss. Although some recent studies claim to adopt structural reparameterization for enhancing ViTs’ efficiency (Vasu et al., 2023a; Wang et al., 2024; Tan et al., 2024), they primarily construct a hybrid architecture con- sisting of both convolutions and self-attentions and only perform reparameterization on the convolutional part. A recent state-of-the-art method, SLAB (Guo et al., 2024), proposes to progressively substitute LayerNorms in ViTs with BatchNorms and reparameterize BatchNorms into lin- ear projection weights. Unlike these methods, we are the first to apply structural reparameterization on FFN layers. 3. Method 3.1. Latency Analysis To understand the significance of improving efficiency for FFN layers, we profile the latencies of major components in several representative ViT models in Figure 3, including DeiT (Touvron et al., 2021), Swin Transformer (Liu et al., 2021) and ViT (Dosovitskiy et al., 2021). Figure 3 illustrates that FFN layers constitute a substantial portion of the total processing time, which escalates quickly as the model size increases. For instance, in the DeiT-Small model, FFN lay- ers contribute to approximately 32.8% of the inference time, while in the DeiT-Base model, this proportion increases to 45.1%. Moreover, the percentage of FFN layers’ latency in the large-scale ViT-Large model rises to 53.8%, more than half of the total inference time. This phenomenon arises because scaling up ViTs typically involves increasing the number of channels, whereas the number of tokens tends to remain constant. Meanwhile, the computational complexity of an FFN layer, quantified asO(2ρNC2), is quadratic to the number of feature chan- nels. Consequently, as the model expands, the FFN layers become significantly more computationally expensive. In conclusion, optimizing FFN layers becomes considerably important for minimizing the overall computational costs for
large ViTs. 3 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers 0 1 2 3 4 Latency (ms)RePa-ViT-Large ViT-LargeRePa-Swin-Base Swin-BaseRePa-Swin-Small Swin-SmallRePa-DeiT-Base DeiT-BaseRePa-DeiT-Small DeiT-SmallPatch Embedding MHSA FFN Reparameterized FFN Figure 3. Latency analysis. Visualization of the runtime latencies of patch embedding, MHSA and FFN layers. Notably, as the model size increases, the proportion of latency attributed to FFN layers also rises. Our method effectively reduces the latency of FFN layers and obtains increasingly better performance on larger models, demonstrating a scalable acceleration of FFN layers. 3.2. Channel Idle Mechanism for FFN Layers As Figure 1(a) illustrates, a typical FFN layer consists of two linear projections with a nonlinear activation function in between. Given an input X∈RN×Cwhere Nrepresents the number of tokens and Cdenotes the number of feature channels, the FFN layer process can be formulated as Y=FFN (LN(X)) +X=Act(LN(X)WIn)WOut+X,(1) where WIn∈RC×ρC,WOut∈RρC×Care the linear projec- tion weights, LN(·)is LayerNorm (Lei Ba et al., 2016) and Act(·)is usually the GELU (Hendrycks & Gimpel, 2016) activation function. ρis the FFN expansion ratio, which is usually set to 4. The biases are omitted for simplicity since they are inherently linear and do not interfere with the repa- rameterization process. Unfortunately, due to the nonlinear activation function, the structural reparameterization cannot directly merge the two linear projection weights WInand WOutvia linear algebra operations. Inspired by ShuffleNetv2 (Ma et al., 2018) which keeps a group of channels idle in grouped convolutions and shuffles channels for information exchange, we propose a simple yet effective channel idle mechanism to enable reparameteriza- tion in FFN layers. Specifically, this mechanism maintains a large subset of feature channels inactivated in an FFN layer, thereby bridging a linear pathway through the nonlinear acti- vation function in the corresponding FFN layer. In addition, we substitute LayerNorm with BatchNorm (BN) (Ioffe & Szegedy, 2015) to enable post-training reparameterization of normalization and shortcut for the FFN layer. As a result, our channel idle mechanism during the training stage can be formulated as XIn=BN(X)WIn, XAct=Concat (Act(XIn [:,1:µC]),XIn [:, µC+1:ρC]), Y=BN(XAct)WOut+X,(2) where the activation function is only applied on µC(µ < ρ ) feature channels. The (ρ−µ)Cidling feature channels construct a linear route as presented in Figure 1(b).We further define the channel idle ratio as θ= 1−µ ρ, which represents the percentage of feature channels keeping inactivated in the FFN layer. µis set to 1by default in the following experiments unless otherwise noted, leading to the default θ= 1−1 ρ(e.g.,θ= 0.75when ρ= 4, indicating 75% channels are idling when the expansion ratio is 4). 3.3. Structural Reparameterization for FFN layers With the channel idle mechanism defined in Equation 2, we are able to simplify the FFN layer by structural reparameter- ization during the testing stage. Firstly, we reparameterize the BatchNorms into their corresponding linear projection weights as eWIn=γXp σ2 X+ϵXWIn, eWOut=γXActq σ2 XAct+ϵXActWOut,(3) where γs,σ2s and ϵs are the empirical means, empirical variances and constants from the frozen BatchNorm layers, respectively. 
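A minimal sketch of the training-time channel idle FFN in Equation 2 (our own illustration, not the released implementation): BatchNorm is written in its frozen, inference-time form, biases and ε are omitted as in the paper's simplified equations, and X is flattened to (tokens, channels).

```python
import torch
import torch.nn.functional as F

def channel_idle_ffn(X, W_in, W_out, bn1, bn2, mu_C):
    """Channel idle FFN of Eq. 2 (sketch): only the first mu_C channels pass through the activation.

    X:        (N, C) token features
    W_in:     (C, rho*C) first projection weight
    W_out:    (rho*C, C) second projection weight
    bn1, bn2: (gamma, running_var) pairs for the two BatchNorms (frozen statistics, bias/eps omitted)
    """
    def bn(x, stats):
        gamma, var = stats
        return x * gamma / torch.sqrt(var)          # simplified frozen BatchNorm (scale only)

    X_in = bn(X, bn1) @ W_in                        # first projection
    X_act = torch.cat([F.gelu(X_in[:, :mu_C]),      # activated channels
                       X_in[:, mu_C:]], dim=-1)     # idle channels: a purely linear pathway
    return bn(X_act, bn2) @ W_out + X               # second projection + shortcut
```

With the default µ = 1 and ρ = 4, only a quarter of the hidden channels see the GELU; the remaining three quarters stay linear, which is what makes the post-training merge in Equations 3–6 possible.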
With the reparameterized projection weights $\widetilde{\mathbf{W}}^{\mathrm{In}}$ and $\widetilde{\mathbf{W}}^{\mathrm{Out}}$, the output $\mathbf{Y}$ in Equation 2 can be reformulated as
$$\mathbf{Y} = \mathrm{Act}\left(\mathbf{X}\widetilde{\mathbf{W}}^{\mathrm{In}}_{[:,\,1:\mu C]}\right)\widetilde{\mathbf{W}}^{\mathrm{Out}}_{[1:\mu C,\,:]} + \mathbf{X}\widetilde{\mathbf{W}}^{\mathrm{In}}_{[:,\,\mu C+1:\rho C]}\widetilde{\mathbf{W}}^{\mathrm{Out}}_{[\mu C+1:\rho C,\,:]} + \mathbf{X}. \qquad (4)$$
Then, we
further reparameterize the weights as eW=eWIn [:, µC+1:ρC]eWOut [µC+1:ρC,:]+I. (5) By substituting Equation 5 into Equation 4, we obtain the updating function for the FFN layer during the testing stage with three reparameterized weights as Z=Act(YeWIn [:,1:µC])eWOut [1:µC,:]+YeW. (6) As Figure 1(c) shows, after reparameterization, the two massive linear projections are converted into three smaller linear transformations with fewer parameters and all the normalizations are merged into linear projection weights. 4 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers 3.4. Computational Complexity Analysis Number of parameters: The vanilla FFN layer’s param- eters are mainly derived from the two linear projection weights WIn∈RC×ρCandWOut∈RρC×C, totalling 2ρC2. In contrast, with our channel idle mechanism, the weights are reparameterized into three terms: an input weight eWIn[:,1:µC]∈RC×µC, an output weight eWOut[1:µC,:]∈ RµC×Cand a reparameterized weight eW∈RC×C. The to- tal number of parameters is effectively reduced from 2ρC2 to(2µ+ 1)C2. Consequently, in the reparameterized FFN layer, the parame- ter count is diminished to 1−θ+1 2ρof the original parameter count, where θis the aforementioned idle ratio. For instance, when ρ= 4andθ= 0.75, the number of parameters in an FFN layer declines to 37.5% post-parameterization. This reduction significantly simplifies the model, diminishing its memory consumption. Computational complexity: The computational complex- ity of the vanilla FFN layer is O(2ρNC2)while the com- putational complexity is significantly reduced to O((2µ+ 1)NC2)in our reparameterized FFN layer. The computa- tional complexity reduction ratio for an FFN layer is also 1−θ+1 2ρ. It is worth noting that, due to the elimination of normaliza- tions and shortcuts in the FFN layer, the inference speed gain is more than the computational complexity reduction. 3.5. Comparison against RepVGG-style Reparameterization RepVGG (Ding et al., 2021b) introduces structural repa- rameterization into CNNs, where multi-branch convolutions are merged into a single-branch convolution through linear operations on convolution kernels. While RePaViT draws inspiration from RepVGG, there are significant differences between our structural reparameterization approach and the RepVGG-style reparameterization: •Different targets: Existing works using RepVGG-style reparameterization for efficient ViTs (Vasu et al., 2023a;b) introduce CNN components into ViTs and only reparam- eterize those convolutional components. In contrast, our method directly targets existing FFN layers in ViTs, aim- ing to improve the efficiency of standard ViT architectures rather than designing an entirely new backbone. Thus, the application objectives are fundamentally distinct. •Different reparameterization solutions: Another differ- ence is that RepVGG reparameterizes horizontally across parallel convolutional kernels, while RePaViT reparame- terizes vertically on consecutive linear projection weights. Mathematically, RepVGG reparameterizes two parallel convolutional branches with kernels WConv 1andWConv 2bysumming them: eWConv Rep=WConv 1 +WConv 2. (7) On the contrary, as demonstrated in Equation 5, RePaViT reparameterizes two consecutive projection weights WFFN 1 andWFFN 2by multiplying them: eWFFN Rep=WFFN 1·WFFN 2. 
(8) In the above example, $\mathbf{W}^{\mathrm{Conv}}_{1}$ and $\mathbf{W}^{\mathrm{Conv}}_{2}$ have been padded to the same shape, and the reparameterization processes of BatchNorm and biases are omitted for simplicity. It is also worth noting that our channel idle mechanism cannot be regarded as a special case of a dual-branch structure in RepVGG. In RepVGG, all branches must be linear so that they can be
reparameterized, whereas in our approach, one branch is linear while the other one is nonlinear. 4. Experiments 4.1. Datasets, Training and Evaluation Settings We mainly train and test RePaViTs for the image classifica- tion task on the widely recognized ImageNet-1k (Deng et al., 2009) dataset, following the data augmentations and training recipes proposed by Touvron et al. (2021) as the standard practice. In line with Yao et al. (2021), the maximum learn- ing rate is set to 4×10−3with 20 epochs of warmup from 1×10−6. The default batch size and total training epochs are 4096 and 300, respectively. For dense prediction tasks, we follow the configurations from MMDetection (Chen et al., 2019) and MMSegmentation (Contributors, 2020) to finetune RePaViTs on MSCOCO (Lin et al., 2014) and ADE20K (Zhou et al., 2017) datasets for object detection and segmentation tasks, respectively. All the models are trained from scratch on NVIDIA H100 GPUs. To ensure fair comparisons, we measure the throughput of all the models on the same NVIDIA A6000 GPU with the same environ- ments and a fixed batch size of 128. FlashAttention (Dao et al., 2022) is used for self-attention computation during inference measurement by default. More implementation details on the training settings are provided in Appendix A. 4.2. Classification Results Backbones: We choose four ViT backbones, including a representative plain-structured ViT (DeiT (Touvron et al., 2021)), a representative hierarchical-structured ViT (Swin Transformer (Liu et al., 2021)), a plain ViT trained with token labelling (LV-ViT (Jiang et al., 2021)), and large-scale ViT (Dosovitskiy et al., 2021). The FFN layers in these models are embedded with the channel idle mechanism and are all trained from scratch solely on the ImageNet-1k dataset by supervised learning. 5 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers Table 1. Performance comparisons among RePaViTs and their vanilla backbones. For the "RePa" column, ×and√stands for the RePaViT model pre- and post-reparameterization, respectively. The decimals after model names ( i.e., 0.50 and 0.75) represent the channel idle ratios ( θ). When the backbone architecture fixes, our method consistently achieves greater accelerations and complexity reductions while narrowing the accuracy gap as the model size grows. Model RePa #MParam. 
↓Complexity (GMACs)↓Speed (images/second)↑Top-1 accuracy↑ DeiT-Tiny - 5.7 1.1 3435.1 72.1% × 5.7 1.1 2397.9RePa-DeiT-Tiny/0.50 √4.4(−22.8%)0.8(−27.3%)4001.2 (+16.5%)69.4% (−2.7%) DeiT-Small - 22.1 4.3 1410.3 79.8% × 22.1 4.3 1000.9RePa-DeiT-Small/0.5 √16.7 (−24.4%)3.2(−25.6%)1734.7 (+23.0%)78.9% (−0.9%) DeiT-Base - 86.6 16.9 418.5 81.8% × 86.6 16.9 336.6RePa-DeiT-Base/0.75 √51.1 (−41.0%)9.9(−41.4%)660.3 (+57.8%)81.3% (−0.5%) ViT-Large - 304.3 59.7 124.2 80.3% × 304.5 59.8 102.7RePa-ViT-Large/0.75 √178.4 (−41.4%)34.9 (−41.5%)207.2 (+66.8%)82.0% (+1.7%) ViT-Huge - 632.2 124.3 61.5 80.3% × 632.5 124.4 53.0RePa-ViT-Huge/0.75 √369.9 (−41.5%)72.6 (−41.6%)103.8 (+68.7%)81.4% (+1.1%) Swin-Tiny - 28.3 4.4 804.4 81.2% × 28.3 4.4 614.9RePa-Swin-Tiny/0.75 √17.5 (−38.2%)2.6(−40.9%)1020.4 (+26.9%)78.4% (−2.8%) Swin-Small - 49.6 8.6 471.7 83.0% × 49.7 8.6 363.1RePa-Swin-Small/0.75 √29.9 (−39.7%)5.1(−40.7%)627.8 (+33.1%)81.4% (−1.6%) Swin-Base - 87.8 15.2 326.6 83.5% × 87.9 15.2 249.4RePa-Swin-Base/0.75 √52.8 (−39.9%)9.0(−40.8%)467.6 (+43.2%)82.6% (−0.9%) LV-ViT-S - 26.2 6.1 866.6 81.4% × 26.2 6.1 725.4RePa-LV-ViT-S/0.75 √19.1 (−27.1%)4.7(−23.0%)1110.9 (+28.2%)81.6% (+0.2%) LV-ViT-M - 55.8 11.9 457.6 83.6% × 55.9 11.9 396.6RePa-LV-ViT-M/0.75 √40.1 (−28.1%)8.8(−26.1%)640.6 (+40.0%)83.5% (−0.1%)Table 2. Comparison with state-of-the-art network pruning
methods for efficient ViTs. "-" indicates that the statistic is either missing or irreproducible. Our method demonstrates significantly higher speed-ups compared to pruning methods while achieving competitive or even higher top-1 accuracies across various ViT backbones. Backbone Method #MParam. ↓Compl. (GMACs)↓Speed improv.↑Top-1 acc.↑ WDPruning 13.3 2.6 +18.3% 78.4% X-pruner - 2.4 - 78.9% DC-ViT 16.6 3.2 +20.0% 78.6% LPViT 22.1 2.3 +16.3% 80.7% RePaViT/0.50 16.7 3.2 +23.0% 78.9%DeiT-Small RePaViT/0.75 13.2 2.5 +54.4% 76.3% WDPruning 55.3 9.9 +18.2% 80.8% X-pruner - 8.5 - 81.0% DC-ViT 65.1 12.7 +18.4% 81.3% LPViT 86.6 8.8 +18.8% 80.8% RePaViT/0.50 65.3 12.7 +28.6% 81.4%DeiT-Base RePaViT/0.75 51.1 10.6 +57.8% 81.3% WDPruning 32.8 6.3 +15.3% 81.8% X-pruner - 6.0 - 82.0% RePaViT/0.50 37.8 6.4 +20.7% 82.8%Swin-Small RePaViT/0.75 29.9 5.1 +33.1% 81.4% DC-ViT 66.4 11.5 +14.9% 83.8% LPViT 87.8 11.2 +8.9% 81.7% RePaViT/0.50 66.8 11.5 +19.6% 83.4%Swin-Base RePaViT/0.75 52.8 9.0 +42.4% 82.6% Table 3. Comparison against the state-of-the-art repa- rameterization method for ViTs. With a similar num- ber of parameters, RePaViT obtains both faster inference speeds and higher accuracies than SLAB (Guo et al., 2024). Model #MParam. ↓Compl. (GMACs)↓Speed (img/s)↑Top-1 acc.↑ SLAB-DeiT-Base 86.6 17.1 387.0 78.9% RePa-DeiT-Base/0.25 79.5 15.5 452.3 81.1% SLAB-Swin-Base 87.7 15.4 299.9 83.6% RePa-Swin-Base/0.25 80.8 14.0 356.3 83.7% Reparameterization results: Table 1 presents the image classification performance of RePaViTs before and after reparameterization, and compares with their vanilla back- bones. Due to the nature of linear algebra operations, the pre- and post-reparameterization accuracies are the same. In general, our innovative channel idle mechanism remark- ably enhances these models’ computational efficiency and throughput while preserving their accuracy. We observe that with the same backbone architecture, RePaViT achieves more substantial acceleration with a narrowing accuracy gap when the model size increases. For example, employ- ing DeiT as the backbone, the smaller DeiT-Tiny model witnesses a 16.5% speed-up at the cost of a 2.7% accuracy loss. However, when scaled up to the DeiT-Base model, our approach delivers a 57.8% throughput improvement, with only a marginal 0.5% drop in accuracy. This pattern is consistent across various models. In cases where the backbones include additional regularizations during train- ing, our method not only accelerates performance but alsopreserves accuracy to a remarkable extent. In particular, on the LV-ViT-M model, we facilitate a 40.0% increase in the inference speed with a negligible 0.1% decrease in accuracy. Notably, RePaViT yields ~68% speed-up and even 1~2% higher accuracy on ViT-Large and ViT-Huge models , indicating its potential on large-scale foundation models. This insight demonstrates the practical value of RePaViT in accelerating large-scale models without compromising performance, making it an effective solution for large-scale real-world applications requiring both speed and precision. 4.3. Comparison Against Network Pruning While several network pruning methods for efficient ViTs fo- cus on reducing the number of parameters and the theoretical computational complexity during inference, our approach differs fundamentally from these methods. 
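Returning briefly to the reparameterization identity noted above (Table 1: pre- and post-reparameterization accuracies coincide because the two forms are algebraically equal), the identity between Equation 2 and the reparameterized Equations 3–6 can be checked numerically. Below is a toy-sized sketch of our own (random weights, frozen scale-only BatchNorm, biases and ε omitted as in the paper's simplified equations), not the released code:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, C, rho, mu = 4, 8, 4, 1                       # toy sizes; mu*C channels are activated, the rest idle
mC = mu * C
X = torch.randn(N, C)
W_in, W_out = torch.randn(C, rho * C), torch.randn(rho * C, C)
g1, v1 = torch.rand(C) + 0.5, torch.rand(C) + 0.5              # BN scales / running variances (frozen)
g2, v2 = torch.rand(rho * C) + 0.5, torch.rand(rho * C) + 0.5

def train_time(X):                               # Eq. 2 with frozen BatchNorm statistics
    X_in = (X * g1 / v1.sqrt()) @ W_in
    X_act = torch.cat([F.gelu(X_in[:, :mC]), X_in[:, mC:]], dim=-1)
    return (X_act * g2 / v2.sqrt()) @ W_out + X

# Eq. 3: fold each BatchNorm into the adjacent projection weight
W_in_t = (g1 / v1.sqrt()).unsqueeze(1) * W_in
W_out_t = (g2 / v2.sqrt()).unsqueeze(1) * W_out
# Eq. 5: merge the idle (linear) pathway and the shortcut into one C x C weight
W_rep = W_in_t[:, mC:] @ W_out_t[mC:, :] + torch.eye(C)

def test_time(X):                                # Eq. 6: three smaller linear maps, no norms, no shortcut
    return F.gelu(X @ W_in_t[:, :mC]) @ W_out_t[:mC, :] + X @ W_rep

print(torch.allclose(train_time(X), test_time(X), atol=1e-5))   # True, up to floating-point error
```

This sketch also makes the contrast with RepVGG-style reparameterization concrete: the two consecutive weights are merged by a matrix product along the idle pathway (Equation 8) rather than by summing parallel kernels (Equation 7).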
We provide a comparison with state-of-the-art and representative network pruning techniques in Table 2, including WDPruning (Yu et al., 2022a), X-Pruner (Yu & Xiang, 2023), DC-ViT Table 4.
Sensitivity of channel idle ratio θ.The performance of RePaViT on plain (DeiT (Touvron et al., 2021)) and hierarchical (Swin (Liu et al., 2021)) ViTs with various θis reported. θ=* represents the vanilla backbone. θ=1.00 implies the nonlinear activation being re- moved from the model. The results show a significant accuracy drop when θsurpasses 0.75. BackboneIdle ratioθ#MParam. ↓Compl. (GMACs)↓Speed (img/s)↑Top-1 acc.↑ DeiT-Tiny1.00 2.6 0.5 5810.1 48.6% 0.75 3.5 0.6 4470.8 64.2% 0.50 4.4 0.8 4001.2 69.4% 0.25 5.3 1.0 3575.6 71.9% * 5.7 1.1 3435.1 72.1% DeiT-Small1.00 9.6 1.8 2612.9 63.9% 0.75 13.2 2.5 2003.7 76.3% 0.50 16.7 3.2 1734.7 78.9% 0.25 20.3 3.9 1489.7 80.3% * 22.1 4.3 1410.3 79.8% DeiT-Base1.00 37.0 7.1 878.7 73.7% 0.75 51.1 9.9 660.3 81.3% 0.50 65.3 12.7 538.0 81.4% 0.25 79.5 15.5 452.3 81.1% * 86.6 16.9 418.5 81.8% Swin-Tiny1.00 13.2 1.9 1180.1 67.6% 0.75 17.5 2.6 1020.4 78.4% 0.50 21.8 3.3 905.9 80.5% 0.25 26.1 4.0 844.8 81.4% * 28.3 4.4 804.4 81.2% Swin-Small1.00 22.1 3.7 745.0 72.5% 0.75 29.9 5.1 627.8 81.4% 0.50 37.8 6.5 569.2 82.8% 0.25 45.7 7.9 514.5 83.1% * 49.6 8.6 471.7 83.0% Swin-Base1.00 38.8 6.5 539.0 75.5% 0.75 52.8 9.0 467.6 82.6% 0.50 66.8 11.5 390.6 83.4% 0.25 80.8 14.0 356.3 83.7% * 87.8 15.2 326.6 83.5%Table 5. Ablation study on train-time reparameterization.√ for "Training RePa" stands for reparameterizing the model before training.√for "BatchNorm RePa" represents that the Batch- Norm before a linear projection is reparameterized into the pro- jection weight. "-" under top-1 accuracy means training failure. Overall, training with full parameters and reparameterizing dur- ing testing yields better performance. ModelTraining RePaBatchNorm RePaTraining #MParam.Top-1 accuracy↑ RePa-DeiT-Tiny/0.75√ √3.5 59.6%√× 3.5 64.3% × × 5.7 64.2% RePa-DeiT-Small/0.75√ √13.2 75.0%√× 13.2 75.7% × × 22.1 76.3% RePa-DeiT-Base/0.75√ √51.1 -√× 51.1 80.6% × × 86.6 81.3% RePa-ViT-Large/0.75√ √178.4 -√× 178.5 80.6% × × 304.5 82.0% RePa-Swin-Tiny/0.75√ √17.5 77.1%√× 17.5 78.0% × × 28.3 78.4% RePa-Swin-Small/0.75√ √29.9 79.3%√× 30.0 79.1% × × 49.7 81.4% RePa-Swin-Base/0.75√ √52.8 79.6%√× 52.9 80.3% × × 87.9 82.6% RePa-LV-ViT-S/0.75√ √19.1 -√× 19.1 81.3% × × 26.2 81.6% RePa-LV-ViT-M/0.75√ √40.1 -√× 40.2 - × × 55.9 83.6% (Zhang et al., 2024), and LPViT (Xu et al., 2024a). Due to unavailable or incomplete code repositories of certain state- of-the-art pruning methods, we rely on the performance statistics reported in the original papers and align efficiency optimization using speed improvements for fairness. Table 2 shows that the structural reparameterization ap- proach of RePaViT achieves significantly greater inference acceleration compared to network pruning methods. More- over, the effectiveness of our method increases as model size grows. For example, while the state-of-the-art DC-ViT achieves speed improvements of approximately 15~20% across all backbones, RePaViT provides 19.6% to 57.8% speed improvements when the model scales up. These re- sults highlight two key advantages of our method: •Computing environment friendly : Our reparameterized model is dense and structurally regular, making it efficient to run on general-purpose hardware without requiring spe-cialized hardware and software support for sparse matrix operations. So our method can bring more speed-ups in general computing environments. •Scaling effectiveness on larger models : Compared
with network pruning methods, RePaVit yields more accelera- tions and smaller performance gaps on larger models even with the same channel idle ratio θ. This underscores the important practical value of RePaViT on large foundation models for vision tasks. 4.4. Comparison Against State-of-The-Art Method Table 3 compares our RePaViT approach against SLAB (Guo et al., 2024), a recent state-of-the-art method introduc- ing progressive reparameterized BatchNorms for ViTs. For fair comparisons with similar model sizes, the performance of RePaViTs with θ=0.25 is used. The results indicate that our reparameterization strategy offers a better trade-off be- 7 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers Table 6. Performance on dense prediction tasks. Results on the 1 ×training schedule are presented. The latencies (ms) per image are reported for throughput comparisons. ModelRetinaNet Mask R-CNN UperNet Latency (ms)↓AP↑AP50↑AP75↑APS↑APM↑APL↑Latency (ms)↓AP↑AP50↑AP75↑APS↑APM↑APL↑Latency (ms)↓mIoU↑ Swin-Small 61.7 37.2 56.9 39.6 22.4 40.5 49.4 62.5 45.5 67.8 49.9 28.6 49.2 60.4 36.3 47.6 RePa-Swin-Small 53.8 (−12.8%)38.3 57.9 40.7 21.8 42.0 51.6 53.8 (−13.9%)43.6 65.8 47.8 27.1 47.0 57.3 32.1 (−11.6%)45.7 Swin-Base 82.0 38.9 59.5 41.3 24.3 43.6 54.4 82.6 45.8 67.6 50.3 28.7 48.9 61.7 45.6 48.1 RePa-Swin-Base 66.7 (−18.7%)39.8 60.0 42.1 25.3 43.7 53.8 69.4 (−16.0%)44.8 67.0 49.4 29.0 48.5 58.4 38.6 (−15.4%)46.9 tween efficiency and accuracy. For example, when utilizing DeiT-Base as the backbone, our method not only achieves a higher speed and fewer parameters but also surpasses SLAB by a 2.2% higher accuracy. 4.5. Sensitivty of Channel Idle Ratio θ In Section 3.2, we define the channel idle ratio θas the percentage of feature channels keeping idle in the activation. Table 4 illustrates the influence of θon the performance of RePaViTs. Overall, a larger θrepresents more channels idling in the FFN layer, leading to a smaller number of parameters, a lower computational complexity, and a higher inference speed post-reparameterization. Remarkably, when θexceeds 0.75, which is the default idle ratio for RePaViTs, there is an obvious decline in the top- 1 accuracies. For instance, when setting θto 1.0 ( i.e., no channels being activated), the RePa-DeiT-Base’s accuracy drops from 81.8% to 73.7%. Similarly, the RePa-Swin- Base model witnesses its accuracy decline from 83.5% to 75.5% with θ= 1.0. For smaller models, such performance collapse can be more severe. This outcome demonstrates that while reducing the proportion of nonlinear components can significantly enhance the model’s efficiency, preserving sufficient nonlinearities is also crucial for performance. It is noteworthy that, with a proper θ, ViTs can achieve even better performance with fewer parameters and faster inference speeds. For example, DeiT-Small, Swin-Tiny, Swin-Small and Swin-Base models all enjoy higher top-1 accuracy when θ=0.25. 4.6. Ablation Study We ablate the structural reparameterization process during training. Instead of training the full 2ρC2linear project weights and then reparameterizing them during testing, we directly train the reparameterized weights with a reduced size of (2µ+ 1)C2. Specifically, in our experiments, the numbers of parameters for a single FFN layer before and after reparameterization are 8C2(i.e.,ρ=4) and 3C2(i.e., µ=1), respectively. 
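The 8C² versus 3C² figures above are simply the general weight count from Section 3.4 instantiated at ρ = 4 and µ = 1; a one-line arithmetic check of our own:

```python
# Weight count of one FFN layer before vs. after reparameterization (Sec. 3.4, biases ignored).
def ffn_weight_count(C, rho=4, mu=1):
    full = 2 * rho * C * C          # W_in: C x rho*C  plus  W_out: rho*C x C
    repa = (2 * mu + 1) * C * C     # W_in[:, :mu*C], W_out[:mu*C, :], and the merged C x C weight
    return full, repa

full, repa = ffn_weight_count(C=768)             # e.g. a DeiT-Base-width layer (assumed)
theta = 1 - 1 / 4                                # default idle ratio for rho=4, mu=1
print(full // (768 * 768), repa // (768 * 768))  # 8 and 3, i.e. 8C^2 vs 3C^2
print(repa / full, (1 - theta) + 1 / (2 * 4))    # both 0.375: 37.5% of the original parameters remain
```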
Table 5 indicates that training with more parameters (i.e., train-time overparameterization) generally achieves better performance than
training with less parame- ters for ViTs, which aligns with the findings in Vasu et al. (2023a;b). Meanwhile, train-time overparameterization also helps to stabilize the training process for large models. For instance, when trained with reparameterized structure, RePa- DeiT-Base, RePa-ViT-Large, RePa-LV-ViT-S and RePa-LV- ViT-M all suffer training collapse and fail to converge. 4.7. Dense Predictions Table 6 presents the results of two downstream tasks. Firstly, the ImageNet-1k pre-trained RePa-Swin models are inte- grated with a one-stage detector RetinaNet (Lin et al., 2017) and a two-stage detector Mask R-CNN (He et al., 2017) for the object detection task on the MSCOCO dataset with 1×training schedule ( i.e., 12 epochs). Remarkably, our RePa-Swin-Base model achieves up to 18.7% latency re- duction at even a higher average precision (AP) with Reti- naNet when compared to its vanilla backbone. RePA-Swin- Base also obtains a similar performance with 16.0% less latency with Mask R-CNN. Secondly, UperNet (Xiao et al., 2018) is leveraged for the semantic segmentation task on the ADE20K dataset with RePa-Swin models as backbones. Similarly, RePa-Swin-Base achieves 15.4% latency reduc- tion with merely 1.2% mIoU loss. Overall, the experimental results on downstream tasks re- flect a consistent trend that the performance disparities are narrowing and the acceleration gains are escalating as the backbone model sizes grow. This aligns with the observa- tions in Section 4.2 well, which further proves the scalable acceleration capability of our channel idle mechanism. 4.8. Self-supervised Learning Experiments and Others Given that large foundation models are typically trained us- ing self-supervised learning strategies, we evaluate RePaViT under self-supervised training ( i.e., DINO (Caron et al., 2021)) and language-guided contrastive learning ( i.e., CLIP (Radford et al., 2021)). The experimental results are pro- vided in Appendix B. Notably, when applied to CLIP mod- els, RePaViT improves zero-shot top-1 accuracy by 0.8% while achieving a 24.7% speed improvement, demonstrating its effectiveness in optimizing large foundation models. 8 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers 5. Conclusion In this paper, we investigate the latency compositions of ViTs and observe that FFN layers significantly contribute to the overall latency. The observations highlight the critical need for accelerating FFN layers to enhance the efficiency of ViTs, where structural reparameterization emerges as a potential solution. We introduce a novel channel idle mech- anism to facilitate the reparameterization of FFN layers during inference. The proposed mechanism is employed on various ViT backbones, resulting in a family of RePaViTs. RePaViTs demonstrate consistent scalability with more ac- celerations and narrower accuracy disparities as the back- bone model size escalates. Notably, RePaViT achieves ac- curacy gains while improving the inference speed on large- scale ViT backbones. These unprecedented results mark a disruptive and timely contribution to the community and establish RePaViT as a significant addition to the toolkit for accelerating large foundation models. We believe that RePaViT presents a promising direction for expediting ViTs and we invite the community to further explore its effective- ness on even larger foundation models. Impact Statement This paper presents work whose goal is to advance the field of Machine Learning.
There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. Acknowledgement This research was partially supported by the Australian Government through the Australian Research Council’s In- dustrial Transformation Training Centre for Information Resilience (CIRES) project number IC200100022, CSIRO’s Research Plus Science Leader Project R-91559, and Aus- tralian Research Council Discovery Projects DP230101753 and DECRA DE200101610. References Bolya, D., Fu, C.-Y ., Dai, X., Zhang, P., Feichtenhofer, C., and Hoffman, J. Token merging: Your vit but faster. In ICLR , 2023. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. InNeurIPS , 2020. Cai, H., Li, J., Hu, M., Gan, C., and Han, S. Efficientvit: Multi-scale linear attention for high-resolution dense pre- diction. In ICCV , 2023. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J.,Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. In ICCV , 2021. Chen, K., Wang, J., Pang, J., Cao, Y ., Xiong, Y ., Li, X., Sun, S., Feng, W., Liu, Z., Xu, J., Zhang, Z., Cheng, D., Zhu, C., Cheng, T., Zhao, Q., Li, B., Lu, X., Zhu, R., Wu, Y ., Dai, J., Wang, J., Shi, J., Ouyang, W., Loy, C. C., and Lin, D. MMDetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155 , 2019. Chen, Y ., Dai, X., Chen, D., Liu, M., Dong, X., Yuan, L., and Liu, Z. Mobile-former: Bridging mobilenet and transformer. In CVPR , 2022a. Chen, Y ., Wang, S., Liu, J., Xu, X., de Hoog, F., and Huang, Z. Improved feature distillation via projector ensemble. InNeurIPS , 2022b. Cherti, M., Beaumont, R., Wightman, R., Wortsman, M., Ilharco, G., Gordon, C., Schuhmann, C., Schmidt, L., and Jitsev, J. Reproducible scaling laws for contrastive language-image learning. In CVPR , 2023. Contributors, M. MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark. https:// github.com/open-mmlab/mmsegmentation , 2020. Dao, T., Fu, D., Ermon, S., Rudra, A., and Ré, C. Flashat- tention: Fast and memory-efficient exact attention with io-awareness. In NeurIPS , 2022. Dehghani, M., Djolonga, J., Mustafa, B., Padlewski, P., Heek, J., Gilmer, J., Steiner, A. P., Caron, M., Geirhos, R., Alabdulmohsin, I., et al. Scaling vision transformers to 22 billion parameters. In ICML , 2023. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. InCVPR , 2009. Ding, X., Guo, Y ., Ding, G., and Han, J. Acnet: Strengthen- ing the kernel skeletons for powerful cnn via asymmetric convolution blocks. In ICCV , 2019. Ding, X., Zhang, X., Han, J., and Ding, G. Diverse branch block: Building a convolution as an inception-like unit. InCVPR , 2021a. Ding, X., Zhang, X., Ma, N., Han, J., Ding, G., and Sun, J. Repvgg: Making vgg-style convnets great again. In CVPR , 2021b. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image
recognition at scale. In ICLR , 2021. 9 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers Fayyaz, M., Koohpayegani, S. A., Jafari, F. R., Sengupta, S., Joze, H. R. V ., Sommerlade, E., Pirsiavash, H., and Gall, J. Adaptive token sampling for efficient vision transformers. InECCV , 2022. Graham, B., El-Nouby, A., Touvron, H., Stock, P., Joulin, A., Jégou, H., and Douze, M. Levit: a vision transformer in convnet’s clothing for faster inference. In ICCV , 2021. Guo, J., Chen, X., Tang, Y ., and Wang, Y . Slab: Efficient transformers with simplified linear attention and progres- sive re-parameterized batch normalization. In ICML , 2024. Guo, S., Alvarez, J. M., and Salzmann, M. Expandnets: Lin- ear over-parameterization to train compact convolutional networks. In NeurIPS , 2020. Hao, Z., Guo, J., Jia, D., Han, K., Tang, Y ., Zhang, C., Hu, H., and Wang, Y . Learning efficient vision transformers via fine-grained manifold distillation. In NeurIPS , 2022. He, K., Gkioxari, G., Dollár, P., and Girshick, R. Mask r-cnn. In ICCV , 2017. He, Y . and Zhou, J. T. Data-independent module-aware pruning for hierarchical vision transformers. In ICLR , 2024. Hendrycks, D. and Gimpel, K. Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415 , 2016. Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. InICML , 2015. Jiang, Z.-H., Hou, Q., Yuan, L., Zhou, D., Shi, Y ., Jin, X., Wang, A., and Feng, J. All tokens matter: Token labeling for training better vision transformers. In NeurIPS , 2021. Kim, M., Gao, S., Hsu, Y .-C., Shen, Y ., and Jin, H. Token fusion: Bridging the gap between token pruning and token merging. In WACV , 2024. Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A. C., Lo, W.-Y ., et al. Segment anything. In ICCV , 2023. Kong, Z., Dong, P., Ma, X., Meng, X., Niu, W., Sun, M., Shen, X., Yuan, G., Ren, B., Tang, H., et al. Spvit: Enabling faster vision transformers via latency-aware soft token pruning. In ECCV , 2022a. Kong, Z., Ma, H., Yuan, G., Sun, M., Xie, Y ., Dong, P., Meng, X., Shen, X., Tang, H., Qin, M., et al. Peeling the onion: Hierarchical reduction of data redundancy for efficient vision transformer training. In AAAI , 2022b.Lei Ba, J., Kiros, J. R., and Hinton, G. E. Layer normaliza- tion. arXiv preprint arXiv:1607.06450 , 2016. Li, Y ., Yuan, G., Wen, Y ., Hu, J., Evangelidis, G., Tulyakov, S., Wang, Y ., and Ren, J. Efficientformer: Vision trans- formers at mobilenet speed. In NeurIPS , 2022. Liang, Y ., Chongjian, G., Tong, Z., Song, Y ., Wang, J., and Xie, P. Evit: Expediting vision transformers via token reorganizations. In ICLR , 2021. Lin, T.-Y ., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft coco: Common objects in context. In ECCV , 2014. Lin, T.-Y ., Goyal, P., Girshick, R., He, K., and Dollár, P. Focal loss for
dense object detection. In ICCV , 2017. Liu, Z., Lin, Y ., Cao, Y ., Hu, H., Wei, Y ., Zhang, Z., Lin, S., and Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV , 2021. Liu, Z., Hu, H., Lin, Y ., Yao, Z., Xie, Z., Wei, Y ., Ning, J., Cao, Y ., Zhang, Z., Dong, L., et al. Swin transformer v2: Scaling up capacity and resolution. In CVPR , 2022. Loshchilov, I. and Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. In ICLR , 2017. Ma, N., Zhang, X., Zheng, H.-T., and Sun, J. Shufflenet v2: Practical guidelines for efficient cnn architecture design. InECCV , 2018. Maaz, M., Shaker, A., Cholakkal, H., Khan, S., Zamir, S. W., Anwer, R. M., and Shahbaz Khan, F. Edgenext: efficiently amalgamated cnn-transformer architecture for mobile vision applications. In ECCV , 2022. Marin, D., Chang, J.-H. R., Ranjan, A., Prabhu, A., Raste- gari, M., and Tuzel, O. Token pooling in vision trans- formers for image classification. In WACV , 2023. Mehta, S. and Rastegari, M. Mobilevit: light-weight, general-purpose, and mobile-friendly vision transformer. InICLR , 2022a. Mehta, S. and Rastegari, M. Separable self-attention for mobile vision transformers. arXiv preprint arXiv:2206.02680 , 2022b. Meng, L., Li, H., Chen, B.-C., Lan, S., Wu, Z., Jiang, Y .-G., and Lim, S.-N. Adavit: Adaptive vision transformers for efficient image recognition. In CVPR , 2022. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al. Language models are unsupervised multitask learners. OpenAI blog , 2019. 10 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In ICML , 2021. Rao, Y ., Zhao, W., Liu, B., Lu, J., Zhou, J., and Hsieh, C.-J. Dynamicvit: Efficient vision transformers with dynamic token sparsification. In NeurIPS , 2021. Ryoo, M., Piergiovanni, A., Arnab, A., Dehghani, M., and Angelova, A. Tokenlearner: Adaptive space-time tok- enization for videos. In NeurIPS , 2021. Schuhmann, C., Vencu, R., Beaumont, R., Kaczmarczyk, R., Mullis, C., Katta, A., Coombes, T., Jitsev, J., and Komatsuzaki, A. Laion-400m: Open dataset of clip- filtered 400 million image-text pairs. In NeurIPS Data Centric AI Workshop , 2021. Shaker, A., Maaz, M., Rasheed, H., Khan, S., Yang, M.-H., and Khan, F. S. Swiftformer: Efficient additive attention for transformer-based real-time mobile vision applica- tions. In ICCV , 2023. Tan, Z., Li, X., Wu, Y ., Chu, Q., Lu, L., Yu, N., and Ye, J. Boosting vanilla lightweight vision transformers via re-parameterization. In ICLR , 2024. Tang, Y ., Han, K., Wang, Y ., Xu, C., Guo, J., Xu, C., and Tao, D. Patch slimming for efficient vision transformers. InCVPR , 2022. Tolstikhin, I. O., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., Yung, J., Steiner, A., Keysers, D., Uszkoreit, J., et al. Mlp-mixer: An all-mlp architec- ture for vision. In NeurIPS , 2021. Touvron, H., Cord, M., Douze, M., Massa, F.,
Sablayrolles, A., and Jégou, H. Training data-efficient image trans- formers & distillation through attention. In ICML , 2021. Vasu, P. K. A., Gabriel, J., Zhu, J., Tuzel, O., and Ranjan, A. Fastvit: A fast hybrid vision transformer using structural reparameterization. In ICCV , 2023a. Vasu, P. K. A., Gabriel, J., Zhu, J., Tuzel, O., and Ranjan, A. Mobileone: An improved one millisecond mobile backbone. In CVPR , 2023b. Vaswani, A. et al. Attention is all you need. In NeurIPS , 2017. Wang, A., Chen, H., Lin, Z., Han, J., and Ding, G. Repvit: Revisiting mobile cnn from vit perspective. In CVPR , 2024. Wu, K., Zhang, J., Peng, H., Liu, M., Xiao, B., Fu, J., and Yuan, L. Tinyvit: Fast pretraining distillation for small vision transformers. In ECCV , 2022.Xiao, T., Liu, Y ., Zhou, B., Jiang, Y ., and Sun, J. Unified perceptual parsing for scene understanding. In ECCV , 2018. Xu, K., Wang, Z., Chen, C., Geng, X., Lin, J., Yang, X., Wu, M., Li, X., and Lin, W. Lpvit: Low-power semi- structured pruning for vision transformers. In ECCV , 2024a. Xu, X., Li, C., Chen, Y ., Chang, X., Liu, J., and Wang, S. No token left behind: Efficient vision transformer via dynamic token idling. In AJCAI , 2023. Xu, X., Wang, S., Chen, Y ., Zheng, Y ., Wei, Z., and Liu, J. Gtp-vit: Efficient vision transformers via graph-based token propagation. In WACV , 2024b. Xu, Y ., Zhang, Z., Zhang, M., Sheng, K., Li, K., Dong, W., Zhang, L., Xu, C., and Sun, X. Evo-vit: Slow-fast token evolution for dynamic vision transformer. In AAAI , 2022. Yao, Z., Cao, Y ., Lin, Y ., Liu, Z., Zhang, Z., and Hu, H. Leveraging batch normalization for vision transformers. InICCV , 2021. You, Y ., Li, J., Reddi, S., Hseu, J., Kumar, S., Bhojanapalli, S., Song, X., Demmel, J., Keutzer, K., and Hsieh, C.-J. Large batch optimization for deep learning: Training bert in 76 minutes. In ICLR , 2020. Yu, F., Huang, K., Wang, M., Cheng, Y ., Chu, W., and Cui, L. Width & depth pruning for vision transformers. In AAAI , 2022a. Yu, L. and Xiang, W. X-pruner: explainable pruning for vision transformers. In CVPR , 2023. Yu, S., Chen, T., Shen, J., Yuan, H., Tan, J., Yang, S., Liu, J., and Wang, Z. Unified visual transformer compression. InICLR , 2022b. Yu, W., Luo, M., Zhou, P., Si, C., Zhou, Y ., Wang, X., Feng, J., and Yan, S. Metaformer is actually what you need for vision. In CVPR , 2022c. Zhang, H., Zhou, Y ., and Wang, G.-H. Dense vision trans- former compression with few samples. In CVPR , 2024. Zhang, J., Li, X., Li, J., Liu, L., Xue, Z., Zhang, B., Jiang, Z., Huang, T., Wang, Y ., and Wang, C. Rethinking mobile block for efficient attention-based models. In ICCV , 2023. Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., and Torralba, A. Scene parsing through ade20k dataset. In CVPR , 2017. Zhu, A., Wang, Y ., Li, W., and Qian, P. Structural
reparame- terization lightweight network for video action recogni- tion. In ICASSP , 2023. 11 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers Zong, Z., Li, K., Song, G., Wang, Y ., Qiao, Y ., Leng, B., and Liu, Y . Self-slimmed vision transformer. In ECCV , 2022. 12 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers A. Training Settings All RePaViTs are rigorously trained on the ImageNet-1k dataset (Deng et al., 2009), following the same data augmentations proposed by DeiT (Touvron et al., 2021). Consistently, the total number of training epochs is standardized at 300. In an effort to accommodate the substitution of LayerNorm with BatchNorm, we have increased the batch size to 4096. Additionally, the Lamb optimizer (You et al., 2020) has been selected to ensure stable training with a large batch size. Learning rates are dedicatedly configured for different backbone architectures, and a cosine scheduler (Loshchilov & Hutter, 2017) is utilized for learning rate adjustment throughout the training period. Detailed training settings are provided in Table 7. Table 7. Training settings of RePaViTs for the image classification task. Model EpochsBatch sizeOptimizerBase learning rateMin learning rateWarmup learning rateSchedulerWeight decayDrop path rate RePa-DeiT-Tiny 300 4096 Lamb4×10−3 5×10−5 1×10−6 Cosine scheduler0.050.10 RePa-DeiT-Small RePa-DeiT-Base RePa-ViT-Large1×10−30.30RePa-ViT-Huge RePa-Swin-Tiny 4×10−3 0.10RePa-Swin-Small RePa-Swin-Base RePa-LV-ViT-S1024 1×10−31×10−5 RePa-LV-ViT-M B. Self-Supervised Learning Performance Large foundation models with superior performance are usually trained with self-supervised learning techniques. To demonstrate the potential applicability of RePaViT with self-supervised learning, we first validate our method using DINO (Caron et al., 2021) and report the performance in Table 8. We adopt the same training settings as outlined in DINO. Even with self-supervised learning, RePaViTs still exhibit substantial efficiency enhancement. Notably, there is a consistent trend as observed in Section 4.2 that when the model size increases, our method yields greater speed improvements and a smaller accuracy gap. For example, RePa-ViT-Small achieves a 39.4% increase in speed (1779.6 image/second vs 1277.0 image/second) with a 2.6% drop in accuracy (74.4% vs 77.0%) when using a linear classifier. In the case of employing a larger backbone model, RePa-ViT-Base realizes a more significant acceleration of 57.2% (623.0 image/second vs 396.2 image/second) with a smaller accuracy loss of 1.2% (77.0% vs 78.2%). These results indicate a high adaptability of our RePaViT using different learning paradigms. Table 8. RePaViT performance on DINO models (Caron et al., 2021). Model #MParam. ↓Compl. (GMACs)↓Speed (img/s)↑k-NN top-1 acc.↑Linear top-1 acc.↑ ViT-Small 21.7 4.3 1277.0 72.8% 77.0% RePa-ViT-Small/0.75 12.8 (−41.1%)2.5(−41.9%)1779.6 (+39.4%) 69.6% 74.4% ViT-Base 85.8 16.9 396.2 76.1% 78.2% RePa-ViT-Base/0.75 50.4 (−41.3%)9.9(−41.4%)623.0 (+57.2%) 74.1% 77.0% Next, we evaluate RePaViT on a more advanced language-guided contrastive learning framework, specifically CLIP (Radford et al., 2021). We adopt the open-source OpenCLIP framework (Cherti et al., 2023) and train all models on the LAION-400M dataset (Schuhmann et al., 2021), with a total of 3B seen data points. All training configurations strictly follow the default settings of OpenCLIP. 
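The speed columns in Tables 8 and 9 report throughput in images per second. The snippet below is a minimal sketch of how such a number is commonly measured for a PyTorch vision backbone; the batch size, input resolution, warm-up count, and iteration count are illustrative assumptions rather than the paper's benchmarking protocol.

```python
import time
import torch

@torch.no_grad()
def images_per_second(model, batch_size=64, img_size=224, warmup=10, iters=50, device="cuda"):
    """Rough throughput benchmark for a vision backbone (illustrative protocol, assumes a CUDA device)."""
    model = model.eval().to(device)
    x = torch.randn(batch_size, 3, img_size, img_size, device=device)
    for _ in range(warmup):          # warm-up passes: let cuDNN pick kernels, fill caches
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()         # ensure all queued GPU work finishes before stopping the clock
    return batch_size * iters / (time.time() - start)
```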
The zero-shot classification performance on the ImageNet-1K validation set is presented in Table 9. For the smaller CLIP-ViT-B/32 model, our RePa-CLIP-ViT-B/32 achieves a 26.8% speed increase
with a negligible 0.3% accuracy drop. On the larger CLIP-ViT-B/16 model, our method improves inference speed by 24.7% while achieving a 0.8% gain in zero-shot classification top-1 accuracy. These results demonstrate the effectiveness of RePaViT in enhancing the 13 RePaViT: Scalable Vision Transformer Acceleration via Structural Reparameterization on Feedforward Network Layers Table 9. RePaViT performance on CLIP models (Radford et al., 2021). All the models are trained on LAION-400M dataset with 3B seen samples in total. Model Idle ratio θ#MParam. ↓Complexity (GFLOPs) ↓Speed (image/second) ↑Top-1 accuracy ↑ CLIP-ViT-B/32 - 87.9 4.4 3860.2 57.1% RePa-CLIP-ViT-B/32 0.50 66.6 (−24.2%) 3.4(−22.7%) 4893.5 (+26.8%) 56.8% (−0.3%) RePa-CLIP-ViT-B/32 0.75 52.4 (−40.4%) 2.6(−40.9%) 5812.3 (+50.6%) 53.2% (−3.9%) CLIP-ViT-B/16 - 86.2 17.6 824.2 62.7% RePa-CLIP-ViT-B/16 0.50 64.9 (−24.7%) 13.4 (−23.9%) 1027.9 (+24.7%) 63.5% (+0.8%) RePa-CLIP-ViT-B/16 0.75 50.8 (−41.1%) 10.6 (−39.8%) 1161.5 (+40.9%) 61.0% (−1.7%) efficiency of large foundation models trained with language-guided contrastive learning. We anticipate our method to be applied to large foundational vision models in future work. C. Limitations Despite the exceptional performance of RePaFormers on large backbone models, there is a notable decrease in accuracy as the model size shrinks. For example, as demonstrated in Table 4, the accuracy of RePa-DeiT-Tiny decreases significantly from 72.1% to 64.2%. This performance drop is primarily attributed to the reduced nonlinearity in the backbone, which is a consequence of keeping channels idle. In smaller models, both the number of layers and the number of feature channels are limited, resulting in substantially fewer activated channels compared to larger models. After applying the channel idle mechanism with a high idle ratio ( e.g., 75%), tiny models would lack sufficient non-linear transformations. However, as the model size increases, both the number of layers and feature channels expand, enhancing the model’s robustness and mitigating the impact of reduced nonlinearity. In conclusion, while our method may not be optimally suited for tiny models, it significantly enhances the performance of large ViT models. We sincerely invite the research community to further investigate and validate the effectiveness of our approach on large foundational models, such as SAM (Kirillov et al., 2023) or GPT (Radford et al., 2019; Brown et al., 2020). This exploration could provide valuable insights into the scalability and adaptability of our method across various advanced computational frameworks. 14
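As a concrete illustration of the structural reparameterization idea the paper builds on, and of why a BatchNorm (unlike LayerNorm) can be removed at inference time, the sketch below folds a BatchNorm1d that follows a Linear layer into a single fused Linear layer. This is the standard BN-folding identity under the usual eval-mode assumptions, not the exact RePaViT merging procedure.

```python
import torch
from torch import nn

@torch.no_grad()
def fold_bn_into_linear(linear: nn.Linear, bn: nn.BatchNorm1d) -> nn.Linear:
    """Merge y = BN(Wx + b) into one Linear layer for inference.

    Standard folding identity:
        W' = (gamma / sqrt(var + eps)) * W
        b' = (gamma / sqrt(var + eps)) * (b - mean) + beta
    """
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)        # gamma / sqrt(var + eps)
    fused = nn.Linear(linear.in_features, linear.out_features, bias=True)
    fused.weight.copy_(linear.weight * scale.unsqueeze(1))         # scale each output row of W
    bias = linear.bias if linear.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

# Quick check: the fused layer matches Linear -> BN in eval mode.
lin, bn = nn.Linear(8, 16), nn.BatchNorm1d(16).eval()
x = torch.randn(4, 8)
assert torch.allclose(fold_bn_into_linear(lin, bn)(x), bn(lin(x)), atol=1e-6)
```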
|
https://arxiv.org/abs/2505.21847v1
|
Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer Presentations Bo Tang∗ tangbo@mail.ustc.edu.cn AIDS and SIAR, University of Science and Technology of China Suzhou, ChinaJunyi Zhu∗ junyizhu.ai@gmail.com ESAT-PSI, KU Leuven Leuven, BelgiumChenyang Xi, Yunhang Ge firstname.lastname@iaar.ac.cn Institute for Advanced Algorithms Research Shanghai, China Jiahao Wu jiahao.wu@connect.polyu.hk The Hong Kong Polytechnic University Hong Kong, ChinaYuchen Feng, Yijun Niu Wenqiang Wei, Yu Yu Chunyu Li, Zehao Lin firstname.lastname@iaar.ac.cn Institute for Advanced Algorithms Research Shanghai, ChinaHao Wu, Ning Liao Yebin Yang, Jiajia Wang Zhiyu Li, Feiyu Xiong firstname.lastname@iaar.ac.cn Institute for Advanced Algorithms Research Shanghai, China Jingrun Chen† jingrunchen@ustc.edu.cn AIDS and SIAR, University of Science and Technology of China Suzhou, China Figure 1: Online evaluation (on February 5th) of the Xinyu AI search system for the query ‘Trump administration latest actions.’ Xinyu features built-in citation, timeline visualization (right column), and a textual-visual choreography mechanism. Abstract Traditional search engines struggle to synthesize fragmented infor- mation for complex queries, while generative AI search engines face challenges in relevance, comprehensiveness, and presentation. To address these limitations, we introduce Xinyu AI Search, a novel sys- tem that incorporates a query-decomposition graph to dynamically break down complex queries into sub-queries, enabling stepwise retrieval and generation. Our retrieval pipeline enhances diversity ∗Co-first author. †Corresponding author. 2025.through multi-source aggregation and query expansion, while filter- ing and re-ranking strategies optimize passage relevance. Addition- ally, Xinyu AI Search introduces a novel approach for fine-grained, precise built-in citation and innovates in result presentation by integrating timeline visualization and textual-visual choreography. Evaluated on recent real-world queries, Xinyu AI Search outper- forms eight existing technologies in human assessments, excelling in relevance, comprehensiveness, and insightfulness. Ablation stud- ies validate the necessity of its key sub-modules. Our work presents the first comprehensive framework for generative AI search en- gines, bridging retrieval, generation, and user-centric presentation. 1arXiv:2505.21849v1 [cs.IR] 28 May 2025 Preprint, Bo Tang and Junyi Zhu et al. 1 Introduction In the era of information, a significant volume of events, knowledge, and resources has been digitized and made accessible through the Internet. As users continually strive to quickly and accurately locate relevant information within this rapidly growing digital ecosys- tem, the development of search engines has emerged as a critical solution to meet their fundamental need for efficient and reliable information retrieval [ 13,47]. However, traditional search engines often face challenges in addressing complex or ambiguous queries. Furthermore, these systems typically present results as a ranked list, requiring users to manually synthesize information from diverse sources. This significantly increases comprehension efforts, par- ticularly in scenarios where aggregating fragmented information from multiple resources is required. Previously, the field of language modeling has been significantly advanced by the development of autoregressive models based on Transformer architectures [ 57,67]. 
These architectures enable efficient parallel processing of sequences and facilitate large-scale unsupervised pretraining [21, 34, 57, 58], leading to the emergence of intelligent behaviors. Additionally, reinforcement learning from human feedback [18, 54] has further refined these models by aligning their outputs with human preferences and values. Together, these advancements have empowered machines to comprehend long-form text, interpret human intent, and generate responses
that closely resemble human communication. Modern large language models (LLMs) have demonstrated human-level performance in tasks such as reading comprehension and reasoning within specific contexts [21, 42, 62, 75]. Their vast parameters also enable the encoding of extensive knowledge [14, 59]. Despite these strengths, LLMs continue to face critical challenges, including outdated knowledge and the generation of hallucinated content [37, 79]. These issues significantly undermine their reliability and limit their effectiveness in real-world applications.

More recently, retrieval-augmented generation (RAG) has emerged as a promising framework that integrates information retrieval techniques, such as search engines, with LLMs [37]. Studies have demonstrated that with externally retrieved non-parametric information, RAG can substantially reduce hallucinations in generated outputs while enabling LLMs to provide up-to-date information through in-context learning [10, 33, 52]. In turn, LLMs can enhance search quality by rewriting queries to better align with search engine requirements, improve the readability of search results by synthesizing fragmented information from multiple retrieved sources, and summarize long-form texts [14, 21, 44, 76].

1.1 The Development and Challenges of Generative AI Search Engines

Building on the concept of RAG, generative AI search engines such as Perplexity AI [5], Tiangong AI [6], and Metaso [49] have emerged to provide synthesized answers using LLMs, rather than merely returning links like traditional search engines. While conversational LLM-based products such as ChatGPT [53] also employ RAG to access up-to-date information and enhance factual accuracy, generative AI search engines distinguish themselves by prioritizing comprehensiveness and improving the overall reading experience through advancements in answer presentation, for instance by incorporating built-in citations that link directly to sources to build user trust in search results. A survey conducted in the United States found that over a quarter of adults considered switching to AI search engines in 2023 [61].

[Figure 2: Common issues in generative AI search answers (bar chart, x-axis: Percentage (%), 0-60). Issues listed: incorrect conclusion or reasoning; answer does not faithfully follow the cited sources; retrieved documents contain noise; answers lack multi-perspective discussion; key retrieved information is missing in the answer; answer is not up-to-date.]

Although RAG and LLMs complement each other, the diversity of user queries makes it challenging to generate satisfactory responses based solely on retrieved documents. Moreover, commercial AI search engines may produce inaccurate or unfaithful answers. To analyze these limitations, we collected 300 queries across eight domains and identified several common issues in existing technologies, such as Perplexity AI. The results are presented in Fig. 2.

1.2 Our Approach to Generative AI Search

The issues outlined in Fig. 2 affect the relevance and comprehensiveness of answers to queries, along with several other factors that degrade text generation quality. Beyond these challenges, improving the reading experience through enhanced presentation of generated results represents another key area for advancement in generative AI search. To address these issues, we develop Xinyu (which means "a new way to present" in Chinese). In this paper, we provide a detailed breakdown of our proposed method and demonstrate how we orchestrate its workflow.
An online test showcase is presented in
Fig. 1. Our contributions can be summarized as follows: (1)We systematically decompose this domain into specific subproblems and provide detailed descriptions of our solutions, including prompt design, data preparation, and model training, to facilitate future research and applications. (2)We introduce novel approaches, in- cluding query decomposition graphs, timeline visualization, textual- visual choreography, and built-in citations, to address multiple core challenges. (3)Experimental results demonstrate that Xinyu is com- petitive with existing technologies, and we conduct extensive abla- tion studies to evaluate the effectiveness of individual components. (4)To the best of our knowledge, this is the first paper to provide a comprehensive disclosure of a generative AI search engine system.1 1While online blog articles offer illustrative overviews of AI search systems (e.g., Per- plexity AI), and Lepton AI’s released code serves primarily as a demonstrative example, our work is distinct. We provide a full-stack technical disclosure and comprehensive evaluations, offering greater value for reproducibility and as a research reference. 2 Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer PresentationsPreprint, 1.2.1 Paper Organization .Main terms are clarified below. In Sec. 2, we discuss related work. In Sec. 3, we present our method. In Sec. 4 we compare our method with existing technologies and conduct ablation studies. Finally, we conclude in Sec. 5. 1.2.2 Terminology .We define the key terms and concepts below: Query: The user’s input query. Sub-query: A query decomposed from the user’s input query. Retrieval query: A query submitted to the search interface. Retrieved document: Content accessed via links returned by the search engine. Retrieved passage: A segment of text in the retrieved document. 2 Related Work Constructing an generative AI search engine involves the design of three main components: retrieval, contextual generation and orchestration. Since these components relate to a broad range of research topics. We briefly introduce the work closely related with this paper. A more detailed survey can be found at [26]. 2.1 Retrieval Effective retrieval is critical to system performance, as the qual- ity of retrieved in formation significantly shapes the final output. Query rewriting techniques aim to transform user queries into more precise and retrieval-friendly formats, addressing ambiguities and enhancing alignment with indexed data [ 24,45,55,80]. Similarly, query expansion enriches the input by generating alternative or supplementary queries, ensuring the retrieval of a broader and more contextually relevant set of documents [ 22,30,68]. The choice of retrieval sources also impacts system performance, with strategies leveraging unstructured data, semi-structured data, and structured knowledge graphs to provide domain-specific and fine-grained knowledge [ 28,43,69,78]. Lastly, when database construction is needed, effective chunking strategies, metadata enrichment, and hierarchical indexing are considered to ensure that retrieval com- ponents operate efficiently [29, 50, 65]. 2.2 Contextual Generation After retrieving documents for a query, the generation process re- lies on their context to produce responses that are accurate and well-informed. Studies show that irrelevant information in the refer- ences can distract the model and lead to inaccurate answers [ 19,77]. 
Additionally, LLMs allocate varying levels of attention to different sections of the prompt, making the placement of relevant information crucial to
the quality of the generated response [41]. Reference filtering techniques aim to eliminate irrelevant or noisy retrieved documents, ensuring only pertinent information is considered for generation [20, 46]. Context selection focuses on identifying the most relevant portions of the retrieved context while discarding less pertinent parts, thereby optimizing the model's input [32, 74]. Reference reranking further reorganizes the retrieved information to position the most relevant content prominently, improving the quality of responses [25, 83]. Some works employ fine-tuning to adapt a language model to specific tasks or domains, or to align its outputs with desired formats and styles, thereby achieving superior task performance [23, 39, 82].

2.3 Orchestration

Simple pipelines that directly generate responses from retrieved results often fall short, prompting research into auxiliary components and the orchestration of more sophisticated workflows. Iterative workflows alternate between retrieval and generation, progressively enriching the context by utilizing generated text or intermediate results to refine subsequent retrievals [60]. Adaptive workflows enhance system flexibility by dynamically determining the necessity of retrieval based on the context of the query, often incorporating mechanisms for self-assessment and adjustment [9, 33, 51]. Recursive workflows break down complex queries into smaller, interdependent subtasks, iteratively resolving each to produce comprehensive and logically structured responses [35, 66]. Specifically, the chain-of-knowledge strategy first generates rationales for answering a query, then leverages retrieval results to refine these rationales and deduce the final response [40].

3 Technical Approach of Xinyu AI Search

Next, we elaborate on our method. In Sec. 3.1, we present the query preprocessing steps. In Sec. 3.2, we describe how the workflow is orchestrated using our proposed query-decomposition graph. Our retrieval system is detailed in Sec. 3.3, followed by an explanation of the steps taken to enhance generation quality using the retrieved documents in Sec. 3.4. Finally, we discuss specific components designed for rich answer presentation in Sec. 3.5.

In Xinyu, instruction fine-tuned LLMs, embedding models, and rerankers are extensively utilized across various tasks. We leverage multiple open-source models of different sizes, balancing performance and efficiency. Task-specific fine-tuning can further enhance model effectiveness. However, given the complexity of the overall system, fine-tuning all models individually is prohibitively expensive. To maintain cost efficiency, we adopt a unified data preparation and model fine-tuning framework for several key tasks. Below, we introduce this framework, with further details provided in the appendices, which are referenced in subsequent sections.

Unified Framework of Data Preparation and Model Fine-tuning. Let $\mathcal{D}$ denote the training dataset. Data preparation involves collecting existing public datasets, as well as generating synthetic data (or labeling data) using stronger models (e.g., larger models) and subsequently refining data quality through expert selection [8]. For fine-tuning generative LLMs, the training data consists of input and ground-truth answer pairs $(x, y) \in \mathcal{D}$. Given model parameters $\theta$, we optimize them using the next-token prediction (NTP) objective:

$$\mathcal{L}_{\text{NTP}}(\theta) = -\mathbb{E}_{(x,y)\sim\mathcal{D}} \sum_{t=1}^{|y|} \log p_\theta(y_t \mid x, y_{<t}), \quad (1)$$

where $y_{<t}$ denotes the preceding tokens.
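To make Eq. (1) concrete, here is a minimal sketch of how such a next-token-prediction fine-tuning loss is typically computed in PyTorch. It is an illustration rather than the system's actual training code, and it assumes a Hugging-Face-style causal LM whose forward pass returns logits; prompt positions are masked with the conventional label value -100 so that only answer tokens contribute to the loss.

```python
import torch
import torch.nn.functional as F

def ntp_loss(model, input_ids, prompt_lengths):
    """Next-token-prediction loss of Eq. (1) for a batch of (prompt, answer) pairs.

    input_ids:      LongTensor [B, T], prompt tokens followed by answer tokens.
    prompt_lengths: LongTensor [B], number of prompt tokens per sample; only
                    answer positions are supervised (prompt positions get -100).
    """
    labels = input_ids.clone()
    positions = torch.arange(input_ids.size(1), device=input_ids.device)
    labels[positions.unsqueeze(0) < prompt_lengths.unsqueeze(1)] = -100  # mask the prompt x

    logits = model(input_ids).logits                    # [B, T, V]
    shift_logits = logits[:, :-1, :]                    # position t predicts token t+1
    shift_labels = labels[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,                              # ignore masked prompt positions
    )
```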
For fine-tuning reranker models, the training data consists of an anchor sample $x$, a positive sample $x^+$, and $N$ negatives $x^-_{1:N}$, i.e., $(x, x^+, x^-_{1:N}) \in \mathcal{D}$. Let $h_\theta(\cdot)$ denote the score function. We optimize
the network to correctly predict the positive sample among the negatives using a cross-entropy loss over the scored candidates:

$$\mathcal{L}_{\text{Re}}(\theta) = -\mathbb{E}_{(x, x^+, x^-_{1:N}) \sim \mathcal{D}} \left[ \log \frac{e^{h_\theta(x, x^+)}}{e^{h_\theta(x, x^+)} + \sum_{i=1}^{N} e^{h_\theta(x, x^-_i)}} \right]. \quad (2)$$

[Figure 3: Xinyu AI search framework. The upper row illustrates the full response pipeline (query preprocessing with user intent understanding and query rewriting, query-decomposition graph, query expansion and multi-source retrieval, passage pool with deduplication, selection, and reranking, and answer generation). The lower row provides a more detailed depiction of several novel approaches integrated into this framework: the query-decomposition graph (chain decomposition, split decomposition, terminal nodes), timeline visualization, built-in citation, and textual-visual choreography.]

3.1 Query Preprocessing

When a query is input by the user, initial steps are conducted to ensure that the query is safe and harmless. Additionally, query disambiguation can help improve search quality when the query is passed to the search module.

3.1.1 User Intent Understanding. As illustrated in the top row of Fig. 3, understanding user intent is the first step after a query is submitted. This module initially filters out harmful or unsafe queries, such as those that violate legal or ethical standards or compromise privacy. Additionally, if a query is ambiguous or lacks specificity, the system prompts the user with clarifying questions or options to refine their intent. For example, if a user submits the query "The current state of the economy," the system suggests options to specify a region of interest (see Fig. 6). To support these functionalities, we fine-tune a generative LLM (Qwen2.5-14B) to analyze queries, make judgments, suggest potential clarifying options, and output the results in JSON format. Query rejection and refinement are based on parsing relevant keywords. A detailed discussion of the fine-tuning process for this task is provided in App. D.1.

3.1.2 Query Rewriting. After user intent understanding, we conduct query rewriting to align the query with the search engine's requirements. To address geo-temporal queries, e.g., "Shanghai news from last week", we supplement metadata about the user's local time and location, then instruct a generative LLM (Qwen2.5-14B) to make this information explicit in the query if needed.
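Before moving on to the query-decomposition graph, the sketch below illustrates the listwise reranker objective of Eq. (2) above in plain PyTorch (again an illustration, not the system's training code). Placing the positive passage at index 0 and applying a cross-entropy over the scored candidates reproduces the softmax form of Eq. (2); the score function is assumed to return a scalar relevance score per (anchor, candidate) pair.

```python
import torch
import torch.nn.functional as F

def reranker_loss(score_fn, anchor, positive, negatives):
    """Listwise contrastive loss of Eq. (2) for one training example.

    score_fn:  callable h_theta(anchor, candidate) -> scalar tensor relevance score
    anchor:    the query/anchor sample x
    positive:  the relevant passage x+
    negatives: list of N irrelevant passages x-_1..x-_N
    """
    candidates = [positive] + list(negatives)
    scores = torch.stack([score_fn(anchor, c).reshape(()) for c in candidates])  # [1 + N]
    target = torch.zeros(1, dtype=torch.long, device=scores.device)              # positive sits at index 0
    # cross-entropy over the candidate set == -log( e^{s+} / (e^{s+} + sum_i e^{s-_i}) )
    return F.cross_entropy(scores.unsqueeze(0), target)
```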
3.2 Query-Decomposition Graph (QDG) To overcome the limitations of naive RAG in capturing nuanced and multi-faceted information for complex questions, we propose a novel and practical decomposition strategy that breaks down the user’s query into sub-queries and answers them step by step. We refer to this approach as the query-decomposition graph (QDG). As shown in Fig. 3, after query
rewriting, a single query is trans- formed into a QDG. A specific example is provided at the bottom left of Fig. 3. In the QDG, nodes represent sub-queries, while directed edges indicate dependencies. Given a query, we use a fine-tuned gen- erative LLM (Qwen2.5-72B) to construct the corresponding QDG by defining nodes and their pairwise relationships. The LLM is instructed with the QDG definition and few-shot examples. Guided by the prompt, the LLM applies the following decomposition strate- gies: 1)Chain decomposition: A query is sequentially decomposed into a series of sub-queries, where each parent node provides pre- liminary information for its child node and the descendant nodes, e.g. least-to-most [ 81].2)Split decomposition: The query is divided 4 Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer PresentationsPreprint, into multiple independent sub-queries. 3)Terminal: The input query is elementary, and decomposition is unnecessary. Further details on the instruction prompts and model fine-tuning are provided in Apps. B.1 and D.2. We find that the success rate of generating valid QDGs approaches 1, but we also perform validation and reiterate the generation process if the check fails. The QDG defines the workflow for subsequent generation pro- cesses: the parent node is processed first, while the child node generates its output based on retrieved documents, ancestor sub- queries, and their corresponding answers. With QDG as its core, Xinyu facilitates hierarchical reasoning and evidence aggregation, ensuring a comprehensive and logically consistent resolution of complex queries. 3.3 Retrieval After constructing the QDG, we aggregate all sub-queries and per- form retrieval. To ensure the retrieved documents capture diverse details necessary for generating answers, we enhance retrieval diversity through query expansion and multi-source retrieval, as discussed below. For an illustration, refer to the top row of Fig. 3. 3.3.1 Query Expansion. We first enrich the sub-queries by asking an LLM to generate multiple retrieval queries revolving around a given sub-query. Specifically, the LLM is instructed to act as a subject matter expert in an university, expanding the given query to create related questions that assess students’ comprehensive understanding of the topic across multiple dimensions: 1) content mastery, 2) understanding of key elements, 3) contextual analysis, and 4) extended thinking. 3.3.2 Multi-source retrieval. To ensure the real-time relevance of retrieved information, we invoke search engine APIs. Since search engines employ varied ranking algorithms, we submit each retrieval query to multiple sources simultaneously to obtain more compre- hensive content. Directly feeding raw web page content into an LLM can lead to high perplexity and degraded response quality. To make the input content LLM-friendly and safe, we implement a robust content filtering pipeline. This process involves removing disruptive elements, filtering sensitive or extraneous information, and standardizing formatting. More details on the filtering rules are provided in App. C. After filtering, we segment documents into passages using the RecursiveCharacterTextSplitter method fromLangChain [15]. We adopt a small chunk size of 350 with a relatively large 25% overlap to optimize the performance of the text embedding model, following the study by Azure AI Search [12]. 3.4 Contextual Generation After retrieval, the retrieved passages are assigned
to their corre- sponding sub-queries in the QDG. A fine-tuned LLM then generates responses for each sub-query following the dependency structure, ensuring that parent nodes are processed before their child nodes. Notably, retrieved passages vary in relevance and often contain duplicate information, which can distract the model. Moreover, LLMs allocate different levels of attention to various sections of the input [ 41]. To mitigate these issues, we implement passage deduplication, selection, and re-ranking, as detailed below. For an illustration, refer to the top row of Fig. 3.3.4.1 Passage Deduplication. Different retrieved documents may exhibit content homogenization, such as sharing the same view- points or reproducing information from a common source. To miti- gate redundancy across passages, we first perform deduplication. Specifically, we use a fine-tuned text embedding model (bge-large- zh [72]) to compute embeddings for the passages and then calculate pairwise cosine similarities. Our objective is to identify the largest subset of passages in which no two passages have a similarity score exceeding 0.8. Finding the optimal solution to this problem corre- sponds to solving the maximum independent set problem, which is NP-hard. To improve computational efficiency, we adopt a greedy algorithm that processes each passage sequentially, retaining it only if its similarity to all previously retained passages remains below 0.8. Details of fine-tuning are provided in App. D.3. 3.4.2 Passage Selection. Since passages may contain irrelevant information, we mitigate noise by ranking their relevance to the sub-query. Specifically, we compute a weighted average of keyword frequency and TF-IDF scores, where keywords (e.g., time and loca- tion) are extracted from the sub-query using an LLM (Qwen2.5-14B). At this stage, we retain the top 70% of the most relevant passages. 3.4.3 Passage Rerank. Since LLMs allocate more attention to in- formation at the edges of a prompt and tend to lose focus in the middle [ 41], we further refine the retrieved passages presented to the LLM by sorting them based on their similarity to the sub- query. This ranking is performed using a fine-tuned reranker model (bge-reranker-v2-m3 [ 17,38]). Details on dataset preparation and fine-tuning are provided in App. D.3. 3.4.4 Answer Generation. After passage re-ranking, passages are appended to their respective sub-queries, and responses are gener- ated in the order dictated by the QDG using a fine-tuned generative LLM (Qwen2.5-72B). If a sub-query has a parent node, the Q&A results of all ancestor nodes are inserted before the retrieved pas- sages. An example is provided in App. B.1. Once all terminal nodes complete generation, their questions and answers are concatenated, appended to the main query, and used to generate the final response. Details on the fine-tuning process are provided in App. D.4. 3.5 Rich Answer Presentations Traditional chatbots often rely on linear text stacking, which can im- pose a high cognitive load on users. Given that AI-powered search engines facilitate extensive knowledge transmission, integrating cognitive scaffolding is essential to support user comprehension. Cognitive science research has shown that structured information and multimodal presentations enhance the efficiency of information assimilation [ 48,56,64]. Moreover, since mitigating hallucinations in LLMs remains challenging [ 79],
aiding users in result verifica- tion and fostering confidence in synthesized outputs is crucial. To address these challenges, we incorporate timeline visualizations, textual-visual choreography, and built-in citations, as discussed below, to optimize the reading experience. 3.5.1 Built-In Citation. A straightforward approach to citation generation involves instructing the LLM to produce citations on the fly, as implemented by Lepton AI [ 3]. However, our initial evaluation indicates that this method exhibits a high error rate. Furthermore, 5 Preprint, Bo Tang and Junyi Zhu et al. Table 1: Pearson correlation coefficients between human and LLM scores for different evaluation criteria. Metric Value Metric Value Comprehensiveness 0.679 Conciseness 0.787 Numerical Precision 0.741 Clarity 0.737 Relevance 0.807 Coherence 0.746 Factuality 0.831 Insightfulness 0.610 Timeliness 0.759 we observe that some existing systems, such as Perplexity AI, place citations at the end of a paragraph, potentially detaching references from the corresponding evidence. To enhance both citation accuracy and granularity, we propose a novel citation scheme. As illustrated in the bottom right of Fig. 3, Xinyu decouples answer generation from citation attachment. Our pipeline employs two models. The first model, an SLM (Qwen2.5-3B), extracts key entities (e.g., dates, locations, names) from the generated answer on a sentence-by-sentence basis . If a sentence contains extractable entities, a second SLM (Qwen2.5-3B) identifies citations based on these entities, its orignal sentence, and retrieved documents. Both models have been fine-tuned for their respective tasks. Details of prompt and fine-tuning are provided in Apps. B.4 and D.5. In cases where no entities are extracted from a sentence, we adopt a fallback method that computes the sentence embedding using bge-large-zh and assigns a citation if its cosine similarity with a retrieved document exceeds 0.6. To reduce latency, we implement an asynchronous processing strategy that runs citation assignment in parallel with answer generation (albeit with a one-sentence delay). 3.5.2 Timeline Visualization. In online search scenarios focused on news and events, integrating timeline visualizations enables users to better understand the evolution and context of events. We propose a novel timeline visualization scheme as illustrated in the bottom middle of Fig. 3. First, we collect all retrieved passages fol- lowing the passage selection (see Sec. 3.4.2). Next, we instruct an LLM (Qwen2.5-14B) to extract any event time mentioned in each passage and to generate a corresponding title and summary. If a passage does not explicitly mention a time, we resort to using the document’s report time as extracted by the same LLM. Passages lacking temporal information in both the passage and the docu- ment are discarded. Because the retained passages may describe the same content, we then employ bge-large-zh to compute text embedding of the concatenated title and summary and calculate pairwise cosine similarities across passages. Passages with a simi- larity score exceeding 0.9 are merged by discarding the one with the later timestamp, resulting in a list of distinct events with times- tamps. To make timeline visualization more structured, we further instruct the LLM to group these events and derive relevant key- words based on their summaries. Finally, we present the event titles for each group, sorted
according to their timestamps. 3.5.3 Textual-Visual Choreography. A picture is worth a thousand words. As illustrated in the bottom right of Fig. 3, Xinyu integrates relevant images into textual responses to enhance information as- similation. These images are extracted from retrieved documents. To ensure quality and relevance, we first filter out noisy images, 13.33%15.33%18.00%14.00%7.33% 12.67% 10.00% 9.33% Politics Economy Society Technology Sports Culture and Entertainment Military HistoryFigure 4: Domain distribution of 300 test queries. retaining only those of high quality. Specifically, a rule-based filter- ing algorithm eliminates logos, icons, and low-resolution images. Subsequently, we compute the similarity of the textual description associated with the image with the main query using bge-reranker- v2-m3 and remove those smaller than 0.3. To determine the optimal placement of images, we compute the pairwise similarity between generated answer paragraphs and candidate images. This computa- tion involves a weighted average of three measures: (1) the embed- ding cosine similarity between the generated text paragraph and the image, computed using the chinese-clip-vit-huge-patch14; (2) the estimated similarity between the synthesized paragraph and the retrieved document’s title, obtained via bge-reranker-v2-m3; and (3) the embedding cosine similarity between the synthesized paragraph and the retrieved document’s text using bge-large-zh. Pairwise simi- larities are assembled into a matrix. Then we determine the optimal image-to-text alignment using Hungarian algorithm [36]. 4 Online Deployment and Experiments Online Deployment. Xinyu AI search engine was initially devel- oped for Chinese users. We have since launched an English version by converting Chinese prompts into English. This approach has shown surprisingly good performance, likely due to the multilingual capabilities of modern generative and embedding models. However, we believe that fine-tuning language-specific models can further enhance the system, and this optimization is currently in progress. Fig. 1 illustrates Xinyu ’s interface with an online test case. Test Cases. We collected a set of test queries covering eight do- mains. Since many user queries during deployment are related to trending news topics, we also gathered a set of recent queries for evaluation to compare product performance under real-time conditions. The query domain distribution is shown in Fig. 4. As our expert evaluators are native Chinese speakers, the numerical evaluation results are based on Chinese queries. Multi-faceted Evaluation Criteria. Because it is difficult to estab- lish a gold-standard answer for a generative AI search engine, we adopted rating criteria for evaluating the generated answers rather than computing a match to a fixed answer. We invited experts with journalism backgrounds and master’s degrees to develop rating criteria that reflect a multi-faceted evaluation of the generated answers, including: (1)Conciseness, (2)Numerical Precision, (3) Relevance, (4)Factuality, (5)Timeliness, (6)Comprehensiveness, 6 Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer PresentationsPreprint, Table 2: Multi-faceted comparison of different approaches. Higher value indicates better performance, 10 is the maximum. 
Model | Conciseness | Numerical Precision | Relevance | Factuality | Timeliness | Comprehensiveness | Clarity | Coherence | Insightfulness | Average
Perplexity AI [5] | 9.851 | 9.630 | 9.436 | 8.524 | 8.553 | 7.284 | 9.612 | 9.853 | 6.543 | 8.810
Tiangong AI [6] | 9.840 | 9.722 | 7.812 | 8.924 | 8.103 | 8.020 | 9.604 | 9.802 | 6.535 | 8.707
Ernie Bot [2] | 9.770 | 9.320 | 8.883 | 8.028 | 8.406 | 7.798 | 9.524 | 9.900 | 5.963 | 8.621
KIMI [4] | 9.840 | 9.515 | 8.529 | 8.224 | 8.966 | 8.155 | 9.223 | 9.709 | 6.796 | 8.773
Metaso [49] | 9.760 | 8.941 | 8.515 | 7.408 | 8.403 | 5.689 | 9.383 | 9.689 | 4.759 | 8.061
ChatGLM [16] | 9.810 | 9.420 | 8.949 | 9.124 | 8.346 | 6.168 | 9.533 | 9.726 | 5.047 | 8.458
Baichuan [1] | 9.660 | 9.596 | 6.486 | 7.612 | 8.220 | 8.252 | 9.223 | 9.612 | 6.117 | 8.309
Tongyi [7] | 9.803 | 9.009 | 7.586 | 7.212 | 8.194 | 7.677 | 9.293 | 9.899 | 5.859 | 8.281
Xinyu (Ours) | 9.813 | 9.714 | 9.533 | 8.932 | 9.205 | 9.143 | 9.633 | 9.810 | 7.333 | 9.235

Table 3: Comparison of timeline visualization. Except wall time, higher values are better.
Approach | Timeliness | Comprehensiveness | Clarity | Event Count | Precision | Wall Time (s) ↓
CHRONOS [71] | 7.02 | 5.79 | 6.00 | 5.12 | 79% | 67.44
Xinyu (Ours) | 8.07 | 8.08 | 8.24 | 10.14 | 84% | 33.27

Table 4: Comparison of textual-visual choreography.
Approach | Inclusion (%) ↑ | Precision (%) ↑
Metaso [49] | 3.0 | 72.2
Xinyu (Ours) | 80.0 | 90.0

Table 5: Comparison of built-in citation.
Model | Density (%) ↑ | Precision (%) ↑
Perplexity AI [5] | 46.6 | 82.1
Metaso [49] | 59.5 | 49.7
Tiangong [6] | 27.0 | 90.8
Baichuan [1] | 45.7 | 90.9
KIMI [4] | 41.4 | 72.9
Xinyu (Ours) | 67.2 | 90.4

(7) Clarity, (8) Coherence, and (9) Insightfulness. More detailed definitions for each dimension are provided in App. A.

LLM Evaluation. Evaluating the generated answers for all experiments against the multi-faceted criteria with human experts is prohibitively expensive. Therefore, we employed an LLM to evaluate the generated text for a subset of experiments; LLM evaluation has been adopted in many studies [27]. We used GPT-4O (gpt-4-0125-preview) to assess the results by instructing it to provide point-by-point reasoning and computing its final score. We set the temperature to 0; more details about the prompt are provided in App. B.3. Tab. 1 shows that the LLM evaluation is highly correlated with human evaluation based on the results of Tabs. 2 and 15.

Baselines. We compare our approach with eight existing technologies, including generative AI search engines (Perplexity AI [5], Metaso [49], Tiangong AI [6], ChatGLM [16], and Tongyi [7]) and conversational LLMs with RAG (KIMI [4], Ernie Bot [2], and Baichuan [1]). For Perplexity AI, we selected GPT-4O [53] as the backend model.

4.1 Comparison with Existing Technologies

4.1.1 Multi-Faceted Evaluation of the Generated Answer. We compare Xinyu with existing technologies by inviting human experts to evaluate answers generated by different approaches against the multi-faceted evaluation criteria. The results, presented in Tab. 2, show that Xinyu performs competitively while achieving the highest average score (9.235 vs. 8.810). Notably, Xinyu significantly outperforms other methods in comprehensiveness (9.143 vs. 8.252) and insightfulness (7.333 vs. 6.796). Additionally, we conduct an LLM-based evaluation using GPT-4O, where the model assesses the generated text according to the same rating criteria; the results are reported in Tab. 15.

4.1.2 Representation Enhancement.

Built-In Citation. We evaluate our approach using two key metrics. The first, citation precision, measures whether the provided evidence is genuinely supported by the cited source. The second, citation density, quantifies the proportion of sentences containing citations relative to the total number of sentences. Citation density reflects two factors: (1) the extent to which the
generated answer relies on retrieved information and (2) the placement of citations. Some existing systems, such as Perplexity AI, often position cita- tions at the end of a paragraph, making it difficult for users to trace specific claims, especially when multiple citations correspond to different parts of a paragraph stack. In such cases, citation density is also lower. As shown in Tab. 5, Xinyu achieves significantly higher citation density ( 67.2 vs. 59.5) while maintaining competitive cita- tion precision. Timeline Visualization. A recent study introduces CHRONOS for timeline generation [ 71]. We compare our method against CHRONOS using three multi-faceted evaluation criteria—timeliness, compre- hensiveness, and clarity—assessed by human evaluators. Addition- ally, we evaluate event count to measure the system’s ability to identify multiple events, precision to assess the relevance of ex- tracted events to the query, and wall time of online deployments to gauge computational efficiency. Tab. 3 demonstrates that Xinyu significantly outperforms CHRONOS across multiple dimensions. Textual-Visual Choreography. Among the baselines, Metaso [ 49] also implements textual-visual choreography. We compare our method against it by evaluating two metrics: inclusion (the rate at which images are incorporated into the generated answers) and precision (the percentage of included images that are contextually 7 Preprint, Bo Tang and Junyi Zhu et al. Table 6: Ablation study of sub-models in our approach, " −" indicates skipping the sub-module. Variant Conciseness Numerical Precision Relevance Factuality Timeliness Comprehensiveness Clarity Coherence Insightfulness Average Full Approach 9.880 9.547 9.547 9.731 8.300 8.533 9.900 9.747 7.107 9.143 −Query Preprocessing 9.810 9.422 9.497 9.646 8.279 8.423 9.891 9.637 6.993 9.066 −Query Expansion 9.793 9.300 9.593 9.626 8.300 8.493 9.867 9.780 6.827 9.064 −QDG 9.780 9.607 9.413 9.731 8.320 8.620 9.860 9.827 6.993 9.127 −Passage Selection 9.833 9.473 9.513 9.717 8.207 8.613 9.847 9.787 7.060 9.118 −Passage Rerank 9.827 9.587 9.587 9.731 8.220 8.587 9.873 9.800 6.987 9.132 Table 7: Ablation study of replacing our fine-tuned LLMs with proprietary models. The best results for each metric are bolded. Model Conciseness Numerical Precision Relevance Factuality Timeliness Comprehensiveness Clarity Coherence Insightfulness Average GPT-4O [53] 9.828 9.425 9.621 9.433 7.973 8.473 9.753 9.717 6.520 8.972 Qwen 2.5-72B [73] 9.780 9.290 9.463 8.987 8.053 8.140 9.893 9.633 6.687 8.881 Xinyu (Ours) 9.880 9.547 9.547 9.731 8.300 8.533 9.900 9.747 7.107 9.142 Clarity Comprehensiveness8.008.258.508.759.009.259.509.7510.00ScoreWithout built-in citation Without timeline-visualization Without textual-visual choreography Full approach Figure 5: Ablation study of sub-modules for the rich answer representation. relevant). As shown in Tab. 4, Xinyu outperforms Metaso signifi- cantly on both metrics. 4.2 Ablation Study Query and Retrieved Documents Processing. We first conduct an ablation study on the sub-modules designed to enhance text quality. Specifically, we compare the performance of the generated answers after omitting each sub-module against the full approach. LLM evaluation is used to rate the responses based on the multi-faceted evaluation criteria, with results provided in Tab. 6. Notably, skipping a sub-module does not always lead to a decline in all metrics. For example, omitting query expansion may improve relevance. 
However, the full approach, which integrates all sub-modules, achieves the best overall performance, as indicated by the
average scores. Representation Enhancement. We further conduct an ablation study on built-in citation, timeline visualization, and textual-visual choreography to assess their impact on clarity and comprehensive- ness based on human evaluation. As shown in Fig. 5, removing any of these modules significantly reduces the comprehensiveness of the generated answer. Additionally, textual-visual choreography has a strong positive effect on clarity. These findings highlight the advantages of rich answer representations in supporting cognitive scaffolding and enhancing information assimilation efficiency.Fine-Tuning. InXinyu , we fine-tune multiple LLMs for gener- ative tasks, including entity extraction, QDG generation, and re- sponse generation. To evaluate the impact of fine-tuning, we replace these models with GPT-4O and Qwen 2.5-72B. As shown in Tab. 7, fine-tuned models enable Xinyu to generate higher-quality answers. Additionally, we present an ablation study on the fine-tuned gener- ation model in Tab. 16 and the reranking models in Tab. 14. 5 Conclusion In this work, we present Xinyu , a generative AI search engine designed to tackle multi-faceted challenges in answer generation and user experience through a fully integrated pipeline. Our ap- proaches not only build on state-of-the-art researches but also intro- duce novel solutions to specific challenges. Extensive experiments demonstrate the superiority of Xinyu over existing technologies. Future work will focus on enhancing multilingual capabilities and expanding domain-specific optimizations. References [1]Baichuan AI. 2024. Baichuan. https://ying.baichuan-ai.com/ Accessed: 2025-02- 05. [2] Baidu AI. 2024. Yiyan. https://yiyan.baidu.com/ Accessed: 2025-02-05. [3]Lepton AI. 2025. Search with Lepton. https://github.com/leptonai/search_with_ lepton Accessed: 2025-02-02. [4] Moonshot AI. 2024. KIMI. https://kimi.moonshot.cn/ Accessed: 2025-02-05. [5]Perplexity AI. 2024. Perplexity AI. https://www.perplexity.ai/ Accessed: 2025- 02-05. [6] Tiangong AI. 2024. Tiangong AI. http://tiangong.cn/ Accessed: 2025-02-05. [7] Tongyi AI. 2024. Tongyi. https://tongyi.ai/ Accessed: 2025-02-05. [8]Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto, and William Yang Wang. 2024. A Survey on Data Selection for Language Models. Transactions on Machine Learn- ing Research (2024). https://openreview.net/forum?id=XfHWcNTSHp Survey Certification. [9]Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2024. Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection. InThe Twelfth International Conference on Learning Representations . [10] Orlando Ayala and Patrice Bechard. 2024. Reducing hallucination in structured outputs via Retrieval-Augmented Generation. In Proceedings of the 2024 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track) , Yi Yang, Aida Davani, Avi Sil, and Anoop Kumar (Eds.). Association for Computational Linguistics, Mexico City, Mexico, 228–238. doi:10.18653/v1/2024.naacl-industry.19 [11] Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al .2016. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268 (2016). 
[12] Alec Berntson. 2023. Azure AI Search: Outperforming Vector Search with Hybrid Retrieval and Reranking. https://techcommunity.microsoft.com/blog/azure-ai-services-blog/azure-ai-search-outperforming-vector-search-with-hybrid-retrieval-and-reranking/3929167 Accessed: 2025-02-01. [13] Sergey Brin and Lawrence Page. 1998. The Anatomy of a Large-Scale Hypertextual Web Search Engine. Computer Networks 30 (1998), 107–117. [14] Tom Brown, Benjamin
Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al .2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–1901. [15] Harrison Chase. 2022. LangChain . https://github.com/langchain-ai/langchain [16] ChatGLM. 2024. ChatGLM. https://chatglm.cn/ Accessed: 2025-02-05. [17] Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. 2024. BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation. arXiv:2402.03216 [cs.CL] [18] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems 30 (2017). [19] Florin Cuconasu, Giovanni Trappolini, Federico Siciliano, Simone Filice, Cesare Campagnano, Yoelle Maarek, Nicola Tonellotto, and Fabrizio Silvestri. 2024. The power of noise: Redefining retrieval for rag systems. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval . 719–729. [20] Jiaxi Cui, Zongjian Li, Yang Yan, Bohua Chen, and Li Yuan. 2023. Chatlaw: Open- source legal large language model with integrated external knowledge bases. CoRR (2023). [21] Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). [22] Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. 2024. Chain-of-Verification Reduces Hallucina- tion in Large Language Models. In Findings of the Association for Computational Linguistics: ACL 2024 , Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Bangkok, Thailand, 3563–3578. [23] Xinya Du and Heng Ji. 2022. Retrieval-Augmented Generative Question An- swering for Event Argument Extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing , Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (Eds.). Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 4649–4666. doi:10.18653/v1/2022.emnlp-main.307 [24] Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2023. Precise Zero-Shot Dense Retrieval without Relevance Labels. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 1762–1777. [25] Yunfan Gao, Tao Sheng, Youlin Xiang, Yun Xiong, Haofen Wang, and Jiawei Zhang. 2023. Chat-rec: Towards interactive and explainable llms-augmented recommender system. arXiv preprint arXiv:2303.14524 (2023). [26] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. 2024. Retrieval-Augmented Generation for Large Language Models: A Survey. arXiv:2312.10997 [cs.CL] [27] Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, et al .2024. A Survey on LLM-as-a-Judge. arXiv preprint arXiv:2411.15594 (2024). [28] Xiaoxin He, Yijun Tian, Yifei Sun, Nitesh V Chawla, Thomas Laurent, Yann LeCun, Xavier Bresson, and Bryan Hooi. 2024. G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering. 
In The Thirty-eighth Annual Conference on Neural Information Processing Systems . [29] IBM. 2024. Metadata Enrichment: Highly Scalable Data Classification and Data Discovery. https://www.ibm.com/think/insights/metadata-enrichment-highly- scalable-data-classification-and-data-discovery Accessed: 2025-01-19. [30] Rolf Jagerman, Honglei Zhuang, Zhen Qin, Xuanhui Wang, and Michael Bender- sky. 2023.
Query expansion by prompting large language models. arXiv preprint arXiv:2305.03653 (2023). [31] Jiaming Ji, Mickel Liu, Josef Dai, Xuehai Pan, Chi Zhang, Ce Bian, Boyuan Chen, Ruiyang Sun, Yizhou Wang, and Yaodong Yang. 2024. Beavertails: Towards improved safety alignment of llm via a human-preference dataset. Advances in Neural Information Processing Systems 36 (2024). [32] Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2024. LongLLMLingua: Accelerating and Enhancing LLMs in Long Context Scenarios via Prompt Compression. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Bangkok, Thailand, 1658–1677. doi:10.18653/v1/2024. acl-long.91 [33] Zhengbao Jiang, Frank Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi- Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active Retrieval Augmented Generation. In Proceedings of the 2023 Conference on Empirical Meth- ods in Natural Language Processing , Houda Bouamor, Juan Pino, and KalikaBali (Eds.). Association for Computational Linguistics, Singapore, 7969–7992. doi:10.18653/v1/2023.emnlp-main.495 [34] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 (2020). [35] Gangwoo Kim, Sungdong Kim, Byeongguk Jeon, Joonsuk Park, and Jaewoo Kang. 2023. Tree of Clarifications: Answering Ambiguous Questions with Retrieval- Augmented Large Language Models. In The 2023 Conference on Empirical Methods in Natural Language Processing . [36] H. W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly 2, 1-2 (1955), 83–97. doi:10.1002/nav.3800020109 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/nav.3800020109 [37] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al.2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems 33 (2020), 9459–9474. [38] Chaofan Li, Zheng Liu, Shitao Xiao, and Yingxia Shao. 2023. Making Large Lan- guage Models A Better Foundation For Dense Retrieval. arXiv:2312.15503 [cs.CL] [39] Xinze Li, Zhenghao Liu, Chenyan Xiong, Shi Yu, Yu Gu, Zhiyuan Liu, and Ge Yu. 2023. Structure-Aware Language Model Pretraining Improves Dense Retrieval on Structured Data. In Findings of the Association for Computational Linguistics: ACL 2023, Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 11560–11574. doi:10.18653/v1/ 2023.findings-acl.734 [40] Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Shafiq Joty, Soujanya Poria, and Lidong Bing. 2024. Chain-of-Knowledge: Grounding Large Language Models via Dynamic Knowledge Adapting over Heterogeneous Sources. In The Twelfth International Conference on Learning Representations . [41] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2024. Lost in the Middle: How Language Models Use Long Contexts. Transactions of the Association for Computational Linguistics 12 (2024), 157–173. doi:10.1162/tacl_a_00638 [42] Yinhan Liu. 2019. Roberta: A robustly optimized bert pretraining approach. 
arXiv preprint arXiv:1907.11692 364 (2019). [43] Ziyang Luo, Can Xu, Pu Zhao, Xiubo Geng, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Augmented large language models with parametric knowledge guiding. arXiv preprint arXiv:2305.04757 (2023). [44] Xinbei Ma, Yeyun Gong, Pengcheng
He, hai zhao, and Nan Duan. 2023. Query Rewriting in Retrieval-Augmented Large Language Models. In The 2023 Confer- ence on Empirical Methods in Natural Language Processing . https://openreview. net/forum?id=gXq1cwkUZc [45] Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023. Query Rewriting in Retrieval-Augmented Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Lin- guistics, Singapore, 5303–5315. [46] Yubo Ma, Yixin Cao, Yong Ching Hong, and Aixin Sun. 2023. Large Language Model Is Not a Good Few-shot Information Extractor, but a Good Reranker for Hard Samples!. In The 2023 Conference on Empirical Methods in Natural Language Processing . [47] Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Intro- duction to Information Retrieval . Cambridge University Press. [48] Richard E. Mayer. 2014. Cognitive Theory of Multimedia Learning. In The Cambridge Handbook of Multimedia Learning , Richard E. Mayer (Ed.). Cambridge University Press, Cambridge, 43–71. [49] Metaso. 2024. Metaso. https://metaso.cn/ Accessed: 2025-02-05. [50] Microsoft Azure Architecture Center. 2024. Developing a RAG Solution - Chunk Enrichment Phase. https://learn.microsoft.com/en-us/azure/architecture/ai- ml/guide/rag/rag-enrichment-phase Accessed: 2025-01-19. [51] Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al . 2021. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332 (2021). [52] Shiyu Ni, Keping Bi, Jiafeng Guo, and Xueqi Cheng. 2024. When Do LLMs Need Retrieval Augmentation? Mitigating LLMs’ Overconfidence Helps Retrieval Augmentation. In Findings of the Association for Computational Linguistics: ACL 2024, Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, Bangkok, Thailand, 11375–11388. doi:10.18653/v1/ 2024.findings-acl.675 [53] OpenAI. 2024. ChatGPT. https://chatgpt.com/ Accessed: 2025-02-05. [54] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al .2022. Training language models to follow instructions with human feedback. Advances in neural information processing systems 35 (2022), 27730–27744. [55] Wenjun Peng, Guiyang Li, Yue Jiang, Zilong Wang, Dan Ou, Xiaoyi Zeng, Derong Xu, Tong Xu, and Enhong Chen. 2024. Large language model based long-tail query rewriting in taobao search. In Companion Proceedings of the ACM on Web Conference 2024 . 20–28. 9 Preprint, Bo Tang and Junyi Zhu et al. [56] Luis P. Prieto, Kshitij Sharma, Łukasz Kidzinski, María Jesús Rodríguez-Triana, and Pierre Dillenbourg. 2018. Multimodal Teaching Analytics: Automated Extrac- tion of Orchestration Graphs from Wearable Sensor Data. Journal of Computer Assisted Learning 34, 2 (April 2018), 193–203. doi:10.1111/jcal.12232 [57] Alec Radford and Karthik Narasimhan. 2018. Improving Language Understanding by Generative Pre-Training. https://api.semanticscholar.org/CorpusID:49313245 [58] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. (2019). [59] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. 
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1–67. http://jmlr.org/papers/v21/20-074.html [60] Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Enhancing Retrieval-Augmented Large
Language Models with It- erative Retrieval-Generation Synergy. In Findings of the Association for Com- putational Linguistics: EMNLP 2023 , Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 9248–9274. doi:10.18653/v1/2023.findings-emnlp.620 [61] Statista. 2023. https://www.statista.com/statistics/1377993/us-adults-ai- powered-search-engines-usage-choice/ Accessed: 2025-01-21. [62] Winnie Street, John Oliver Siy, Geoff Keeling, Adrien Baranes, Benjamin Barnett, Michael McKibben, Tatenda Kanyere, Alison Lentz, Robin IM Dunbar, et al .2024. LLMs achieve adult human performance on higher-order theory of mind tasks. arXiv preprint arXiv:2405.18870 (2024). [63] Hao Sun, Zhexin Zhang, Jiawen Deng, Jiale Cheng, and Minlie Huang. 2023. Safety assessment of chinese large language models. arXiv preprint arXiv:2304.10436 (2023). [64] John Sweller, Paul Ayres, and Slava Kalyuga. 2020. Cognitive load theory and educational technology. Educational Technology Research and Development 68, 1 (2020), 1–16. doi:10.1007/s11423-019-09701-3 [65] Ravi Theja. 2023. Evaluating the Ideal Chunk Size for a RAG System using LlamaIndex. https://www.llamaindex.ai/blog/evaluating-the-ideal-chunk-size- for-a-rag-system-using-llamaindex-6207e5d3fec5 Accessed: 2025-01-19. [66] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge- Intensive Multi-Step Questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki (Eds.). Association for Computational Linguistics, Toronto, Canada, 10014–10037. doi:10.18653/v1/2023.acl-long.557 [67] A Vaswani. 2017. Attention is all you need. Advances in Neural Information Processing Systems (2017). [68] Liang Wang, Nan Yang, and Furu Wei. 2023. Query2doc: Query Expansion with Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing , Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 9414–9423. doi:10.18653/v1/2023.emnlp-main.585 [69] Xintao Wang, Qianwen Yang, Yongting Qiu, Jiaqing Liang, Qianyu He, Zhouhong Gu, Yanghua Xiao, and Wei Wang. 2023. Knowledgpt: Enhancing large language models with retrieval and storage access on knowledge bases. arXiv preprint arXiv:2308.11761 (2023). [70] Yuxia Wang, Haonan Li, Xudong Han, Preslav Nakov, and Timothy Baldwin. 2023. Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs. arXiv preprint arXiv:2308.13387. [71] Weiqi Wu, Shen Huang, Yong Jiang, Pengjun Xie, Fei Huang, and Hai Zhao. 2025. Unfolding the Headline: Iterative Self-Questioning for News Retrieval and Timeline Summarization. arXiv preprint arXiv:2501.00888 (2025). [72] Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-Pack: Packaged Resources To Advance General Chinese Embedding. arXiv:2309.07597 [cs.CL] [73] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al .2024. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 (2024). [74] Haoyan Yang, Zhitao Li, Yong Zhang, Jianzong Wang, Ning Cheng, Ming Li, and Jing Xiao. 2023. PRCA: Fitting Black-Box Large Language Models for Re- trieval Question Answering via Pluggable Reward-Driven Contextual Adapter. 
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Pro- cessing , Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Compu- tational Linguistics, Singapore, 5364–5375. doi:10.18653/v1/2023.emnlp-main.326 [75] Zhilin Yang. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv preprint arXiv:1906.08237 (2019). [76] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question
Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing , Ellen Riloff, David Chiang, Julia Hockenmaier, and Jun’ichi Tsujii (Eds.). Association for Computational Linguistics, Brussels, Belgium, 2369–2380. doi:10.18653/v1/D18- 1259[77] Ori Yoran, Tomer Wolfson, Ori Ram, and Jonathan Berant. 2024. Making Retrieval- Augmented Language Models Robust to Irrelevant Context. In The Twelfth Inter- national Conference on Learning Representations . [78] Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, et al .2023. Tablegpt: Towards unifying tables, nature language and commands into one gpt. arXiv preprint arXiv:2307.08674 (2023). [79] Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. 2023. Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219 (2023). [80] Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V Le, and Denny Zhou. 2024. Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models. In The Twelfth International Conference on Learning Representations . [81] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023. Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. InThe Eleventh International Conference on Learning Representations . https: //openreview.net/forum?id=WZH7099tgfM [82] Junyi Zhu, Shuochen Liu, Yu Yu, Bo Tang, Yibo Yan, Zhiyu Li, Feiyu Xiong, Tong Xu, and Matthew B. Blaschko. 2024. FastMem: Fast Memorization of Prompt Improves Context Awareness of Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024 , Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen (Eds.). Association for Computational Linguistics, Miami, Florida, USA, 11740–11758. doi:10.18653/v1/2024.findings-emnlp.687 [83] Shengyao Zhuang, Bing Liu, Bevan Koopman, and Guido Zuccon. 2023. Open- source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking. In Findings of the Association for Computational Linguistics: EMNLP 2023 , Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 8807–8817. A Multi-Faceted Evaluation Criteria The detailed definitions of the multi-faceted evaluation criteria are provided in Tab. 8. B Instruction Prompt B.1 Query-Decomposition Graph Prompt for the query-decomposition graph is provided in Tab. 9. B.2 Answer Generation Prompt for answer generation is provided in Tab. 10. B.3 LLM Evaluation Tab. 11 presents the prompt used to instruct the LLM to evaluate the generated text based on multi-faceted criteria (see Tab. 8). To reduce task complexity and enhance evaluation quality, we assess one facet per evaluation and fill the metric title and definition accordingly. B.4 Built-In Citation Prompt for entity extraction is provided in Tab. 12. Prompt for citation identification is provided in Tab. 13. C Retrieval Documents Filtering Rules HTML Content Filtering. Non-content HTML tags, such as <script> and <style>, are removed using the lxml library to parse the origi- nal HTML into a DOM tree. Using XPath selectors, we retain only essential content tags like <div>, <p>, and <article>,
ensuring com- patibility with irregular HTML structures. Text Processing. : Extracted text blocks are separated by spaces or line breaks to improve readability. Redundant whitespace and 10 Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer PresentationsPreprint, Table 8: Multi-faceted evaluation criteria. (1) Conciseness : - The response should directly address the user 's question. - Avoid irrelevant content, unnecessary information, or roundabout explanations. - Deduct 1 point for each irrelevant statement. (2) Numerical Precision : - If a question requires a specific number, avoid vague terms like "several" or "many times." - Responses should be precise and specific. - Deduct 1 point for each ambiguous statement. (3) Relevance : - If the question specifies constraints (e.g., time, location, person, event), the answer must adhere to them. - Deduct 1 point for each instance of misalignment with the question 's constraints. (4) Factuality : - The information must be factually correct, especially for numerical or factual questions. - Deduct 1 point for each incorrect numerical or factual statement. (5) Timeliness : - For ongoing news or urgent reports, ensure information reflects the latest updates. - The current date is {to be filled}. - If the question is not time-sensitive, no points are deducted. - For time-sensitive questions, deduct points proportionally based on outdatedness. (6) Comprehensiveness : - The response should comprehensively cover all aspects of the user 's inquiry. - The user should not need further search to grasp the full context. - Deduct 1 point for each missing essential element. (7) Clarity : - The response should be easy to understand, well- structured, and formatted logically. - Example: Chronological events should be presented in chronological order. - Deduct 1 point for unclear or disorganized presentation. (8) Coherence : - The response should be logically consistent, with smooth transitions between sentences. - Deduct 1 point for each instance of incoherent or disjointed phrasing. (9) Insightfulness : - The response should provide insightful or unique perspectives. - Base score: 6 points. - Award 0.5-1 additional points for each innovative idea or expression. excessive line breaks are removed while preserving paragraph struc- ture. Distracting elements like “Read More,” “Click to Continue, ” or inline emojis are filtered out. Special characters, stopwords (e.g., from publicly available resources like Stopwords JSON2), emoji patterns (via regular expressions targeting Unicode Emoji ranges), and irrelevant newline characters are eliminated. 2https://github.com/6/stopwords-json/blob/master/dist/ca.json Figure 6: Xinyu ’s Interface for query disambiguation. Sensitive Information Filtering. Personal identifiers, such as phone numbers, email addresses, and platform-specific markers are de- tected and removed. Text Normalization. Punctuation is standardized to half-width characters, and numbers in various formats are converted to stan- dardized half-width Arabic numerals. D Fine-Tuning D.1 User Intent Understanding To promote responsible AI behavior and constrain the scope of queries, we fine-tune the model for the query rejection module. 
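As a concrete illustration of the retrieval-document filtering rules above (Appendix C), the following is a minimal, hypothetical Python sketch. The specific tag set, emoji range, and full-width conversion are illustrative assumptions rather than Xinyu's exact configuration; only the use of lxml with XPath selectors follows the description above.

```python
# Hypothetical sketch of the Appendix C filtering rules; the tag set and emoji
# range below are illustrative assumptions, not Xinyu's exact configuration.
import re
from lxml import html


def filter_html(raw_html: str) -> str:
    """Drop non-content tags and keep text from common content containers."""
    tree = html.fromstring(raw_html)
    for node in tree.xpath("//script | //style"):   # remove non-content subtrees
        node.getparent().remove(node)
    # Retain text only from typical content tags (assumed set).
    blocks = tree.xpath("//div/text() | //p/text() | //article/text()")
    return "\n".join(t.strip() for t in blocks if t.strip())


def normalize_text(text: str) -> str:
    """Collapse whitespace, strip emojis, and convert full-width characters."""
    text = re.sub(r"[\U0001F300-\U0001FAFF]", "", text)                     # common emoji block
    text = text.translate({c: c - 0xFEE0 for c in range(0xFF01, 0xFF5F)})   # full- to half-width
    text = re.sub(r"[ \t]+", " ", text)                                     # redundant spaces
    return re.sub(r"\n{3,}", "\n\n", text).strip()                          # excessive line breaks


if __name__ == "__main__":
    sample = "<div><p>Ｈｅｌｌｏ！ 🎉</p><script>var x = 1;</script></div>"
    print(normalize_text(filter_html(sample)))   # -> "Hello!"
```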
We define 11 categories of queries that warrant refusal, including: (1) illegal content, (2) ethical violations, (3) privacy breaches, (4) harmful intent, (5) professional consultations, (6) human-AI interactions, (7) misinformation, (8) technical inquiries, (9) academic requests, (10) planning and consulting inquiries, and
(11) creative content generation. To construct a training dataset, we collect a set of seed queries based on open-source datasets: Do-Not-Answer [ 70], BeaverTails [ 31] and Safety-Prompts [ 63]. Additionally, we generate synthetic queries to compensate the imbalance number for each class in the collected dataset. Then we instruct multiple LLMs by given the definition of the 11 categories to classify and output in JSON format as follows: { “Refusal”: “Yes/No”, “Category”: “illegal content/ethical viloations/. . . ” } Decisions are aggregated using a majority voting mechanism to establish a consensus. Finally, human experts review the results to correct misclassification. For queries classified as non-refusal, we further prepare a dataset for query disambiguation. We instruct multiple LLMs to analyze whether a query requires further clarification to generate an ap- propriate response, outputting the results in the following JSON format: { "Requires additional input": "Yes/No", "Additional options": { "Prompt description": "Please select...", "Choices": ["xx", "xx", ...] } } 11 Preprint, Bo Tang and Junyi Zhu et al. Table 9: Prompt for query-decomposition graph. Please analyze the following query and return the explanation in dictionary format. Response format: {'is_complex ': True/False, 'sub_queries ': [], 'parent_child ': []} Analysis Steps and Principles: 1. **Classify the nature of the query** - The query can be classified into one of two types: (a) A "complex query" that consists of multiple sub-queries. (b) A "simple query" that can be directly answered. - If the query is classified as "complex," set 'is_complex 'to **True**. - If the query is "simple," set 'is_complex 'to **False**, and leave 'sub_queries 'and 'parent_child 'as empty lists. 2. **Decomposing a Complex Query** - If the query is classified as "complex," break it down into **sub-queries** and store them in the 'sub_queries ' list. - Decomposition principles: 1) If a query contains multiple **target entities**, split it into multiple sub-queries. - Example: *"What are the latest social news and weather news in Shanghai?"* - Target entities: *"social news"*, *"weather news"*. - Split into: *"What are the latest social news in Shanghai?"* and *"What are the latest weather news in Shanghai?"*. 2) Each sub-query should be **indivisible** and should not require further decomposition. 3) No duplicate sub-queries. 4) When referring to **names of people, places, or organizations**, ensure full and precise descriptions. - Example: *"What is the area and population of New Jersey, USA?"* - Correct split: *"What is the area of New Jersey, USA?"* and *"What is the population of New Jersey, USA?"*. - Incorrect split: *"What is the area of New Jersey?"* and *"What is the population of New Jersey?"*. 5) The total number of sub-queries **should not exceed 6**. 3. **Analyzing Dependencies Between Sub-Queries** - If the query is complex, analyze the **dependency relationships** between sub-queries and store them in ' parent_child '. - Example: - *"What natural disasters occurred in Indonesia in April?"* - *"How long did this natural disaster last?"* - The second question **depends** on the first; thus, the first is the *parent*, and the second is the *child*: ```json {"parent": "What natural disasters occurred in Indonesia in April?",
"child": "How long did this natural disaster last?"} ``` - Dependency principles: 1) If sub-queries are **independent**, 'parent_child 'remains an empty list. 2) If the **child question cannot be answered without the parent**, it is a dependent relationship. - Example: "What is the latest iPhone model" is the parent node of "What are the specifications of the latest iPhone?" - The first question must be answered before the second. 3) Every possible pair of sub-queries should be evaluated for dependency. - A query can be both a *parent* and a *child* in different relationships. ### Example: {Few-Shot Examples} Query: {Query} Response: \n Additional options are prepared to present clarifying options for the user as shown in Fig. 6. Again, we use a majority voting mechanism to determine whether additional input is required. This process results in 7K data points. Human experts then review the results, refine the choices, and select high-quality samples. Ultimately, we construct a dataset of 5K data points, where 1/4 of the queries require additional input. Two models are fine-tuned for the query rejection and query disambiguation tasks using their respective datasets, following Eq. (1).D.2 Question-Decomposition Graph We collect a set of queries and instruct multiple models using the prompt provided in App. B.1 to generate QDGs. The gener- ated QDGs are then programmatically validated to ensure that the parent-child relationships meet the specified requirements, and du- plicate QDGs are removed. This process results in 8,000 data points. Finally, human experts examine the data and select high-quality, correctly generated samples. In the end, we retain 4,184 data points for fine-tuning, which is conducted based on Eq. (1). 12 Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer PresentationsPreprint, Table 10: Prompt for answer generation. You are an AI assistant named Xinyu, developed by the Shanghai Algorithm Innovation Research Institute. You are performing an encyclopedia Q\&A task. Please generate an answer based on the provided reference materials and related Q\&A content. Question: {Sub-Query} Related Q\&A: {Ancestor Node 1: Sub-Query} {Ancestor Node 1: Answer} {Ancestor Node 2: Sub-Query} {Ancestor Node 2: Answer} ... Reference materials: {Retrieved Passage 1} {Retrieved Passage 2} ... When generating your answer, follow these guidelines: [Structural Requirements]: To ensure clarity and organization, you may use one or more of the following structured formats: - **Introduction-Body-Summary**: Introduce the topic, elaborate, and summarize key points. - **Paragraphs by Subquestion**: Address each subquestion in a separate paragraph. - **Cause and Effect**: Explain the causes and consequences of an event. - **Comparison and Contrast**: Describe and compare two or more concepts. - **Chronological Order**: Describe events or steps in order of occurrence. - **Problem-Solution**: Introduce a problem and explain solutions or strategies. - **Pros and Cons**: List the positive and negative aspects of a decision or choice. - **Definition and Examples**: Provide a definition and illustrate it with examples. - **Logical Reasoning**: Derive conclusions based on assumptions or premises. - **List Structure**: Enumerate facts or features for easy readability. - **Categorization**: Introduce a concept, group it by categories, and explain in detail. - **Theme and Variations**: Explore a core
theme and its variations. - **Case Study**: Explain a theory or concept through specific cases. - **Hierarchical Structure**: Arrange information by importance or sequence. - **Issue and Counterarguments**: Present an issue with supporting and opposing views. [Language Requirements]: (1) Use concise and clear language. (2) Ensure that the answer 's structure enhances clarity and readability. (3) The response must directly and accurately address the question, avoiding irrelevant content. (4) When citing reference materials, ignore template formatting or improper phrasing. (5) If detailed elaboration is required, output the answer in a structured **Markdown** format. Your Answer: \nTable 11: Prompt for multi-faceted evaluation. Assume you are an article quality inspector. Please evaluate the response based on {Metric Title}. I will provide the user 's question and the final response The maximum score is 10 points, and the scoring rules are as follows: {Metric Definition} Please strictly follow the scoring rules. Example output format: '{ "Issues Identified": "X", "Calculation Process": "10-1.0-1.0-1.0 = 7.0", "Score": 7 }' {Few-Shot Examples} Your final score: \n" Table 12: Prompt for entity extraction. Read the given sentence and extract the contained information about time, location, persons, and job titles . Your extraction result should be returned in JSON format, with each field name restricted to one of the following: ["Time", "Location", "Persons", "Job Titles"] If there are multiple pieces of information of the same type in the sentence, the corresponding category 's value should be represented as an array. Below are some examples: {Few-Shot Examples} {Sentence} Extraction result: \n D.3 Reranker Model We construct a dataset comprising question-answer pairs using recent real-world data and public datasets such as MS MARCO [ 11]. To generate hard negatives for fine-tuning, we apply various chunk- ing strategies to create multiple candidate samples resembling the positive examples. These candidates are then ranked based on a base reranker model, selecting the top-300 samples. Next, we lever- age multiple LLMs to assess whether each generated sample can answer the corresponding question. If the majority vote is negative, the sample is designated as a hard negative. In total, we generate 56K pairs. Finally, human experts review the results and curate a high-quality subset, retaining 13K pairs for fine-tuning, which is performed based on Eq. (2). D.4 Generation We collect a set of queries and retrieve relevant documents, then generate responses using multiple LLMs and the prompt provided in App. B.2, yielding 121K answers. Human experts review the results, removing low-quality responses and refining the retained 13 Preprint, Bo Tang and Junyi Zhu et al. Table 13: Prompt for citation identification. You are a journalist skilled in analyzing the correlation between document information. I will provide you with a sentence excerpted from a news article, along with several reference documents used in writing this article. Your task is to determine which reference document the excerpted sentence most likely originates from. The excerpted sentence is: {Sentence} The key information contained in this sentence is: Time: {Time} Location: {Location} Person: {Person} Job Title: {Job Title} Numbers: {Numbers} The reference documents used for writing this article and their respective key information
are as follows: [1] {Retrieved Document} [2] {Retrieved Document} [3] ... When making your determination, ensure that the selected reference document matches as much key information from the excerpted sentence as possible. The higher the degree of key information overlap, the more likely the reference document is the source of the excerpted sentence. Your response should contain only a one- or two-digit number representing the corresponding reference document number, such as "[2]", "[9]", or "[13]". If you believe that none of the reference documents are relevant to the given sentence, return "-1". The most likely source document number is: \n Table 14: Ablation study of fine-tuning the rerank model. Model Precision Recall F1 Score Wall Time (s) GPT-4O [53] 0.717 0.719 0.692 3.6 Qwen 2.5-72B [73] 0.541 0.894 0.641 2.4 bge-reranker-v2-m3 [17] 0.568 0.671 0.562 0.1 Xinyu (ours) 0.607 0.735 0.623 0.1 ones to ensure consistency in tone and eliminate hallucinations. Ultimately, 37K high-quality answers are selected for fine-tuning, which is performed using Eq. (1). D.5 Built-In Citation As this module requires high efficiency, we fine-tune two SLMs (Qwen2.5-3B), using its larger counterpart, Qwen2.5-72B. We collect a set of passages and use Qwen2.5-72B to extract the entities from each sentence based on the prompt provided in App. B.4. If any entities can be extracted, we retain the (passage, sentence, entities) triplet as part of the dataset. This process yields 33K data points. Human experts then review the data to correct errors and remove low-quality samples, ultimately retaining 26K data points. We fine- tune the entity extraction SLM using sentences paired with their corresponding entities. Additionally, we fine-tune another SLM to retrieve the relevant passage from a given set based on the extracted entities, using the prompt provided in App. B.4. Both models are optimized following Eq. (1).E Additional Results Multi-Faceted Evaluation by LLM. Tab. 15 presents the results of a multi-faceted evaluation conducted by GPT-4O. While the absolute values differ from those obtained through human evaluation (see Tab. 2), the rankings remain similar, demonstrating a strong corre- lation (see Tab. 1). These results confirm that LLM-based evaluation is also indicative of performance. Ablation of Fine-Tuning the Answer Generation Model. Tab. 15 shows that replacing our fine-tuned answer generation model with either its base model (Qwen2.5-72B) or a strong proprietary model (GPT-4O) results in lower-quality generated answers. This finding indicates that our fine-tuning process is effective. Ablation of Fine-Tuning the Reranker Model. We compare our fine-tuned reranker model against its base model (bge-reranker-v2- m3) and two LLMs by instructing them to generate a ranking order based on relevance. As shown in Tab. 14, our method preserves the efficiency of the base model while significantly outperforming it and achieving performance comparable to GPT-4o. The wall time for GPT-4o reflects API response time, whereas the wall time for other models is measured on a local cluster equipped with NVIDIA H800 GPUs. Batch size is set to 1. 14 Xinyu AI Search: Enhanced Relevance and Comprehensive Results with Rich Answer PresentationsPreprint, Table 15: Multi-faceted comparison of different approaches based on GPT-4O (gpt-4-0125-preview). Higher value indicates better
performance, 10 is the maximum. Model Conciseness Numerical Precision Relevance Factuality Timeliness Comprehensiveness Clarity Coherence Insightfulness Average Perplexity AI 9.913 9.607 9.740 9.727 8.120 8.280 9.887 9.853 6.613 9.082 Tiangong AI [6] 9.819 9.188 9.570 9.738 7.758 7.517 9.839 9.799 6.161 8.821 Ernie Bot [2] 9.814 9.152 9.556 9.648 8.062 7.924 9.745 9.814 6.552 8.918 KIMI [4] 9.695 9.359 9.576 9.675 8.059 8.305 9.686 9.720 6.432 8.945 Metaso [49] 9.781 8.932 9.493 9.596 7.589 6.842 9.712 9.589 5.801 8.593 ChatGLM [16] 9.733 9.274 9.568 9.745 7.986 7.911 9.863 9.808 6.603 8.943 Baichuan [1] 9.433 9.053 9.307 9.403 7.813 7.832 9.373 9.200 6.640 8.673 Tongyi [7] 9.747 8.900 9.313 9.527 7.700 7.940 9.827 9.740 6.493 8.799 Xinyu (Ours) 9.880 9.547 9.547 9.731 8.300 8.533 9.900 9.747 7.107 9.144 Table 16: Ablation study of replacing our fine-tuned answer generation model with proprietary models. Model Conciseness Numerical Precision Relevance Factuality Timeliness Comprehensiveness Clarity Coherence Insightfulness Average GPT-4O [53] 9.854 9.482 9.588 9.597 8.107 8.515 9.849 9.734 6.989 9.080 Qwen 2.5-72B [73] 9.824 9.380 9.551 9.337 7.947 8.417 9.848 9.683 6.884 8.986 Xinyu (ours) 9.880 9.547 9.547 9.731 8.300 8.533 9.900 9.747 7.107 9.142 15
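To make the facet-by-facet LLM evaluation behind Tab. 15 and Tab. 16 concrete, below is a minimal, hypothetical sketch of the scoring loop described in App. B.3: each facet from Tab. 8 is evaluated in a separate call, and the per-facet scores are averaged. The `call_llm` function, the abbreviated facet definitions, and the JSON output convention are placeholders for illustration, not the exact prompts or judge configuration used here.

```python
# Hypothetical sketch of one-facet-at-a-time LLM scoring (App. B.3); the judge
# call and facet definitions below are placeholders, not the exact setup.
import json
from statistics import mean

FACETS = {
    "Conciseness": "Deduct 1 point for each irrelevant statement.",
    "Factuality": "Deduct 1 point for each incorrect numerical or factual statement.",
    "Clarity": "Deduct 1 point for unclear or disorganized presentation.",
    # ... remaining facets from Tab. 8
}

JUDGE_PROMPT = (
    "Assume you are an article quality inspector. Please evaluate the response "
    "based on {title}. The maximum score is 10 points, and the scoring rules are "
    "as follows:\n{definition}\nQuestion: {question}\nResponse: {response}\n"
    'Return JSON such as {{"Score": 7}}.'
)


def call_llm(prompt: str) -> str:
    """Placeholder for the judge model (e.g., GPT-4O); returns a JSON string."""
    return json.dumps({"Score": 8})


def evaluate(question: str, response: str) -> dict:
    """Score one generated answer on each facet separately, then average."""
    scores = {}
    for title, definition in FACETS.items():
        prompt = JUDGE_PROMPT.format(title=title, definition=definition,
                                     question=question, response=response)
        scores[title] = json.loads(call_llm(prompt))["Score"]
    scores["Average"] = mean(scores.values())
    return scores


print(evaluate("What caused the recall?", "The recall was triggered by ..."))
```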
arXiv:2505.21850v1 [cs.CV] 28 May 2025

Beyond Perception: Evaluating Abstract Visual Reasoning through Multi-Stage Task

Yanbei Jiang1, Yihao Ding1,2, Chao Lei1, Jiayang Ao1, Jey Han Lau1, Krista A. Ehinger1
1The University of Melbourne, 2University of Sydney
yanbeij@student.unimelb.edu.au jeyhan.lau@gmail.com kehinger@unimelb.edu.au

Abstract

Current Multimodal Large Language Models (MLLMs) excel in general visual reasoning but remain underexplored in Abstract Visual Reasoning (AVR), which demands higher-order reasoning to identify abstract rules beyond simple perception. Existing AVR benchmarks focus on single-step reasoning, emphasizing the end result but neglecting the multi-stage nature of the reasoning process. Past studies found that MLLMs struggle with these benchmarks, but they do not explain how the models fail. To address this gap, we introduce MultiStAR, a Multi-Stage AVR benchmark based on RAVEN, designed to assess reasoning across varying levels of complexity. Additionally, existing metrics like accuracy focus only on the final outcomes and do not account for the correctness of intermediate steps. Therefore, we propose a novel metric, MSEval, which considers the correctness of intermediate steps in addition to the final outcomes. We conduct comprehensive experiments on MultiStAR using 17 representative close-source and open-source MLLMs. The results reveal that while existing MLLMs perform adequately on basic perception tasks, they continue to face challenges in more complex rule detection stages. The dataset and code are available at https://github.com/YanbeiJiang/MultiStAR

1 Introduction

Multimodal Large Language Models (MLLMs) demonstrate proficiency in addressing a wide array of visual-text inquiries and show strong multimodal understanding ability in tasks such as visual question answering (Goyal et al., 2017; Marino et al., 2019; Ding et al., 2023), image captioning (Saito et al., 2023; Vinyals et al., 2015; Jiang et al., 2024a), and visual grounding (He et al., 2024; Deng et al., 2021). These tasks focus on evaluating the models' capability to understand real-world or domain-specific knowledge.

Figure 1: Left part: RAVEN puzzle; the correct answer is 1. Right part: Direct Answer subtask, where questions are independent for each configuration, and Logical Chain subtask, where information from previous stages is used to assist in answering the current stage; all questions here focus on the concept of Number.

However, Abstract Visual Reasoning (AVR) presents a different challenge, focusing on a model's ability to identify and reason through abstract patterns, relationships, and rules. A well-known example of AVR tasks
is RA VEN (Raven, 2003; Zhang et al., 2019), as shown in the left part of Figure 1. The solver needs to select the correct panel from a answer set to complete a 3x3 problem matrix by deducing the visual rules governing the grid’s arrangement. For instance, by analyzing the colors of each panel, one might ob- serve the color remains consistent across each row. Unlike other multimodal tasks in real-world sce- narios, A VR focuses on reasoning about arbitrary visual elements, serving as a robust benchmark for evaluating the zero-shot reasoning capabilities of MLLMs in visual contexts (Ma ´ndziuk and ˙Zy- chowski, 2019; Santoro et al., 2018). Previous works have consistently shown that A VR tasks pose challenges for MLLMs in zero- shot inference settings. Despite recent advance- ments like Chain-of-Thought prompting (Ahrabian et al., 2024; Gendron et al., 2024) and the inclu- sion of oracle captions (Zhang et al., 2024), models continue to perform at near-random levels on these tasks. The A VR datasets used commonly in these evaluations like RA VEN primarily focus on single- step end-to-end reasoning (i.e., giving the models the questions and asking them to derive the final answer), as shown in the left part of Figure 1 (San- toro et al., 2018; Nie et al., 2020; Cao et al., 2024). However, this design deviates from the human rea- soning process, which often involves sequential steps: starting with single-panel perception, pro- gressing to panel comparisons, and finally deduc- ing the underlying rules before solving the puz- zle. Previous datasets often omit these intermedi- ate stages, posing challenges to effectively evaluate their step-by-step reasoning capabilities and iden- tify where models struggle within the reasoning process. This highlights the need for benchmarks thatassess intermediate perception and reasoning processes . Additionally, a model that accurately identifies patterns in early steps but fails in the fi- nal deduction still demonstrates partial reasoning capability. Rewarding such intermediate success aligns with human evaluation practices. However, existing metrics like accuracy, measure only the performance of the current stage while disregard- ing the correctness of intermediate steps. To address the limitation of lacking intermedi- ate process evaluation, we introduce MultiStAR, a Multi-Stage Abstract Visual Reasoning dataset, designed to evaluate MLLMs on the intermediate steps in the reasoning process. As shown in Fig- ure 1, the dataset is divided into two sub-tasks, each focusing on different aspects of reasoning. The first sub-task, referred to as Direct Answer , evaluates model performance at varying levels of complexity to assess perception and reasoning abil- ities at each individual step. Using template-based methods, we generate questions based on RA VEN, ranging from basic object recognition to advanced comparison, pattern recognition, and rule inference. This approach ensures comprehensive coverage of reasoning patterns. The second sub-task called Logical Chain , emphasizes how models measure and maintain logical correlations across reasoning steps. Using puzzles from the RA VEN as the final question, we decompose the reasoning process into a sequence of subproblems in a bottom-up manner. Each stage in this chain links the current reasoning task to its dependent subproblems, requiring
the model to combine current information with outputsfrom previous stages. To assess the correctness of intermediate steps, we introduce a novel met- ric, MSEval, which provides a more fine-grained assessment of the model’s reasoning process for the logical chain task. MSEval uses the correct answer probabilities at each stage to compute the joint probability across the reasoning process. This approach considers both the correctness of the cur- rent stage and all dependent intermediate steps. In summary, our contributions are: 1) We intro- duce the MultiStAR benchmark, designed to eval- uate models across different stages of reasoning through two subtasks, allowing for a more granular analysis of their performance throughout the rea- soning process. 2) We present a novel metric that incorporates the correctness of the current stage as well as the accuracy of its dependent intermediate steps. 3) We perform extensive experiments on a wide range of state-of-the-art MLLMs, providing insights into their strengths, weaknesses, and future improvement directions on A VR tasks. 2 Related Work Visual reasoning benchmarks have evolved to as- sess the capacity of AI models on various tasks in- cluding compositional (Johnson et al., 2017), com- monsense (Gao et al., 2022; Li et al., 2024), scien- tific (Hiippala et al., 2021; Saikh et al., 2022; Yue et al., 2024), and abstract visual reasoning. Both commonsense and scientific reasoning tasks require real-world knowledge and a prior understanding of specific domains. Abstract Visual Reasoning (A VR) benchmarks the main focus of this work, primarily involving classification tasks, where mod- els select an answer from a fixed set of choices based on abstract patterns and rules (Ma ´ndziuk and ˙Zychowski, 2019; Zhang et al., 2019; San- toro et al., 2018; Nie et al., 2020). A few other A VR benchmarks address generative tasks, where models are tasked with recreating elements that fit within a given visual sequence, introducing addi- tional complexity by evaluating a model’s creative reasoning capabilities (Chollet, 2019; Moskvichev et al., 2023). The most similar benchmark to ours is MARVEL (Jiang et al., 2024b), which targets A VR tasks and extends reasoning diversity with six core patterns across geometric and abstract shapes. It also includes basic perception questions to as- sess visual comprehension. However, MARVEL is still limited in its capacity to analyze intermediate reasoning steps. Table 1 shows the key statistics DatasetNum. of ImagesNum. of QA pairsReasoning Domain (Task Focus)Question GenerationAnswer TypeFunctional ProgramMulti-Step Structure CLEVR (Johnson et al., 2017) 100K 1M Compositional (3D shapes) Template Open QA ✓ ✗ CRIC (Gao et al., 2022) 96K 494K Commonsense (Daily life) Template Open QA ✓ ✗ AI2D (Hiippala et al., 2021) 5K 15K Scientific (Science diagram) Manual MCQA ✗ ✗ ScienceQA (Saikh et al., 2022) 10K 21K Scientific (Science problems) Manual MCQA ✗ ✗ MMMU (Yue et al., 2024) 11.5K 11.5K Scientific (Exam questions) Manual MCQA ✗ ✗ SEED-Bench (Li et al., 2024) 19K 19K Commonsense (Spatial, temporal) Neural MCQA ✗ ✗ MARVEL (Jiang et al., 2024b) 0.8K 3K Abstract shapes Template Open QA ✗ ✗ MultiStAR (Direct Answer) 8.1K 21.7K Abstract shapes Template & Neural MCQA ✓ ✗ MultiStAR
(Logical Chain) 0.56K 3.92K Abstract shapes Template & Neural MCQA ✓ ✓ Table 1: Comparison of various VQA datasets. Template : generated using predefined rules, Manual : written by humans, Neural : generated using large language models, Template & Neural : generated using predefined rules and rewritten by large language models. Open QA : free-text answers, MCQA : Multiple-Choice Question Answering. Functional Program : Indicates whether the dataset is automatically created by functional programs. Multi-Step Structure : Highlights whether the dataset includes a hierarchical structure with interdependent reasoning steps. 2R1R2P1P 1R2P 2R F1P One Panel Basic 1P -B Two Panels Compare 2P Is the color of all the objects in le�panel the same as, darker or brighter than the objects in right panel?What is the shape of the object at bo�om le�?One Panel Compare 1P -C Are all en��es in this panel of the same size? One Row Deduc�on 1R Iden�fy the rule about number of objects. Two Rows Deduc�on 2R Iden�fy a rule that dictates the color of objects in both rows. i) One-Panel iii) One-Row (1R)ii) Two -Panels v) RAVEN puzzle (Final)Does the le� panel contain the same number, more or fewer objects…How many objects … Inspect the panels from le� to right and iden�fy the rule governing the number of objects. Determine the rule connec�ng the number of objects in each row. iv) Two -Rows (2R) You are presented with a 3x3 grid of panels, called the 'Problem Matrix.’…Task B: Logical Chain F2P1P 1P Number Focused Number Posi�on …Shape, Size, ColorTask A: Direct Answer Visualized Logical Chain You are presented with a 3x3 grid of panels, called the 'Problem Matrix.’…RAVEN Final Figure 2: Left Part: Direct Answer subtask, showcasing six configurations along with their corresponding examples. Right Part: Logical Chain task, presenting a partial view of the logical chain (See full chain and the chain designing rationale in Appendix A.3). Examples are provided for one specific path in the chain. and features comparison of major multimodal rea- soning datasets alongside our proposed MultiStAR benchmark. 3 Multi-stage Evaluation Benchmark 3.1 Task Definition and Configuration Our dataset comprises two sub-tasks, Direct An- swer and Logical Chain, both derived from RA VEN but with distinct reasoning patterns and focuses. 3.1.1 Direct Answer To uncovering where the MLLMs likely to succeed or struggle in the individual stages , this sub-task ex- plores A VR across multiple levels, which is divided into six configurations, shown in Figure 2: a) One Panel Basic Perception ( 1P-B ):The puzzle image consists of a single panel I=p, focusing on basic perception questions, such as determining the number of objects, the shape, or the position of a single object, without requiring any comparison. b) One Panel Comparison ( 1P-C ):The puzzle im- age remains a single panel, but questions requireintra-panel attribute comparisons. c) Two Panels Comparison ( 2P):The puzzle image consists of two panels, I= (p1,p2), requiring cross-panel comparisons. d) One Row Rule Deduction ( 1R):The puzzle image is a single row of three panels, I= (p1,p2,p3), and the task involves identifying a rule that governs the sequence.
e) Two Row Rule Deduction ( 2R):The puzzle image contains the first two rows, each with three panels, denoted as I= ({p1,1,p1,2,p1,3}, {p2,1,p2,2,p2,3}). The task is to find a rule that applies to both rows. f) RAVEN puzzle ( Final ):The original puzzle from RA VEN dataset. Formally, given an puzzle image I(which con- sist of one or more panels p) and a question q, the task is to select an answer afrom a set of k multiple-choice options: a∗= arg max a∈AP(a|I, q) (1) where A={a1, . . . , a k}is the answer set. 3.1.2 Logical Chain To measure the sequential steps of the reasoning process required to reach the final answer, rather than evaluating stages in isolation, the second Log- ical Chain task extends reasoning across multiple subproblems, introducing dependencies between stages to form a coherent logical chain. As illus- trated in the right part of Figure 2, each node rep- resents a stage question, and edges are connected if the previous information is necessary to answer the current stage. This task consists of five stages, similar to Di- rect Answer subtask: 1P(Merged 1P-B and 1P-C), 2P,1R,2RandFinal . Specifically, each node tin- volves predicting an answer atbased on the current question qt, the current image It, and information from prior stages Ht, the task is defined as: a∗ t= arg max at∈AtP(at|It, qt,Ht) (2) Ht={Re-Format (qj, aj)|j∈ Dt} (3) where Htrepresents the set of prior information, as determined by the pre-defined logical chain Dt, specifying one or more nodes that current node t depends on. As the images referenced by prior questions change across different stages, we use a rule-based program to reformat each dependent question qjand the generated answer aj, appending this prior information before the current question to construct the input for the current node. Details of this program are provided in Appendix A.4.2. 3.2 Dataset Creation Data Sources: Our MultiStAR dataset is derived from the RA VEN dataset, which its associated XML files provide objects details and ground-truth logical rules for generating each puzzle. Define Templates : We pre-define question tem- plates for all six configurations, each template in- cluding a question format, constraints, an answer space, and a corresponding function sequence, as illustrated in Figure 3. Overall, we created 25 dis- tinct templates, details are shown in Appendix A.2. Question Generation : By leveraging the puzzle information and the pre-defined templates, we im- plement an automated template-based generation process to efficiently produce large-scale question- answer pairs. Firstly, to enrich question formats and linguistic diversity, we employ GPT-4o (Ope- nAI, 2024) to rewrite the templates. Then, follow- ing a methodology similar to the CLEVR dataset(Johnson et al., 2017), we design functional pro- grams that execute a sequence of functions. For instance, as shown in Figure 3, the program “Scene Retrieve →Panel Retrieve →Filter Unique → Shape Query →Compare Shape” identifies the puzzle matrix, retrieves the relevant panel <P>, lo- cates objects at positions <X1> and <X2>, queries their shapes, and compares them to determine the ground-truth answer. And lastly, the multiple- choice options are sampled from
the answer space.

Subtasks Creation: To create the Direct Answer subtask, we first sample XML files from RAVEN and, for each XML file, generate one question for each template. During question formation, placeholders (e.g., <X1>, <X2>) are replaced with randomly selected values consistent with the value ranges and constraints. Next, we create the Logical Chain subtask by first filtering out templates that do not contribute useful information for building the logical chain (i.e., the question does not provide necessary input for its child nodes). To simplify chain construction, the first two one-panel configurations, 1P-B and 1P-C, are combined into a single stage representing one-panel information. During question generation, one question is created for each node, with placeholders such as panel <P> replaced by the values corresponding to the current node's position in the chain. For instance, if there are three 1P nodes in the chain, they correspond to panels 1, 2, and 3, respectively. Questions are then grouped by attributes such as number and position, aligning with how the chain is constructed. Finally, we assign the previous nodes for each question to establish the edges between nodes. For a detailed analysis of MultiStAR, such as the question length and function distribution, please see Appendix A.1.

Human Verification: To evaluate the quality of the automatically generated question-answer pairs, we also conduct a human study based on three aspects: Correctness, Clarity, and Content Validity. The results show that our dataset performs well across all aspects; see Appendix A.6.1 for details.

Figure 3: Our MultiStAR dataset generation pipeline.

3.3 Evaluation Metrics

We use accuracy for the Direct Answer subtask, as it directly aligns with the task of selecting the correct answer from multiple choices. However, for the Logical Chain subtask, accuracy alone does not consider intermediate reasoning steps, focusing only on the end result. To address this limitation and better align with the step-by-step reasoning
3.3 Evaluation Metrics

We use accuracy for the Direct Answer subtask, as it directly aligns with the task of selecting the correct answer from multiple choices. However, for the Logical Chain subtask, accuracy alone does not consider the intermediate reasoning steps, focusing only on the end result. To address this limitation and better align with the step-by-step reasoning process, we introduce a new metric, MSEval. As the example in Figure 4 illustrates, the score for the 1R node is designed to aggregate over all of its related nodes, which include three 1P nodes, two 2P nodes, and the 1R node itself. This aggregation captures the interconnectivity between nodes in the logical chain. To reflect each node's contribution to the reasoning process, MSEval assigns a weight to each of these nodes based on its importance. Specifically:

Figure 4: An example MSEval score calculation for the 1R node, which depends on the 1P and 2P nodes and on 1R itself. The corresponding weights are denoted $w_a$ through $w_f$.

Aggregated Outcomes: To aggregate the intermediate outcomes across the chain, MSEval computes a joint probability of the current node and all dependent nodes as the product of their conditional probabilities. So that the model's performance is not disregarded when its prediction is incorrect, each conditional probability is derived from the probability assigned to the ground truth. Using the logits from the model's final layer for the answer choices (e.g., A, B, C, D), the logit of the correct answer is transformed into a probability $p^{(i)}_j$ via the softmax function. This is defined as:
$$\text{JointProb}^{(i)}_{t} = \text{Norm}\Big(P\big(a^{(i)}_t = a^{(i)*}_t, \Phi^{(i)}_t \mid \xi^{(i)}_t\big)\Big) = \prod_{j \in \mathcal{D}_t \cup \{t\}} \exp\!\Big(\frac{p^{(i)}_j}{\epsilon^{(i)}_j}\Big) \tag{4}$$
$$p^{(i)}_j = P\big(a^{(i)}_j = a^{(i)*}_j \mid \mathcal{H}^{(i)}_j\big) = \frac{\exp\big(z_{a^{(i)*}_j}\big)}{\sum_{k \in \mathcal{A}^{(i)}_j} \exp(z_k)} \tag{5}$$
where $z_{a^{(i)*}_j}$ is the logit of the correct answer $a^{(i)*}_j$, $\mathcal{A}^{(i)}_j$ is the set of all possible answers, and $\epsilon^{(i)}_j = 1/|\mathcal{A}^{(i)}_j|$ is the random-guess rate. The $\exp(\cdot)$ term normalizes the probability to account for the varying number of choices across nodes. $\Phi^{(i)}_t = \{a^{(i)}_j = a^{(i)*}_j \mid j \in \mathcal{D}_t\}$ denotes the correct-answer events of all dependent nodes, and $\xi^{(i)}_t = \{\mathcal{H}^{(i)}_j \mid j \in \mathcal{D}_t \cup \{t\}\}$ is the set of prior information for each node.

Weighted Importance: The joint probability does not account for the relative importance of each node in the chain. To address this, we introduce a weight for each node based on its influence on the current node, measured by conditional mutual information (CMI). CMI is obtained by altering the set of possible answers $\mathcal{A}^{(i)}_j$ at node $j$ while keeping the outputs of all other nodes ($\mathcal{D}^{(i)}_t \setminus \{j\}$) fixed, and then observing how the model's outputs $A^{(i)}_{j \to t}$ at the current node $t$ change. If $A^{(i)}_{j \to t}$ changes significantly, the CMI is higher, resulting in a higher weight. As raw CMI values may vary in scale, we normalize the conditional mutual information (NCMI). This process is defined as:
$$\text{CMI}(i, j, t) = \text{CMI}\big(A^{(i)}_j; A^{(i)}_{j \to t} \mid \mathcal{D}^{(i)}_t \setminus \{j\}\big) = H\big(A^{(i)}_{j \to t} \mid \mathcal{D}^{(i)}_t \setminus \{j\}\big) + H\big(A^{(i)}_j \mid \mathcal{D}^{(i)}_t \setminus \{j\}\big) - H\big(A^{(i)}_{j \to t}, A^{(i)}_j \mid \mathcal{D}^{(i)}_t \setminus \{j\}\big) \tag{6}$$
$$\text{NCMI}(i, j, t) = \frac{\exp\big(\text{CMI}(i, j, t)\big)}{\sum_{k \in \mathcal{D}_t \cup \{t\}} \exp\big(\text{CMI}(i, k, t)\big)} \tag{7}$$
where $H(\cdot)$ denotes entropy. Note that for the current node $t$ we have $A^{(i)}_t = A^{(i)}_{t \to t}$, so the current node always has the highest impact on itself. We apply the NCMI to each node's conditional probability to compute its weighted contribution to the reasoning process.
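As a complement to the pseudo-code in Appendix A.8.1, the following is a minimal sketch of how the per-node probabilities (Eq. 5), the NCMI weights (Eq. 7), and their weighted combination (formalized as Eqs. 8-9 below) could be computed from per-node answer logits. The identifiers are illustrative, and the raw CMI values are assumed to have been pre-computed by the perturbation procedure described above.

```python
import numpy as np

def correct_answer_prob(logits: np.ndarray, correct_idx: int) -> float:
    """Eq. 5: softmax probability assigned to the ground-truth option."""
    z = logits - logits.max()                   # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs[correct_idx])

def ncmi_weights(cmi_values: dict) -> dict:
    """Eq. 7: softmax-normalize the raw CMI values over D_t ∪ {t}."""
    nodes = list(cmi_values)
    vals = np.array([cmi_values[n] for n in nodes])
    vals = vals - vals.max()
    weights = np.exp(vals) / np.exp(vals).sum()
    return dict(zip(nodes, weights))

def mseval_score(node_logits: dict, correct_idx: dict, cmi_values: dict) -> float:
    """Eqs. 8-9: weighted sum of p_j / eps_j over the current node and its deps."""
    weights = ncmi_weights(cmi_values)
    score = 0.0
    for node, logits in node_logits.items():
        p_j = correct_answer_prob(logits, correct_idx[node])
        eps_j = 1.0 / len(logits)               # random-guess rate for this node
        score += weights[node] * (p_j / eps_j)
    return score

# Toy example: a 1R node with one dependent 1P node and one dependent 2P node.
node_logits = {
    "1P": np.array([2.0, 0.5, 0.1, 0.1]),       # logits over four answer options
    "2P": np.array([0.3, 1.2, 0.9, 0.4]),
    "1R": np.array([0.2, 0.2, 1.5, 0.6]),
}
correct_idx = {"1P": 0, "2P": 1, "1R": 2}       # indices of the ground-truth options
cmi_values = {"1P": 0.4, "2P": 0.5, "1R": 0.9}  # assumed pre-computed raw CMI values
print(round(mseval_score(node_logits, correct_idx, cmi_values), 3))
```

Note that when every node is answered at the random-guess level ($p_j = \epsilon_j$), each ratio equals 1 and the weights sum to 1, which is consistent with the random-baseline MSEval of 1.00 reported in Table 3.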
To simplify the formulation, we apply the logarithm to this expression. The final MSEval score for stage $t$ and instance $i$ is computed as:
$$\text{MSEval}^{(i)}_{t} = \log \prod_{j \in \mathcal{D}_t \cup \{t\}} \Big(\exp\Big(\frac{p^{(i)}_j}{\epsilon^{(i)}_j}\Big)\Big)^{\text{NCMI}(i,j,t)} = \sum_{j \in \mathcal{D}_t \cup \{t\}} \text{NCMI}(i, j, t) \cdot \frac{p^{(i)}_j}{\epsilon^{(i)}_j} \tag{8}$$
$$\text{MSEval}^{(i)}_{t} = \sum_{j \in \mathcal{D}_t \cup \{t\}} w^{(i)}_j \cdot \frac{p^{(i)}_j}{\epsilon^{(i)}_j} \tag{9}$$
Because MSEval relies on access to the logits of the model's final layer, it can only be applied to open-source models. More details about MSEval, such as the algorithm pseudo-code and computational cost, are shown in Appendix A.8.

4 Results

In the Direct Answer subtask, we evaluate 17 representative MLLMs in a zero-shot setting, including both open-source and closed-source models. For open-source models, we include both pre-trained (not instruction-tuned) and instruction-tuned models. Additionally, we consider a range of model sizes to ensure a comprehensive evaluation. The model settings and input prompt details are shown in Appendix A.4.1.

In the Logical Chain subtask, we evaluate the six models that performed well in the Direct Answer subtask, again in a zero-shot setting. To measure how much benefit the models gain from prior information, we also evaluate the models without providing prior information ($\mathcal{H}_t = \emptyset$), serving as a baseline for comparison. In this case, since only the current stage is considered, the MSEval score simplifies to $\text{MSEval}^{(i)}_t = p^{(i)}_t \, |\mathcal{A}^{(i)}_t|$ (i.e., $p^{(i)}_t / \epsilon^{(i)}_t$). The model settings and input prompt details are shown in Appendix A.4.2.

To establish human performance on the subproblems within MultiStAR, we conduct a human study on a crowd-sourcing platform in which participants solved a 10% subset of the benchmark. We do not evaluate human performance on the original RAVEN puzzles, but instead use the result reported in Zhang et al. (2019). Please see Appendix A.6.2 for more details about the human study.

4.1 Result Analysis

Direct Answer: Table 2 compares the performance of various MLLMs on the Direct Answer task. The two closed-source models, GPT-4o and Gemini-1.5-pro, outperform the others in the 1P-B, 1R, and 2R stages. GPT-4o achieves an impressive 88.07% accuracy on basic object-oriented questions within a single panel, highlighting its strong capability to recognize simple visual patterns. Gemini achieves the best results in rule deduction tasks for both one-row and two-row configurations, indicating its superior ability to process more complex visual inputs and perform logical reasoning effectively. Among open-source models, pre-trained models generally perform worse than instruction-tuned ones, emphasizing that instruction tuning is an effective approach for addressing question-answering and reasoning tasks. Qwen2-VL-Instruct-72B achieves the best performance on cross-panel comparison tasks, showcasing its strength in identifying relationships between objects across panels. Regarding the limitations of the models, we performed an error analysis (refer to Appendix A.4.2) and identified that the models mainly face challenges with deep reasoning errors. Additionally, decoder-only models such as Qwen2-VL, InternVL2, and NVLM-D tend to perform much better than encoder-decoder models like InstructBLIP or LLaVA-v1.5. This suggests that decoder-only architectures may be more effective for step-by-step reasoning to some extent. However, the observed performance gap may not be attributed solely to architectural differences; there are other potential confounds such as improved
training data or additional new functionalities. Interestingly, as the questions become increas- ingly complex and require deeper reasoning, a no- ticeable decline in performance is observed across all models, gradually approaching the random base- line. While models demonstrate strong perfor- mance on basic perception tasks, they struggle sig- nificantly with deeper reasoning challenges. In con- trast, human performance remains stable at above 60% with increasing complexity. This highlights the substantial gap between model and human per- formance, emphasizing the limitations of current MLLMs in understanding and reasoning at a level comparable to humans. Another finding is that the performance gener- ally improves with larger model sizes (See Ap- pendix A.9.1 for detailed visual analysis.), mainly due to differences in the size of their language en- coders. This indicates that a robust language en- coder significantly influences overall performance. 1P-B 1P-C 2P 1R 2R Final Close-Source Models GPT-4o (2024) 88.1 72.7 54.0 40.0 31.6 12.1 Gemini-1.5-pro (2023) 83.2 75.0 50.0 46.9 37.8 11.6 Pre-trained Open-Source Models Qwen-VL-7B (2023) 17.5 24.7 22.5 15.8 12.2 12.3 Idefics2-8B (2024) 17.2 33.0 27.3 19.8 21.4 12.3 xGen-MM-4B (2024) 40.2 31.8 12.5 24.1 23.9 3.4 Instruction-Tuned Open-Source Models Instructblip-7B (2023) 27.5 37.1 27.7 14.0 13.5 11.6 Instructblip-13B (2023) 29.4 39.0 26.9 25.0 23.0 14.3 LLaV A-v1.5-7B (2024) 47.8 50.2 32.9 27.1 25.7 13.3 LLaV A-v1.5-13B (2024) 59.6 47.6 15.9 26.7 26.9 11.3 Idefics2-8B (2024) 85.1 65.5 42.0 37.0 36.8 29.9∗ xGen-MM-4B (2024) 81.2 47.9 21.9 24.0 25.6 2.4 Qwen2-VL-2B (2024) 40.4 42.7 22.8 13.7 11.4 9.9∗ Qwen2-VL-7B (2024) 64.8 56.6 47.9 31.0 33.2 24.3∗ Qwen2-VL-72B (2024) 86.9 77.8 60.2 45.5 21.9 63.7∗ NVLM-D-72B (2024) 80.5 67.1 45.3 39.1 31.3 12.7 Intern-VL2-2B (2024) 54.0 48.7 27.3 26.9 23.9 10.1 Intern-VL2-8B (2024) 63.0 54.5 34.2 23.2 23.4 14.6 Random 39.9 23.0 26.2 25.0 25.0 12.5 Human 98.5 88.9 69.1 62.1 63.3 84.4‡ Table 2: The answer accuracy of MLLMs for the Direct Answer subtask. The best results are highlighted in bold. ∗The model may have included the RA VEN dataset in training, these results are no longer comparable as baselines .‡Result reported in (Zhang et al., 2019). Logical Chain: Table 3 shows the performance of MLLMs on each stage of the Logical Chain task. Results are reported as accuracy and MSE- val scores. Among all models, Gemini-1.5-pro achieves the best performance on the first four stages, demonstrating its superior ability to rea- son through multi-stage dependencies. Among the open-source models, Qwen2-VL-72B outperforms others in both accuracy and MSEval, suggesting that our MSEval metric generally aligns with ac- curacy. When comparing results with and without prior, all models show better performance when prior is available in both metrics, highlighting their capacity to benefit from step-by-step reasoning, even when some generated previous answers might be incorrect. For visual representation of the per- centage increase, refer to Appendix A.7. Interestingly, for the final stage involving the RA VEN puzzle, prior information appears to pro- vide limited utility for most models, with accu- racy close to random except for those that might touch on RA VEN tasks. This aligns with previous findings that chain-of-thought
reasoning models struggle to solve RA VEN puzzles (Ahrabian et al., 2024; Gendron et al., 2024). However, MSEvalMetric Prior 1P 2P 1R 2R Final GPT-4o Accw/o 73.8 39.1 34.7 28.9 15.7 w 73.8 43.9 41.8 50.6 10.0 Gemini Accw/o 75.5 61.6 49.6 44.6 5.7 w 75.5 64.4 52.6 57.1 18.6 Idefics2 (8B)Accw/o 57.8 39.6 34.4 35.1 25.7∗ w 57.8 37.8 36.6 42.4 25.7∗ MSEvalw/o 2.02 1.29 1.29 1.27 1.24∗ w 2.02 1.48 1.51 1.51 1.44∗ Qwen2-VL (72B)Accw/o 74.1 56.0 45.1 42.9 64.3∗ w 74.1 57.8 47.3 54.2 65.7∗ MSEvalw/o 2.54 1.95 1.79 1.70 5.14∗ w 2.54 2.13 2.12 2.10 3.31∗ Intern-VL2 (8B)Accw/o 54.4 35.9 21.6 21.6 18.6 w 54.4 41.9 31.6 33.5 17.1 MSEvalw/o 1.75 1.09 0.90 0.91 1.18 w 1.75 1.38 1.30 1.18 1.26 NVLM-D (72B)Accw/o 66.1 42.4 37.8 23.5 8.6 w 66.1 45.2 39.1 43.3 7.1 MSEvalw/o 2.25 1.20 1.28 1.02 0.76 w 2.25 1.65 1.69 1.62 1.41 RandomAcc - 31.1 31.7 25.0 25.0 12.5 MSEval - 1.00 1.00 1.00 1.00 1.00 Table 3: The Accuracy (Acc) and MSEval scores for the Logical Chain task. *The model may have included the RA VEN dataset in training, these results are no longer comparable as baselines . The highest accuracy are high- lighted in bold . The highest MSEval are highlighted inunderline . w/o: without prior, w: with prior. See Appendix A.4.2 for results of more models. scores tell a different story. Models with prior information, particularly NVLM-D-72B, show sig- nificant improvements in MSEval (close to 100%), despite low final stage accuracy. Since MSEval evaluates correctness across intermediate and cur- rent stages, it reveals that models, despite failing the final stage, often solve intermediate steps with higher confidence. Another finding is the MSEval score for Qwen2-VL-72B declines when provided with prior information, which is not consistent with accuracy. This indicates its weakness in address- ing intermediate stages that were likely not part of its pre-training. Despite errors in earlier stages, the model still performs well on the final question, suggesting it relies on memorizing patterns from the final stage rather than demonstrating a strong understanding of the logical reasoning behind the task. This highlights a critical limitation in current MLLMs, while they may achieve impressive re- sults in isolated cases, their ability to generalize and reason through multi-stage logical depen- dencies remains inadequate . To further verify the MSEval’s effectiveness, we also conduct qualita- tive analysis in section 5. 5 Discussion What insights can be drawn from each at- tribute’s performance? From Figure 5, the “number” attribute is the easiest to recognize in higher level configurations (2P, 1R, 2R), while “po- sition” is the most easily identified in low-level, single-panel settings. Some attributes achieve accu- racy above 90%, indicating that the models exhibit strong counting and spatial reasoning capabilities. However, they struggle with attributes like “color” and “size”, particularly in high-level configurations, suggesting that the models may not be adequately designed or trained to focus on these aspects. Figure 5: Breakdown analysis of five attributes for Gemeni-1.5-pro and Qwen2-VL-72B on the Direct An- swer task. Refer to the Appendix A.9.2 for more
models. Given the ground truth for intermediate steps, how does it influence the final results? Table 4 highlights the ground truth priors generally demon- strate a positive impact. For example, the 1R stage benefits significantly from the insights about each panel and intra-panel comparisons. The 2R stage also sees substantial gains, as it mainly relies on double-checking information from the 1R stage without requiring additional changes in most cases. However, the final stage experiences a negative im- pact despite the inclusion of correct rules. This may be attributed to the complexity of the visual input, which contains numerous objects, making it challenging for the model to effectively apply the given rules. And for the Qwen2-VL-72B model, its tendency to memorize patterns might turn these ground truths into noise. How much do previous stages influence the cur- rent stage? A key step in our MSEval metrics is measuring the relative importance of intermediate dependent stages to the current stage using NCMI.Prior Info 1P 2P 1R 2R Final GPT-4ow/o 73.8 39.1 34.7 28.9 15.7 GT 73.8 49.2 66.0 93.8 14.3 Geminiw/o 75.5 61.6 49.6 44.6 5.7 GT 75.5 62.4 68.9 79.5 7.1 Qwen2-VL (72B)w/o 74.1 56.0 45.1 42.9 64.3∗ GT 74.1 61.9 70.7 92.2 55.7∗ NVLM-D (72B)w/o 66.1 42.4 37.8 23.5 8.6 GT 66.1 51.9 66.4 96.0 7.14 Table 4: The accuracy with incorporating ground truth information at each stage for Logical Chain task. * These results are no longer comparable as baselines. This allows us to assess how each step in the chain depends on prior stages, helping verify whether previous information is useful and if the designed chain is logically sound . As shown in Table 5, prior information often has significant weight on the cur- rent stage, except for the position attribute at “1P to 2P”. This suggests that querying object position in a single panel has little impact on determining if positions are the same across panels. Attributes 1P to 2P (1P,2P) to 1R 1R to 2R 2R to F Qwen2-VL (72B)Number 0.40 0.54 0.33 0.42Position 0.21 0.48 0.39 Shape 0.37 0.51 0.27 Color 0.36 0.53 0.33 Size 0.37 0.51 0.32 NVLM-D (72B)Number 0.40 0.56 0.37 0.63Position 0.21 0.50 0.42 Shape 0.40 0.52 0.35 Color 0.38 0.54 0.38 Size 0.39 0.54 0.38 Table 5: Average NCMI weight assigned to all depen- dent stages, grouped by each attribute. How do variations in handling long prompts affect model outcomes? Injecting prior infor- mation into prompts significantly increases their length (see Appendix A.1 for details), making it more challenging for models to focus on critical de- tails. To address this issue, we proposed two meth- ods: (1) adding HTML tags to structure the prompt by separating prior information, background, and questions, enabling the model to clearly distinguish each part, and (2) formatting the prompt as a PDF document with distinct sections and titles. Table 6 demonstrates that HTML tagging provides no- table benefits, particularly Qwen2-VL and GPT-4o, while the document-based approach proves less ef- fective, especially for high-level stages. However, for other open-source models, they yields no im- provements (see Appendix A.9.3 for
further details and examples of the conversion methods). Prior 1P 2P 1R 2R Final GPT-4oVanilla 73.8 43.9 41.8 50.6 10.0 Struct. 82.2 64.4 47.8 50.9 8.6 Doc. 80.8 44.8 31.1 24.9 10.0 GeminiVanilla 75.5 64.4 52.6 57.1 18.6 Struct. 70.6 66.4 52.9 57.8 17.1 Doc. 69.6 51.0 36.7 33.1 14.3 Qwen2Vanilla 74.1 57.8 47.3 54.2 65.7∗ Struct. 77.2 67.7 55.1 53.6 61.4 Doc. 76.5 63.1 50.2 46.6 24.3 Table 6: The accuracy of three prompting techniques for prior information for Logical Chain task. Vanilla : Pure Text, Struct. : Structure (HTML), Doc. : Document. *These results are no longer comparable as baselines. DS-GT: B DS-Pre: B DS-Name and Image What is the shape of the object in the le� part of the panel?A: circle B: hexagon C: triangle D: squareDS-GT: B DS-Pre: B17.16 25.5 16.0 16.0DS-Ques�on DS-Answer/ Logits DS-GT/Pre 1-Panel (1P) Mul�ple Available Consider only the le� part of the two panels. Is the shape of all the objects in the le� panel have the same, more, or fewer edges compared with the objects in the right one.A: Not Comparable B: Fewer C: The Same D: MoreDS-GT: D DS-Pre: D21.65 20.88 20.0 22.252-Panel Mul�ple Available Look at the three panels in the image from le� to right, paying a�en�on only to the le� por�ons of each panel, and iden�fy the rule that controls the shape of objects. One Row (1R)DS-GT: C DS-Pre: COther IPs Other 2P Acc Incorrect (0.0) MSEval 2.51 MSEval baseline 1.00DS-GT: C DS-Pre: C A: Edges↓1 B: Edges ↑1 C: No rule D: Shape same17.16 25.5 16.0 16.0DS-GT: A DS-Pre: B Figure 6: The top two rows are dependent stages (All Correct), the bottom row is current stage (Incorrect). 6 Qualitative Analysis To highlight the advantages of our MSEval metric over traditional accuracy, we provide several con- crete examples across different scenarios. Figure 6 shows a case where the final answer to the cur- rent question is incorrect, resulting in an accuracy score of 0.0. However, the model demonstrates strong performance in intermediate steps, correctly solving the one-panel and two-panel comparisons with high confidence. Additionally, the logits for the correct answer "A" are only slightly lower than the highest logits. By considering these factors, the MSEval metric assigns a reasonable score, re- flecting the model’s partial success. Further exam- ples, including cases where the current question is correct but intermediate steps are incorrect, are provided in Appendix A.10. 7 Conclusion In this work, we propose the MultiStAR bench- mark. While current models perform well on basicperception tasks, they face significant challenges with deeper reasoning stages. Our findings also in- dicate that models have the potential to benefit from step-by-step reasoning. However, despite extensive training yielding impressive results in isolated sce- narios, their ability to handle logical dependencies remains limited. We introduce a metric MSEval, which can be applied to a variety of reasoning tasks beyond visual reasoning, including domains such as mathematics and science, where multi-step logic is critical, provided there are clearly defined chains. 8 Limitation The automatic generation methods we use are restricted to datasets with clearly
defined object attributes, such as the XML files provided by RA VEN. This limits our expansion to RA VEN dataset, as most datasets lack such metadata. Ex- panding these methods to other datasets will require machine learning approaches, such as automatic object boundary detection, which could eliminate the need for metadata files. The logical chain design in our dataset is not perfect. In some cases, prior information is insuf- ficient for the current stage, such as instances in the one-row rule deduction stage where the rule might involve "Three Different Numbers", in this case, we also need the information about the sec- ond row. To make the chain construction more easily, currently, we design chains at the "Corpus- Level," meaning they are fixed across all instances. Future work could explore automatic "Instance- Level" chain construction methods, enabling mod- els to dynamically generate chains based on pat- terns within individual examples. As the results show (especially in the final stage), current models still lack the ability to navigate multi-stage logical dependencies effectively, even when trained on the data. We do not address this issue in the current work. Future research could explore optimization methods that focus on improv- ing intermediate reasoning steps, rather than just the final outcome, to enhance models’ multi-step reasoning capabilities. Acknowledgments References Kian Ahrabian, Zhivar Sourati, Kexuan Sun, Jiarui Zhang, Yifan Jiang, Fred Morstatter, and Jay Pujara. 2024. The curious case of nonverbal abstract rea- soning with multi-modal large language models. In Proceedings of Thirty Seventh Conference on Learn- ing Theory . Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966 . Xu Cao, Bolin Lai, Wenqian Ye, Yunsheng Ma, Jo- erg Heintz, Jintai Chen, Jianguo Cao, and James M Rehg. 2024. What is the visual cognition gap be- tween humans and multimodal llms? arXiv preprint arXiv:2406.10424 . Zhe Chen, Weiyun Wang, Hao Tian, Shenglong Ye, Zhangwei Gao, Erfei Cui, Wenwen Tong, Kongzhi Hu, Jiapeng Luo, Zheng Ma, et al. 2024. How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites. arXiv preprint arXiv:2404.16821 . François Chollet. 2019. On the measure of intelligence. arXiv preprint arXiv:1911.01547 . Wenliang Dai, Nayeon Lee, Boxin Wang, Zhuolin Yang, Zihan Liu, Jon Barker, Tuomas Rintamaki, Moham- mad Shoeybi, Bryan Catanzaro, and Wei Ping. 2024. Nvlm: Open frontier-class multimodal llms. arXiv preprint arXiv:2409.11402 . Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. Instructblip: Towards general-purpose vision- language models with instruction tuning. Preprint , arXiv:2305.06500. Jiajun Deng, Zhengyuan Yang, Tianlang Chen, Wen- gang Zhou, and Houqiang Li. 2021. Transvg: End- to-end visual grounding with transformers. In Pro- ceedings of the IEEE/CVF International Conference on Computer Vision , pages 1769–1779. Yihao Ding, Siwen Luo, Hyunsuk Chung, and Soyeon Caren Han. 2023. Vqa: A new dataset for real-world vqa on pdf documents. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases
, pages 585–601. Springer. Difei Gao, Ruiping Wang, Shiguang Shan, and Xilin Chen. 2022. Cric: A vqa dataset for compositional reasoning on vision and commonsense. IEEE Trans- actions on Pattern Analysis and Machine Intelligence , 45(5):5561–5578. Gael Gendron, Qiming Bao, Michael Witbrock, and Gillian Dobbie. 2024. Large language models are not strong abstract reasoners yet. In ICLR 2024 Work- shop: How Far Are We From AGI . Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 6904–6913.Ruozhen He, Paola Cascante-Bonilla, Ziyan Yang, Alexander C Berg, and Vicente Ordonez. 2024. Im- proved visual grounding through self-consistent ex- planations. In Proceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition , pages 13095–13105. Tuomo Hiippala, Malihe Alikhani, Jonas Haverinen, Timo Kalliokoski, Evanfiya Logacheva, Serafina Orekhova, Aino Tuomainen, Matthew Stone, and John A Bateman. 2021. Ai2d-rst: A multimodal cor- pus of 1000 primary school science diagrams. Lan- guage Resources and Evaluation , 55:661–688. Yanbei Jiang, Krista A Ehinger, and Jey Han Lau. 2024a. Kale: An artwork image captioning system aug- mented with heterogeneous graph. arXiv preprint arXiv:2409.10921 . Yifan Jiang, Jiarui Zhang, Kexuan Sun, Zhivar Sourati, Kian Ahrabian, Kaixin Ma, Filip Ilievski, and Jay Pu- jara. 2024b. Marvel: Multidimensional abstraction and reasoning through visual evaluation and learning. arXiv preprint arXiv:2404.13591 . Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition , pages 2901–2910. J Richard Landis and Gary G Koch. 1977. The mea- surement of observer agreement for categorical data. biometrics , pages 159–174. Hugo Laurençon, Léo Tronchon, Matthieu Cord, and Victor Sanh. 2024. What matters when build- ing vision-language models? arXiv preprint arXiv:2405.02246 . Bohao Li, Yuying Ge, Yixiao Ge, Guangzhi Wang, Rui Wang, Ruimao Zhang, and Ying Shan. 2024. Seed- bench: Benchmarking multimodal large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 13299–13308. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024. Visual instruction tuning. Advances in neural information processing systems , 36. Jacek Ma ´ndziuk and Adam ˙Zychowski. 2019. Deepiq: A human-inspired ai system for solving iq test prob- lems. In 2019 International Joint Conference on Neural Networks (IJCNN) , pages 1–8. IEEE. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual ques- tion answering benchmark requiring external knowl- edge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition , pages 3195–3204. Arsenii Kirillovich Moskvichev, Victor Vikram Odouard, and Melanie Mitchell. 2023. The con- ceptarc benchmark: Evaluating understanding and generalization in the arc domain. Transactions on machine learning research . Weili Nie, Zhiding Yu, Lei Mao, Ankit B Patel, Yuke Zhu, and Anima Anandkumar. 2020. Bongard-logo: A new benchmark for human-level concept learning and reasoning. Advances in Neural Information
Pro- cessing Systems , 33:16468–16480. OpenAI. 2024. https://openai.com/index/ hello-gpt-4o/ . Jean Raven. 2003. Raven progressive matrices. In Handbook of nonverbal assessment , pages 223–237. Springer. Tanik Saikh, Tirthankar Ghosal, Amish Mittal, Asif Ekbal, and Pushpak Bhattacharyya. 2022. Scienceqa: A novel resource for question answering on scholarly articles. International Journal on Digital Libraries , 23(3):289–301. Kuniaki Saito, Kihyuk Sohn, Xiang Zhang, Chun-Liang Li, Chen-Yu Lee, Kate Saenko, and Tomas Pfister. 2023. Pic2word: Mapping pictures to words for zero- shot composed image retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 19305–19314. Adam Santoro, Felix Hill, David Barrett, Ari Morcos, and Timothy Lillicrap. 2018. Measuring abstract reasoning in neural networks. In International con- ference on machine learning , pages 4477–4486. Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean- Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805 . Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition , pages 3156–3164. Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhi- hao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. 2024. Qwen2-vl: Enhanc- ing vision-language model’s perception of the world at any resolution. arXiv preprint arXiv:2409.12191 . Le Xue, Manli Shu, Anas Awadalla, Jun Wang, An Yan, Senthil Purushwalkam, Honglu Zhou, Viraj Prabhu, Yutong Dai, Michael S Ryoo, et al. 2024. xgen-mm (blip-3): A family of open large multimodal models. arXiv preprint arXiv:2408.08872 . Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. 2024. Mmmu: A massive multi-discipline multimodal understandingand reasoning benchmark for expert agi. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition , pages 9556–9567. Chi Zhang, Feng Gao, Baoxiong Jia, Yixin Zhu, and Song-Chun Zhu. 2019. Raven: A dataset for rela- tional and analogical visual reasoning. In Proceed- ings of the IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition , pages 5317–5327. Yizhe Zhang, He Bai, Ruixiang Zhang, Jiatao Gu, Shuangfei Zhai, Josh Susskind, and Navdeep Jaitly. 2024. How far are we from intelligent visual deduc- tive reasoning? arXiv preprint arXiv:2403.04732 . A Appendix A.1 Dataset Analysis Figure 7a compares question lengths across differ- ent reasoning VQA datasets. Our dataset stands out with a roughly even distribution of question lengths, unlike other datasets that predominantly fo- cus on shorter questions, which makes our dataset more challenging for MLLMs. Figure 7b illus- trates the proportion of functional programs used in the dataset, showing a wide variety of functions, with query_rule being slightly more frequent. Fig- ure 7c highlights the number of multiple-choice options for each configuration, where differences in the number of choices arise due to constraints in the answer space for some configurations. Fig- ure 7d presents the input prompt length for each stage in the logical chain task, comparing settings with and
without prior information. Incorporating prior information from earlier stages significantly increases the maximum prompt length to 261.2 tokens, posing a challenge for MLLMs to parse effectively. A.2 Generation Template Table 7 outlines the question templates used for the Direct Answer Task. Notably, it is impractical to ask questions about the color or size of a single object, as these attributes are represented by numer- ical values (e.g., color as 255 or size as 8), which models cannot interpret meaningfully. Therefore, questions about size and color are excluded from the basic one-panel tasks. Instead, comparative questions such as "darker" or "smaller" are in- cluded in the one-panel comparison tasks. Two constraints are applied during template creation: Not_Equal(P, P2) ensures that Panels P and P2 are different, and Same_Row(P, P2) ensures that Pan- els P and P2 belong to the same row. For answer spaces exceeding four options, a sampling method is used to limit the choices to a maximum of four. Table 8 lists all possible values that each place- holder can take, as well as the complete set of rules for each attribute in the rule deduction configura- tion. After the placeholder values are assigned, references to "Panel <P>" are replaced with "this panel" to enhance clarity and readability. A.3 The Full Logical Chain Figure 8 presents the full view of our pre-defined logical chain, while Table 9 provides corresponding question examples for each node in the chain. Thechain’s ultimate objective is to solve the original RA VEN puzzle, where each row contains rules for attributes such as number, position, shape, size, and color. Intuitively, we connect each attribute’s rule deduction phase to the final phase, operating under the assumption that knowing all hidden rules of the RA VEN puzzle provides sufficient information to solve it. To determine the rules, we expand the reasoning scope from one row to two rows. For one-row rule deduction, we link single-panel perception and two-panel comparison to ensure that with panel- level details and inter-panel comparisons, the rule can be identified. This logical chain is crafted to mimic human problem-solving behavior: focusing first on single-panel perception, followed by panel comparisons, then deducing the first-row rule and validating it with the second row. However, this handcrafted chain design has lim- itations. First, it may not align with the model’s actual reasoning process, which can cause discrep- ancies in performance. Additionally, to simplify chain construction, we designed it at the "Corpus- Level," meaning it remains fixed across all in- stances. This approach sometimes results in in- sufficient prior information for certain stages. For example, in one-row rule deduction, a rule like "the number of objects distributes three distinct values across panels, rotating through each possible per- mutation" may require second-row information to resolve. These limitations highlight the need for more flexible and instance-specific logical chain designs in future work. A.4 Input Prompt And Model Settings A.4.1 Direct Answer Model Settings: All MLLMs are tested under their default settings under the environment of Huggingface1, the transformer package version in python is 4.39.2 for NVLM-D-72B model and 4.46.0 for all
others. Prompt Details: The RA VEN dataset includes various puzzle settings, such as Left-Right, Up- Down, and In-Out, where rules are applied sepa- rately to distinct parts of the panels (Figure 9). To address these settings, when we decompose the problem into subproblems, we treat each part inde- pendently. For instance, there are separate question sets for the left and right sections of the panels in the Left-Right setting, with the question explicitly 1https://huggingface.co/ (a) Question Length Compare (b) Function Distribution 2R_Deduc�on 25%4-choices 1R_Deduc�on 25%4-choices 2P_Compare 22% 4-choices 15% 3-choices 3% 2-choices 3%1P_Compare 18% 2-choices 8%3-choices 8%4-choices 2%4-choices 8%1P-Basic 9%3-choices 1% (c) Num. Choice Distribution (d) Prompt Length Figure 7: The left three panels (a), (b), and (c) present analyses of the Direct Answer task, while the right panel (d) focuses on the Logical Chain task. Type="5" 2R1R2P1P F2P1P 1P Number Focused Type="5" 2R1R2P1P 2P1P 1P Posi�on Focused Type="5" 2R1R2P1P 2P1P 1P Shape Focused Type="5" 2R1R2P1P 2P1P 1P Size Focused Type="5" 2R1R2P1P 2P1P 1P Color Focused Figure 8: The full logical chain: To arrive at the final answer, we incorporate rules from all five attributes. stating which part is being addressed. To clarify the panel structure, an additional sentence is appended to the question: •Left-Right: The panel is divided into two sections by a vertical line, separating the left side from the right side, with objects possibly present in both sections. •Up-Down: The input panel is split by a hor- izontal line, separating the top side from the bottom side, with objects possibly present in both sections. •In-Out: The panel is divided into two regions: an outer structure and an inner structure, with objects possibly present in both regions. This extra information is unnecessary for other settings. The complete prompt format is: [Extra Setting Info ] Question: [question] Please select one of the following: [choices] . The answer should be one of A, B, C, D. Figure 9: The original RA VEN puzzle, includes seven puzzle settings.A.4.2 Logical Chain Model Settings: All MLLMs are tested under their default settings under the environment of Huggingface2, the transformer package version in python is 4.39.2 for NVLM-D-72B model and 4.46.0 for all others. In addition, to handle the length of our prompts, we increase the maximum token length to 2048. When prior information is injected, a rule-based program is used to convert the information into text and integrate it into the prompt. This transforma- tion is necessary because the images referenced in prior questions are not the same as those in the current question, making it impossible to directly reuse them. For example, if the prior question is "How many objects in this panel?" and the current question is "Comparing the number of objects in the left and right panel," the phrase "this panel" cannot directly correspond to "left panel" or "right panel." To ad- dress this, we transform "this panel" into a more specific term, such as "left panel" or "right panel." Table 11 outlines the transfer rules for Number and Position. Similar patterns are applied for other attributes, which are not listed here
for brevity. After the prior information is transformed, the prompt is structured as follows: [Extra Setting Info] Below is the information generated from the previous steps, please be aware that it may or may not contain errors: [[Prior Info 1], [Prior Info 2], ...] Question: [question] Please select one of the following: [choices] . The answer should be one of A, B, C, D. More results: Table 10 shows the results three additional models InstructBlip-13B, LLaV A-1.5- 13B, and xgen-mm. Prior information appears to provide limited utility for these models, all of them 2https://huggingface.co/ are just around random baselines except xgen-mm have relative good basic perception ability. A.5 Error Analysis Figure 10: Errors distribution for each model under the settings of without and with prior. A.5.1 Errors from Explanation To further investigate model performance, we con- duct an error analysis for the Logical Chain subtask. Models are asked to generate with explanations alongside answers, and we manually reviewed all output explanations when the models predict incor- rect answer. Errors are classified into four types: •Perception Error: This occurs when the model misinterprets visual inputs, such as object numbers or shapes. In the provided example, there should be four objects in the left panel and three in the right panel, but the model fails to recognize this correctly. •Reasoning Error: This involves incorrect logic applied to correctly perceived inputs. In this example, the model accurately identifies the objects and their edge numbers in each panel. However, due to flawed reasoning, it in- correctly concludes that the number of edges is decreasing, which is not the case. •Unrelated Information Error: This error type refers to the generation of incomplete sentences or unrelated information. Here, the question asks about the position of objects, but the explanation provided by the model focuses on shapes and sizes, which are irrelevant. •Propagation Error: This occurs when the model fails to detect or correct inaccuracies in prior information. In this example, the prior information is already incorrect, but the model does not identify or address these inaccuracies, leading to an incorrect answer.Figure 10 reveals that reasoning errors are the most common, followed by perception errors, with unrelated-information and propagation errors be- ing rare. Gemini exhibits the lowest perception error rate, while GPT-4o shows the lowest reason- ing error rate. Notably, injecting prior information significantly reduces perception errors, demonstrat- ing that prior knowledge enhances models’ under- standing of visual inputs, but it does not help with reasoning error. A.5.2 Unanswerable Question We conducted a small experiment by making all the answer choices incorrect, this can be used to ver- ify the models robustness. In this experiment, we defined hallucination as the model’s failure to rec- ognize that the question is "non-answerable". We defined two approaches for setting unanswerable questions: •Same Attribute: All choices are incorrect while retaining the same attribute as the ques- tion. •Changed Attribute: The choices are changed to a different attribute (e.g., the question asks about the number of objects, but the choices are their positions). We sampled 10 examples for each stage (60 in total) and
tested them on GPT-4o, manually review- ing the output explanations. As shown in Table 12, interestingly, as the questions become more difficult, particularly at the final step, the model increasingly fails to distinguish unanswerable ques- tions, resulting in higher rates of hallucination. Fur- thermore, under Setting 2, where the attribute is changed, the model exhibits a greater likelihood of hallucination, as it struggles to recognize the shift in attributes. A.6 Human Sudies and and Inter-Participant Agreement To evaluate the subjective quality of human per- formance in our study, we conducted two separate parts: Part A and Part B. Part A focuses on evalua- tion of the quality of our automatically generated dataset, while Part B focuses on testing human reasoning abilities across different stages of com- plexity. For both Part A and Part B, a Consent Form and a Plain Language Statement are provided to the annotators prior to the annotation process. These documents must be read and agreed upon before they can proceed with the annotations. A.6.1 Part A Part A involved five research students who partic- ipated in answering a series of abstract reasoning questions. This section aimed to evaluate the qual- ity of the dataset generated by our template-based methods. Since the Direct Answer and Logical Chain share the same pool of templates, and Direct Answer covers all templates, we chose to focus on assessing the quality of the Direct Answer com- ponent. A random sample of 620 questions was selected for this evaluation. To ensure participants clearly understood the tasks and evaluation crite- ria, a detailed guide was provided at the beginning of the questionnaire (see Figure 11). An example question from the questionnaire is shown in Figure 12. Figure 11: A detailed guide provided to participants at the beginning of the questionnaire for Part A. To thoroughly assess the human performance in Part A, we used three key indicators: Correctness, Clarity, and Content Validity. Correctness assesses whether the answer pro- vided for the question is correct. Evaluators were asked to determine if the provided answer accu- rately corresponded to the visual information pre- sented in the panels. This involved a careful com- parison between the answer and the visual data to ensure accuracy. Clarity evaluates how clear and understandable the question is. Evaluators considered whether the Figure 12: Sample Question from the Questionnaire for Part A. question was phrased clearly and was easy to under- stand. They assessed if the wording made sense, if any terms were ambiguous, and whether someone without prior knowledge could easily comprehend the question. This indicator is crucial for ensuring that the questions are accessible and interpretable by all participants. Content validity checks if the question is suitable for the task stage in which it is presented. Evalua- tors examined whether the content of the question aligned with the current task stage. The dataset is divided into five types, such as one panel basic perception or two panel comparisons. Participants needed to ensure that the question was appropriate for the specific reasoning type it represented. This indicator ensures that each question is
relevant and appropriately challenging for its designated stage. The metrics used to evaluate performance in Part A included correctness, clarity, and content validity, with positive rates for each metric provided in Table 13. The positive rate is the proportion of questions answered by "Yes". The results indicate that the participants in Part A performed exceptionally well across all metrics, with Correctness, Clarity, and Content Validity scores consistently high. This suggests that the questions were well-designed and comprehensible, and the participants were able to provide accurate answers. A.6.2 Part B Part B utilized the Prolific crowdsourcing platform3 to recruit 162 participants who were subjected to the same set of abstract reasoning questions as those given to the MLLMs. The objective of this part was to evaluate human performance on our dataset, enabling a comparison between human and model capabilities. Participants received a detailed guide at the beginning of the questionnaire, which included task descriptions and several examples, as shown in Figure 13. The guide varied depend- ing on the stage of the Direct Answer task, but for this section, we include only the One-Panel Basic Perception stage. And similar to Part A, due to Direct Answer covering all templates, we chose to focus on assessing the human performance of the Direct Answer component. Each question in the questionnaire for Part B included a image and a multiple-choice question, as illustrated in Figure 14. Figure 13: A detailed guide provided to participants at the beginning of the questionnaire for Part B. This guide focuses on One-Panel Basic Perception. The performance metrics for Part B are summa- rized in Table 14. The performance for Part B show a noticeable decline in positive rates, particularly for more complex tasks such as Two Panel Com- pare, One Row, and Two Rows. This decline high- 3https://www.prolific.com/ Figure 14: Sample Question from the One-Panel Basic Perception Questionnaire for Part B. lights the increased difficulty of these tasks and suggests that the broader participant pool found these questions more challenging. Inter-Participant Agreement. To quantify the inter-participant agreement across participants for Part B stuides, we computed Fleiss’ kappa scores (Landis and Koch, 1977) across different question types. The Fleiss’ Kappa scores for each question types are provided in Table 15. The high Fleiss’ Kappa score for One-Panel Ba- sic (0.9711) indicates strong agreement among the participants, this is mainly due to the simplicity of One-Panel Basic questions. However, the lower scores for Two-Panel and rule deduction phase highlight the increased difficulty and the signifi- cant variability in human interpretation for these more complex tasks. A.7 Performance Increase with Prior Info Tables 15 and 16 present the percentage increase in Accuracy and MSEval metrics, respectively. It is evident that, except for the Final stage, all other stages show improved performance, with MSE- val and Accuracy metrics closely aligned in these cases. However, in the Final stage, while Accu- racy does not show a significant increase for the four open-source models, MSEval suggests some improvement due to the incorporation of rule infor- mation for solving the final RA VEN puzzle. An exception is
observed with Qwen2-VL-72B, which may already perform well on RA VEN. Incorporat- ing information from earlier stages might introduce misleading details, leading to a significant perfor- mance drop. A.8 Additional Details about MSEval A.8.1 Algorithm Pseudo Code Algorithm 1 shows the details Pseudo Code for our proposed MSEval metrics. A.8.2 Computational Cost The computational complexity of the algorithm can be expressed as: O(N· |Et| · |A|) where: •N: The number of samples. •|Et|: The number of edges in the logical chain (dependency relationships between nodes). •|A|: The number of possible choices for each node. In most cases, |A|, the number of possible choices per node, is typically equal to 4. As a result, the computational complexity simplifies to: O(4·N· |Et|)or simply O(N· |Et|), which is effectively linear with respect to both the number of instances ( N) and the number of edges (|Et|) in the logical chain. By reducing the number of edges or employing a smaller logical chain, the computational cost can be significantly minimized, ensuring better scalability and efficiency, especially for large datasets or complex logical dependencies. This simplification highlights the importance of optimizing the chain structure to maintain compu- tational feasibility. The actual time cost for each open-source model we tested is shown in Table 16. A.9 Discussion Additional Materials A.9.1 Model Parameters Trend Figures 17, 18, 19, 20, 21, 22, and 23 demonstrate that performance generally improves with larger model parameter sizes across stages, except for the 2R and Final stages. For these two stages, most models perform below the random baseline. The differences in model performance are primarily attributed to the varying sizes of their language en- coders, highlighting the significant role of a robustlanguage encoder in overall performance. How- ever, despite the observed improvements, a notice- able gap persists between model and human perfor- mance. This discrepancy may arise from the com- plexity of the visual input, which poses challenges for models in fully understanding and integrating multimodal information. A.9.2 Attribute Break-Down Analysis Figure 24 illustrates the attribute-level performance breakdown of two open-source models and four closed-source models evaluated on the logical chain task. Gemini, GPT-4o, and the two larger models, Qwen2-VL and NVLM-D, exhibit similar trends: the Number attribute achieves the highest performance in more complex stages (2P, 1R, and 2R), while Position dominates in lower-level stages (1P-C and 1P-B). In contrast, smaller models like Idefics2 and Intern2-VL struggle with the Number attribute but perform relatively better on Position, indicating that these models are less sensitive to counting tasks but demonstrate better spatial rea- soning. A.9.3 Handling Long Prompts Table 17 presents the accuracy and MSEval scores for three prompting techniques incorporating prior information in the Logical Chain task. For GPT- 4o, Qwen2-VL, and Gemini, the use of HTML tags yields significant performance improvements. Additionally, for GPT-4o and Qwen2-VL, the Document-based prompting also demonstrates no- table benefits. However, for other models, these two techniques show a negative impact. In this case, the MSEval results are consistent with the accuracy outcomes. List 1 shows an example of HTML structured prompts, while Figure 25 shows an example
of Document structured prompts. <!DOCTYPE html > <html > <body > <h1> In this visual puzzle , you are given two panels . Each panel divided into two sections by a vertical line , separating the < strong >left </ strong > side from the < strong >right </ strong > side , with objects might present in both sections . Below is the information generated from the previous steps , please be aware that it may or may not contain errors : </h1> Figure 15: Accuracy percentage increase after incorporating prior info. <div> <h2>Panel Information </ h2> <ul> <li>There are 1 objects in the < strong >left </ strong > part of the <strong >left </ strong > panel .</ li > <li>There are 2 objects in the < strong >left </ strong > part of the <strong >right </ strong > panel .</ li> </ul> </div> <div> <h2>Question </ h2> <p> Consider only the < strong >left </ strong > part of the two panels in the image . Does the < strong >left </ strong > panel contain the same number of objects , more objects , or fewer objects than the < strong >right </ strong > panel ? Please select one of the following : </p> <ul> <li>A: More </ li> <li>B: The same </ li> <li>C: Fewer </ li> </ul> <p>The answer should be one of A, B, C.</ p> </div> </body > </html > Listing 1: HTML Structure for Handling Long PromptA.10 Qualitative Analysis Lists 2 to 6 present five examples that illustrate the advantages of our MSEval metric over tradi- tional accuracy. For instance, in List 5, all the inter- mediate steps are incorrect, yet the model arrives at the correct answer with only a small confidence margin (31.375 compared to the second-highest confidence of 31.125). In this case, the model’s performance is not truly effective, as it is unclear how it managed to produce the correct answer de- spite incorrect intermediate steps. Unlike tradi- tional accuracy, which would mark this as fully correct, MSEval appropriately penalizes such cases by assigning a low score. Conversely, a reverse scenario is shown in List 4, where traditional accuracy marks the result as entirely incorrect. However, the model correctly answers all intermediate steps, and the probability of the correct answer is very close to the highest confidence value. In this situation, MSEval assigns a relatively high score, reflecting the model’s par- tial success and rewarding its correct reasoning process. Figure 16: MSEval percentage increase after incorporating prior info. Figure 17: The average accuracy trend for the Direct Answer task as model sizes gradually increase. The trend line is derived using Gaussian smoothing, and the average accuracy is calculated by averaging the results across all five stages. Figure 18: The One-Panel Basic Perception accuracy trend for the Direct Answer task as model sizes gradu- ally increase. The trend line is derived using Gaussian smoothing. Figure 19: The One-Panel Comparison accuracy trend for the Direct Answer task as model sizes gradually in- crease. The trend line is derived using
Gaussian smooth- ing. Figure 20: The Two-Panels Comparison accuracy trend for the Direct Answer task as model sizes gradually in- crease. The trend line is derived using Gaussian smooth- ing. Algorithm 1 Overall Workflow Input: Logical Chain Dt Define: St={t} ∪ D t Model logits Z={z(i) j|j∈ St, i= 1, . . . , N } All possible choices for each node {A(i) j|j∈ St} Output: MSEval score for stage t: MSEval t Step 1: Compute Conditional Probabilities foreach sample i= 1toNdo foreach node j∈ Stdo Compute probability p(i) j←exp(z(i) j) P k∈A(i) jexp(z(i) k) end for end for Step 2: Compute Conditional Mutual Information foreach sample i= 1toNdo foreach node j∈ Stdo AlterA(i) jto generate perturbed outputs A(i) j→t Compute: CMI(i, j, t)←H(A(i) j→t| D(i),−j t ) +H(A(i) j| D(i),−j t ) −H(A(i) j→t,A(i) j| D(i),−j t ) end for end for Step 3: Normalize Conditional Mutual Information foreach sample i= 1toNdo foreach node j∈ Stdo Compute: NCMI (i, j, t)←exp( CMI(i, j, t))P k∈Stexp( CMI(i, k, t )) end for end for Step 4: Compute MSEval Score for Each Sample foreach sample i= 1toNdo Initialize MSEval(i) t←0 foreach node j∈ Stdo Compute ϵ(i) j=1 |A(i) j| Update: MSEval(i) t←MSEval(i) t+NCMI (i, j, t)·p(i) j ϵ(i) j end for end for Step 5: Average MSEval Across All Samples Compute: MSEval t←1 NNX i=1MSEval(i) t Return: MSEval t Figure 21: The One-Row Deduction accuracy trend for the Direct Answer task as model sizes gradually in- crease. The trend line is derived using Gaussian smooth- ing. Figure 22: The Two-Rows Deduction accuracy trend for the Direct Answer task as model sizes gradually in- crease. The trend line is derived using Gaussian smooth- ing. Figure 23: The Final accuracy trend for the Direct An- swer task as model sizes gradually increase. The trend line is derived using Gaussian smoothing. Figure 24: The Radar Chart for all six models in Logical Chain subtask. Figure 25: Document Structure for Handling Long Prompt. Question Pattern Question Example Attribute Constraints Answer Space One-Panel Basic How many objects are in panel <P>?How many objects are in panel 1?Number NA [1,2,3,4,5,6,7,8,9] What is the shape of the object at <X> in panel <P>?What is the shape of the object at top-left in panel 1?Shape NA [ "triangle", "square", "pentagon", "hexagon", "circle" ] Where is the <S> positioned in panel <P>?Where is the triangle positioned in panel 1?Position NA ["Left", "Right", "Top", "Down", "Bottom-Left", ...] One-Panel Comparison In panel <P>, is the shape of the object on the <X> have the same, more, or fewer edges compared to the object on the <X2>? (Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle)In this panel, is the shape of the object on the left have the same, more, or fewer edges compared to the object on the right? (Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle)"Shape Not_Equal(X, X2) [ "The same", "Fewer", "More" ] In panel <P>, does the object on the <X> the same, smaller or larger in size compared
to the ob- ject on the <X2>?In panel 1, does the object on the top-left the same, smaller or larger in size compared to the ob- ject on the bottom-right?Size Not_Equal(X, X2) [ "The same", "Smaller", "Larger" ] In panel <P>, does the object on the <X> the same, darker or brighter in color compared to the object on the <X2>?In panel 1, does the object on the top-left the same, darker or brighter in color compared to the object on the bottom-right?color Not_Equal(X, X2) [ "The same", "Darker", "Brighter" ] In panel <P>, where is the <S> relative to the <S2>?In panel 1, where is the triangle relative to the square?Position Not_Equal(S, S2) [ "Left", "Right", "Above", "Below", ... ] Are all objects in panel <P> of the same shape?Are all objects in panel 1 of the same shape?Shape NA [ "Yes", "No" ] Are all objects in panel <P> of the same size?Are all objects in panel 1 of the same size?Size NA [ "Yes", "No" ] Are all objects in panel <P> of the same color?Are all objects in panel 1 of the same color?Color NA [ "Yes", "No" ] Two-Panels Comparison Does panel <P> contain the same number of objects, more ob- jects, or fewer objects than panel <P2>?Does panel 1 contain the same number of objects, more objects, or fewer objects than panel 2?Number Not_Equal(P, P2), Same_Row(P, P2)[ "The same", "More", "Fewer" ] Is the shape of all the objects in panel <P> have the same, more, or fewer edges compared to the objects in panel <P2>? If the shapes within either panel are al- ready different from each other, select ’Not Comparable.’ (Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle)Is the shape of all the objects in panel 1 have the same, more, or fewer edges compared to the ob- jects in panel 2? If the shapes within either panel are already different from each other, select ’Not Comparable.’ (Note: The edge number increases in the fol- lowing order: triangle, square, pentagon, hexagon, circle)Shape Not_Equal(P, P2), Same_Row(P, P2)[ "The same", "Fewer", "More", "Not compara- ble" ] Is the size of all the objects in panel <P> the same as, smaller or larger than the objects in panel <P2>? If the sizes within either panel are already different from each other, select ’Not Compara- ble.’Is the size of all the objects in panel 1 the same as, smaller or larger than the objects in panel 2? If the sizes within either panel are already different from each other, select ’Not Comparable.’Size Not_Equal(P, P2), Same_Row(P, P2)[ "The same", "Smaller", "Larger", "Not compara- ble" ] Question Pattern Question Example Attribute Constraints Answer Space Two-Panels Comparison Is the color of all the objects in panel <P> the same as, darker or brighter than the objects in panel <P2>? If the colors within either panel are already different from each other, select ’Not Compara- ble.’Is the color of all the objects in panel 1 the same as, darker or brighter than the objects in panel
|
https://arxiv.org/abs/2505.21850v1
|
2? If the colors within either panel are already different from each other, select ’Not Compara- ble.’Color Not_Equal(P, P2), Same_Row(P, P2)[ "The same", "Darker", "Brighter", "Not compara- ble" ] Is the position of all the objects in panel <P> the same as the objects in panel <P2>?Is the position of all the objects in panel 1 the same as the objects in panel 2?Position Not_Equal(P, P2), Same_Row(P, P2)[ "Yes", "No" ] One-Row Rule Deduction Examine the three panels in the image from left to right and identify the rule that governs the number of the objects.- Number NA See Number Rule in Table 8 Examine the three panels in the image from left to right and identify the rule that governs the position of the objects.- Position NA See Position Rule in Table 8 Examine the three panels in the image from left to right and identify the rule that governs the shape of the objects.- Shape NA See Shape Rule in Table 8 Examine the three panels in the image from left to right and identify the rule that governs the size of the objects.- Size NA See Size Rule in Table 8 Examine the three panels in the image from left to right and identify the rule that governs the color of the objects.- Color NA See Color Rule in Table 8 Two-Rows Rule Deduction Inspect the first row of three panels from left to right and inspect the second row of three panels from left to right and determine a rule applicable to both rows that governs the number of objects.- Number NA Same as One-Row Inspect the first row of three panels from left to right and inspect the second row of three panels from left to right and determine a rule applicable to both rows that governs the position of objects.- Position NA Same as One-Row Inspect the first row of three panels from left to right and inspect the second row of three panels from left to right and determine a rule applicable to both rows that governs the shape of objects.- Shape NA Same as One-Row Inspect the first row of three panels from left to right and inspect the second row of three panels from left to right and determine a rule applicable to both rows that governs the size of objects.- Size NA Same as One-Row Inspect the first row of three panels from left to right and inspect the second row of three panels from left to right and determine a rule applicable to both rows that governs the color of objects.- Color NA Same as One-Row Table 7: Question pattern templates with corresponding example questions. There are 25 templates in total. 3 templates for One-Panel Basic. 7 templates for One-Panel Comparison. 5 templates for Two-Panels Comparison. 5 Templates for One-Row Rule Deduction. 5 Templates for Two-Rows Rule Deduction. Category Value Ranges Placeholders Position (<X>) center, left, right, top, bottom, top-left, top-right, bottom-left, bottom-right, top-left, top-center, top-right, middle-left, middle-center, middle-right, bottom-left, bottom- center, bottom-right, outer-part, inner-part, top-left of the inner part, top-right
|
https://arxiv.org/abs/2505.21850v1
|
of the inner part, bottom-left of the inner part, bottom-right of the inner part Panel (<P>) 0, 1, 2, 3, 4, 5, 6, 7 Shape (<S>) triangle, square, pentagon, hexagon, circle Rules Number Rule The number of objects gradually decreases by 1; The number of objects remains constant; The number of objects gradually increases by 1; The number of objects distributes three distinct values across panels, rotating through each possible permu- tation of these values; The number of objects in the last panel equals the sum of the objects in the previous two panels; The number of objects in the last panel equals the difference between the objects in the previous two panels; No clear rule is present Position Rule If an object is in the first panel but not in the second at corresponding position, it appears in the third panel; The position of objects in the last panel is the union of positions from the previous two panels; Three distinct position settings across panels, rotating through each possible permutation of these settings; The position of objects does not change across panels; No clear rule is present Color Rule The color of objects gradually darkens by a constant amount each time; The color of objects gradually brightens by a constant amount each time; The color of objects in the last panel is the sum of the colors in the previous two panels; The color of objects in the last panel is the difference between the colors in the previous two panels; Three distinct colors across panels, rotating through each possible permutation of these colors; The color remains constant; No clear rule is present Size Rule The size of objects gradually increases by a constant amount each time; The size of objects gradually decreases by a constant amount each time; The size of objects in the last panel is the sum of the sizes in the previous two panels; The size of objects in the last panel is the difference between the sizes in the previous two panels; Three distinct sizes across panels, rotating through each possible permutation of these sizes; The size remains constant; No clear rule is present Shape Rule The edge number of shape gradually decreases by 1; The edge number of shape gradually increases by 1; Three distinct shapes across panels, rotating through each possible permutation of these shapes; The shape remains constant; No clear rule is present Table 8: Pre-defined placeholder value ranges and rules for five attributes Attributes Logical Chain Stage Example Number1P: How many objects are in the panel? 2P: Does the left panel contain the same number of objects, more objects, or fewer objects than the right panel? 1R: Inspect the three panels in the image from left to right and identify the rule that dictates the number of objects. 2R: Inspect the first row of three panels from left to right and inspect the second row of three panels from left to right and determine a rule applicable to both rows that governs the number of objects. Position1P: Where is the circle positioned
|
https://arxiv.org/abs/2505.21850v1
|
in the panel? 2P: Is the position of all the objects in the left panel the same as the objects in the right panel? 1R: Examine the three panels in the image from left to right and identify the rule that governs the position of the objects. 2R: Examine the three panels in the first row, then the three panels in the second row, both from left to right, and derive a rule that applies to both rows in relation to the position of objects. Shape1P: What is the shape of the object at center in the panel? 2P: Is the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? 1R: Inspect the three panels in the image from left to right and identify the rule that dictates the shape of objects. 2R: Analyze the first row of three panels from left to right, followed by the second row of three panels, and identify a common rule that dictates the shape of objects in both rows. Size1P: Are all objects in the panel of the same size? 2P: Is the size of all the objects in the left panel the same as, smaller or larger than the objects in the right panel? 1R: Analyze the three panels in the image from left to right and uncover the rule that governs the size of objects. 2R: Review the first row of three panels in sequence from left to right, then do the same for the second row, and determine a shared rule that governs the size of objects in both rows. Color1P: Are all objects in the panel of the same color? 2P: Is the color of all the objects in panel <P>the same as, darker or brighter than the objects in panel <P2>? 1R: Inspect the three panels in the image from left to right and identify the rule that dictates the color of objects. 2R: Examine the three panels in the first row, then the three panels in the second row, both from left to right, and derive a rule that applies to both rows in relation to the color of objects. FinalYou are presented with a 3x3 grid of panels, called the Problem Matrix. The last panel is missing and marked with a ‘?’ symbol. Below the matrix, there is a set of 8 possible answer options labeled from 1 to 8. Your task is to determine which panel from the answer set (1-8) correctly fits the missing position in the problem matrix. The pattern in the matrix follows some hidden rules that apply row by row (horizontally). Please select the number (from 1 to 8) of the panel that completes the pattern. Table 9: A full logical chain with the examples for five stages. Metric Prior 1P 2P 1R 2R Final InstructBLIPAccw/o 31.94 28.09 22.00 23.82 7.14 w 31.94 28.09 22.13 23.45 8.57 MSEvalw/o 1.000 1.000 1.007 1.000 0.848 w 1.000 1.000 1.004 1.004 0.959 xGen-MMAccw/o 60.48 23.27 25.82 21.64 14.29 w 60.48 23.18
|
https://arxiv.org/abs/2505.21850v1
|
20.18 26.36 5.71 MSEvalw/o 2.006 0.794 0.965 0.953 1.290 w 2.006 1.082 0.812 0.985 1.013 Llava-13bAccw/o 30.91 31.55 23.09 22.36 15.71 w 30.91 31.09 22.36 21.64 15.71 MSEvalw/o 1.059 0.997 0.941 0.910 1.002 w 1.059 1.009 0.944 0.929 0.975 RandomAcc - 31.1 31.7 25.0 25.0 12.5 MSEval - 1.00 1.00 1.00 1.00 1.00 Table 10: The Accuracy (Acc) and MSEval scores for the Logical Chain task. w/o: without prior, w: with prior. Attribute Stage Numbers Output Number single_panel ["1"] There are {answer_str} objects in the left panel. ["2"] There are {answer_str} objects in the right panel. ["3"] There are {answer_str} objects in the right panel. two_panels ["1", "2"] The left panel has {answer_str} objects compared to the middle panel. ["2", "3"] The middle panel has {answer_str} objects compared to the right panel. one_row Any The rule for the number of objects in the first row is: {answer_str} . Position single_panel ["1"] Where is the (\w+) positioned in the panel? be- comes: There is a \1 positioned in the left panel. ["2"], ["3"] Where is the (\w+) positioned in the panel? be- comes: There is a \1 positioned in the right panel. two_panels ["1", "2"] Ifanswer_str isYes, "The position of all the objects in the left panel is the same as the objects in the middle panel." Otherwise, "The position of all the objects in the left panel is not the same as the objects in the middle panel." ["2", "3"] Ifanswer_str isYes, "The position of all the objects in the middle panel is the same as the objects in the right panel." Otherwise, "The position of all the objects in the middle panel is not the same as the objects in the right panel." one_row Any The rule for the position of objects in the first row is: {answer_str} . Table 11: Rule-based program of attribute Number and Position. The Stage represents the prior stage. (\w+) represents the word here will be put in the position of \1. Setting 1P-C 1P-B 2P 1R 2R Final Setting 1 0/10 3/10 1/10 8/10 7/10 10/10 Setting 2 7/10 4/10 8/10 10/10 10/10 10/10 Table 12: Performance comparison of different settings across various stages. Setting 1: Same Attribute; Setting 2: Changed Attribute. The ratio is Hallucinations / Total: Metric One Panel Basic One Panel Compare Two Panel Compare One Row Two Rows Correctness 0.98 0.97 0.96 0.93 0.94 Clarity 0.96 0.97 0.94 0.95 0.99 Content Validity 0.99 0.99 0.99 1.00 1.00 Table 13: Human performance (positive rates) for Part A across different question types. 1P-B 1P-C 2P 1R 2R 98.52 88.89 69.08 62.12 63.33 Table 14: Human performance (positive rates) for Part B across different question types. Task 1P-B 1P-C 2P 1R 2R Kappa Scores 0.9711 0.7830 0.4988 0.4443 0.4075 Table 15: Fleiss’ Kappa Scores for Inter-Participant Agreement across different question types. Model Running Time Idefics2-8B 7H 24M Intern2-VL-8B 14H 54M Qwen2-VL-Instruct-72B 5D 3H 50M NVLM-D-72B 5D 0H 13M Total Questions: 3.92K Device: 2×A100 80G GPU Table 16: Actual Running Time for Each Model. D: Day, H: Hour, M: MinuteMetric Prior 1P 2P 1R 2R
|
https://arxiv.org/abs/2505.21850v1
|
Final GPT-4o AccVanilla 73.8 43.9 41.8 50.6 10.0 Struct. 82.2 64.4 47.8 50.9 8.6 Doc. 80.8 44.8 31.1 24.9 10.0 Gemini AccVanilla 75.5 64.4 52.6 57.1 18.6 Struct. 70.6 66.4 52.9 57.8 17.1 Doc. 69.6 51.0 36.7 33.1 14.3 Qwen2-VL (72B)AccVanilla 74.1 57.8 47.3 54.2 65.7∗ Struct. 77.2 67.7 55.1 53.6 61.4 Doc. 76.5 63.1 50.2 46.6 24.3 MSEvalVanilla 2.54 1.95 1.79 1.70 5.14∗ Struct. 2.64 2.46 2.37 2.17 3.11 Doc. 2.62 2.33 2.26 1.90 1.88 NVLM-D (72B)AccVanilla 66.1 45.2 39.1 43.3 7.1 Struct. 45.6 25.3 23.1 36.8 10.0 Doc. 18.7 11.3 17.5 14.2 20.0 MSEvalVanilla 2.25 1.20 1.28 1.02 0.76 Struct. 1.84 0.87 0.99 0.98 0.79 Doc. 0.93 0.78 0.88 0.94 0.96 Idefics2 (8B)AccVanilla 57.8 37.8 36.6 42.4 25.7 Struct. 46.2 30.5 36.9 43.5 18.6 Doc. 27.2 23.6 9.6 6.9 15.7 MSEvalVanilla 2.02 1.48 1.51 1.51 1.44 Struct. 1.59 1.18 1.25 1.33 1.27 Doc. 1.04 1.01 0.97 0.98 1.00 Intern2-VL (8B)AccVanilla 54.4 41.9 31.6 33.5 17.1 Struct. 48.4 31.0 20.7 27.3 7.1 Doc. 23.1 30.2 17.1 17.1 8.6 MSEvalVanilla 2.02 1.48 1.51 1.51 1.44 Struct. 1.52 1.18 1.10 1.00 0.97 Doc. 1.02 1.00 1.00 0.97 0.92 Table 17: The Accuracy (Acc) and MSEval scores of three prompting techniques for the Logical Chain task. Vanilla: Pure Text, Struct.: Structure (HTML), Doc.: Document. The highest accuracy are highlighted in bold. The highest MSEval are highlighted in underline . --------------------------------- Dependent Stage Name: single_panel_1_left Dependent Stage Question: Are all objects in the left part of the panel of the same color? Dependent Stage Choice: [ 'A: Only one object ','B: No ','C: Yes '] Dependent Stage Ground Truth: A Dependent Stage Logits: { 'A': 11.125, 'B': 11.125, 'C': 20.75} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: single_panel_2_left Dependent Stage Question: Are all objects in the left part of the panel of the same color? Dependent Stage Choice: [ 'A: No ','B: Only one object ','C: Yes '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 11.125, 'B': 10.5625, 'C': 19.875} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: single_panel_3_left Dependent Stage Question: Are all objects in the left part of the panel of the same color? Dependent Stage Choice: [ 'A: Only one object ','B: No ','C: Yes '] Dependent Stage Ground Truth: A Dependent Stage Logits: { 'A': 10.375, 'B': 10.375, 'C': 19.75} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: two_panels_1_2_left Dependent Stage Question: Consider only the left part of the two panels in the image. Is the color of all the objects in the left panel the same as, darker or brighter than the objects in the right panel? If the colors within either panel are already different from each other, select 'Not Comparable. ' Dependent Stage Choice: [ 'A: Not comparable ','B: The same ','C: Darker ','D: Brighter '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 16.5, 'B': 17.75, 'C': 16.375, 'D': 15.0625} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: two_panels_2_3_left Dependent Stage Question: Consider only the left part of the two panels in the image. Is the color
|
https://arxiv.org/abs/2505.21850v1
|
of all the objects in the left panel the same as, darker or brighter than the objects in the right panel? If the colors within either panel are already different from each other, select 'Not Comparable. ' Dependent Stage Choice: [ 'A: Not comparable ','B: Darker ','C: Brighter ','D: The same '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 18.875, 'B': 18.0, 'C': 16.125, 'D': 19.5} Dependent Stage Generated Answer: D --------------------------------- Current Stage: Current Stage Name: one_row_left Current Stage Question: Look at the three panels in the image from left to right, paying attention only to the left portions of each panel, and identify the rule that controls the color of objects. Current Stage Choice: [ 'A: The color of objects in the last panel is the sum of the colors in the previous two panels. ','B: The color of objects gradually brightens by a constant amount each time. ','C: The color of objects gradually darkens by a constant amount each time. ','D: The color of objects in the last panel is the difference between the colors in the previous two panels. '] Current Stage Ground Truth: B Current Stage Logits: { 'A': 20.0, 'B': 21.375, 'C': 20.5, 'D': 19.75} Current Stage Generated Answer: B --------------------------------- Accuracy: 1.0 MSEval: 1.066 MSEval Random Baseline: 1.0 Listing 2: An instance of high accuracy but low MSEval occurs since the LLM NVLM-D-72B generates a current-stage answer consistent with the ground truth, while earlier dependent stages produce inconsistent results. --------------------------------- Dependent Stage Name: single_panel_1_left Dependent Stage Question: What is the shape of the object in the left part of the panel ? Dependent Stage Choice: [ 'A: circle ','B: hexagon ','C: triangle ','D: square '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 17.125, 'B': 24.5, 'C': 16.0, 'D': 16.0} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: single_panel_2_left Dependent Stage Question: What is the shape of the object in the left part of the panel ? Dependent Stage Choice: [ 'A: triangle ','B: pentagon ','C: square ','D: hexagon '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 17.5, 'B': 23.875, 'C': 17.125, 'D': 15.8125} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: single_panel_3_left Dependent Stage Question: What is the shape of the object in the left part of the panel ? Dependent Stage Choice: [ 'A: hexagon ','B: pentagon ','C: square ','D: triangle '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 17.375, 'B': 16.875, 'C': 24.125, 'D': 17.375} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: two_panels_1_2_left Dependent Stage Question: Consider only the left part of the two panels in the image. Is the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? If the shapes within either panel are already different from each other, select 'Not Comparable. '(Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Dependent Stage Choice: [ 'A: Not comparable ','B: Fewer ','C: The same ','D: More
|
https://arxiv.org/abs/2505.21850v1
|
'] Dependent Stage Ground Truth: D Dependent Stage Logits: { 'A': 21.625, 'B': 20.875, 'C': 20.0, 'D': 22.25} Dependent Stage Generated Answer: D --------------------------------- Dependent Stage Name: two_panels_2_3_left Dependent Stage Question: Consider only the left part of the two panels in the image. Is the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? If the shapes within either panel are already different from each other, select 'Not Comparable. '(Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Dependent Stage Choice: [ 'A: The same ','B: Not comparable ','C: More ','D: Fewer '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 20.875, 'B': 21.75, 'C': 21.875, 'D': 20.25} Dependent Stage Generated Answer: C --------------------------------- Current Stage: Current Stage Name: one_row_left Current Stage Question: Look at the three panels in the image from left to right, paying attention only to the left portions of each panel, and identify the rule that controls the shape of objects. (Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Current Stage Choice: [ 'A: The edge number of shape gradually decreases by 1. ','B: The edge number of shape gradually increases by 1. ','C: No clear rule is present. ','D: The shape remains constant. '] Current Stage Ground Truth: A Current Stage Logits: { 'A': 21.75, 'B': 22.75, 'C': 19.25, 'D': 16.625} Current Stage Generated Answer: B --------------------------------- Accuracy: 0.0 MSEval: 2.506839853582758 MSEval Random Baseline: 1.0 Listing 3: An instance of low accuracy but high MSEval arises as the LLM NVLM-D-72B generates a current-stage answer inconsistent with the ground truth, despite earlier dependent stages producing consistent results. --------------------------------- Dependent Stage Name: single_panel_1_left Dependent Stage Question: What is the shape of the object in the left part of the panel ? Dependent Stage Choice: [ 'A: circle ','B: hexagon ','C: triangle ','D: square '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 21.125, 'B': 26.375, 'C': 20.375, 'D': 22.375} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: single_panel_2_left Dependent Stage Question: What is the shape of the object in the left part of the panel ? Dependent Stage Choice: [ 'A: triangle ','B: pentagon ','C: square ','D: hexagon '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 22.5, 'B': 25.5, 'C': 21.25, 'D': 24.625} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: single_panel_3_left Dependent Stage Question: What is the shape of the object in the left part of the panel ? Dependent Stage Choice: [ 'A: hexagon ','B: pentagon ','C: square ','D: triangle '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 22.875, 'B': 21.125, 'C': 27.5, 'D': 23.625} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: two_panels_1_2_left Dependent Stage Question: Consider only the left part of the two panels in the image. Is the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? If the shapes
|
https://arxiv.org/abs/2505.21850v1
|
within either panel are already different from each other, select 'Not Comparable. '(Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Dependent Stage Choice: [ 'A: Not comparable ','B: Fewer ','C: The same ','D: More '] Dependent Stage Ground Truth: D Dependent Stage Logits: { 'A': 30.5, 'B': 30.25, 'C': 30.75, 'D': 30.875} Dependent Stage Generated Answer: D --------------------------------- Dependent Stage Name: two_panels_2_3_left Dependent Stage Question: Consider only the left part of the two panels in the image. Is the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? If the shapes within either panel are already different from each other, select 'Not Comparable. '(Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Dependent Stage Choice: [ 'A: The same ','B: Not comparable ','C: More ','D: Fewer '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 30.375, 'B': 29.75, 'C': 30.375, 'D': 30.25} Dependent Stage Generated Answer: C --------------------------------- Current Stage: Current Stage Name: one_row_left Current Stage Question: Look at the three panels in the image from left to right, paying attention only to the left portions of each panel, and identify the rule that controls the shape of objects. (Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Current Stage Choice: [ 'A: The edge number of shape gradually decreases by 1. ','B: The edge number of shape gradually increases by 1. ','C: No clear rule is present. ','D: The shape remains constant. '] Current Stage Ground Truth: A Current Stage Logits: { 'A': 32.75, 'B': 33.0, 'C': 31.25, 'D': 30.5} Current Stage Generated Answer: B --------------------------------- Accuracy: 0.0 MSEval: 2.228 MSEval Random Baseline: 1.0 Listing 4: An instance of low accuracy but high MSEval arises as the LLM Intern-VL2-8B generates a current-stage answer inconsistent with the ground truth, despite earlier dependent stages producing consistent results. --------------------------------- Dependent Stage Name: single_panel_1_right Dependent Stage Question: Are all objects in the right part of the panel of the same size? Dependent Stage Choice: [ 'A: No ','B: Yes ','C: Only one object '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 31.125, 'B': 31.125, 'C': 29.625} Dependent Stage Generated Answer: A --------------------------------- Dependent Stage Name: single_panel_2_right Dependent Stage Question: Are all objects in the right part of the panel of the same size? Dependent Stage Choice: [ 'A: No ','B: Yes ','C: Only one object '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 31.0, 'B': 31.125, 'C': 29.5} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: single_panel_3_right Dependent Stage Question: Are all objects in the right part of the panel of the same size? Dependent Stage Choice: [ 'A: Only one object ','B: No ','C: Yes '] Dependent Stage Ground Truth: A Dependent Stage Logits: { 'A': 31.375, 'B': 31.375, 'C': 31.625} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: two_panels_1_2_right Dependent Stage Question: Consider only the right part of the two panels in
|
https://arxiv.org/abs/2505.21850v1
|
the image. Is the size of all the objects in the left panel the same as, smaller or larger than the objects in the right panel? If the sizes within either panel are already different from each other, select 'Not Comparable. Dependent Stage Choice: [ 'A: Not comparable ','B: Smaller ','C: Larger ','D: The same '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 30.875, 'B': 30.75, 'C': 30.25, 'D': 30.75} Dependent Stage Generated Answer: A --------------------------------- Dependent Stage Name: two_panels_2_3_right Dependent Stage Question: Consider only the right part of the two panels in the image. Is the size of all the objects in the left panel the same as, smaller or larger than the objects in the right panel? If the sizes within either panel are already different from each other, select 'Not Comparable. Dependent Stage Choice: [ 'A: Not comparable ','B: Smaller ','C: The same ','D: Larger '] Dependent Stage Ground Truth: D Dependent Stage Logits: { 'A': 30.25, 'B': 30.125, 'C': 30.5, 'D': 30.125} Dependent Stage Generated Answer: C --------------------------------- Current Stage: Current Stage Name: one_row_right Current Stage Question: Analyze the three panels in the image from left to right, concentrating only on the right areas of each panel, and determine the rule that dictates the size of objects. Current Stage Choice: [ 'A: The size of objects gradually decreases by a constant amount each time. ','B: The size of objects in the last panel is the difference between the sizes in the previous two panels. ','C: The size remains constant. ','D: Three distinct sizes across panels, rotating through each possible permutation of these sizes. '] Current Stage Ground Truth: D Current Stage Logits: { 'A': 31.125, 'B': 30.375, 'C': 31.0, 'D': 31.375} Current Stage Generated Answer: D --------------------------------- Accuracy: 1.0 MSEval: 0.955 MSEval Random Baseline: 1.0 Listing 5: An instance of high accuracy but low MSEval occurs since the LLM Intern-VL2-8B generates a current-stage answer consistent with the ground truth, while earlier dependent stages produce inconsistent results. --------------------------------- Dependent Stage Name: single_panel_1_right Dependent Stage Question: What is the shape of the object in the right part of the panel? Dependent Stage Choice: [ 'A: circle ','B: square ','C: hexagon ','D: pentagon '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 0, 'B': 0, 'C': 30.625, 'D': 0} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: single_panel_2_right Dependent Stage Question: What is the shape of the object in the right part of the panel? Dependent Stage Choice: [ 'A: pentagon ','B: triangle ','C: square ','D: circle '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 0, 'B': 30.5, 'C': 0, 'D': 0} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: single_panel_3_right Dependent Stage Question: What is the shape of the object in the right part of the panel? Dependent Stage Choice: [ 'A: triangle ','B: hexagon ','C: square ','D: circle '] Dependent Stage Ground Truth: C Dependent Stage Logits: { 'A': 0, 'B': 0, 'C': 30.875, 'D': 0} Dependent Stage Generated Answer: C --------------------------------- Dependent Stage Name: two_panels_1_2_right Dependent Stage Question:
|
https://arxiv.org/abs/2505.21850v1
|
Consider only the right part of the two panels in the image. Is the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? If the shapes within either panel are already different from each other, select 'Not Comparable. '(Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Dependent Stage Choice: [ 'A: The same ','B: More ','C: Fewer ','D: Not comparable '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 0, 'B': 30.0, 'C': 0, 'D': 0} Dependent Stage Generated Answer: B --------------------------------- Dependent Stage Name: two_panels_2_3_right Dependent Stage Question: Consider only the right part of the two panels in the image. Is the shape of all the objects in the left panel have the same, more, or fewer edges compared to the objects in the right panel? If the shapes within either panel are already different from each other, select 'Not Comparable. '(Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Dependent Stage Choice: [ 'A: Not comparable ','B: Fewer ','C: More ','D: The same '] Dependent Stage Ground Truth: B Dependent Stage Logits: { 'A': 0, 'B': 28.25, 'C': 0, 'D': 0} Dependent Stage Generated Answer: B --------------------------------- Current Stage: Current Stage Name: one_row_right Current Stage Question: Inspect the three panels in the image from left to right, focusing exclusively on the right parts of each panel, and uncover the rule that governs the shape of objects. (Note: The edge number increases in the following order: triangle, square, pentagon, hexagon, circle) Current Stage Choice: [ 'A: The shape remains constant. ','B: Three distinct shapes across panels, rotating through each possible permutation of these shapes. ','C: No clear rule is present. ','D: The edge number of shape gradually increases by 1. '] Current Stage Ground Truth: B Current Stage Logits: { 'A': 0, 'B': 0, 'C': 31.67, 'D': 0} Current Stage Generated Answer: C --------------------------------- Accuracy: 0.0 MSEval: 2.345087186311839 MSEval Random Baseline: 1.0 Listing 6: An instance of low accuracy but high MSEval arises as the LLM Qwen2-VL-72B generates a current-stage answer inconsistent with the ground truth, despite earlier dependent stages producing consistent results.
|
https://arxiv.org/abs/2505.21850v1
|
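For concreteness, the following is a small, hedged sketch of how the per-sample MSEval aggregation described in Algorithm 1 above could be computed from per-node choice logits. It is an illustrative reading, not the authors' released code: Step 2 (conditional mutual information) is assumed to be precomputed elsewhere, since it requires perturbed model runs, and the probability in Step 1 is read here as the softmax probability assigned to the ground-truth choice.

```python
import numpy as np

def mseval_score(logits, gt_index, cmi, n_choices):
    """Hypothetical sketch of the MSEval aggregation (Steps 1, 3, 4) for one sample.

    logits:    dict node -> array of choice logits
    gt_index:  dict node -> index of the ground-truth choice
    cmi:       dict node -> precomputed conditional mutual information (Step 2)
    n_choices: dict node -> |A_j|, size of the node's answer space
    """
    nodes = list(logits.keys())

    # Step 1: softmax probability assigned to the ground-truth choice of each node.
    p = {}
    for j in nodes:
        z = np.asarray(logits[j], dtype=float)
        probs = np.exp(z - z.max()) / np.exp(z - z.max()).sum()
        p[j] = probs[gt_index[j]]

    # Step 3: normalize CMI across nodes with a softmax (NCMI).
    c = np.array([cmi[j] for j in nodes], dtype=float)
    ncmi = np.exp(c - c.max()) / np.exp(c - c.max()).sum()

    # Step 4: weight each node's probability by NCMI, relative to chance level 1/|A_j|.
    score = 0.0
    for weight, j in zip(ncmi, nodes):
        eps_j = 1.0 / n_choices[j]
        score += weight * p[j] / eps_j
    return score

# Step 5 (averaging over samples) is then simply
# np.mean([mseval_score(...) for each sample]).
```

Under this reading, a model that guesses uniformly at random gets p_j = 1/|A_j| at every node, so each ratio p_j/ε_j equals 1 and the NCMI weights sum to 1, which matches the random baseline of 1.0 reported in the listings above.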
arXiv:2505.21851v1 [cs.RO] 28 May 2025Streaming Flow Policy Simplifying diffusion /flow-matching policies by treating action trajectories as flow trajectories Website: https://streaming-flow-policy.github.io Sunshine Jiang MITXiaolin Fang MITNicholas Roy MIT Tom ´as Lozano-P ´erez MITLeslie Kaelbling MITSiddharth Ancha MIT Abstract: Recent advances in diffusion /flow-matching policies have enabled imitation learning of complex, multi-modal action trajectories. However, they are computationally expensive because they sample a trajectory of trajectories —a diffusion /flow trajectory of action trajectories. They discard intermediate action trajectories, and must wait for the sampling process to complete before any actions can be executed on the robot. We simplify diffusion /flow policies by treating action trajectories as flow trajectories . Instead of starting from pure noise, our algorithm samples from a narrow Gaussian around the last action. Then, it incrementally integrates a velocity field learned via flow matching to produce a sequence of actions that constitute a single trajectory. This enables actions to be streamed to the robot on-the-fly during the flow sampling process, and is well-suited for receding horizon policy execution. Despite streaming, our method retains the ability to model multi-modal behavior. We train flows that stabilize around demonstration trajectories to reduce distribution shift and improve imitation learning performance. Streaming flow policy outperforms prior methods while enabling faster policy execution and tighter sensorimotor loops for learning-based robot control. Figure 1: (a)Diffusion policy [ 1] and flow-matching policy [ 2] input a history of observations (not shown) to predict a “chunk” of future robot actions. The x-axis represents the action space, and the + y-axis represents increasing diffusion /flow timesteps. Conventional diffusion /flow policies sample a “trajectory of trajectories” — a diffusion /flow trajectory of action trajectories. They discard intermediate trajectories, and must wait for the diffusion /flow process to complete before the first actions can be executed on the robot. (b)We simplify diffusion /flow policies by treating action trajectories as flow trajectories . Our flow-matching algorithm operates in action space. Starting from a noised version of the last executed action, it incrementally generates a sequence of actions that constitutes a single trajectory. This aligns the “time” of the flow sampling process with the “execution time” of the action trajectory. Importantly, actions can be streamed to the robot’s controller on the fly during the flow sampling process, while retaining the ability to model multi-modal trajectory distributions. Figure 2: (a)To illustrate our method, we consider a toy example of 1-D robot actions with two demonstration trajectories shown in blue and red. (b)Given a demonstration trajectory sampled from the training set ( e.g.the blue one), we first analytically construct a conditional flow i.e.an initial action distribution and a velocity field. The constructed flow samples trajectories from a thin Gaussian tube around the demonstration trajectory. Using the constructed velocity field as targets, we learn a marginal velocity field via flow matching [ 3], shown in (c). The learned velocity field has the property that its induced marginal distribution over actions at each horizontal time slice matches the training distribution. 
(d) The initial action at t = 0 is sampled from a narrow Gaussian centered at the most recently executed action. Then,
|
https://arxiv.org/abs/2505.21851v1
|
we iteratively integrate the learned velocity field to generate an action trajectory. Sampled trajectories (shown in red) cover both behavior modes in the training data. (b)We find that constructing conditional flows that stabilize around demonstration trajectories reduces distribution shift and improves imitation learning performance. The main takeaway is that our method is able to both represent multi-modal distributions over action trajectories like diffusion /flow policies, while also iteratively generating actions that can be streamed during the flow sampling process, enabling fast and reactive policy execution. 1 Introduction Recent advances in robotic imitation learning, such as diffusion policy [ 1,4] and flow-matching policy [ 2,5,6] have enabled robots to learn complex, multi-modal action distributions for challenging real-world tasks such as cooking, laundry folding, robot assembly and navigation [ 7]. They take a history of observations as input, and output a sequence of actions (also called an “action chunk”). Conventional diffusion /flow policies represent a direct application of diffusion models [ 8,9] and flow-matching [ 3] to robot action sequences — they formulate the generative process as proba- bilistic transport in the space of action sequences , starting from pure Gaussian noise. Therefore, diffusion /flow policies represent a “ trajectory of trajectories ” — a diffusion /flow trajectory of action trajectories (Fig. 1a). This approach has several drawbacks. The sampling process discards all inter- mediate action trajectories, making diffusion /flow policies computationally inefficient. Importantly, the robot must wait for the diffusion /flow process to complete before executing any actions. Thus, diffusion /flow policies often require careful hyper-parameter tunning to admit tight control loops. In this work, we propose a novel imitation learning framework that harnesses the temporal structure of action trajectories. We simplify diffusion /flow policies by treating action trajectories as flow trajectories (Fig. 1b). Our aim is to learn a flow transport in the action space A, as opposed to trajectory space AT. Unlike diffusion /flow policies that start the sampling process from pure Gaussian noise (in AT), our initial sample comes from a narrow Gaussian centered around the most recently generated action (in A). Then, we iteratively integrate a learned velocity field to generate a sequence of future actions that forms a single trajectory. The “flow time” — indicating progress of the flow process — coincides with execution time of the sampled trajectory. Iteratively generating the sequence of actions allows the actions to be streamed to the robot’s controller on-the-fly during the flow generation process, significantly improving the policy’s speed and reactivity. We show how a streaming flow policy with the above desiderata can be learned using flow matching [ 3]. Given an action trajectory from the training set (Fig. 2a), we construct a velocity field conditioned on this example that samples paths in a narrow Gaussian “tube” around the demonstration (Fig. 2b). 2 Symbol Description Domain Tpred Prediction time horizon of trajectories during training R+ Tchunk Time horizon of action chunk during inference R+ t Flow time =execution time rescaled from [0, Tpred]to[0,1] [0,1] a Robot action (often a robot configuration) A v Action velocity TA o, h Observation, Observation history
|
https://arxiv.org/abs/2505.21851v1
|
O,H ξ Action trajectory (chunk), where time is rescaled from [0, Tpred]to[0,1] [0,1]→ A ˙ξ Time derivative of action trajectory: ˙ξ(t) =d dtξ(t) [0,1]→TA pD(h, ξ) Distribution of observation histories and future action chunks. Training set is assumed to be sampled from this distribution.∆(H × [0,1]→ A) vθ(a, t|h) Learned marginal velocity field with network parameters θ TA vξ(a, t) Conditional velocity field for demonstration ξ TA pξ(a|t) Marginal probability distribution over aat time tinduced by vξ ∆(A) v∗(a, t|h) Optimal marginal velocity field under data distribution pD TA p∗(a|t, h) Marginal probability distribution over aat time tinduced by v∗∆(A) k, σ0 Stabilizing gain, Initial standard deviation R≥0,R+ Table 1: Mathematical notation used throughout the paper. Our training procedure is remarkably simple — we regress a neural network vθ(a, t|h)that takes as input (i) an observation history h, (ii) flow timestep t∈[0,1], and (iii) action a, to match the constructed velocity field. We are able to re-use existing architectures for diffusion /flow policy while only modifying the input and output dimension of the network from ATtoA. Flow matching guarantees that the marginal flow learned over all training trajectories, as shown in Fig. 2(c, d), is multi-modal. Specifically, the marginal distribution of actions at each timestep tmatches that of the training distribution. Our approach thus retains diffusion /flow policy’s ability to represent multi-modal trajectories while allowing for streaming trajectory generation. How should we construct the target velocity field? Prior work [ 10] has shown that low-level stabilizing controllers can reduce distribution shift and improve theoretical imitation learning guarantees. We leverage the flexibility of the flow matching framework to construct velocity fields that stabilize around a given demonstration trajectory, by adding velocity components that guide the flow back to the demonstration. In our experiments, we find that stabilizing flow significantly improves performance. Our method can leverage two key properties specific to robotics applications: (i) robot actions are often represented as position setpoints of the robot’s joints or end-effector pose that are tracked by a low-level controller, (ii) the robot’s joint positions /end-effector poses can be accurately measured via proprioceptive sensors ( e.g.joint encoders) and forward kinematics. Streaming flow policy can not only imitate action trajectories, but is especially suited to imitate state trajectories when a stiff controller is available that can closely track state trajectories. In this case, the flow sampling process can be initialized from the known ground truth robot state instead of the state predicted from the previous chunk. This reduces uncertainty and error in the generated trajectory. Unlike diffusion /flow policies, streaming flow policy is only guaranteed to match the marginal distribution of actions at each timestep, but not necessarily the joint distribution. Consequently, our method can produce trajectories that are compositions of segments of training trajectories, even if the composition was not part of the training dataset. While this may be seen as a limitation of our method, we argue that for most robotics tasks, compositionality is not only valid, but a desirable property that requires fewer demonstrations. Furthermore, while streaming flow policy is unable to capture global constraints that
|
https://arxiv.org/abs/2505.21851v1
|
can only be represented in the joint distribution, it can learn local constraints such as joint constraints and convex velocity constraints; see Sec. 9 for more details. In practice, we find that streaming flow policy performs comparably to diffusion policy while being significantly faster.

2 Background and problem formulation

We consider the problem of imitating sequences of future actions a ∈ A from histories of observations h ∈ H as input, where a history h = {o_i}_{i=1}^K is a finite sequence of observations o_i ∈ O. The time horizon T_pred ∈ R+ of the trajectory to be predicted can be an arbitrary hyperparameter. For simplicity, we re-scale the time interval to [0, 1] by dividing by T_pred. Therefore, we represent an action trajectory as ξ : [0, 1] → A. We assume an unknown data-generating distribution p_D(h, ξ) of inputs and outputs, from which a finite training dataset D = {(h_i, ξ_i)}_{i=1}^N of N tuples is sampled. See Table 1 for a complete list of notation. Our aim is to learn a policy that outputs a potentially multi-modal distribution over future trajectories ξ given a history of observations h.

Velocity fields: We formulate streaming flow policy, with model parameters θ, as a history-conditioned velocity field v_θ(a, t | h). For a given history h ∈ H, t ∈ [0, 1], and action a ∈ A, the model outputs a velocity in the tangent space TA of A. The velocity field is a neural ordinary differential equation (ODE) [11]. Given an initial action a(0), the velocity field induces trajectories a(t) in action space by specifying the instantaneous time derivative of the trajectory da/dt = v_θ(a, t | h).

Flows: The pairing of v_θ(a, t | h) with an initial probability distribution over a(0) is called a continuous normalizing flow [11, 12] (simply referred to as a "flow"). A flow transforms the initial action distribution to a new distribution p_θ(a | t, h), for every t ∈ [0, 1], in a deterministic and invertible manner. We want streaming flow policy to start sampling close to the action a_prev that was most recently executed. This is the final action that was computed in the previous action chunk. When imitating state trajectories instead of action trajectories, we set a_prev to the current known robot state. Invertible flows require the initial probability distribution over a(0) to have non-zero probability density on the domain A in order to be well behaved. Therefore, we choose a narrow Gaussian distribution centered at a_prev with a small variance σ_0². A trajectory is generated by sampling from the initial distribution and integrating the velocity field as:

    a(t) = a_0 + ∫_0^t v_θ(a(s), s | h) ds,   where a_0 ∼ N(a_prev, σ_0²)    (1)

Importantly, standard ODE solvers can perform forward finite-difference integration auto-regressively, where integration at time t depends only on previously computed actions a(s), s ≤ t. This property allows us to stream actions during the integration process, without needing to wait for the full trajectory to be computed. Next, we describe how we analytically construct conditional velocity fields given a trajectory ξ. Then, we will use them as targets to learn v_θ(a, t | h) using flow matching [3].
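To make Eq. 1 and the streaming property concrete, the sketch below integrates a learned velocity field with forward Euler steps and hands each action to the robot as soon as it is computed. This is a minimal illustration under stated assumptions, not the authors' implementation: `velocity_field`, `execute_action`, and all default hyperparameter values are hypothetical placeholders.

```python
import numpy as np

def stream_action_chunk(velocity_field, history, a_prev, sigma0=0.01,
                        dt=0.01, t_end=1.0, execute_action=None, rng=None):
    """Integrate Eq. 1 with forward Euler steps, streaming each action.

    velocity_field(a, t, history) -> velocity in action space (a stand-in
    for the learned network v_theta). execute_action is an optional
    callback that sends each action to the robot as soon as it is ready.
    """
    rng = rng or np.random.default_rng()
    # Initial action: narrow Gaussian around the most recently executed action.
    a = a_prev + sigma0 * rng.standard_normal(a_prev.shape)
    t = 0.0
    actions = []
    while t <= t_end:
        if execute_action is not None:
            execute_action(a)          # stream during the flow sampling process
        v = velocity_field(a, t, history)
        a = a + v * dt                 # forward finite-difference integration step
        t += dt
        actions.append(a.copy())
    return actions
```

Because each Euler step depends only on previously computed actions, the callback can run inside the loop; a conventional diffusion/flow policy would have to finish the entire inner loop before any action could be executed.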
|
https://arxiv.org/abs/2505.21851v1
|
3 Analytically constructing conditional velocity fields

Given an action trajectory ξ, we first analytically construct a stabilizing conditional flow that travels closely along ξ. This will be used as a target to train a neural network velocity field. In particular, we construct a velocity field v_ξ(a, t) and an initial distribution p⁰_ξ(a) such that the induced marginal probability distributions p_ξ(a | t) form a thin Gaussian "tube" around ξ. By "Gaussian tube", we mean that p_ξ(a | t) is a narrow Gaussian distribution centered at ξ(t) for every t ∈ [0, 1]. This is illustrated in Fig. 2(a, b). We construct the stabilizing conditional flow as:

    v_ξ(a, t) = ξ̇(t) [trajectory velocity] − k (a − ξ(t)) [stabilization term]   and   p⁰_ξ(a) = N(a | ξ(0), σ_0²)    (2)

The initial distribution p⁰_ξ(a) is a narrow Gaussian centered at the initial action ξ(0) with a small standard deviation σ_0. The velocity has two components. The trajectory velocity is the velocity of the action trajectory ξ at time t, and does not depend on a. This term serves to move along the direction of the trajectory. The stabilization term is a negative proportional error feedback that corrects deviations from the trajectory. Controllers that stabilize around demonstration trajectories are known to reduce distribution shift and improve theoretical imitation learning guarantees [10]. We empirically observe that the stabilizing term produces significantly more robust and performant policies, compared to setting k = 0. We note that our framework leverages time derivatives of action trajectories ξ̇(t) during training, which are easily accessible, in addition to ξ(t). This is in contrast to conventional diffusion/flow policies that only use ξ(t) but not ξ̇(t). We note that, throughout this paper, the term 'velocity' refers to ξ̇(t), and not the physical velocity of the robot. While they may coincide for certain choices of the action space A, ξ̇(t) may not represent any physical velocity.

Algorithm 1 Training algorithm
Input: Training set D = {(h_i, ξ_i)}_{i=1}^N, T_pred   • ξ has time horizon T_pred rescaled to [0, 1]
1: while not converged do
2:   (h, ξ) ∼ D
3:   t ∼ Uniform(0, 1)
4:   a ∼ p_ξ(a | t)   (defined in Eq. 3)
5:   θ ← θ − λ ∇_θ ‖v_ξ(a, t) − v_θ(a, t | h)‖²   (conditional flow matching loss)
6: return v_θ

Algorithm 2 Inference algorithm
Input: v_θ(a, t | h), T_pred, T_chunk, ∆t
1: h, a ← {}, q_curr (current robot configuration)
2: while True do
3:   t, h_chunk ← 0, h
4:   if imitating state: a ← q_curr
5:   while t ≤ T_chunk / T_pred do   // open loop
6:     o ← Execute(a)   // stream action during flow
7:     h ← h ∪ {o}
8:     a ← a + v_θ(a, t | h_chunk) ∆t   // integration step
9:     t ← t + ∆t

Theorem 1: The stabilizing conditional flow given by Eq. 2 induces the following per-timestep marginal distributions over the action space:

    p_ξ(a | t) = N(a | ξ(t), σ_0² e^(−2kt))    (3)

Proof: See App. A. The distribution of states sampled at any timestep t ∈ [0, 1] is a Gaussian centered at the trajectory ξ(t). Furthermore, the standard deviation starts from σ_0 and decays exponentially with time at rate k.

4 Learning objective for velocity fields to match marginal action distributions

Let p_D(h, ξ) denote the unknown data-generating distribution from which the training dataset is sampled. The conditional velocity field v_ξ(a, t) defined in Sec. 3 models a single action trajectory. If multiple behaviors ξ are valid for the same input history h, how can we learn a velocity field v(a, t | h) that represents multi-modal trajectory distributions? Using v_ξ(a, t) as target, the conditional flow matching loss [3] for a history-conditioned velocity field v(a, t | h) is defined as:

    L_CFM(v, p_D) = E_{(h, ξ) ∼ p_D} E_{t ∼ U[0,1]} E_{a ∼ p_ξ(a|t)} ‖ v(a, t | h) − v_ξ(a, t) ‖²_2    (4)
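The following sketch illustrates one conditional flow-matching update in the spirit of Eqs. 2 to 4 and Algorithm 1: it samples t uniformly, draws an action from the Gaussian tube of Eq. 3, forms the stabilizing target velocity of Eq. 2, and regresses the network on the squared error of Eq. 4. It is a hedged reconstruction, not the paper's released code; the callables `xi` and `xi_dot`, the module `v_theta`, the optimizer, and the constants `k` and `sigma0` are all illustrative assumptions.

```python
import torch

def conditional_target_velocity(xi, xi_dot, t, a, k=5.0):
    # Eq. 2: v_xi(a, t) = xi_dot(t) - k * (a - xi(t))
    return xi_dot(t) - k * (a - xi(t))

def cfm_training_step(v_theta, optimizer, h, xi, xi_dot, k=5.0, sigma0=0.05):
    """One conditional flow-matching step (Eq. 4 / Algorithm 1).

    v_theta(a, t, h) is assumed to be a torch.nn.Module returning a velocity;
    xi(t) and xi_dot(t) return the demonstration trajectory and its time
    derivative (e.g. spline lookups), broadcast over a batch of times t.
    """
    batch = h.shape[0]
    t = torch.rand(batch, 1)                    # t ~ Uniform(0, 1)
    mean = xi(t)                                # centered on the demonstration
    std = sigma0 * torch.exp(-k * t)            # Eq. 3: std = sigma0 * exp(-k t)
    a = mean + std * torch.randn_like(mean)     # a ~ p_xi(a | t)
    target = conditional_target_velocity(xi, xi_dot, t, a, k)
    loss = ((v_theta(a, t, h) - target) ** 2).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note how the target construction uses both ξ(t) and its time derivative ξ̇(t), which is the extra supervision signal not used by conventional diffusion/flow policies.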
|
https://arxiv.org/abs/2505.21851v1
|
This is simply an expected L2 loss between a candidate velocity field v(a, t | h) and the analytically constructed conditional velocity field v_ξ(a, t) as target. The expectation is over histories and trajectories under the probability distribution p_D(h, ξ), time t sampled uniformly from [0, 1], and action a sampled from the constructed conditional flow, known in closed form in Eq. 3. The following theorem characterizes the per-timestep marginal distributions induced by the minimizer of this loss:

Theorem 2: The minimizer v* = arg min_v L_CFM(v, p_D) induces the following per-timestep marginal distribution for each t ∈ [0, 1] and observation history h:

    p*(a | t, h) = ∫_ξ p_ξ(a | t) p_D(ξ | h) dξ    (5)

Proof: This is a direct consequence of the flow matching theorems (Thms. 1 and 2) in Lipman et al. [3]. Intuitively, the per-timestep marginal distribution induced by the minimizer of L_CFM is the average of per-timestep marginal distributions of constructed conditional flows p_ξ(a | t), over the distribution of future trajectories in p_D(ξ | h) that share the same observation history h.

Matching the per-timestep marginal distributions is desirable and necessary for representing multi-modal distributions. Consider the example in Fig. 2 that constructs two conditional flows, one that samples actions to the right (a > 0), and the other that samples actions to the left (a < 0). In order for a learned model to sample both modes with probability 0.5 each, its per-timestep marginal distribution must match the averaged per-timestep marginal distributions of conditional flows. Unlike flow policies [2, 5, 6] that only require matching the target distributions at t = 1, our method leverages the fact that flow matching [3] matches the marginal distributions at all timesteps t ∈ [0, 1].

Table 2: Imitation learning accuracy on the Push-T [1] dataset. Our method (in green) compared against baselines (in red) and ablations (in blue). See text for details. Columns: Push-T with state input: State imitation Avg/Max scores (↑), Action imitation Avg/Max scores (↑), Latency (↓); Push-T with image input: State imitation Avg/Max scores (↑), Latency (↓).
1. DP [1], 100 DDPM steps:    92.9%/94.4%   90.7%/92.8%   40.2 ms   87.0%/90.1%   127.2 ms
2. DP [1], 10 DDIM steps:     87.0%/89.0%   81.4%/85.3%    4.4 ms   85.3%/91.5%    10.4 ms
3. Flow matching policy [5]:  80.6%/82.6%   80.6%/82.6%    5.8 ms   71.0%/72.0%    12.9 ms
4. Streaming DP [14]:         87.5%/91.4%   84.2%/87.0%   26.7 ms   84.7%/87.1%    77.7 ms
5. SFP without stabilization: 84.0%/86.4%   81.8%/93.2%    3.5 ms   73.9%/77.5%     8.8 ms
6. SFP (Ours):                95.1%/96.0%   91.7%/93.7%    3.5 ms   83.9%/84.8%     8.8 ms

5 Training and inference algorithms for streaming flow policy

Training: While we do not have access to the underlying data-generating distribution p_D(h, ξ), we do have access to a training set D = {(h_i, ξ_i)} ∼ p_D(h, ξ) that contains N samples from this distribution. Therefore, we train a neural network velocity field v_θ(a, t | h) using a finite-sample estimate of Eq. 4: L̂_CFM(θ, D) = (1/N) Σ_{i=1}^N E_{t ∼ U[0,1]} E_{a ∼ p_{ξ_i}(a|t)} ‖ v_θ(a, t | h_i) − v_{ξ_i}(a, t) ‖²_2, as shown in Alg. 1.

Inference: While behavior policies are trained to predict sequences of horizon T_pred, they are usually run in a receding horizon fashion with a potentially different action chunk horizon T_chunk ≤ T_pred [1]. The integration timestep ∆t is another hyperparameter that controls the granularity of the action sequence. Therefore, to generate an
|
https://arxiv.org/abs/2505.21851v1
|
action chunk, we integrate the velocity field in t ∈ [0, T_chunk/T_pred], producing T_chunk/(T_pred ∆t) many actions. The action chunk is computed and executed open-loop, i.e., the neural network v_θ inputs the same observation history h_chunk for all integration steps. Importantly, we are able to stream and execute actions on the robot as soon as they are computed (see Alg. 2, line 6). In contrast, diffusion/flow policies must wait for the inner loop to complete before executing any actions.

Deterministic execution at test time: Our learning framework suggests the initial action be sampled from a_0 ∼ N(a_0 | a_prev, σ_0²) (see Eqs. 1 and 2). However, during inference time, we avoid adding noise to actions by setting σ_0 = 0 to produce deterministic behavior. We do so because the ability to represent multi-modal distributions is primarily motivated by the need to prevent "averaging" distinct but valid behaviors of the same task [1]. While representing multi-modality is crucial during training, the learned policy can be run deterministically at test time without loss in performance. For example, ACT [13] sets its variance parameter to zero at test time to produce deterministic behavior. In App. B, we present a variant of streaming flow policy in an extended state space that decouples stochasticity into additional latent variables. This variant allows us to sample multiple modes of the trajectory distribution at test time without adding noise to actions. However, we found that simply setting σ_0 = 0 at test time works better in practice; therefore we follow this strategy in all our experiments.

Imitating actions vs. states: When training trajectories correspond to actions, we start integration of the current action chunk from the most recently generated action in the previous chunk. Streaming flow policy can also be used to imitate robot state trajectories when a controller is available that can closely track desired states. It is especially suited for state imitation because we can start integration of the current state chunk from the current robot state that is accurately measured by proprioceptive sensors. Streaming flow policy is able to leverage state feedback in two ways: in the history h and the initialization a_0 for flow integration. This reduces error in the generated trajectory.

6 Experiments

We evaluate streaming flow policy on two imitation learning benchmarks: the Push-T environment [1, 16], and RoboMimic [15]. We compare our method (in green) against 4 baselines (in red): Row 1 (DP): standard diffusion policy [1] that uses 100 DDPM [9] steps, Row 2 (DP): a faster version of

Columns: RoboMimic Lift Action imitation Avg/Max scores (↑), RoboMimic Can Action imitation Avg/Max scores (↑), RoboMimic Square Action imitation Avg/Max scores (↑), Latency (↓).
1. DP [1], 100 DDPM steps:    100.0%/100.0%   94.0%/98.0%    77.2%/84.0%   53.4 ms
2. DP [1], 10 DDIM steps:     100.0%/100.0%   94.8%/98.0%    76.0%/82.0%    5.8 ms
3. Flow matching policy [5]:   99.2%/100.0%   66.0%/80.0%    54.0%/56.0%    4.8 ms
4. Streaming DP [14]:          98.8%/100.0%   96.8%/98.0%    77.6%/82.0%   30.3 ms
5. SFP without stabilization:  99.6%/100.0%   90.0%/92.0%    53.2%/60.0%    4.5 ms
6. SFP (Ours):                100.0%/100.0%   98.4%/100.0%   78.0%/84.0%    4.5 ms
|
https://arxiv.org/abs/2505.21851v1
|
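The receding-horizon execution described in Sec. 5 (Alg. 2) can be summarized in a short outer loop around the per-chunk integration. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' code: `v_theta`, `execute`, and `q_current` are hypothetical robot interfaces, σ_0 is taken as 0 (deterministic execution), and the horizon values are arbitrary examples.

```python
def run_streaming_flow_policy(v_theta, execute, q_current,
                              T_pred=1.0, T_chunk=0.25, dt=0.01,
                              imitate_state=True, n_chunks=50):
    """Receding-horizon execution in the spirit of Algorithm 2 (sketch).

    v_theta(a, t, history) stands in for the learned velocity field;
    execute(a) sends an action and returns the resulting observation;
    q_current() reads the current robot configuration.
    """
    history = []
    a = q_current()                       # initial action / configuration
    for _ in range(n_chunks):
        t, h_chunk = 0.0, list(history)   # freeze the history for this chunk
        if imitate_state:
            a = q_current()               # re-initialize from the measured state
        while t <= T_chunk / T_pred:      # open-loop within the chunk
            o = execute(a)                # stream each action as it is computed
            history.append(o)
            a = a + v_theta(a, t, h_chunk) * dt   # Euler integration step
            t += dt
    return history
```

When imitating state trajectories, re-initializing each chunk from the measured robot state (rather than the last predicted action) is what lets the policy exploit proprioceptive feedback and reduce accumulated trajectory error.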