arXiv:2505.21938v1 [cs.LG] 28 May 2025Practical Adversarial Attacks on Stochastic Bandits via Fake Data Injection Qirun Zeng1Eric He2Richard Hoffmann2Xuchuang Wang3Jinhang Zuo4 1University of Science and Technology of China2California Institute of Technology 3University of Massachusetts Amherst4City University of Hong Kong Abstract Adversarial attacks on stochastic bandits have traditionally relied on some un- realistic assumptions, such as per-round reward manipulation and unbounded perturbations, limiting their relevance to real-world systems. We propose a more practical threat model, Fake Data Injection, which reflects realistic adversarial constraints: the attacker can inject only a limited number of bounded fake feedback samples into the learner’s history, simulating legitimate interactions. We design efficient attack strategies under this model, explicitly addressing both magnitude constraints (on reward values) and temporal constraints (on when and how often data can be injected). Our theoretical analysis shows that these attacks can mislead both Upper Confidence Bound (UCB) and Thompson Sampling algorithms into selecting a target arm in nearly all rounds while incurring only sublinear attack cost. Experiments on synthetic and real-world datasets validate the effectiveness of our strategies, revealing significant vulnerabilities in widely used stochastic bandit algorithms under practical adversarial scenarios. 1 Introduction Multi-armed bandit (MAB) algorithms are widely used in online decision-making systems for their ability to balance exploration and exploitation using partial feedback. They form the backbone of many interactive applications, including personalized recommendation [ 1], online advertising [ 2], clin- ical trials [ 3], and adaptive routing [ 4]. As these algorithms are increasingly deployed in high-stakes, user-facing systems, growing concerns have emerged regarding their vulnerability to adversarial manipulation. A growing body of work [ 5,6,7,8] has shown that MAB algorithms are vulnerable to adversarial attacks on feedback, where an attacker subtly perturbs the observed rewards to mislead the learning process. Remarkably, even with limited intervention, the attacker can steer the learner toward repeatedly selecting a targeted but suboptimal arm in the vast majority of rounds. However, prior works on adversarial attacks against stochastic bandits [ 5,6,8] typically adopt a feedback-perturbation threat model, where the attacker observes the learner’s chosen arm and the corresponding environment-generated reward in each round, then arbitrarily modifies that reward before it is revealed to the learner. While this model offers strong theoretical leverage, the assumption that an attacker can directly and continuously alter feedback from the environment is often unrealistic in practice. Consider, for example, a restaurant recommendation platform. An attacker seeking to promote a specific restaurant cannot alter the actual feedback submitted by real users in every round. A more feasible strategy is to create fake user accounts that submit biased reviews to influence the recommendation system. Similarly, in online advertising or click-through rate prediction, attackers commonly engage in click fraud or inject synthetic interactions to simulate user behavior—effectively injecting new data rather than modifying genuine observations. These practical scenarios motivate a shift toward more realistic and constrained threat models. Preprint. 
Building on these observations, we propose a more realistic and practically grounded threat model: the Fake Data Injection model. Instead of modifying genuine feedback, the attacker influences the learner indirectly by injecting a limited number of fabricated
(arm, reward) pairs into its interaction history. These fake samples must conform to valid feedback ranges (e.g., binary clicks or 1–5 star ratings), and their injection is subject to constraints such as system-level detection or resource limits. The learner processes these fake interactions indistinguishably from real ones—updating estimates, counts, and decision logic accordingly. This model captures practical attack surfaces overlooked by previous works. It removes the unrealistic assumption of per-round reward manipulation, enables feedback injection on arbitrary arms, and respects the bounded nature of real-world feedback. At the same time, it introduces new algorithmic challenges that cannot be addressed by existing techniques. Unlike the standard model—where the attacker perturbs rewards in real time—fake data injection raises fundamental questions about when , how strongly , and how frequently to inject samples to effectively influence the learner. The attacker must decide: (i) how many fake samples are required to suppress the selections of a non-target arm, (ii) how to achieve this using only bounded reward values, and (iii) how to distribute injections over time when batch size or injection frequency is constrained. These challenges call for new analytical tools and attack strategies that explicitly account for both magnitude constraints (on reward values) andtemporal constraints (on when and how often data can be injected). Our Contributions. We develop a suite of attack strategies tailored to the fake data injection model, addressing both theoretical and practical challenges: •We propose the Least Injection algorithm for the unbounded setting, showing that a single fake sample per non-target arm suffices to steer the learner toward a target arm with sublinear cost. A key technical tool is the Exponential Suppression Lemma , which ensures long-term suppression of non-target arms and guides the design of our subsequent algorithms. •We extend this to the bounded setting via the Simultaneous Bounded Injection (SBI) algo- rithm, which replicates the effect of an unbounded sample using a batch of bounded fake data, while maintaining sublinear cost. •To address stricter constraints, we propose the Periodic Bounded Injection (PBI) algorithm, which injects small batches at controlled intervals. We provide a new suppression analysis to guarantee its effectiveness under temporal and magnitude constraints. •All attack algorithms are analyzed under both UCB and Thompson Sampling, with theoreti- cal guarantees that match the standard threat model in terms of cost and effectiveness. •We validate our methods on the real-world dataset, demonstrating that even sparse, bounded fake data can significantly bias bandit learners in practice. Our work bridges the gap between theoretical models of adversarial bandits and practical data-driven attacks observed in real systems. It introduces new threat models, techniques, and insights that we hope will inspire more realistic evaluations of online learning algorithms in adversarial environments. Related Work. Recent years have seen growing interest in understanding the vulnerability of bandit algorithms under adversarial attacks [ 5,6,9,10,11]. Jun et al. [5]initiated this line of work by designing effective attack strategies against UCB and ϵ-greedy algorithms in the stochastic bandit setting. Their work showed that an attacker can steer the learner toward a suboptimal arm while incurring only sublinear cost. Liu and
Shroff [6] extended this to settings where the learning algorithm is unknown, and further to contextual bandits. Subsequent work has explored increasingly general and complex settings. Garcelon et al. [9] studied attacks on linear contextual bandits, where an adversary can perturb both the context vectors and rewards. More recently, Zuo [8] developed smoothed attack strategies that reduce detectability while maintaining effectiveness. While prior work has shown that bandit algorithms can be manipulated with sublinear cost, most of these attacks rely on a strong feedback-perturbation threat model, where the attacker modifies the observed reward of the selected arm in every round—often with unbounded perturbations [5, 6, 8]. Even in more recent studies that consider bounded feedback [12, 13, 7], the attacker is still allowed to act continuously, and only needs to decide when to attack. In contrast, we introduce the Fake Data Injection Threat Model, where the attacker indirectly influences the learner by inserting fabricated (arm, reward) pairs into its history. This model captures realistic constraints found in systems like recommendation platforms, where attackers can create fake users but cannot modify real feedback. It imposes bounded rewards, supports injection on arbitrary arms, and introduces constraints on both the number and frequency of injections, necessitating new algorithmic techniques.

2 Preliminaries

2.1 Stochastic Bandits

We consider the standard stochastic multi-armed bandit setting with the arm set $[K] := \{1, 2, \ldots, K\}$, where each arm $k \in [K]$ is associated with an unknown reward distribution with mean $\mu_k$. The reward distributions are assumed to be $\sigma^2$-sub-Gaussian, with $\sigma^2$ known. Without loss of generality, we assume the arms are ordered such that $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_K$. In each round $t = 1, 2, \ldots, T$, the learner selects an arm $a_t \in [K]$ and receives a stochastic reward $r_t$ drawn from the distribution corresponding to arm $a_t$. In this paper, we consider two widely used algorithms for stochastic bandits: Upper Confidence Bound (UCB) and Thompson Sampling (TS).

Upper Confidence Bound (UCB). We consider the UCB algorithm as specified in Jun et al. [5], Zuo [8]; the prototype is the $(\alpha, \psi)$ algorithm of Bubeck and Cesa-Bianchi [14, Section 2.2]. In the first $K$ rounds, the learner pulls each arm once to initialize reward estimates. For subsequent rounds $t > K$, the learner selects the arm with the highest UCB index
$$a_t = \arg\max_{a \in [K]} \Big[ \hat{\mu}_a(t) + 3\sigma \sqrt{\tfrac{\log t}{N_a(t)}} \Big],$$
where $\hat{\mu}_a(t)$ is the empirical mean reward of arm $a$ and $N_a(t)$ is the number of times arm $a$ has been selected up to round $t$.

Thompson Sampling (TS). We consider the Thompson Sampling algorithm specified in Zuo [8]; the prototype is the $(\alpha, \psi)$ algorithm of Agrawal and Goyal [15]. In the first $K$ rounds, the learner pulls each arm once. For rounds $t > K$, a sample $\nu_a$ is drawn independently for each arm $a \in [K]$ from the distribution $\mathcal{N}(\hat{\mu}_a(t), 1/N_a(t))$, and the learner selects the arm with the largest sampled value, $a_t = \arg\max_{a \in [K]} \nu_a$.

In the absence of attacks, both UCB and TS are known to achieve sublinear regret by selecting suboptimal arms only $o(T)$ times.
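For concreteness, the following is a minimal Python sketch of the two learners as described above. It is illustrative rather than an official implementation: the class and variable names are our own, the environment loop is omitted, and rewards are assumed to arrive one at a time through a single `update` call.

```python
import math
import random


class BanditLearner:
    """Shared bookkeeping for the UCB and TS learners sketched above."""

    def __init__(self, K, sigma=1.0):
        self.K = K
        self.sigma = sigma          # sub-Gaussian parameter, assumed known
        self.counts = [0] * K       # N_a(t): number of samples seen per arm
        self.means = [0.0] * K      # empirical means mu_hat_a(t)
        self.t = 0                  # internal time step

    def update(self, arm, reward):
        # Incremental mean update; any sample routed through this method
        # changes the learner's state, whether it is genuine or fabricated.
        self.t += 1
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]


class UCBLearner(BanditLearner):
    def select(self):
        if self.t < self.K:         # first K rounds: pull each arm once
            return self.t
        def index(a):
            bonus = 3 * self.sigma * math.sqrt(math.log(self.t) / self.counts[a])
            return self.means[a] + bonus
        return max(range(self.K), key=index)


class ThompsonLearner(BanditLearner):
    def select(self):
        if self.t < self.K:         # first K rounds: pull each arm once
            return self.t
        draws = [random.gauss(self.means[a], math.sqrt(1.0 / self.counts[a]))
                 for a in range(self.K)]
        return max(range(self.K), key=lambda a: draws[a])
```

The shared `update` routine is the only entry point through which observations change the learner's state; this is precisely the interface the fake data injection model of Section 3 exploits.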
2.2 Previous Threat Model and Limitations

We begin by reviewing the standard threat model adopted in prior works on adversarial attacks against bandit algorithms [5, 6, 7, 8]. In each round $t$, the learner selects an arm $a_t$ to play, and the environment generates a pre-attack reward $r^0_t$ drawn from the underlying distribution of arm $a_t$. The attacker then observes the tuple $(a_t, r^0_t)$ and decides an attack value $\alpha_t$. The learner can only receive the post-attack reward $r_t = r^0_t - \alpha_t$. Define the cumulative attack cost as $C(T) = \sum_{t=1}^{T} |\alpha_t|$. The attacker's objective is to manipulate the learner into selecting a specific target arm for a linear number of rounds while incurring only sublinear attack cost. Formally, an attack is considered successful if the attacker can force the learner to select the target arm $T - o(T)$ times while ensuring that $C(T) = o(T)$.

While many attack strategies have been proposed under the standard threat model, it exhibits several critical limitations when applied to practical settings. First, the model assumes that the attacker can perturb the environment-generated reward in every round. This assumption is often unrealistic in real-world applications such as recommender systems. For example, consider an app that recommends restaurants to users and collects their feedback to improve future recommendations. An attacker may wish to bias the system toward recommending a particular restaurant, but it is infeasible to directly modify the feedback of all real users. In practice, a more common attack strategy is to create fake users who submit fabricated feedback. However, even this is constrained by operational or detection limits—adding fake users in every round is highly impractical. Thus, the assumption of per-round attack capability does not reflect realistic adversarial power. Second, the model restricts the attacker to modifying only the reward of the chosen arm in each round. In contrast, fake-user-based attacks offer more flexibility. A fake user can submit feedback on any item (i.e., any arm), regardless of what the learner selected in that round. This means fake data injection enables the attacker to fabricate feedback for arbitrary arms, not just the one currently played by the learner. As a result, the standard model underestimates the attacker's flexibility in practice and overestimates their ability to act at every timestep. Third, the threat model assumes that both the pre-attack reward and the attack values are unbounded. In many systems, user feedback is naturally bounded—e.g., binary click signals or discrete rating scores (e.g., 1 to 5 stars). Allowing arbitrarily large attack values could result in out-of-range or clearly invalid feedback, which would either be filtered out by the system or easily flagged as suspicious. Therefore, attacks that rely on large reward perturbations are incompatible with these bounded-feedback environments.

3 New Threat Model: Fake Data Injection

To address practical limitations of the standard adversarial attack model, we introduce a new and more realistic threat model, which we call the Fake Data Injection Threat Model. This model captures how adversaries behave in real-world systems such as recommendation platforms, where direct manipulation of genuine user feedback is infeasible, and attacks are often carried out by injecting fabricated interactions (e.g., fake users with fake feedback).
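Before the formal definition below, the following hedged sketch illustrates the attack surface this model assumes: the attacker appends fabricated (arm, reward) pairs to the history of a learner that exposes an `update(arm, reward)` method (such as the sketch in Section 2.1), and the learner absorbs them exactly like genuine feedback. The helper name `inject`, the rating bounds, and the optional cost bookkeeping are our illustrative assumptions, not part of the paper's formalism.

```python
# Illustrative sketch of the fake data injection surface (Section 3).
def inject(learner, fake_samples, true_means=None, a_lo=1.0, b_hi=5.0):
    """Append fabricated (arm, reward) pairs to the learner's history.

    Each fake reward is clipped to the valid feedback range [a_lo, b_hi]
    (e.g., 1-5 star ratings), and the learner processes it exactly as it
    would a genuine observation: empirical mean, pull count, and internal
    time step all advance.
    """
    cost = 0.0
    for arm, reward in fake_samples:
        reward = min(max(reward, a_lo), b_hi)  # bounded, plausible feedback
        learner.update(arm, reward)            # indistinguishable from real data
        if true_means is not None:             # attack cost |r_i^F - mu_{a_i^F}|
            cost += abs(reward - true_means[arm])
    return cost
```

As a usage illustration, an attacker following the bounded strategies of Section 4.2 might call something like `inject(learner, [(i, 1.0)] * n_tilde, true_means=mu)` to drive a non-target arm `i`'s empirical mean toward the bottom of the rating scale; `n_tilde` and `mu` are placeholders here.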
In the Fake Data Injection model, the attacker does not interfere with the feedback received by the learner during normal interactions. Instead, the attacker is allowed to inject up to $N_F$ fake data samples, denoted by $\{(a^F_i, r^F_i)\}_{i=1}^{N_F}$, into the learner's history. Each fake data point mimics a legitimate user interaction, where $a^F_i \in [K]$ is the selected arm and $r^F_i \in [\tilde{a}, \tilde{b}]$ is the corresponding reward for two given bounds $\tilde{a} \leq \tilde{b}$. The learner processes these injected samples in the same way as genuine observations—for example, updating the empirical mean $\hat{\mu}_{a^F_i}$, incrementing the pull count $N_{a^F_i}$, and advancing the internal time step $t$. We define the total attack cost as
$$C^F(T) := \sum_{i=1}^{N_F} \big| r^F_i - \mu_{a^F_i} \big|,$$
and consider an attack successful if it can mislead the learner into pulling a target arm for $T - o(T)$ rounds while ensuring $C^F(T) = o(T)$, analogous to prior work.

This new model resolves several key limitations of the previous threat model:

Limited access manipulation. Unlike the standard model—which assumes the attacker can modify the reward in every round—our model reflects the more plausible scenario where the attacker can only inject a limited number of fake interactions. For instance, in a restaurant recommendation app, an attacker cannot tamper with the feedback from real users but can register a finite number of fake accounts to submit biased reviews. It is unrealistic to assume the attacker can do this in every round without detection or resource exhaustion.

Flexible feedback across arms. The standard model restricts the attacker to modifying the reward for the arm chosen by the learner. In contrast, our model allows the attacker to fabricate data for any arm. This mirrors real-world attacks where fake users can submit reviews or feedback on arbitrary items, not just those recommended to them. For example, an attacker aiming to boost a target restaurant can flood the system with positive feedback for that restaurant—regardless of whether it was actually recommended in a specific round.

Bounded and plausible feedback. In our model, the fake rewards must lie within the valid feedback range $[\tilde{a}, \tilde{b}]$, consistent with many practical systems that collect binary clicks or scaled ratings (e.g., 1 to 5 stars). This avoids the unrealistic assumption of unbounded reward modifications, where a single large perturbation could dominate the learner's behavior. Our bounded injection design ensures the fake data remains indistinguishable from legitimate interactions.

4 Attack Strategies

In this section, we develop attack strategies specifically designed for the fake data injection threat model. We begin by studying a simplified setting with unbounded feedback, which serves as a conceptual bridge from prior threat models: instead of directly altering the learner's observed rewards, the attacker injects fake data with unrestricted values. This allows us to highlight the core mechanisms and intuitions behind effective attack strategies. Building on these, we then consider constrained injection attacks, where both the magnitude of fake feedback and the injection frequency are limited. Our goal is to design strategies that balance these two constraints while still steering the learner toward suboptimal behavior efficiently. Without loss of generality, we assume that the target arm is arm $K$, which has the
lowest expected reward.1 1This represents the most challenging case for the attacker and can be easily extended to target any other arm. 4 4.1 Warm-up: Injection Attacks with Unbounded Feedback We begin our study of fake data injection attacks by considering a relaxed setting in which the injected reward values rF ican take arbitrary real values , i.e., the bounded feedback constraint from Section 3 is removed. This setting closely mirrors the standard threat model, where the attacker can directly modify the observed reward as rt=r0 t−αt, allowing unbounded perturbations αtin each round. In this unbounded injection setting, we demonstrate that injecting a single fake data point per non-target arm is sufficient to mislead the learner into favoring the target arm. We formalize this insight through the Least Injection Algorithm, a simple yet effective one-shot attack strategy against the UCB algorithm, as shown in Algorithm 1. Algorithm 1: Least Injection Algorithm on UCB Input: Attack parameter δ0>0 1forround t= 1,2, . . . do 2 foreach non-target arm i∈[K−1]do 3 ifarmihas not been attacked andNi(t) =⌈(logT)/δ2 0⌉then 4 Inject fake data sample: 5 (aF i, rF i) = i, Ni(t)· ˆℓK(t)−ˆµi(t) +ˆℓK(t) 6 t←t+ 1 7 end 8 end 9end The attack operates as follows. For each non-target arm i, we wait until it has been pulled Ni(t) = ⌈(logT)/δ2 0⌉times. At this point, we inject a single fake sample designed to reduce its empirical mean below a high-probability lower bound of the target arm. Specifically, we define the empirical lower confidence bound for target arm Kas: ˆℓK(t):= ˆµK(t)−2β(NK(t))−3σδ0, where β(N):=q 1 2Nlog π2KN2 3δ , and δ0>0is a tunable attack parameter. The injected sample on non-target arm iensures that after the attack, the empirical mean of arm isatisfies: ˆµi(t+ 1)≤ˆℓK(t), (1) thus making it unlikely to be selected in future rounds. The total number of injected fake data points is at most K−1, one per non-target arm. Define ∆i:=µi−µK. We now provide the formal theoretical guarantee of Algorithm 1. Theorem 4.1. Suppose T >2K, δ < 0.5. With probability at least 1−δ, Algorithm 1 forces the UCB algorithm to select the target arm in at least T− O (K−1)(log T)/δ2 0 rounds, using a cumulative attack cost of at most CF(T) =K−1X i=1|rF i−µi| ≤ O K−1X i=1(∆i+ 4β(1) + 3 σδ0)·logT δ2 0! . Compared with the attack algorithm under the standard threat model in [ 5], the Least Injection Algorithm achieves a similar level of target-arm selection with comparable sublinear attack cost. Notably, the parameter δ0controls the trade-off between the number of non-target arm pulls and the attack cost: increasing δ0reduces the number of non-target pulls but increases the cost per injection. However, the marginal benefit diminishes once δ0>√logT, beyond which the cost grows without improving effectiveness. By selecting δ0= Θ(√logT), the cumulative attack cost is minimized to bO(Kσ√logT), which matches the lower bound Ω(√logT)established in [8]. To prove Theorem 4.1, we introduce the following lemma, which plays a central role in our attack design and serves as a key building block for subsequent algorithms. 5 Lemma 4.1 (Exponential Suppression
of Non-Target Arms) .Suppose T > 2K, δ < 0.5. With probability at least 1−δ, for any non-target arm i∈[K−1]that has been pulled Ni(t)times, if a fake data point is injected according to Line 5 of Algorithm 1, then arm iwill not be selected again until at least round exp(Ni(t)δ2 0). Proof Sketch. After the injection, the empirical mean of arm iis reduced such that its UCB index becomes significantly lower than that of the target arm. We analyze the evolution of the UCB indices and show that, unless arm iis pulled again (which it is not), its confidence bound tightens slowly while its empirical mean remains suppressed. By induction over subsequent rounds, we show that the UCB index of arm iremains lower than that of the target arm for an exponential number of rounds, specifically up to round exp(Ni(t)δ2 0). Remark. Lemma 4.1 establishes a critical property of our attack strategy: once a non-target arm ihas been pulled sufficiently and a properly chosen fake data point is injected, its UCB index becomes exponentially suppressed. More precisely, if the following two conditions are satisfied (1) Ni(t)≥(logT)/δ2 0and (2) ˆµi(t+ 1)≤ˆℓK(t), then arm iwill not be selected again until after round T. This suppression effect is crucial: it guarantees that once the attack is applied to arm i, its influence on the learning process becomes negligible for the remaining rounds. The attacker can thus prevent further exploration of non-target arms using only a single injection per arm, ensuring that the learner increasingly concentrates on the target arm. This mechanism forms the backbone of all our attack strategies. In addition to attacking UCB, we extend the Least Injection Algorithm to target the Thompson Sampling algorithm. Specifically, the attacker injects a single fake data point into each non-target armiwhen Ni(t) =⌈logT/δ2 0⌉, using a modified version of Line 5 in Algorithm 1. Due to space limitations, we defer the full algorithm and details to the appendix. We provide its theoretical guarantee below. Theorem 4.2. Suppose T > 2K, δ < 0.5. With probability at least 1−2δ, the modi- fied Least Injection Algorithm forces the Thompson Sampling algorithm to select the target arm in at least T− O((K−1) log T/δ2 0)rounds, using a cumulative attack cost of at most OPK i=1 ∆i+ 4β(1) +q 8 log π2K 3δ + 4√logT logT δ2 0 . Compared with the attack algorithm under the standard threat model studied in [ 8], our approach achieves a similar level of target-arm selection with matching attack cost. By setting the attack parameter δ0= Θ(√logT), we obtain a total cost of O(√logT), which aligns with the known lower bound, demonstrating the near-optimality of our strategy under this relaxed fake data injection model. 4.2 Constrained Injection Attacks We now turn to more realistic and constrained settings where injected fake data must lie within a bounded range. This reflects practical scenarios in which user feedback—such as clicks or ratings—is inherently limited (e.g., binary or on a fixed scale). To address the constraint on individual fake rewards, we first propose a natural extension of Algorithm 1. In
this version, the influence of a single unbounded fake reward is approximated by injecting a batch of bounded fake samples simultaneously for each non-target arm. In practice, however, attackers may face an additional constraint on the number of fake samples that can be injected at any given time, due to resource limitations or system-level detection thresholds. To address this more challenging setting, we introduce a periodic injection strategy that operates under two constraints: (1) each fake reward must be bounded; (2) only a limited number of fake samples can be injected at once. Our strategy carefully coordinates the frequency and timing of injections to maintain effective adversarial influence over the learner while satisfying both constraints. Despite these limitations, we show that the attacker can still successfully manipulate the learner’s behavior with sublinear cost. 4.2.1 Simultaneous Bounded Injection To ensure the attack remains realistic and stealthy, we consider a practical setting in which each injected fake reward must lie within a bounded range [˜a,˜b]. Under this constraint, we propose the 6 Simultaneous Bounded Injection (SBI) algorithm, which extends Algorithm 1 by replicating the effect of a single unbounded fake reward via the injection of multiple bounded fake samples. Specifically, for each non-target arm, the attacker injects a batch of rewards with the minimum value ˜ain a single round to achieve the same suppression of the empirical mean as in the unbounded setting. Algorithm 2: Simultaneous Bounded Injection on UCB Input: Attack parameter δ0, bounded reward range [˜a,˜b] 1forround t= 1,2, . . . do 2 foreach non-target arm i∈[K−1]do 3 ifarmihas not been attacked andNi(t) =⌈(logT)/δ2 0⌉then 4 ˜n←l ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜a·l logT δ2 0mm ; 5 Inject ˜nfake samples (i,˜a); 6 t←t+ ˜n 7 end 8 end 9end As shown in Algorithm 2, once a non-target arm ihas been pulled ⌈(logT)/δ2 0⌉times, the attacker computes ˜nand injects ˜nfake samples with reward ˜a. We make the following assumption to ensure that the suppression of arm i’s empirical mean is always feasible: Assumption 4.1. ˜a≤µK−3β(1)−3σδ0. This assumption guarantees that it is always possible to reduce the empirical means of non-target arms below the lower confidence bound of the target arm. In practice, it can be relaxed to ˜a < ˆµK(t)−2β(NK(t))−3σδ0at the specific round t; we state the worst-case condition for generality. We now present the theoretical guarantee of the SBI algorithm. Theorem 4.3. Suppose T >2K, δ < 0.5and Assumption 4.1 hold. With probability at least 1−δ, Algorithm 2 forces the UCB algorithm to select the target arm in at least T− O K−1X i=1µi+β (logT)/δ2 0 −˜a µK−3β(1)−3σδ0−˜a·logT δ2 0! rounds, using a cumulative attack cost of at most O K−1X i=1(µi−˜a)∆i+ 4β(1) + 3 σδ0 µK−3β(1)−3σδ0−˜alogT δ2 0! , Compared with Theorem 4.1, the attacker now injects ˜n=l ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜a·l logT δ2 0mm fake samples for each non-target arm instead of a single one. While this increases the total number of injected samples, the order of the attack cost remains O(√logT)when the attack parameter is set to δ0= Θ(√logT). Thus, the SBI algorithm maintains asymptotic optimality under more realistic constraints. As in Section
4.1, the SBI algorithm can also be extended to attack the Thompson Sampling algorithm. Due to space constraints, we defer the full algorithm and details to the appendix and present the main theorem below. Theorem 4.4. Suppose T >2K, δ < 0.5. With probability at least 1−2δ, the modified Simultaneous Bounded Injection forces the Thompson sampling algorithm to select the target arm in at least T− O K−1X i=0µK−3β(1)−p 8 log( π2K/(3δ))−4√logT−˜a µi+β(logT/δ2 0)−˜alogT δ2 0! (2) with the attack cost: O K−1X i=1(µi−˜a)∆i+ 4β(1) +p 8 log( π2K/(3δ)) + 4√logT µK−3β(1)−p 8 log( π2K/(3δ))−4√logT−˜alogT δ2 0! (3) 7 4.2.2 Periodic Bounded Injection The SBI algorithm above assumes that the attacker can inject all required fake samples within a single round. However, this assumption may not hold in practice. For example, in a restaurant recommendation system, injecting a large batch of fake (e.g., low-rating) reviews at once may trigger anomaly detection mechanisms, leading the system to filter or ignore the fake data. In contrast, injecting smaller amounts of fake feedback periodically—at a controlled rate—can be significantly less suspicious and more effective in practice. To model this scenario, we introduce a more restrictive and realistic setting where: 1. The attacker can inject at most ffake samples in any single round (batch size constraint); 2.There must be a delay of at least Rirounds between consecutive injections on the same arm i(cooldown constraint). To address this setting, we propose the Periodic Bounded Injection (PBI) algorithm, shown in Algorithm 3. Given a maximum batch size f, the algorithm adaptively schedules periodic injections to suppress the empirical mean of non-target arms while respecting both constraints. Algorithm 3: Periodic Bounded Injection on UCB Input: Attack parameter δ0, reward bound [˜a,˜b], max batch size f 1forround t= 1,2, . . . do 2 foreach non-target arm i∈[K−1]do 3 ifarmihas not been attacked andNi(t) =⌈(logT)/δ2 0⌉then 4 ˜ni←l ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜a·l logT δ2 0mm ; 5 Ri←exp ˆµK(t)−2β(NK(t))−˜µi(t+f) 3σ2 ·(Ni(t) +f) −t−f; 6 nexti←t; 7 end 8 if˜ni>0andnexti≤tthen 9 Inject ffake samples (i,˜a); 10 ˜ni←˜ni−f; 11 t←t+f; 12 nexti←nexti+f+Ri; 13 end 14 end 15end The PBI algorithm distributes the injection of fake samples across multiple rounds rather than injecting them all at once. Once a non-target arm ireaches the designated pull threshold ( ⌈(logT)/δ2 0⌉), the attacker computes both the total number of fake samples ˜nirequired to suppress the empirical mean of arm i, and a waiting interval Ri, which ensures that the fake samples can be injected periodically without allowing arm ito regain a high UCB index. The notation ˜µi(t+f)represents the estimated value of ˆµi(t+f)withffake data injections starting from round t. At each interval of Ri+f rounds, a batch of ffake samples is injected until the total ˜niis exhausted. This strategy effectively balances stealthiness andattack efficacy , making it robust against detection in practical systems with bounded feedback and rate-limited injection constraints. The analysis of cumulative attack cost for PBI is deferred to the appendix, as it is similar to that of Theorem 4.3: the total number of fake samples injected remains the same. What distinguishes PBI is how suppression
is maintained across time , which is guaranteed by the following lemma. Lemma 4.2. The choice of Riin Algorithm 3 ensures that once a batch of ffake data samples is injected into non-target arm i, the arm will not be selected again for at least the next Rirounds. Proof Sketch. This result builds on a modified version of the exponential suppression lemma (Lemma 4.1). Rather than suppressing a non-target arm with a single large injection, we ana- lyze the suppression effect of a partial injection of fbounded fake samples. We show that the first batch induces the weakest suppression, so it suffices to compute Ribased on this worst-case 8 scenario. By ensuring that the UCB index remains below that of the target arm during this interval, we guarantee that arm iis not selected within the next Rirounds. After the c-th period of injection, we have ˆµi≤ˆµK−2β(NK(t))−3σp (log(t+ (f+R)c))/Ni(t+ (f+R)c). And its UCB index will remain lower than arm K’s until at least round t+ (f+R)c. We also extended the PBI algorithm to attack the Thompson Sampling algorithm. Due to space limitations, we defer the detailed algorithm and corresponding results to the appendix. 5 Experiments Figure 1: Attack Costs vs δ0 Figure 2: Attack Costs vs T Figure 3: Target Arm Selec- tion Ratios We evaluate our attack strategies in a realistic setting using the MovieLens 25M dataset [ 16], which reflects the practical motivations of the Fake Data Injection model. Due to space constraints, we report results for the SBI and PBI algorithms on the UCB learner; results for other settings are moved to the appendix. We consider K= 10 arms and simulate user interaction traces with stochastic rewards derived from movie rating distributions. The time horizon is set to T= 100 ,000. For PBI, the per-round injection limit is set to f= 5. We vary the confidence parameter δ0and the horizon Tto evaluate their effect on attack cost and effectiveness. Figure 1 plots the average attack cost as a function of δ0. As expected, increasing δ0reduces the number of required fake samples for suppressing non-target arms, leading to lower attack costs. PBI consistently incurs lower cost than SBI for small δ0due to its more conservative and distributed injection schedule. Figure 2 shows the total attack cost versus the time horizon T. When Tis small, some arms are not pulled often enough to trigger full injection, resulting in a lower realized cost. As Tincreases, the cost gradually converges to the predicted values. Across all settings, PBI performs comparably or better than SBI. Figure 3 tracks the target arm selection ratio over time. Both SBI and PBI are highly effective, with the learner converging to the target arm in nearly all rounds after early exploration. These results confirm that fake data injection attacks remain highly effective under realistic constraints. In particular, the PBI strategy achieves strong empirical performance while often incurring lower attack cost than SBI. This underscores the practical advantage of temporally distributed attacks. 6 Concluding Remarks This work introduces a practical and realistic threat model, Fake Data Injection, for adversarial attacks
on stochastic bandits. In contrast to prior models that assume per-round, unbounded reward perturbations, our framework captures real-world constraints such as bounded feedback, limited injection capability, and the attacker’s inability to modify genuine user data. Within this model, we develop a suite of effective attack strategies that successfully manipulate both UCB and TS algorithms using only sublinear-cost injections. Our theoretical analysis and experimental results demonstrate that even sparse, bounded fake interactions can significantly bias stochastic bandit algorithms. Despite these results, several limitations remain and open avenues for future work. We assume a passive learner that processes all feedback without defense, which may not hold in robust or adversarial-aware systems. Our work also focuses on stochastic bandits; extending to contextual or re- inforcement learning settings remains an open challenge. Additionally, real-world attackers may face detection risks or adaptive filtering by the system—scenarios not captured in our current framework. Future work should explore defense mechanisms such as anomaly detection or arm-level auditing, and investigate the dynamic interplay between attackers and adaptive learners. Addressing these limitations will be crucial for building secure online learning systems in adversarial environments. 9 References [1]Lihong Li, Wei Chu, John Langford, and Robert E Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web , pages 661–670, 2010. [2]Wei Chen, Yajun Wang, Yang Yuan, and Qinshi Wang. Combinatorial multi-armed bandit and its extension to probabilistically triggered arms. Journal of Machine Learning Research , 17 (50):1–33, 2016. [3]Sofía S Villar, Jack Bowden, and James Wason. Multi-armed bandit models for the optimal design of clinical trials: benefits and challenges. Statistical science: a review journal of the Institute of Mathematical Statistics , 30(2):199, 2015. [4]Shuai Li, Baoxiang Wang, Shengyu Zhang, and Wei Chen. Contextual combinatorial cascading bandits. In International conference on machine learning , pages 1245–1253. PMLR, 2016. [5]Kwang-Sung Jun, Lihong Li, Yuzhe Ma, and Jerry Zhu. Adversarial attacks on stochastic bandits. Advances in neural information processing systems , 31, 2018. [6]Fang Liu and Ness Shroff. Data poisoning attacks on stochastic bandits. In International Conference on Machine Learning , pages 4042–4050. PMLR, 2019. [7]Jinhang Zuo, Zhiyao Zhang, Zhiyong Wang, Shuai Li, Mohammad Hajiesmaili, and Adam Wierman. Adversarial attacks on online learning to rank with click feedback. Advances in Neural Information Processing Systems , 36:41675–41692, 2023. [8]Shiliang Zuo. Near optimal adversarial attacks on stochastic bandits and defenses with smoothed responses. In International Conference on Artificial Intelligence and Statistics , pages 2098–2106. PMLR, 2024. [9]Evrard Garcelon, Baptiste Roziere, Laurent Meunier, Jean Tarbouriech, Olivier Teytaud, Alessandro Lazaric, and Matteo Pirotta. Adversarial attacks on linear contextual bandits. Advances in Neural Information Processing Systems , 33:14362–14373, 2020. [10] Yuzhe Ma and Zhijin Zhou. Adversarial attacks on adversarial bandits. arXiv preprint arXiv:2301.12595 , 2023. [11] Huazheng Wang, Haifeng Xu, and Hongning Wang. When are linear stochastic bandits attackable? In International Conference on Machine Learning , pages 23254–23273. PMLR, 2022. 
[12] Yinglun Xu, Bhuvesh Kumar, and Jacob D Abernethy. Observation-free attacks on stochastic bandits. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances
in Neural Information Processing Systems , volume 34, pages 22550–22561. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/ paper/2021/file/be315e7f05e9f13629031915fe87ad44-Paper.pdf . [13] Zichen Wang, Rishab Balasubramanian, Hui Yuan, Mengdi Wang, Huazheng Wang, et al. Adversarial attacks on online learning to rank with stochastic click models. Transactions on Machine Learning Research , 2024. [14] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems, 2012. URL https://arxiv.org/abs/1204.5721 . [15] Shipra Agrawal and Navin Goyal. Near-optimal regret bounds for thompson sampling. J. ACM , 64(5), September 2017. ISSN 0004-5411. doi: 10.1145/3088510. URL https://doi.org/ 10.1145/3088510 . [16] F. Maxwell Harper and Joseph A. Konstan. The movielens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS) , 5(4):1–19, 2015. doi: 10.1145/2827872. URL https://doi.org/10.1145/2827872 . 10 Appendix A Proofs A.1 Concentration Results Suppose that the reward distributions of arms are σ2-sub-Gaussian. The following concentration result will be useful throughout our analysis. Recall that β(N) =r 2σ2 Nlogπ2KN2 3δ. (4) Then we define event Eas: E:={∀i, t,|ˆµi(t)−µi|< β(Ni(t))}, (5) where ˆµi(t)is the empirical mean reward of arm iandNi(t)is the number of times arm ihas been selected up to round t. Lemma A.1 (Lemma 1 in [5]) .Forδ∈(0,1),P{E}>1−δ. We then define another event Fto bound the sampled value νiof arm iin Thompson sampling F:= ∀i, t,|νi(t)−ˆµi(t)|<γ(t) Ni(t) , (6) where γ(t):=q 2 logπ2Kt2 3δ. Lemma A.2 (Lemma 3 in [8]) .Forδ∈(0,1),P{F}>1−δ. A.2 Proofs of Injection Attacks with Unbounded Feedback This section details the proof of Theorem 4.1, Theorem 4.2. First, we present the proof of Lemma 4.1. A.2.1 Proof of Lemma 4.1 Proof. Under event E(defined in Equation (5)), since Algorithm 1 never attacks the target arm K, we can establish a lower bound on its estimate. For any two rounds t2> t1, ˆµK(t2)≥µK−β(NK(t2))≥µK−β(NK(t1))≥ˆµK(t1)−2β(NK(t1)), (7) where the second inequality follows from the monotonicity of β(N). Suppose that at round t1, Algorithm 1 injects fake feedback on arm isuch that ˆµi(t1)≤ˆµK(t1)−2β(NK(t1))−3σδ0. (8) This guarantees that in round t1+ 1, the UCB index of arm isatisfies UCB i(t1+ 1)<UCB K(t1+ 1), so arm iis not selected at round t1+ 1. Now consider any subsequent round t2witht1< t2<exp(niδ2 0), where ni:=Ni(t1), and assume that arm ihas not been selected in any round between t1andt2. Then Ni(t2+ 1) = niand ˆµi(t2+ 1) = ˆ µi(t1), so the UCB index for arm iat round t2+ 1is UCB i(t2+ 1) = ˆ µi(t1) + 3σs log(t2+ 1) ni ≤ˆµK(t1)−2β(NK(t1))−3σδ0+ 3σs log(t2+ 1) ni ≤ˆµK(t1)−2β(NK(t1))(since t2<exp(niδ2 0)) ≤ˆµK(t2+ 1) (by (7)) ≤UCB K(t2+ 1), 11 where the third step uses the boundq logt2+1 ni< δ0. This argument shows that the UCB index of armiremains strictly lower than that of the target arm Kfor all t2<exp(niδ2 0). By induction, arm iwill not be selected again until at least exp(niδ2 0)rounds have passed. As a direct corollary of Lemma 4.1, if arm isatisfies ˆµi(t)≤ˆℓK(t)for any round t, and has been pulled more thanlogT δ2 0times, then arm iwill not be selected again before round T. Meanwhile, we present an analogous result for Thompson sampling in Lemma A.3, which admits a similar corollary under the same
condition. For simplicity, we define ˆℓ′ K(t) = ˆµK(t)−2β(NK(t))−q 8 log π2K 3δ −4p Ni(t)δ0 Lemma A.3. For each non-target arm i∈[K−1], ifˆµi(t)≤ˆℓ′ K(t), then with probability at least 1−2δ, arm iwill not be selected again until at least round ⌊exp(Ni(t)δ2 0)⌋. Proof. Suppose that at round t1, the following inequality holds: ˆµi(t1)≤ˆµK(t1)−2β(NK(t1))− r 8 logπ2K 3δ+ 4p Ni(t1)δ0! . (9) Letni:=Ni(t1)and consider any round t2such that t1< t2<⌊exp(niδ2 0)⌋. Assuming that arm iis not selected from round t1tot2, then Ni(t2+ 1) = niandˆµi(t2+ 1) = ˆ µi(t1). Applying the concentration bounds from Lemmas A.1 and A.2, the sampled value νi(t2+ 1) for arm isatisfies: νi(t2+ 1)<ˆµi(t2+ 1) + γ(t2+ 1) ≤ˆµK(t1)−2β(NK(t1))−r 8 logπ2K 3δ−4√niδ0+γ(t2+ 1) ≤ˆµK(t2+ 1)−r 8 logπ2K 3δ−4√niδ0+γ(t2+ 1) ≤ˆµK(t2+ 1)−r 8 logπ2K 3δ+ 16niδ2 0+γ(t2+ 1) = ˆµK(t2+ 1)−r 8 logπ2Kt2 3δ+γ(t2+ 1) ≤ˆµK(t2+ 1)−γ(t2+ 1) < νK(t2+ 1), where the last inequality uses the fact that γ(t2+ 1) is an upper confidence width and νK(t2+ 1)> ˆµK(t2+ 1)−γ(t2+ 1) with high probability. This chain of inequalities implies that the sampled value νi(t2+ 1) remains lower than νK(t2+ 1) for all t2<⌊exp(niδ2 0)⌋, with probability at least 1−2δ. Therefore, arm iwill not be selected again until at least round ⌊exp(niδ2 0)⌋. A.2.2 Proof of Theorem 4.1 Proof. Assume event Eholds. After a single injection on each non-target arm i∈[K−1], Algorithm 1 ensures that ˆµi(t)≤ˆµK(t)−2β(NK(t))−3σδ0and Ni(t)≥logT δ2 0. By Lemma 4.1, this guarantees that arm iwill not be selected before round Twith high probability. Define ˜nias the total number of injected data samples for arm i. Since Algorithm 1 performs a single injection per non-target arm, we have ˜ni= 1for all i∈[K−1]. We now analyze the total attack 12 cost: K−1X i=1CF i(T) =K−1X i=1 ˆµi(t)Ni(t) +µi˜ni−ˆℓK(t)(Ni(t) + ˜ni) ≤K−1X i=1 µi+β(Ni(t)) Ni(t) +µi−(ˆµK−2β(NK(t))−3σδ0)(Ni(t) + 1) ≤K−1X i=1 µi+β(Ni(t)) Ni(t) +µi−(µK−3β(1)−3σδ0)(Ni(t) + 1) ≤K−1X i=1 µi−µK+ βlogT δ2 0 + 3β(1) + 3 σδ0logT δ2 0 + 1 ≤K−1X i=1(∆i+ 4β(1) + 3 σδ0)logT δ2 0 + 1 =O K−1X i=1(∆i+ 4β(1) + 3 σδ0)logT δ2 0! . Therefore, both the cumulative attack cost and the number of non-target arm pulls are sublinear in T, completing the proof. A.2.3 Proof on Theorem 4.2 Proof. Suppose events EandFhold. After a single injection on arm iat round t, we ensure that Ni(t)≥logT δ2 0,ˆµi(t)≤ˆℓ′ K(t),where ˆℓ′ K(t)is the adjusted threshold for suppressing arm iunder Thompson sampling. By Lemma A.3, arm iwill not be selected again before round Twith high probability. Since Algorithm 4 performs a single injection per non-target arm, we have ˜ni= 1for all i∈[K−1]. We now analyze the total attack cost: K−1X i=1CF i(T) =K−1X i=1 ˆµi(t)Ni(t) +µi˜ni−ˆℓ′ K(t)(Ni(t) + ˜ni) ≤K−1X i=1((µi+β(Ni(t)))Ni(t) +µi) −K−1X i=1 ˆµK(t)−2β(NK(t))−r 8 logπ2K 3δ−4p Ni(t)δ0!logT δ2 0 + 1 ≤K−1X i=1 µi+βlogT δ2 0logT δ2 0 +µi −K−1X i=1 µK−3β(1)−r 8 logπ2K 3δ−4slogT δ2 0 δ0!logT δ2 0 + 1 ≤K−1X i=1 µi−µK+ 4β(1) +r 8 logπ2K 3δ+ 4slogT δ2 0 δ0!logT δ2 0 + 1 =O K−1X i=1 ∆i+ 4β(1) +r 8 logπ2K 3δ+ 4p logT! logT δ2 0! . Therefore, both the
cumulative attack cost and the number of non-target arm pulls are sublinear in T, completing the proof. 13 Algorithm 4: Least Injection Algorithm on Thompson sampling Input: Attack parameter δ0>0 1forround t= 1,2, . . . do 2 foreach non-target arm i∈[K−1]do 3 ifarmihas not been attacked andNi(t) =l logT δ2 0m then 4 Inject fake data sample: 5 (aF i, rF i) = i, Ni(t)· ˆℓ′ K(t)−ˆµi(t) +ˆℓ′ K(t) ; 6 t←t+ 1; 7 end 8 end 9end A.3 Proofs of Simultaneous Bounded Injection A.3.1 Proof of Theorem 4.3 Proof. According to Algorithm 2, after injecting ˜nisamples with value ˜ainto arm iat round t, the empirical mean at round t+ ˜nibecomes: ˆµi(t+ ˜ni) =ˆµi(t)Ni(t) + ˜ni·˜a Ni(t) + ˜ni ≤ˆµi(t)Ni(t) +ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜aNi(t)·˜a Ni(t) +ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜aNi(t) =ˆµi(t) +ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜a˜a 1 +ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜a =ˆµi(t)ˆℓK(t)−ˆℓK(t)˜a ˆµi(t)−˜a ≤ˆℓK(t), where the last step ensures that after injection, arm i’s empirical mean is suppressed below the target threshold. Since event Eholds, and by Lemma 4.1, arm iwill not be selected after round t+ ˜ni. The total number of pulls for arm iis therefore: Ni(t) + ˜ni≤logT δ2 0 +ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜a·logT δ2 0 =ˆµi(t)−˜a ˆℓK(t)−˜a·logT δ2 0 ≤µi+β(Ni(t))−˜a ˆµK(t)−2β(NK(t))−3σδ0−˜a·logT δ2 0 ≤µi+βl logT δ2 0m −˜a µK−3β(1)−3σδ0−˜a·logT δ2 0 =O µi+β logT δ2 0 −˜a µK−3β(1)−3σδ0−˜a·logT δ2 0 , where we used concentration bounds for ˆµi(t)andˆµK(t)under event Eand the non-decreasing property of β(·). 14 The total attack cost can be calculated as: K−1X i=1CF i(T) =K−1X i=1 ˆµi(t)Ni(t) +µi˜ni−ˆℓK(t)(Ni(t) + ˜ni) ≤K−1X i=1(β(Ni(t))Ni(t) +µi(Ni(t) + ˜n)) −K−1X i=1(µK−3β(NK(t))−3σδ0)(Ni(t) + ˜n) =K−1X i=1 βlogT δ2 0logT δ2 0 + (µi−µK+ 3β(1) + 3 σδ0)(Ni(t) + ˜n) ≤K−1X i=1 (µi−˜a)∆i+β(⌈(logT)/δ2 0⌉) + 3β(1) + 3 σδ0 µK−3β(1)−3σδ0−˜alogT δ2 0 ≤K−1X i=1 (µi−˜a)∆i+ 4β(1) + 3 σδ0 µK−3β(1)−3σδ0−˜alogT δ2 0 =O K−1X i=1 (µi−˜a)∆i+ 4β(1) + 3 σδ0 µK−3β(1)−3σδ0−˜alogT δ2 0! . Therefore, both the cumulative attack cost and the number of non-target arm pulls are O logT δ2 0 per arm, and hence sublinear in T, completing the proof. A.3.2 Proof of Theorem 4.4 Proof. According to Algorithm 5, after injecting ˜nisamples with value ˜ainto arm iat round t, the empirical mean at round t+ ˜nibecomes: ˆµi(t+ ˜ni) =ˆµi(t)Ni(t) + ˜ni˜a Ni(t) + ˜ni ≤ˆµi(t)Ni(t) +ˆµi(t)−ˆℓ′ K(t) ˆℓ′ K(t)−˜aNi(t)˜a Ni(t) +ˆµi(t)−ˆℓ′ K(t) ˆℓ′ K(t)−˜aNi(t) =ˆµi(t) +ˆµi(t)−ˆℓ′ K(t) ˆℓ′ K(t)−˜a˜a 1 +ˆµi(t)−ˆℓ′ K(t) ˆℓ′ K(t)−˜a =ˆµi(t)ˆℓ′ K(t)−ˆℓ′ K(t)˜a ˆµi(t)−˜a ≤ˆℓ′ K(t+ni), where the last step ensures that after injection, arm i’s empirical mean is suppressed below the target threshold. Since events EandFhold, and by Lemma A.3, arm iwill not be selected after round t+ ˜ni. The total number of pulls for arm iis therefore: 15 Ni(t) + ˜ni=logT δ2 0 +ˆµi(t)−ˆℓ′ K(t) ˆℓ′ K(t)−˜alogT δ2 0 =ˆµi(t)−˜a ˆℓ′ K(t)−˜alogT δ2 0 ≤µi+β(Ni(t))−˜a ˆµK(t)−2β(NK(t))−q 8 logπ2K 3δ−4p Ni(t)δ0−˜alogT δ2 0 ≤µi+β(⌈(logT)/δ2 0⌉)−˜a µK−3β(1)−q 8 logπ2K 3δ−4p ⌈(logT)/δ2 0⌉δ0−˜alogT δ2 0 =O µi+β((log T)/δ2 0)−˜a µK−3β(1)−q 8 logπ2K 3δ−4√logT−˜alogT δ2 0 . Letˆi=µi+βl logT δ2 0m −˜aandˆk=µK−3β(1)−q 8 logπ2K 3δ−4p Ni(t)δ0−˜a. The total attack cost can be calculated as: K−1X i=1CF i(T) =K−1X i=1 ˆµi(t)Ni(t) +µi˜ni−ˆℓ′ K(t)(Ni(t) + ˜ni) ≤K−1X i=1(µi(Ni(t) + ˜n)
+β(Ni(t))Ni(t)) −K−1X i=1 ˆµK(t)−2β(NK(t))−r 8 logπ2K 3δ−4p Ni(t)δ0! (Ni(t) + ˜n) ≤K−1X i=1 βlogT δ2 0logT δ2 0 +µi ˆi ˜klogT δ2 0!! −K−1X i=1 µK−3β(1)−r 8 logπ2K 3δ−4p Ni(t)δ0−˜a+ ˜a!˜i ˜klogT δ2 0 =K−1X i=1 βlogT δ2 0logT δ2 0 + (µi−˜a)˜i ˜klogT δ2 0 −K−1X i=1 ˜ilogT δ2 0 =K−1X i=1 (µi−˜a)˜i ˜k−˜i+βlogT δ2 0logT δ2 0 =K−1X i=1 (µi−˜a)˜i ˜k−(µi−˜a)logT δ2 0 =K−1X i=1 (µi−˜a)∆i+βl logT δ2 0m + 3β(1) +q 8 logπ2K 3δ+ 4p Ni(t)δ0 µK−3β(1)−q 8 logπ2K 3δ−4p Ni(t)δ0−˜a logT δ2 0 ≤K−1X i=1 (µi−˜a)∆i+ 4β(1) +q 8 logπ2K 3δ+ 4rl logT δ2 0m δ0 µK−3β(1)−q 8 logπ2K 3δ−4rl logT δ2 0m δ0−˜a logT δ2 0 =O K−1X i=1 (µi−˜a)∆i+ 4β(1) +q 8 logπ2K 3δ+ 4√logT µK−3β(1)−q 8 logπ2K 3δ−4√logT−˜a logT δ2 0 . 16 Therefore, both the cumulative attack cost and the number of non-target arm pulls are O logT δ2 0 per arm, and hence sublinear in T, completing the proof. Algorithm 5: Simultaneous Bounded Injection Attack on Thompson Sampling Input: Number of real users R, target arm K, parameter δ0, lower bound ˜a 1forround t= 1,2, . . . do 2 foreach non-target arm i∈[K−1]do 3 ifarmihas not been attacked andNi(t) =l logT δ2 0m then 4 ˜n← ˆµi(t)−ˆℓ′ K(t) ˆℓ′ K(t)−˜a·l logT δ2 0m ; 5 Inject ˜nfake samples (i,˜a); 6 t←t+ ˜n 7 end 8 end 9end A.4 Proofs of Periodic Bounded Injection A.4.1 Proof of Lemma 4.2 Proof. Suppose event Eholds. We begin by estimating the total number of fake samples needed to demote arm i. This quantity is given by ˜ni=& ˆµi(t)−ˆℓK(t) ˆℓK(t)−˜a·logT δ2 0' , where ˆℓK(t)is a conservative lower bound on arm K’s empirical mean, and ˜ais the value of each injected fake sample. Under the batch size constraint f, the attack spansl ˜ni fm periods, with each period injecting ffake samples. Our goal is to choose an appropriate delay parameter Risuch that after each injection, arm iis not selected again until the next scheduled injection. Specifically, we require that after the c-th batch (for anyc∈ {1,2, . . . ,⌈˜ni f⌉}), arm iis not selected for at least Rirounds. Letti(c) =t+ (f+Ri)cdenote the round before the c+ 1-th injection. We examine the UCB index of arm iat time ti(c): UCB i(ti(c)) = ˆµi(ti(c)) + 3 σs logti(c) Ni(t) +fc =Ni(t)ˆµi(t) +fc˜a Ni(t) +fc+ 3σs logti(c) Ni(t) +fc. (10) To ensure arm iis not selected before round ti(c), we want its UCB index to be no larger than that of armK. A sufficient condition is UCB i(ti(c))≤ˆµK(t)−2β(NK(t))≤UCB K(ti(c)), (11) where we use the lower bound ˆµK(t)−2β(NK(t))to conservatively approximate arm K’s UCB. Define ˜µi(ti(c))as the post-injection empirical mean of arm i: ˜µi(ti(c)) =Ni(t)ˆµi(t) +fc˜a Ni(t) +fc. 17 Then, the condition in (11) implies that, for any c, the delay parameter Ri(c)must satisfy: Ri(c)≤exp ˆµK(t)−2β(NK(t))−˜µi(ti(c)) 3σ2 ·(Ni(t) +fc) −t c−f. (12) Finally, to ensure that this condition holds for every injection period, we define the overall delay parameter as the minimum over all c: Ri= min 1≤c≤⌈˜ni f⌉exp ˆµK(t)−2β(NK(t))−˜µi(ti(c)) 3σ2 ·(Ni(t) +fc) −t c−f. (13) This choice of Riensures that
after each batch of ffake samples, arm iwill not be pulled again until the next scheduled injection. Correction to Algorithm 3. In the original version of Algorithm 3, we set Ri=Ri(c= 1) according to Eq. (12). However, since Ri(c= 1) is not always the minimum over c, the corrected formulation should use the minimization in Eq. (13). Nevertheless, based on our experimental observations, we find that c= 1typically yields the smallest value of Ri(c)in practice. We provide the following sufficient condition under which Ri(c= 1) is guaranteed to be the minimizer: ˆµK(t)−2β(NK(t))−˜µi(ti(1)) 3σ>1. Proof. We simplify the expression for Ri(c)in (12) as follows: Ri(c) =exp h(c)g(c) −t c−f, where h(c) =ˆµK(t)−2β(NK(t))−˜µi(ti(c)) 3σ2 , g(c) =Ni(t) +fc. Note that h(c)>0and is non-decreasing in c(i.e.,h′(c)≥0), while g(c)is clearly increasing in c. We now examine the derivative of Ri(c): d dcRi(c) =d dcexp(h(c)g(c))−t c =exp(h(c)g(c)) c2[h(c)g(c) (h′(c)g(c) +h(c)g′(c))−1] +t c2. Since g′(c) =fandg(c) =Ni(t) +fc, we further bound this as: d dcRi(c)≥exp(h(c)g(c)) c2 h2(c)g(c)g′(c)−1 =exp(h(c)g(c)) c2 h2(c)f(Ni(t) +fc)−1 ≥exp(h(c)g(c)) c2 h2(1)f(Ni(t) +f)−1 . Therefore, if h(1)>1, then h2(1)f(Ni(t) +f)>1, which ensures that the derivative is strictly positive for all c≥1. This implies that Ri(c)is strictly increasing in c, and thus Ri(1)is the minimizer. 18 A.5 Proof of Periodic Bounded Injection on Thompson Sampling Algorithm 6: Periodic Bounded Injection on Thompson Sampling Input: Attack parameter δ0, reward bound [˜a,˜b], max batch size f 1forround t= 1,2, . . . do 2 foreach non-target arm i∈[K−1]do 3 ifarmihas not been attacked andNi(t) =l logT δ2 0m then 4 ˜ni← ˆµi(t)−ˆℓ′ K(t) ˆℓ′ K(t)−˜a·l logT δ2 0m ; 5 Ri= min1≤c≤⌈˜ni f⌉ 1 cr 3δ π2K·exp (ˆµK(t)−2β(NK(t))−˜µi(ti(c)))2 8 −t −f ; 6 nexti←t; 7 end 8 if˜ni>0andnexti≤tthen 9 Inject ffake samples (i,˜a); 10 ˜ni←˜ni−f; 11 t←t+f; 12 nexti←nexti+f+Ri; 13 end 14 end 15end Lemma A.4. The choice of Riin Algorithm 6 ensures that once a batch of ffake data samples is injected into non-target arm i, the arm will not be selected again for at least the next Rirounds. Proof. We aim to guarantee that arm iis not selected between successive fake sample injections. Let ˜νiand˜νKdenote the Thompson-sampled values of arm iand arm K, respectively, after the c-th injection. Let ti(c) =t+ (f+Ri)cdenote the round before the c+ 1-th injection. After injecting ffake samples with value ˜aforcperiods, the empirical mean of arm ibecomes ˜µi(ti(c)) =Ni(t)ˆµi(t) +fc˜a Ni(t) +fc. We want to ensure that arm iis unlikely to be selected before time ti(c)by ensuring: ˜νi(ti(c))≤˜µi(ti(c)) +γ(ti(c)) ≤ˆµK(t)−2β(NK(t))−γ(ti(c))≤˜νK(ti(c)), (14) where γ(t) =q 8 log π2Kt2 3δ bounds the Thompson sampling deviation under event F. Rearranging the middle inequality in (14), we require: γ(ti(c))≤ˆµK(t)−2β(NK(t))−˜µi(ti(c)). (15) Solving (15) for ti(c)gives: ti(c)≤s 3δ π2K·exp1 8(ˆµK(t)−2β(NK(t))−˜µi(ti(c)))2 . Therefore, we define: Ri= min 1≤c≤⌈˜ni f⌉r 3δ π2K·exp (ˆµK(t)−2β(NK(t))−˜µi(ti(c)))2 8 −t c−f. (16) This ensures that after each batch of ffake samples, arm iis suppressed for at least Rirounds under events EandF. Hence, the learner will not select arm ibetween consecutive injections. 19 B Additional Experiments B.1 Experimental Setup Our attack strategies were evaluated on both synthetic data and real-world user-item interaction data derived from the MovieLens
dataset [ 16]. We considered a 10-armed stochastic bandit setup for for all experiments. For the synthetic setting, each arm’s reward distribution was modeled as a Gaussian with mean in the range [0,1]and fixed standard deviation σ= 1. We designated the arm with the lowest mean as the target arm to be attacked. Simulations were run for up to 106rounds using either the UCB algorithm or Thompson sampling, with our attack strategies applied at predefined intervals. For the real-world experiments, we used the MovieLens 25M dataset. Ratings were binarized into a sparse user-item interaction matrix, where each entry indicates whether a user interacted with a movie. We then extracted a submatrix comprising the 1000 most active users and the 1000 most interacted-with movies. In each trial, 10 movies were randomly selected as arms, with the movie having the fewest interactions chosen as the target arm. The reward of each arm was defined as the average interaction rate (i.e., the mean of the corresponding binary column). This setup provides a realistic approximation of reward feedback in a recommender system, enabling us to evaluate the attack algorithms in a practical and data-driven context. We measured the effectiveness of the attacks by tracking the cumulative pull ratio of the target arm overTrounds. To assess both robustness and cost-efficiency, we further analyzed how the total attack cost varies with respect to different values of δ0and the time horizon T. B.2 Attacks on Thompson Sampling Figure 4: Attack Costs vs δ0 Figure 5: Attack Costs vs T Figure 6: Target Arm Selec- tion Ratios We evaluate our attack strategies in a simulated environment using synthetic data. Specifically, we consider a multi-armed bandit setting with K= 10 arms, whose mean rewards follow a descending sequence {0.9,0.85, . . . , 0.45}to ensure clear arm differentiation. We compare the performance of three attack methods—single injection, SBI, and PBI—against a Thompson Sampling learner. The time horizon is set to T= 100 ,000steps, and the per-round injection cap for PBI is fixed at f= 10 . To understand the trade-off between attack cost and effectiveness, we systematically vary both the confidence parameter δ0and the time horizon T. Figure 4 illustrates how the average total attack cost varies with the confidence parameter δ0. Asδ0 increases, the statistical threshold for suppressing non-target arms becomes more lenient, resulting in significantly fewer fake data injections. Consequently, both the SBI and PBI strategies exhibit a marked reduction in cost, while the Single Injection maintains a consistently low cost due to its one-shot nature. Figure 5 examines the total attack cost as a function of the time horizon T. The cost of SBI continues to grow with T, whereas PBI flattens out, highlighting its efficiency in long-term scenarios. As expected, the Single Injection strategy incurs the lowest cost overall. Figure 6 tracks the target arm selection ratio over time. All three strategies successfully induce the learner to select the target arm in over 95% of rounds after approximately 20,000 steps and maintain this dominance throughout the remaining horizon. These results confirm that all proposed strategies are effective
arXiv:2505.21954v1 [cs.CV] 28 May 2025UniTalk: Towards Universal Active Speaker Detection in Real World Scenarios Le Thien Phuc Nguyen1∗Zhuoran Yu1∗Khoa Quang Nhat Cao1Yuwei Guo1 Tu Ho Manh Pham1Tuan Tai Nguyen1Toan Ngo Duc Vo1Lucas Poon1 Soochahn Lee2Yong Jae Lee1 1University of Wisconsin–Madison2Kookmin University Abstract We present UNITALK, a novel dataset specifically designed for the task of active speaker detection, emphasizing challenging scenarios to enhance model generaliza- tion. Unlike previously established benchmarks such as A V A, which predominantly features old movies and thus exhibits significant domain gaps, UNITALK focuses explicitly on diverse and difficult real-world conditions. These include underrep- resented languages, noisy backgrounds, and crowded scenes — such as multiple visible speakers speaking concurrently or in overlapping turns. It contains over 44.5 hours of video with frame-level active speaker annotations across 48,693 speaking identities, and spans a broad range of video types that reflect real-world conditions. Through rigorous evaluation, we show that state-of-the-art models, while achieving nearly perfect scores on A V A, fail to reach saturation on UNITALK, suggesting that the ASD task remains far from solved under realistic conditions. Neverthe- less, models trained on UNITALK demonstrate stronger generalization to modern “in-the-wild” datasets like Talkies and ASW, as well as to A V A. UNITALK thus es- tablishes a new benchmark for active speaker detection, providing researchers with a valuable resource for developing and evaluating versatile and resilient models. Dataset: https://huggingface.co/datasets/plnguyen2908/UniTalk-ASD Code: https://github.com/plnguyen2908/UniTalk-ASD-code 1 Introduction Active speaker detection (ASD) [ 4,27,23,18,15,26,11] aims to identify whether a visible person in a video is speaking. This task plays a critical role in various downstream applications, including speaker diarization [ 16,24], audiovisual speech recognition [ 25,2,17], and human-robot interaction [ 12,21, 22]. To support the development of ASD models, several benchmark datasets have been proposed [ 20, 13,4], most notably the A V A-ActiveSpeaker dataset [ 20], which is constructed entirely from movie content . A V A-ActiveSpeaker has become the de facto benchmark for evaluating ASD models and has driven significant progress in the field, with recent methods reporting nearly perfect mAP scores (e.g., >95% [26, 11]), leading many to consider ASD a solved problem in practice. While A V A-ActiveSpeaker [ 20] has been instrumental in driving progress, its reliance on movie data limits its ability to represent the complexities of real-world scenarios. In practice, ASD models must handle a wide range of challenges that are less common or absent in movie content, such as underrepresented spoken languages, noisy backgrounds (e.g., street sounds, music, or overlapping speech), and crowded scenes involving multiple people, occlusions, or dynamic camera motion. These factors are critical for deployment in settings like video conferencing, social media, and live broadcasts. However, the lack of benchmark coverage along these axes makes it difficult to assess model robustness or make meaningful improvements in generalization. To address this gap, we introduce UNITALK, a new benchmark dataset for active speaker detection in real-world scenarios. While some prior datasets include online videos [ 4,13], they are neither ∗Equal Contribution Preprint. 
II) Dubbed Movies English / Chinese / Korean Audio Dubbed in French High / Low Noise Crowded / Uncrowded Scene Figure 1: Comparison between A V A and UNITALK.A V A [ 20] primarily consists of movie content often with clean audio and simple visual composition. It also includes dubbed videos, where the audio is artificially overlaid and may not align with visible speech, potentially limiting the reliability of audiovisual supervision. In contrast, UNITALK features diverse real-world scenarios, including crowded scenes, underrepresented languages, noisy backgrounds, and combinations thereof. Each row shows a representative clip from a subcategory in UNITALK, with icons indicating language, noise level, and visual complexity. curated to reflect the key challenges of real-world deployment nor comparable to A V A [ 20] in scale. In contrast, UNITALK is constructed with an emphasis on diversity across multiple axes of difficulty, including underrepresented languages, noisy backgrounds, and crowded scenes — such as multiple visible speakers speaking concurrently or in overlapping turns. The dataset contains over 44.5 hours of video with frame-level active speaker annotations across 48,693 speaking identities, spanning a broad range of video types that reflect real-world conditions across these targeted difficulty axes. Fig. 1 highlights the key advantages of U NITALK over A V A. Evaluation . For overall evaluation, we follow prior work [ 20] and use mean average precision (mAP) over the full test set (i.e., for each visible person in each video frame, we evaluate whether the model correctly predicts active speaking status). This provides a standardized metric for comparison with existing benchmarks and highlights the greater difficulty and headroom for improvement in UNITALK. In addition, UNITALK enables fine-grained evaluation through a set of curated subcategories, each designed to stress a specific axis of difficulty. Specifically, the test set is further partitioned into four subsets: (1) underrepresented languages , consisting of videos in languages that are less prevalent than English in both existing benchmarks and online media (e.g., East Asian languages), paired with clean audio and simple scenes; (2) noisy backgrounds , containing videos with strong ambient noise but in well-represented languages such as English; (3) crowded scenes , featuring visually challenging conditions such as multiple visible speakers or frequent occlusions; and (4) hard examples from mixed-difficulty , which contain at least two of the above difficulty factors. This protocol supports more detailed analysis and highlights failure modes not evident from overall scores. Key Findings. Despite achieving near-perfect performance on A V A [ 20], state-of-the-art ASD models [ 4,27,23,18,15,26,11] show a clear drop on UNITALK, suggesting that existing bench- marks may not reflect real-world readiness. This gap is not due to UNITALK being unreasonably difficult—models trained on it generalize better to other in-the-wild datasets [ 13,4] than those trained 2 on A V A. Our results indicate that the ASD task remains far from solved when evaluated under more realistic conditions, underscoring the need for future model development and benchmarking to move beyond A V A and toward more representative datasets like U NITALK. 2 Related Work Active Speaker Detection Datasets. Some pioneering work, such
as V oxCeleb [ 19] and Columbia Dataset [ 6], explored audio-visual speaker detection but focused primarily on constrained scenarios like monologue-style speech or interview settings. A V A-ActiveSpeaker [ 20] later emerged as the largest and most widely-used benchmark, offering frame-level annotations but relying heavily on movie content, including a subset of dubbed clips, which limits its relevance for many real-world applications. Recent datasets such as Talkies [ 4] and ASW (Active Speaker in the Wild) [ 13] intro- duced web videos from YouTube and V oxConverse [ 7], offering improved diversity. However, they remain limited in scale and do not systematically capture the challenges encountered in practical de- ployment. To address these limitations, we introduce UNITALK, a comprehensive and systematically curated benchmark designed to evaluate model performance across three specific axes of difficulty: underrepresented languages, varying levels of background noise, and crowded visual environments. Active Speaker Detection Methods. ASD involves determining whether a person visible in a video frame is actively speaking, a task that requires effective audiovisual modeling. Existing methods typically fall into two primary training strategies: multi-stage and single-stage frameworks. Multi-stage approaches, such as UniCon [ 29], ASC [ 3], ASDNet [ 14], and SPELL [ 18], train feature encoders independently before context modeling. Conversely, single-stage methods like TalkNet [ 23], Light-ASD [ 15], LoCoNet [ 26], and TalkNCE [ 11] simultaneously optimize the encoder and context modules. Moreover, context modeling has become a key focus in recent ASD research, with many approaches leveraging long-term temporal dependencies to improve speaker activity prediction. These efforts often rely on recurrent neural networks (RNNs) [ 15,14,29,3], graph neural networks [ 18,5], attention mechanisms [ 26,8], or hybrid architectures combining attention and RNN [ 3]. LoCoNet [ 26] introduces a dual attention mechanism—self-attention for modeling intra-speaker dynamics and CNNs for inter-speaker interactions. TalkNCE [ 11] extends this architecture by incorporating contrastive learning to better separate speaker embeddings in the context space. While these methods achieve strong performance on A V A, their performance on UNITALK indicates that substantial headroom remains under real-world conditions. 3 U NITALK Dataset 3.1 Preliminary: Active Speaker Detection The goal of active speaker detection (ASD) is to make a binary decision Y∈[0,1]Tgiven a face trackV∈RT×H×Wand a corresponding audio track A∈R4T×M, where Tis the temporal length of the face track, HandWare the spatial dimensions of each face, and Mis the number of Mel- spectrogram frequency bins. State-of-the-art ASD models [ 4,14,23,18,15,26,11] typically consist of two main components: an audio-visual encoder and a context modeling module. The encoder includes a visual encoder Fvand an audio encoder Fa. Specifically, Fvtakes the face track and produces a visual feature fv∈RT×Dv, where Dvis the visual embedding dimension. The audio encoder Fatransforms the Mel-spectrogram into an audio feature fa∈RT×Da, where Dais the audio embedding dimension. These features are concatenated along the embedding dimension to form the combined representation fav∈RT×(Dv+Da). This fused feature is then passed through a context modeling module C, producing the final context-aware feature f′ av, which is used for the main prediction. 
Finally, the model employs three linear classifiers: two auxiliary
classifiers that use f_a and f_v respectively, and one main classifier that uses f'_av. All encoders and classifiers are trained jointly using the following loss function:
$$\mathcal{L}_{asd} = \lambda_{av}\mathcal{L}_{av} + \lambda_{a}\mathcal{L}_{a} + \lambda_{v}\mathcal{L}_{v},$$
where L_av, L_a, and L_v are the cross-entropy losses computed between the ground truth Y and the predictions Ŷ from the embeddings f'_av, f_a, and f_v, respectively. In particular, L_a and L_v serve as auxiliary losses to encourage the model to attend to both modalities [20].

Figure 2: Data curation pipeline. Our data curation pipeline consists of four distinct stages: (1) video sourcing to construct an initial pool of candidate clips, (2) content filtering to remove videos containing sensitive or inappropriate material, (3) face track generation to convert raw videos into structured face sequences, and (4) annotation and storage for benchmark use.

3.2 Data Curation

Our data curation process is designed to construct a large-scale benchmark for active speaker detection that reflects the diversity and complexity of real-world audiovisual conditions. Specifically, the pipeline targets coverage across three critical axes of difficulty: underrepresented languages, noisy backgrounds, and crowded scenes with multiple visible speakers. While not every video includes all of these challenges simultaneously, the dataset as a whole is curated to provide meaningful representation along each axis. The full pipeline, depicted in Figure 2, consists of three stages—candidate video sourcing, face track generation, and annotation—designed to ensure both scale and annotation quality while preserving the diversity necessary for evaluating model robustness.

Candidate Video Sourcing. To construct a diverse pool of speaking scenarios, we first prompt GPT-4 [1] to generate keyword search terms corresponding to video scenes likely to exhibit either visual or acoustic complexity—such as multi-person talk shows, press conferences, classroom discussions, sports interviews, panel debates, etc. Following prior work [4, 13], we use YouTube as the primary source of videos. The search keywords are then translated into multiple languages to encourage linguistic diversity and used to retrieve candidate videos across different regions. To ensure high annotation quality and reduce downstream noise, we apply a combination of automated and manual filtering: videos are discarded if they exhibit excessive face occlusions, very low resolution (below 480 pixels on the shorter side), or poor audio conditions—such as missing speech, overpowering background music, or severe reverberation. We also exclude videos containing sensitive or inappropriate content to uphold ethical standards for data collection and annotation.

Face Track Generation. To support dense, frame-level speaker annotation, we generate face tracks using an automatic face detection and tracking pipeline. Candidate faces are first detected using S3FD [28] and then linked across frames using a greedy tracking algorithm based on spatial overlap and visual similarity. Tracks are smoothed using Gaussian kernel filtering on keypoint trajectories, and gaps shorter than 0.2 seconds are linearly interpolated to ensure temporal continuity.
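The greedy linking step just described can be sketched as follows. This is a simplified, overlap-only illustration under our own assumptions (box format, thresholds), not the actual UNITALK tracking code, which additionally uses visual similarity and keypoint smoothing.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def link_tracks(detections_per_frame, iou_thresh=0.5):
    """Greedy frame-to-frame linking of face detections by spatial overlap only."""
    tracks = []   # each track is a list of (frame_idx, box)
    active = []   # indices of tracks that matched a box in the previous frame
    for t, boxes in enumerate(detections_per_frame):
        matched = []
        for box in boxes:
            best, best_iou = None, iou_thresh
            for k in active:
                cand = iou(tracks[k][-1][1], box)
                if cand > best_iou and k not in matched:
                    best, best_iou = k, cand
            if best is None:
                tracks.append([(t, box)])      # start a new track
                matched.append(len(tracks) - 1)
            else:
                tracks[best].append((t, box))  # extend the best-overlapping track
                matched.append(best)
        active = matched
    return tracks
```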
To ensure annotation quality and feasibility, we follow the filtering criteria established by A V A- ActiveSpeaker [ 20]: we retain only face tracks
that are at least 1 second in duration to provide sufficient temporal context. Each retained track is paired with synchronized audio and video playback to facilitate accurate speaker labeling. Occasional tracking failures—such as identity switches or false-positive detections—are manually flagged and discarded by annotators during the annotation stage. This step ensures that only high-quality face tracks are retained for final annotation. In total, the process yields 48,693 face tracks across 4.0 million faces, forming the basis for robust and scalable active speaker labeling in UNITALK.

Table 1: Quantitative comparison between ASD datasets. All statistics are computed over the combined training and test sets of each dataset. The highest value in each row is shown in bold.
Statistics | AVA [20] | ASW [13] | Talkies [4] | UNITALK
Total hours | 38.5 | 23 | 4.2 | 44.5
Total face tracks | 37,738 | 8,000 | 23,508 | 48,693
Total face crops | 3.4M | 407K | 799K | 4M
Average speakers per frame | 1.5 | 1.9 | 2.3 | 2.6

Figure 3: Language distribution in UNITALK vs. AVA. UNITALK covers a wider range of languages, particularly with stronger representation of East Asian languages, e.g., Chinese, Korean, and Japanese. In contrast, AVA primarily consists of Indo-European languages, limiting its linguistic diversity.

Annotation Protocol. The annotation phase consists of a two-stage, multi-pass labeling process to ensure both high recall and high precision. In the first stage, multiple annotators independently review each face track and label whether the person is actively speaking at each frame. To maximize recall, annotators are instructed to label any moment where a person appears to be producing a verbal signal. In the second stage, a different set of annotators verifies these initial labels, either confirming or correcting the speaking status. Final labels are retained only when a majority consensus among annotators is reached, reinforcing reliability and reducing subjective bias. We follow the annotation criteria established by AVA-ActiveSpeaker [20] for determining what constitutes speaking. Specifically, a person is considered to be speaking if they are producing a verbal signal that carries semantic content—this includes normal speech, shouting, singing, or calling out. Non-verbal vocalizations such as coughing, laughing, sneezing, or other incidental mouth movements without semantic content are not labeled as speaking, even if the mouth is visibly active. In total, the dataset comprises 44.5 hours of densely annotated video. To support benchmark development, the data is partitioned at the video level into a training split (33.4 hours) and a test split (11.1 hours), ensuring that no speakers or conversational contexts are shared between splits. This prevents data leakage and supports rigorous evaluation of generalization.

3.3 Dataset Statistics

Quantitative Comparison with Existing ASD Benchmarks. Table 1 presents a quantitative comparison between UNITALK and several widely used benchmarks in active speaker detection, including AVA [20], ASW [13], and Talkies [4]. UNITALK offers the largest scale, with 44.5 hours of annotated video, surpassing AVA (38.5 hours), ASW (23 hours), and Talkies (4.2 hours).
It also provides the highest number of face tracks (48,693)
and face crops (4 million), indicating greater coverage of speaker appearances. Furthermore, UNITALK exhibits the highest speaker density, averaging 2.6 visible speakers per frame—compared to 2.3 for Talkies, 1.9 for ASW, and 1.5 for AVA—reflecting the increased interaction complexity in our benchmark.

Demographic and Language Diversity. UNITALK features an ethnically diverse set of speaking identities, with 44.2% White, 34.2% Asian, and 21.6% Black individuals, ensuring a broad demographic representation. In terms of language distribution (Figure 3), UNITALK also improves upon the linguistic coverage seen in AVA [20], which primarily contains Indo-European languages. While English remains the dominant language in both datasets, AVA underrepresents East Asian languages such as Chinese, Korean, and Japanese—languages that are substantially better represented in UNITALK. This improved linguistic balance enables more robust evaluation of ASD models in multilingual and cross-cultural scenarios.

Figure 4: Dataset Composition Overview. (a) Race distribution of visible speakers (44.2% White, 34.2% Asian, 21.6% Black). (b) Number of visible faces per frame (44.8% one face, 28.8% two, 15.6% three, 10.7% four or more), reflecting the range of visual complexity. (c) Breakdown of the test set according to targeted difficulty categories used for evaluation (34.4% Hard, 28.0% Language, 21.5% Crowded, 16.1% Noise).

3.4 Benchmarking Task and Evaluation

UNITALK is designed to serve as a standardized benchmark for active speaker detection under real-world conditions. Following prior work [20], we formulate ASD as a binary classification task at the face track level: for each video frame, the model must determine whether a given visible person is actively speaking. We adopt mean average precision (mAP) as the primary evaluation metric, following common benchmark practice [20, 4, 13], enabling consistent and comparable evaluation across models. Given a set of face tracks, we first sort all faces in decreasing order of their prediction scores. We then compute the precision and recall for the positive class (i.e., speaking) by iterating from the most confident to the least confident predictions. Finally, the mean average precision (mAP) is computed based on the resulting precision-recall curve.

To facilitate deeper analysis of model robustness, we define a set of diagnostic subgroups aligned with the core axes of difficulty in UNITALK: language diversity, background noise, and visual crowding. Importantly, for the first three subgroups, we explicitly isolate each difficulty factor—selecting examples that exhibit the target condition while controlling for others—to better assess model behavior under controlled stress conditions:
• Underrepresented Languages: scenes where the dominant spoken language is not English, and both background noise and visual complexity are minimal to isolate linguistic variation.
• Noisy Backgrounds: scenes with strong ambient noise or music, where speech remains intelligible and the speaker is clearly visible. We use dominant language and non-crowded scenes to isolate acoustic difficulty.
• Crowded Scenes: scenes with multiple visible speakers, occlusions, or rapid camera motion, while keeping language and background noise controlled to isolate visual complexity.
• Hard Examples: scenes containing at least two of the above difficulty axes, such as overlapping speakers in underrepresented languages or noisy conversations in visually complex settings.
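The mAP protocol described in Section 3.4 above can be illustrated with a short, self-contained sketch. This is a generic average-precision computation consistent with that description, not the benchmark's released evaluation script, and the scores and labels below are toy values.

```python
import numpy as np

def average_precision(scores, labels):
    """AP for the positive (speaking) class: sort faces by descending score,
    sweep from most to least confident, and average precision at the ranks
    where a positive example is recovered."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                          # true positives at each cutoff
    precision = tp / np.arange(1, len(labels) + 1)
    n_pos = max(int(labels.sum()), 1)
    return float((precision * labels).sum() / n_pos)

# Toy usage: per-face speaking scores and ground-truth labels.
print(average_precision([0.9, 0.8, 0.3, 0.75], [1, 0, 0, 1]))  # ~0.83
```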
To support the construction of these diagnostic subsets, we annotate difficulty attributes for all
test videos. As shown in Figure 4 c), the test set contains substantial coverage across all axes: 28.0% of samples feature underrepresented languages, 21.5% involve visually crowded scenes, 16.1% contain noisy audio, and 34.4% fall into the hard example category. Additionally, Figure 4 b) highlights the overall visual complexity of our benchmark: over 55% of frames contain two or more visible faces, reinforcing the need for robust modeling in multi-speaker scenarios. 4 Experiments We conduct a series of experiments to assess the effectiveness of UNITALK as a challenging and representative benchmark for active speaker detection (ASD). First, we evaluate a range of state-of- the-art ASD models on UNITALK to understand their performance under realistic visual and acoustic conditions. Next, we compare major ASD datasets—A V A, Talkies, ASW, and UNITALK—by training and evaluating models across all combinations, highlighting differences in generalization and domain coverage. Finally, we demonstrate the utility of UNITALK as a training source by fine-tuning a model pretrained on UNITALK for the A V A benchmark, showing rapid adaptation and strong downstream 6 performance. Together, these results position UNITALK as a valuable benchmark for both model evaluation and pretraining in real-world ASD scenarios. 4.1 Training ASD Models on U NITALK We benchmark a range of representative active speaker detection (ASD) models on UNITALK, including early multi-stage architectures such as ASDNet [ 14] and ASC [ 3], as well as recent end-to- end and contrastive learning approaches like TalkNet [ 23], LoCoNet [ 26], and TalkNCE [ 11]. All models are trained from scratch on the UNITALK training split and evaluated on its held-out test set using mean average precision (mAP). 4.2 Implementation Details We implement two multi-stage ASD models [ 14,3] and three single-stage ASD models [ 23,26,11]. For fair comparison, we follow each model’s original setup. For both LoCoNet [ 26] and TalkNCE [ 11], we use a batch size of 4 and sample 200 frames per training example. Each model is trained with 25 epochs on 4 RTX 2080 GPUs. For TalkNet [ 23], we use a batch size that contains at most 5000 frames, and the model is trained on one A40 GPU for 25 epochs. For single-stage ASD training objectives, we follow the LoCoNet and TalkNet, setting λav= 1,λa= 0.4, and λv= 0.4. For TalkNCE loss [ 11], we set its weight to 0.3 as mentioned in the paper. Random resizing, cropping, horizontal flipping, and rotations are used as visual data augmentation operations and a randomly selected audio signal from the training set is added as background noise to the target audio [ 23]. For the multi-stage training, we train ASC’s encoder for 100 epochs before training its context module for 15 epochs [ 3]. Similarly, we train ASDNet’s encoder for 70 epochs and its context module for 10 epochs [ 14]. For both ASC [ 3] and ASDNet [ 14], we set λav=λa=λv= 1in the first stage and λav= 1in the second stage. Finally, we use Adam [9] as our optimizer across all models. 4.3 Results State-of-the-art ASD models fall short on UNITALK .Table 2
shows that across a range of architectures, active speaker detection models trained on UNITALK achieve significantly lower mAP compared to their performance on AVA [20]. For example, LoCoNet [26] and TalkNCE [10]—which report mAP scores above 95 on AVA—obtain only 82.2 and 83.2 mAP, respectively, on UNITALK. Earlier models such as ASC [3] and ASDNet [14] perform even worse, highlighting that UNITALK presents a substantially more challenging evaluation setting. These results suggest that state-of-the-art models, while successful on existing benchmarks, still fall short on UNITALK. This gap indicates that the ASD task remains unsolved under more realistic conditions, and that prior benchmarks may not fully capture the challenges faced in real-world deployments.

State-of-the-art ASD models struggle across all axes of real-world difficulty. As shown in Table 2, no model excels across any of the defined evaluation axes in UNITALK. Even the best-performing method, TalkNCE, achieves only moderate scores when faced with underrepresented languages (86.7), noisy backgrounds (84.1), and crowded scenes (84.9)—all lower than its nearly perfect mAP on AVA [20]—indicating that these real-world conditions remain challenging individually. The performance drops even further in the Hard subset (77.9), where multiple challenges co-occur. We also benchmark a TalkNCE model trained on AVA [20], which performs poorly across the board on UNITALK, scoring 77.5 overall and as low as 64.8 on Hard. These results highlight that existing ASD models are not only far from saturated under realistic settings, but that models trained on AVA generalize poorly when exposed to real-world acoustic and visual diversity.

Models trained on UNITALK show strong cross-dataset transfer. To evaluate whether the difficulty of UNITALK stems from noisy or unrealistic data, we evaluated the top three models from Table 2—TalkNet, LoCoNet, and TalkNCE—on three established ASD benchmarks: AVA [20], Talkies [4], and ASW [13]. As shown in Table 3, these models consistently achieve strong results across datasets despite never being trained on them. This suggests that UNITALK provides diverse and transferable learning signals, and that its increased difficulty reflects realistic variation rather than annotation noise or domain-specific artifacts.

Table 2: Performance of ASD models trained and evaluated on UNITALK. Models are chosen to showcase different approaches, ranging from multi-stage systems to contrastive learning approaches. Results are reported in mAP. The highest score is shown in bold.
Model | Architecture | Train Data | Overall | Language | Crowded | Noise | Hard
ASDNet [14] | ResNeXt/BGRU | UNITALK | 20.6 | 30.8 | 17.5 | 14.8 | 20.3
ASC [3] | ResNet/LSTM | UNITALK | 61.4 | 74.7 | 62.9 | 53.4 | 57.3
TalkNet [23] | ResNet/LIM | UNITALK | 75.7 | 80.1 | 77.6 | 67.1 | 70.3
LoCoNet [26] | TalkNet/SIM | UNITALK | 82.2 | 85.8 | 84.6 | 80.0 | 76.2
TalkNCE [11] | LoCoNet/NCE loss | UNITALK | 83.2 | 86.7 | 84.9 | 84.1 | 77.9
TalkNCE [11] | LoCoNet/NCE loss | AVA [20] | 77.5 | 84.9 | 81.0 | 80.1 | 64.8

Table 3: Generalization of models trained on UNITALK. We report mAP scores for each model evaluated on AVA, Talkies, and ASW after training on UNITALK. Results on UNITALK represent in-domain performance, while the others reflect generalization to out-of-domain
benchmarks. The consistently strong performance across all benchmarks indicates that UNITALK provides transferable learning signals that support robust ASD model development.
Model | Architecture | UNITALK (in-domain) | AVA [20] | Talkies [4] | ASW [13]
TalkNet [23] | ResNet/LIM | 75.7 | 78.4 | 89.2 | 88.9
LoCoNet [26] | TalkNet/SIM | 82.2 | 84.4 | 91.0 | 90.0
TalkNCE [11] | LoCoNet/NCE loss | 83.2 | 88.0 | 91.4 | 90.7

4.4 Comparing Benchmarks by Cross-Dataset Generalization Performance

Setup. To compare the generalization characteristics of existing ASD benchmarks, we conduct a cross-dataset experiment using the state-of-the-art ASD framework: LoCoNet [26] + TalkNCE loss [11]. We train the model independently on each of four datasets—AVA [20], Talkies [4], ASW [13], and UNITALK—and evaluate performance on all four benchmarks. The results are summarized in Table 4.

Results. We observe a striking contrast in generalization behavior. Models trained on existing datasets like AVA, Talkies, and ASW achieve near-perfect in-domain performance (e.g., over 95 mAP), but exhibit substantial performance drops when evaluated on any other dataset. This indicates that such models tend to overfit to dataset-specific cues, which may be unrepresentative of broader real-world variability. In contrast, the best-performing model, TalkNCE, trained on UNITALK does not achieve saturated performance in-domain (83.2 mAP), but generalizes significantly better to the other three datasets, with strong mAP scores of 88.0, 91.4, and 90.4 on AVA, Talkies, and ASW, respectively. These results suggest that prior benchmarks cover a limited range of speaking scenarios, enabling models to overfit to narrow acoustic and visual patterns. UNITALK, by contrast, introduces richer scenario diversity, which encourages models to learn more robust and transferable representations. This makes UNITALK not only a more challenging benchmark, but also a more effective training source for models intended for deployment in realistic, unconstrained environments.

4.5 UNITALK as a Pretraining Source

Setup. To assess the utility of UNITALK as a pretraining source, we examine how effectively a model trained on UNITALK can adapt to AVA [20] using limited additional data. Specifically, we fine-tune the TalkNCE model [11] (initially trained on UNITALK) on varying amounts of AVA training data, ranging from 3 to 15 hours of video, as well as the full AVA training set. For each setting, we report mAP on both AVA (the target domain) and UNITALK (the original domain) to evaluate adaptation and knowledge retention.

Results. As shown in Table 5, the model rapidly adapts to AVA, achieving 92.4 mAP with only 3 hours of AVA training data. Performance continues to improve with additional data, reaching 95.7 mAP with the full dataset. Throughout this process, performance on UNITALK remains strong, showing no significant degradation. These results indicate that UNITALK serves not only as a challenging evaluation benchmark but also as an effective pretraining source. The model acquires transferable representations that can be quickly adapted to narrower domains such as AVA, making it a practical starting point for real-world ASD applications.

Table 4: Cross-dataset generalization comparison. Each row reports mAP for a TalkNCE model [11] trained on the indicated dataset (left) and evaluated on all four benchmarks. Prior datasets yield strong in-domain but poor cross-dataset performance, while UNITALK enables stronger generalization across the board.
Train\Eval | AVA [20] | Talkies [4] | ASW [13] | UNITALK
AVA [20] | 95.5 | 88.3 | 88.5 | 77.5
Talkies [4] | 55.7 | 95.6 | 84.5 | 59.9
ASW [13] | 29.2 | 58.8 | 96.1 | 33.8
UNITALK | 88.0 | 91.4 | 90.4 | 83.2

Table 5: Fine-tuning a TalkNCE model [11] pretrained on UNITALK using AVA [20]. Each row reports mAP after fine-tuning on a different amount of AVA training data (measured in video hours). The model quickly adapts to AVA while maintaining strong performance on UNITALK.
Pretraining Data | AVA Training Hours | Epochs | AVA | UNITALK
None | 31hr (full AVA) | 25 | 95.5 | 77.5
UNITALK | 3hr | 2 | 92.4 | 80.4
UNITALK | 5hr | 5 | 93.4 | 78.6
UNITALK | 10hr | 10 | 94.0 | 79.2
UNITALK | 15hr | 15 | 95.0 | 80.6
UNITALK | 31hr (full AVA) | 15 | 95.7 | 81.3

5 Conclusion

We introduced UNITALK, a new large-scale benchmark for active speaker detection designed to better reflect the complexity of real-world audiovisual environments. Unlike prior benchmarks that rely heavily on clean, scripted movie content, UNITALK emphasizes diversity across three key axes of difficulty—language variation, background noise, and visual crowding—offering a more realistic and challenging testbed for model development. Through extensive experiments, we show that while state-of-the-art models achieve near-perfect scores on existing datasets like AVA, their performance drops significantly on UNITALK. Moreover, our results highlight that models trained on UNITALK generalize more robustly across other real-world datasets, demonstrating the value of UNITALK as both a benchmark and a pretraining resource. Subgroup analyses further reveal persistent model weaknesses under specific difficulty conditions, motivating targeted improvements. We hope UNITALK will serve as a valuable benchmark for the community and foster the development of more robust, generalizable active speaker detection models in realistic acoustic and visual settings.

Limitation and Future Work. While UNITALK strives to reflect global language diversity, the final distribution remains skewed due to several practical challenges in data curation. English, being the dominant language on public video platforms, is associated with a wider range of high-quality and diverse content, leading to its overrepresentation. In contrast, identifying suitable data in other languages proved more difficult—some content, while abundant, raised ethical or safety concerns, and was excluded to maintain high standards for public release. Additionally, our reliance on keyword-based sourcing and limited fluency in many non-English languages made it harder to validate and curate linguistically balanced content. As a result, although UNITALK improves upon prior benchmarks like AVA in language diversity, it does not achieve perfect balance. To partially mitigate this, we include a dedicated evaluation subgroup for underrepresented languages to assess model performance in these less-represented conditions. We hope future iterations of UNITALK can improve coverage through multilingual collaboration or broader community contributions.

Broader Impact. Our dataset supports the development of more robust and generalizable active speaker detection (ASD) systems, with potential applications in accessibility, video understanding, and human-computer
interaction. By focusing on real-world variability, it encourages progress 9 beyond current benchmarks. However, as with any work in this area, there is potential for misuse, such as in surveillance or privacy-invading applications. These risks are not unique to our dataset, but are shared across the broader research direction. We encourage responsible development and deployment of ASD technologies. References [1]J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. [2]T. Afouras, J. S. Chung, A. Senior, O. Vinyals, and A. Zisserman. Deep audio-visual speech recognition. IEEE transactions on pattern analysis and machine intelligence , 44(12):8717–8727, 2018. [3]J. L. Alcázar, F. Caba, L. Mai, F. Perazzi, J.-Y . Lee, P. Arbeláez, and B. Ghanem. Active speakers in context. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 12465–12474, 2020. [4]J. L. Alcázar, F. Caba, A. K. Thabet, and B. Ghanem. Maas: Multi-modal assignation for active speaker detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 265–274, 2021. [5]J. L. Alcázar, M. Cordes, C. Zhao, and B. Ghanem. End-to-end active speaker detection. In European Conference on Computer Vision , pages 126–143. Springer, 2022. [6]P. Chakravarty and T. Tuytelaars. Cross-modal supervision for learning active speaker detection in video. In B. Leibe, J. Matas, N. Sebe, and M. Welling, editors, Lecture Notes in Computer Science, ECCV 2016 (Part V) , volume 9909, pages 285–301. Springer International Publishing, 2016. [7]J. S. Chung, J. Huh, A. Nagrani, T. Afouras, and A. Zisserman. Spot the conversation: Speaker diarisation in the wild. In Interspeech 2020 . ISCA, Oct. 2020. [8]G. Datta, T. Etchart, V . Yadav, V . Hedau, P. Natarajan, and S.-F. Chang. Asd-transformer: Efficient active speaker detection using self and multimodal transformers. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) , pages 4568–4572. IEEE, 2022. [9] P. K. Diederik. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 , 2014. [10] Y . Jiang, R. Tao, Z. Pan, and H. Li. Target active speaker detection with audio-visual cues. In Interspeech 2023 , pages 3152–3156, 2023. [11] C. Jung, S. Lee, K. Nam, K. Rho, Y . J. Kim, Y . Jang, and J. S. Chung. Talknce: Improving active speaker detection with talk-aware contrastive learning. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) , pages 8391–8395. IEEE, 2024. [12] S.-H. Kang and J.-H. Han. Video captioning based on both egocentric and exocentric views of robot vision for human-robot interaction. International Journal of Social Robotics , 15(4):631–641, 2023. [13] Y . J. Kim, H.-S. Heo, S. Choe, S.-W. Chung, Y . Kwon, B.-J. Lee, Y . Kwon, and J. S. Chung. Look who’s talking: Active speaker detection in the wild. arXiv preprint arXiv:2108.07640 , 2021. [14] O. Köpüklü, M. Taseska, and G. Rigoll. How to design a three-stage architecture for audio-visual active speaker detection in the wild. In Proceedings of the IEEE/CVF international conference on computer vision , pages 1193–1203,
2021.
[15] J. Liao, H. Duan, K. Feng, W. Zhao, Y. Yang, and L. Chen. A light weight model for active speaker detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22932–22941, 2023.
[16] Q. Lin, R. Yin, M. Li, H. Bredin, and C. Barras. Lstm based similarity measurement with spectral clustering for speaker diarization. arXiv preprint arXiv:1907.10393, 2019.
[17] P. Ma, S. Petridis, and M. Pantic. End-to-end audio-visual speech recognition with conformers. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7613–7617. IEEE, 2021.
[18] K. Min, S. Roy, S. Tripathi, T. Guha, and S. Majumdar. Learning long-term spatial-temporal graphs for active speaker detection. In European Conference on Computer Vision, pages 371–387. Springer, 2022.
[19] A. Nagrani, J. S. Chung, and A. Zisserman. Voxceleb: a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612, 2017.
[20] J. Roth, S. Chaudhuri, O. Klejch, R. Marvin, A. Gallagher, L. Kaver, S. Ramaswamy, A. Stopczynski, C. Schmid, Z. Xi, et al. Ava active speaker: An audio-visual dataset for active speaker detection. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4492–4496. IEEE, 2020.
[21] T. B. Sheridan. Human–robot interaction: status and challenges. Human Factors, 58(4):525–532, 2016.
[22] G. Skantze. Turn-taking in conversational systems and human-robot interaction: a review. Computer Speech & Language, 67:101178, 2021.
[23] R. Tao, Z. Pan, R. K. Das, X. Qian, M. Z. Shou, and H. Li. Is someone speaking? exploring long-term temporal features for audio-visual active speaker detection. In Proceedings of the 29th ACM International Conference on Multimedia, pages 3927–3935, 2021.
[24] Q. Wang, C. Downey, L. Wan, P. A. Mansfield, and I. L. Moreno. Speaker diarization with lstm. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5239–5243. IEEE, 2018.
[25] R. Wang, J. Ao, L. Zhou, S. Liu, Z. Wei, T. Ko, Q. Li, and Y. Zhang. Multi-view self-attention based transformer for speaker recognition. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6732–6736. IEEE, 2022.
[26] X. Wang, F. Cheng, and G. Bertasius. Loconet: Long-short context network for active speaker detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18462–18472, 2024.
[27] J. Xiong, Y. Zhou, P. Zhang, L. Xie, W. Huang, and Y. Zha. Look&listen: Multi-modal correlation learning for active speaker detection and speech enhancement. IEEE Transactions on Multimedia, 25:5800–5812, 2023.
[28] S. Zhang, X. Zhu, Z. Lei, H. Shi, X. Wang, and S. Z. Li. S3fd: Single shot scale-invariant face detector. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Oct 2017.
[29] Y. Zhang, S. Liang, S. Yang, X. Liu, Z. Wu, S. Shan, and X. Chen. Unicon: Unified context network for robust active speaker detection. In Proceedings of the 29th ACM International Conference on Multimedia, pages 3964–3972. ACM, Oct. 2021.

Appendix A Data Curation Details
Figure A: Difficulty space of candidate video search terms. Each point represents a YouTube keyword query, plotted by the average number of faces per frame (x-axis, visual complexity) and average background noise level (y-axis, measured via RMS after VAD). We highlight three shaded regions corresponding to different axes of difficulty: crowded scenes (high visual complexity, bottom right), noisy backgrounds (high acoustic complexity, top left), and hard examples (both high visual and acoustic complexity, top right).

To initiate large-scale video sourcing, we use GPT-4 to generate search keywords targeting diverse audiovisual conditions. The goal is to surface scenes with visually or acoustically challenging content. We use the following prompt template: "Could you please suggest YouTube search terms that feature <condition> suitable for my active speaker detection task?", where <condition> is replaced with:
• multiple people speaking together in a crowded scene
• speakers against high background noise
• hard cases combining crowded scenes and high background noise
The resulting keyword terms are then translated into multiple languages to promote linguistic and regional diversity during video retrieval. These multilingual queries are used to collect a wide range of candidate videos, which are then processed as described in the main manuscript.

The main curation pipeline naturally includes videos in various languages due to the diversity of the multilingual keyword queries. However, these videos often come with additional challenges—such as background noise or visual crowding—making it difficult to isolate the impact of language variation alone. To evaluate model robustness specifically under language shift, we separately curate a clean set of videos in the same multilingual group used in the main data curation process, selecting only those that feature clear speech, minimal background noise, and low visual complexity (e.g., one or two visible speakers in a quiet setting). This subset is used exclusively to construct a test group focused on linguistic generalization.

Figure B: Illustrative examples across different axes of difficulty. Top row: Example videos in underrepresented languages (e.g., Chinese, Arabic, Vietnamese), used to evaluate model robustness to language variation. Middle row: High background noise scenarios, including (left to right) cockpit videos with engine noise, musical performances with background music, and sports interviews with crowd noise. These examples highlight challenges in detecting speech under acoustic interference. Bottom row: Visually crowded scenes such as gameshows, group interviews, and group discussions, where multiple visible speakers create ambiguity in speaker identification.

To better understand the range of difficulty captured by the keywords, we sample representative videos for each term and compute two diagnostic metrics:

Visual complexity. Defined as the average number of visible faces per frame, using face detection. A threshold of three faces per frame serves as a mid-level cutoff, informed by AVA statistics [26], where 99% of samples fall below this value.

Background noise level.
We remove speech segments using a Voice Activity Detection (VAD) tool, then compute the Root Mean Square (RMS) energy over the remaining background audio. A threshold of 0.03 RMS distinguishes low and high noise levels.
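A minimal sketch of how these two diagnostic metrics could be computed is shown below. The per-frame face counts and the VAD callable are assumed inputs (any off-the-shelf face detector and VAD would do), and the audio is assumed to be normalized to [-1, 1] so that the 0.03 RMS threshold from the text applies.

```python
import numpy as np

def visual_complexity(face_counts_per_frame):
    """Average number of visible faces per frame; >= 3 is treated as visually complex."""
    return float(np.mean(face_counts_per_frame))

def background_noise_level(samples, sample_rate, is_speech_frame, frame_ms=30):
    """RMS energy of non-speech audio. `is_speech_frame` is an assumed VAD
    callable mapping a frame of samples to True/False."""
    hop = int(sample_rate * frame_ms / 1000)
    frames = [samples[i:i + hop] for i in range(0, len(samples) - hop + 1, hop)]
    background = [f for f in frames if not is_speech_frame(f)]
    if not background:
        return 0.0
    residual = np.concatenate(background)
    return float(np.sqrt(np.mean(residual ** 2)))

# Thresholds from the text: >= 3 faces/frame -> crowded; RMS > 0.03 -> high noise.
```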
We plot each video search term in a 2D space (Figure A) using these metrics. The visualizations provide an interpretable overview of the diversity in audiovisual conditions present in the collected data. These same measures are later used to define evaluation subsets within the test set.

B Instructions to Annotators

Following AVA [20], annotators were instructed to identify active speaking instances on a frame-by-frame basis, using both audio and visual information. A person is considered actively speaking when they are visibly engaged in producing speech that carries semantic content. Typical positive examples include:
• Conversational speech, including full sentences and phrases.
• Clear verbal responses such as "Yes," "No," "Go," or "Okay."
• Speaking during presentations, interviews, or dialogues.
In addition to these standard cases, annotators were also instructed to include less conventional forms of verbal output that still reflect communicative intent, such as:
• Short utterances and vocal fillers (e.g., "um," "ah," "hmm").
• Audible mumbling, provided it conveys speech-like intent.
• Singing, regardless of musical accompaniment.
Conversely, annotators were directed to exclude non-verbal or ambiguous vocalizations. The following were explicitly labeled as non-speaking:
• Laughter, sighs, groans, grunts, coughing, humming.
• Breath sounds alone, without speech articulation.
• Mouthing lyrics or dialogue without producing actual sound.
• Gestures and non-verbal communication (e.g., nodding, waving).
• Audio-visual desynchronization (e.g., dubbed content, off-screen narration).
These guidelines aim to ensure consistent, high-quality annotations across diverse audiovisual conditions, including crowded and noisy scenes where cues may be ambiguous.

C Examples of Difficulty Combinations

To better illustrate the interaction between different sources of difficulty in our dataset, we present qualitative examples exhibiting all pairwise and three-way combinations of the difficulty axes: language shift, background noise, and visual crowding (Figure C). Each row in the figure corresponds to a specific configuration of these factors.

Figure C: Examples of videos exhibiting different combinations of difficulty axes. Top row: Videos with background noise and crowded scenes, but no language shift (e.g., English-speaking group performances). Second row: Videos with language shift and crowded scenes, but minimal background noise (e.g., multilingual studio recordings). Third row: Videos with language shift and background noise (crowd noise), but minimal visual crowding (background noise from off-screen crowds). Bottom row: Videos that exhibit all three axes of difficulty simultaneously, such as Chinese-language cockpit videos with engine noise and multiple visible speakers. These curated combinations support targeted evaluation of model robustness.

Figure D: Failure cases of state-of-the-art ASD models on a challenging test video from UNITALK. This example contains all three difficulty axes: (1) crowded scenes with multiple small and overlapping faces, (2) high background noise from musical instruments and ambient sounds in an open environment, and (3) language variation, with all speech in Vietnamese.
All three models—TalkNCE, LoCoNet, and TalkNet—are not robust in
this setting, with predictions frequently incorrect across frames. Green and Red indicate ground truth speaking and non-speaking frames; Orange marks incorrect predictions.

Background noise + visual crowding (First Row in Figure C). This configuration combines acoustic and visual challenges, while the spoken language remains familiar. The example in the first row shows an English-speaking group performance with multiple visible speakers and overlapping voices, accompanied by ambient background music. This setup captures crowded scenes with noisy environments, commonly observed in live events or entertainment shows.

Language shift + visual crowding (Second Row in Figure C). This configuration involves language variation and multiple visible speakers, but minimal background noise. The example in the second row depicts a multilingual group conversation (such as a studio recording) with several active speakers and clear audio. This setup reflects visually dense yet acoustically clean scenarios, where the main challenges are recognizing speech in an unfamiliar language and identifying the correct speaker among multiple visible people.

Language shift + background noise (Third Row in Figure C). This configuration includes language variation and acoustic interference, while minimizing visual complexity. The example in the third row features a speaker in a relatively clean setting facing an off-screen audience, where crowd chatter introduces background noise. This setup reflects scenarios where the spoken language differs from the training distribution and the audio channel is degraded, but visual cues remain clear and unambiguous.

All three combined (Bottom Row in Figure C). This configuration includes language shift, background noise, and visual crowding simultaneously. The example in the bottom row shows a non-English conversation recorded in a cockpit environment, with persistent engine noise and multiple visible speakers. This scenario represents the most challenging condition in our test set, where all three difficulty axes co-occur and jointly stress model robustness.

D Failure Case of State-of-the-Art Models on UNITALK

Figure D presents an example from the UNITALK test set where multiple state-of-the-art active speaker detection (ASD) models fail, despite being trained on UNITALK itself. The video combines all three major difficulty factors targeted by our benchmark: (1) an underrepresented language (Vietnamese), (2) high background noise—including musical elements and ambient context sounds from an open public space—and (3) a visually crowded scene with multiple visible faces. This example highlights that even leading models struggle when multiple real-world challenges are present simultaneously. It underscores that the task is not saturated and validates the need for benchmarks like UNITALK that explicitly test robustness under diverse and overlapping sources of difficulty.
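For completeness, here is a minimal PyTorch sketch of the joint training objective from Section 3.1, using the branch weights reported in Section 4.2 (λ_av = 1.0, λ_a = 0.4, λ_v = 0.4). The tensor shapes and module structure are our own illustrative assumptions, not the released training code.

```python
import torch.nn as nn

class ASDLoss(nn.Module):
    """Weighted multi-branch objective: L = w_av*L_av + w_a*L_a + w_v*L_v,
    where each branch produces per-frame binary logits of shape (B, T, 2)."""
    def __init__(self, w_av=1.0, w_a=0.4, w_v=0.4):
        super().__init__()
        self.w_av, self.w_a, self.w_v = w_av, w_a, w_v
        self.ce = nn.CrossEntropyLoss()

    def forward(self, logits_av, logits_a, logits_v, labels):
        # labels: integer tensor of shape (B, T), 0 = not speaking, 1 = speaking
        flat = lambda x: x.reshape(-1, x.shape[-1])
        y = labels.reshape(-1)
        return (self.w_av * self.ce(flat(logits_av), y)
                + self.w_a * self.ce(flat(logits_a), y)
                + self.w_v * self.ce(flat(logits_v), y))
```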
arXiv:2505.21955v1 [cs.CV] 28 May 2025
Towards Comprehensive Scene Understanding: Integrating First and Third-Person Views for LVLMs
Insu Lee1∗, Wooje Park1∗, Jaeyun Jang1, Minyoung Noh1, Kyuhong Shim2, Byonghyo Shim1
1Seoul National University, 2Sungkyunkwan University
{islee, wjpark, jyjang, mynoh, bshim}@islab.snu.ac.kr, khshim@skku.edu

Abstract

Large vision-language models (LVLMs) are increasingly deployed in interactive applications such as virtual and augmented reality, where the first-person (egocentric) view captured by head-mounted cameras serves as a key input. While this view offers fine-grained cues about user attention and hand–object interactions, its narrow field of view and lack of global context often lead to failures on spatially or contextually demanding queries. To address this, we introduce a framework that augments egocentric inputs with third-person (exocentric) views, providing complementary information such as global scene layout and object visibility to LVLMs. We present E3VQA, the first benchmark for multi-view question answering with 4K high-quality question–answer pairs grounded in synchronized ego–exo image pairs. Additionally, we propose M3CoT, a training-free prompting technique that constructs a unified scene representation by integrating scene graphs from three complementary perspectives. M3CoT enables LVLMs to reason more effectively across views, yielding consistent performance gains (4.84% for GPT-4o and 5.94% for Gemini 2.0 Flash) over a recent CoT baseline. Our extensive evaluation reveals key strengths and limitations of LVLMs in multi-view reasoning and highlights the value of leveraging both egocentric and exocentric inputs.

1 Introduction

In recent years, large vision-language models (LVLMs) have received significant attention for their unprecedented performance in diverse tasks, including information retrieval, content generation, and multi-modal interaction [39, 22]. Their applications now extend into interactive and immersive systems like virtual and augmented reality and embodied robotics [39, 22, 21, 5]. A notable feature of these applications is that images associated with the first-person view (a.k.a. egocentric view), captured by head-mounted cameras or smart glasses, are used as input. Although the first-person view is crucial in revealing the user's intention, LVLMs might not interpret it well since most of their training data, obtained from generic web images, does not have a first-person perspective. Recently, several approaches overcoming the scarcity of this so-called egocentric data have been suggested. Notable ones include synthetic generation of egocentric data, incorporating large-scale egocentric datasets during the pre-training stage, and leveraging parameter-efficient fine-tuning on small-scale egocentric data [20, 30, 33]. While these approaches are effective to some extent, due to inherent constraints of the egocentric view, such as limited field of view and lack of global context, LVLMs might not properly handle user queries that require broader contextual understanding.

∗Equal contribution. Preprint. Under review.
Figure 1: Example scenarios that require a joint understanding of egocentric (first-person) and exocentric (third-person) views. In each scenario, the first question can be answered using only the egocentric view, while the subsequent two questions require integrating information from both views. Yellow and gray overlays indicate egocentric and exocentric views, respectively.
Consider the scenario in Figure 1(a), where the user interacts with a visual assistant to prepare food. When an egocentric image is provided as input, LVLMs can readily answer the question "What should I do next?", but may not provide the proper answer to questions such as "Where is the cutting board?" or "How many carrots are left?", since the egocentric view has limited spatial coverage and thus fails to capture the comprehensive picture of the user environment. The moral of the story is that the egocentric view alone is not sufficient to handle the full range of user queries.

An aim of this paper is to propose a framework integrating multi-view information to enhance LVLMs' capacity to understand the comprehensive context of a user's surrounding environment. The core of our approach is to integrate images from the egocentric view and the third-person view (a.k.a. exocentric view) to extract the merits of both. The exocentric view provides broader contextual cues, such as the user's posture and the visibility of objects in blind spots, while the egocentric view offers information about user focus, including hand–object interactions and fine-grained object details. While integrating multiple viewpoints can help answer diverse user queries, it remains unclear whether conventional LVLMs can fully leverage them and really enhance the output quality. To systematically answer this question, we design 1) an ego-exo multi-view question answering benchmark, referred to as the Ego-Exo Expanded Visual Question Answering (E3VQA) benchmark, and 2) a novel prompting technique called Multi-view, Multi-perspective, Multi-turn Chain-of-Thought prompting (M3CoT). (Ego and exo denote egocentric and exocentric, respectively.)

The purpose of E3VQA is to evaluate the ability of LVLMs to jointly perceive and reason when egocentric and exocentric views are provided. To this end, we meticulously curate a dataset comprising 4K question-answer pairs, each coupled with synchronized ego-exo images collected from the EgoExo4D dataset [12], covering four distinct tasks including action understanding, object recognition, spatial reasoning, and numerical estimation. Further, we propose a novel prompting technique called M3CoT to fully leverage the complementary cues of ego-exo views. Specifically, we construct a unified description of the scene (i.e., scene graphs) that guides the LVLM to understand the entire scene environment. This process consists of two main stages. First, we build three distinct scene graphs, each capturing the scene from a different perspective. The first graph is generated by simultaneously processing both ego and exo images, providing a holistic view of the scene while potentially overlooking fine-grained details. The second and third graphs are constructed by sequential processing, either from ego to exo or from exo to ego, allowing the model to capture minute details. Second, we iteratively unify the three scene graphs, enabling each to progressively converge into a more robust and comprehensive representation. These unified scene graphs are used by LVLMs to generate the final responses. We evaluate M3CoT on the E3VQA benchmark using state-of-the-art LVLMs, including GPT-4o [13] and Gemini 2.0 Flash [34], and observe considerable gains in accuracy of 4.84% and 5.94% over the recent CoT method CCoT [29]. We also observe
In summary, our contributions are as follows:
•We build the ego-exo multi-view VQA benchmark, E3VQA, consisting of 4K rigorously curated question–answer pairs with synchronized ego–exo image pairs. We construct E3VQA through a systematically designed pipeline to ensure that each instance evaluates the capabilities of LVLMs in integrating and reasoning across ego and exo views.
•To address the challenges posed by E3VQA, we propose M3CoT, a training-free prompting technique that combines scene graphs from three complementary perspectives into a unified scene graph. M3CoT improves performance on the E3VQA benchmark, achieving accuracy gains of 4.84% and 5.94% on two leading closed-source LVLMs, GPT-4o and Gemini 2.0 Flash.
•We perform a detailed analysis of leading LVLMs on E3VQA, uncovering their specific failure modes in multi-view reasoning and quantifying how egocentric and exocentric inputs affect performance.
Figure 2: Categories in the E3VQA benchmark. Each question is paired with ego-exo images and multiple-choice answers. The answers are highlighted in bold. The left part shows recognition categories, assessing the ability to focus on question-relevant parts (boxes in figures). The right part shows reasoning categories, evaluating the ability to integrate information across views.
2 E3VQA Benchmark
2.1 Motivation and Objectives
Compared to the single-image setting, an LVLM that incorporates multi-view images faces a number of challenges. First, the model needs to identify which image, and which regions within it, are relevant to the question. Second, the model needs to filter out redundant content appearing in both views. Third, the model should deliberately extract 'complementary' cues from both views to generate a complete answer. Because conventional visual question answering (VQA) benchmarks provide a single image per question, they cannot evaluate multi-image reasoning capabilities. See the Appendix for a detailed overview of existing benchmarks. To address this gap, we introduce E3VQA, a multiple-choice benchmark specifically designed for paired ego–exo images. Each question is accompanied by a set of plausible but incorrect options (i.e., distractors
) that target typical failure patterns. These patterns include relying solely on one image (ego or exo), ignoring visual input altogether, or failing to merge complementary information from both views. These carefully crafted distractors enable E3VQA to precisely evaluate a model's ability to reason across ego–exo image pairs.
2.2 E3VQA Composition
We organize E3VQA into four categories: pose and action perception, object and attribute perception, numerical reasoning, and spatial reasoning, to encompass a wide range of real-world scenarios (see Figure 2). Each category contains 1,000 question-answer (QA) pairs, evenly divided between egocentric and exocentric questions (e.g., "What am I doing?" vs. "What is the person doing?"), which supports the evaluation of the model's ability to generalize to diverse forms of user queries. To solve the questions, the model must identify the relevant object or person in both views and align their features across the two images. Variations in viewpoints and fields of view can cause the same entity to appear distorted, partially occluded, or differently scaled, making it difficult to match and integrate visual cues. Detailed explanations of the challenges for each category and dataset statistics are provided in the Appendix.
Figure 3: Overview of the E3VQA benchmark's three-step automated QA generation pipeline: (a) single-view QA generation step, (b) view-specific response expansion step, and (c) response-based question filtering step.
2.3 Dataset Construction Pipeline
2.3.1 Source Data Pre-Processing
The E3VQA benchmark is constructed from the large-scale synchronized ego-exo dataset EgoExo4D [12], which is composed of diverse user interactions (e.g., cooking, bike repair, soccer) filmed in various environments and countries. To ensure diversity within the dataset, video clips are uniformly sampled from EgoExo4D with respect to both user activities and recording locations. Each selected video clip is downsampled to 1 frame per second, and 8 frames are uniformly sampled from the downsampled clip. As a result, 4,600 ego-exo image pairs are obtained from the 575 video clips. Note that all video clips are selected from the test split to prevent any potential dataset contamination.
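For illustration, the frame-sampling procedure above can be sketched as follows; the Clip fields and function names are hypothetical placeholders for this sketch and are not taken from the EgoExo4D tooling.

```python
# Minimal sketch of the pre-processing in Sec. 2.3.1: downsample each selected
# clip to ~1 fps, then uniformly pick 8 frames. Names are illustrative only.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    fps: float          # native frames per second
    num_frames: int     # total frames in the clip

def sample_frame_indices(clip: Clip, target_fps: float = 1.0, num_samples: int = 8) -> list[int]:
    """Downsample a clip to ~target_fps, then uniformly pick num_samples frames."""
    step = max(int(round(clip.fps / target_fps)), 1)
    downsampled = list(range(0, clip.num_frames, step))      # ~1 frame per second
    if len(downsampled) <= num_samples:
        return downsampled
    # Spread num_samples picks evenly over the downsampled frame list.
    stride = (len(downsampled) - 1) / (num_samples - 1)
    return [downsampled[round(i * stride)] for i in range(num_samples)]

# Example: a 60 s clip recorded at 30 fps yields 8 evenly spaced frame indices.
print(sample_frame_indices(Clip("demo", fps=30.0, num_frames=1800)))
```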
2.3.2 Automated QA Generation
We introduce a three-step automated QA generation pipeline designed to minimize human effort and improve scalability. Throughout the
entire process, we utilize GPT-4o [13], a powerful off-the-shelf LVLM. Figure 3 illustrates the overview of the pipeline.
Step 1: Single-View QA Generation. We begin by generating QA pairs independently from either the ego or the exo image, under the assumption that recent LVLMs are capable of understanding a single image. Specifically, we instruct the model to generate ego-QA pairs (e.g., Q: What am I doing?, A_ego: Pouring water) from the ego image, and exo-QA pairs (e.g., Q: What is the person doing?, A_exo: Stirring eggs) from the exo image. The answers obtained from this step, A_ego and A_exo, are collectively referred to as A_init. This design ensures that the generated questions remain consistent with the available visual context in each view: the exo image lacks the information needed to identify "I", while the ego image provides insufficient context about "the person" due to its limited field of view. For each image, we generate three QA pairs per category to balance diversity and relevance; generating more than three often results in questions that are not visually grounded. As a result, this process yields a total of 110,400 single-view QA pairs.
Step 2: View-Specific Response Expansion. In this step, we generate a diverse set of candidate answers by presenting the model with the same question under four distinct input conditions: 1) ego view only, 2) exo view only, 3) both ego and exo views, and 4) text only, with no visual input. We refer to the resulting answers as A_ego, A_exo, A_both, and A_text, respectively. These candidate answers serve two key purposes in subsequent steps: 1) they are used as criteria to discard low-quality QAs during the filtering stage, and 2) they function as hard candidate options for constructing multiple-choice questions in the human refinement stage.
Step 3: Response-Based Question Filtering. A large proportion of the questions generated in the previous step are either too easy or unsuitable for multi-view question answering. To filter such questions at scale, we introduce a response-based question filtering strategy that uses A_init (the initial answer from either the ego or exo view) as a reference. First, we discard questions where A_text matches A_init, since this indicates that the question can be answered without any visual input. Second, we remove questions for which A_both is included in A_init, suggesting that access to both ego and exo views does not change the answer and thus multi-view reasoning is unnecessary. Applying these two criteria retains only those questions that cannot be answered without integrating both views. As a result, approximately 78.5% of the initial questions are filtered out, leaving a set of 23,694 challenging, view-dependent QA samples.
2.3.3 Human Verification
Following automated QA generation, we perform human verification with four expert annotators. The experts thoroughly review all questions, discarding those that are unclear or of low quality. They also carefully craft the answer options, leveraging the responses generated in the previous step: A_ego, A_exo, A_both, and A_text. This process results in the final E3VQA dataset, comprising 4,000 high-quality, multi-perspective QA pairs, representing just 3.6% of the original 110,400 samples after filtering and refinement. Please refer to the Appendix for additional details.
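For illustration, the two filtering rules of Step 3 can be expressed as a small predicate over the candidate answers produced in Step 2. The record fields, the string normalization, and the substring test below are assumptions made for this sketch, not the exact implementation of the pipeline.

```python
# Illustrative sketch of the Step-3 response-based filtering rules, assuming
# answers can be compared after simple normalization. Names are hypothetical.
from dataclasses import dataclass

def normalize(answer: str) -> str:
    """Lowercase and strip punctuation/whitespace so that 'Two.' matches 'two'."""
    return "".join(ch for ch in answer.lower() if ch.isalnum() or ch.isspace()).strip()

@dataclass
class QARecord:
    question: str
    a_init: str   # single-view answer from Step 1 (ego or exo)
    a_ego: str    # Step-2 answer with ego view only
    a_exo: str    # Step-2 answer with exo view only
    a_both: str   # Step-2 answer with both views
    a_text: str   # Step-2 answer with no visual input

def keep_question(rec: QARecord) -> bool:
    """Keep only questions that genuinely require integrating both views."""
    # Rule 1: answerable without any visual input -> discard.
    if normalize(rec.a_text) == normalize(rec.a_init):
        return False
    # Rule 2: adding both views does not change the single-view answer -> discard.
    if normalize(rec.a_both) in normalize(rec.a_init):
        return False
    return True

sample = QARecord("How many people are in front of me?",
                  a_init="Only one.", a_ego="Only one.", a_exo="Two.",
                  a_both="Two people.", a_text="One.")
print(keep_question(sample))  # True: answering this question needs both views
```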
3 M3CoT: Multi-Perspective Scene Understanding
3.1 Multi-Perspective Scene Graph Generation
In our proposed ego-exo multi-image question answering scenario, we expect the LVLM to generate the most appropriate answer given a query Q and a pair of ego and exo images I = {I_ego, I_exo}. To help the model understand the ego and exo images comprehensively, we employ a multi-perspective scene graph generation approach. Specifically, three instances of an LVLM act as distinct agents, denoted F_1, F_2, and F_3, each generating a scene graph from a different perspective as follows:
•Ego&Exo: Agent F_1 generates a joint scene graph S_1 in a single step by simultaneously processing both I_ego and I_exo as input.
•Ego2Exo: Agent F_2 first generates a scene graph using only I_ego, which is then sequentially expanded by incorporating information from I_exo to generate scene graph S_2.
•Exo2Ego: Agent F_3 follows the reverse approach, generating a scene graph based solely on I_exo and subsequently supplementing it with I_ego to generate scene graph S_3.
Together, the three agents can capture both view-specific details and holistic scene context from complementary perspectives. For each perspective, prompts are carefully designed to capture the complementary information present in the ego–exo images. Please refer to the Appendix for the complete prompts.
Figure 4: Overview of the M3CoT method. Left: Scene graph generation following the Ego&Exo perspective. Center: Scene graph generation following the Ego2Exo perspective. Right: Scene graph generation following the Exo2Ego perspective. These graphs are merged to supplement missing objects and relations by integrating complementary information, enabling a coherent answer.
3.2 Iterative Multi-Agent Scene Graph Refinement
To further refine each scene graph S_i (for i = 1, 2, 3) generated by agent F_i, we iteratively incorporate information from the other two agents. At iteration t, agent F_i takes the other two scene graphs (e.g., S_2^t and S_3^t for F_1), examines their objects, attributes, and relationships, and then adjusts S_i^t to better align with both I_ego and I_exo. Here, S_i^t denotes the scene graph S_i after the t-th update. By leveraging complementary information from multiple perspectives, this process improves both the accuracy and completeness of each agent's scene graph. At each iteration step t, every agent F_i generates an answer to the question, conditioned on S_i^t. We then aggregate the agents' responses via majority voting. If a consensus emerges, we accept the majority answer and terminate the process.
If the agents' answers remain inconsistent after a fixed number of iterations, the final answer is selected from the response of F_1. This iterative loop yields progressively richer scene representations and promotes convergence among the agents' answers.
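A minimal sketch of the control flow in Sections 3.1 and 3.2 is given below. It assumes a generic lvlm(prompt, images) callable, treats scene graphs as plain strings, and uses placeholder prompts rather than the exact prompts given in the Appendix; consensus here means that at least two of the three agents agree.

```python
# Illustrative sketch of M3CoT's multi-perspective generation and iterative
# refinement (Secs. 3.1-3.2). `lvlm` is an assumed black-box LVLM call;
# prompt wording and helper names are placeholders, not the paper's prompts.
from collections import Counter
from typing import Callable, List

LVLM = Callable[[str, List[str]], str]  # (prompt, image paths) -> text

def generate_scene_graphs(lvlm: LVLM, q: str, i_ego: str, i_exo: str) -> list[str]:
    s1 = lvlm(f"Generate a unified scene graph for both views. Question: {q}", [i_ego, i_exo])  # Ego&Exo
    s2 = lvlm(f"Generate a scene graph for this view. Question: {q}", [i_ego])                  # Ego2Exo, step 1
    s2 = lvlm(f"Refine this scene graph with the new view:\n{s2}", [i_exo])                     # Ego2Exo, step 2
    s3 = lvlm(f"Generate a scene graph for this view. Question: {q}", [i_exo])                  # Exo2Ego, step 1
    s3 = lvlm(f"Refine this scene graph with the new view:\n{s3}", [i_ego])                     # Exo2Ego, step 2
    return [s1, s2, s3]

def m3cot_answer(lvlm: LVLM, q: str, i_ego: str, i_exo: str, max_iters: int = 1) -> str:
    graphs = generate_scene_graphs(lvlm, q, i_ego, i_exo)
    images = [i_ego, i_exo]
    for _ in range(max_iters + 1):
        # Each agent answers conditioned on its current scene graph.
        answers = [lvlm(f"Scene graph:\n{g}\nAnswer the question: {q}", images) for g in graphs]
        winner, votes = Counter(answers).most_common(1)[0]
        if votes >= 2:                       # consensus among the three agents
            return winner
        # No consensus: each agent refines its graph using the other two graphs.
        graphs = [
            lvlm(
                f"Your scene graph:\n{graphs[i]}\n"
                f"Other perspectives:\n{graphs[(i + 1) % 3]}\n{graphs[(i + 2) % 3]}\n"
                "Refine your scene graph so it better matches both images.",
                images,
            )
            for i in range(3)
        ]
    # Fall back to the Ego&Exo agent (F_1) if answers never converge.
    return lvlm(f"Scene graph:\n{graphs[0]}\nAnswer the question: {q}", images)
```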
4 Experimental Results
4.1 LVLM Performance on E3VQA
To assess ego–exo multi-image reasoning capabilities, we evaluate five closed-source and nine open-source LVLMs on the E3VQA benchmark using each model's default settings. Detailed specifications, hyperparameters, and the system and user prompt templates are provided in the Appendix. Table 1 reports model accuracy across categories, with each category evaluated on 500 egocentric (Ego) and 500 exocentric (Exo) questions. Even the best-performing model, GPT-4o, achieves only 60.90% accuracy on E3VQA, underscoring the benchmark's difficulty. Among open-source models, InternVL3-14B attains the highest accuracy, while Qwen2.5-VL-7B delivers competitive performance despite its smaller number of parameters. Overall, LVLMs struggle most with numerical reasoning yet perform relatively well on object and attribute recognition. Notably, models consistently underperform on egocentric compared to exocentric questions, highlighting difficulties in resolving the first-person perspective.
Table 1: Performance comparison of recent closed- and open-source models on the E3VQA benchmark. Each category reports accuracy (%) on egocentric (Ego) and exocentric (Exo) questions.
LVLMs | Pose & Action (Ego / Exo) | Object & Attribute (Ego / Exo) | Numerical (Ego / Exo) | Spatial (Ego / Exo) | Avg.
Closed-Source
GPT-4o [13] | 49.87±1.10 / 63.47±0.81 | 72.47±0.12 / 77.00±0.40 | 48.47±1.14 / 57.67±0.12 | 61.13±0.99 / 57.13±0.12 | 60.90
GPT-4o mini [13] | 41.33±0.31 / 49.00±0.40 | 66.07±0.12 / 71.00±0.00 | 35.80±0.35 / 44.00±0.00 | 44.00±0.20 / 41.00±0.20 | 49.03
Gemini 2.0 Flash [34] | 53.27±0.12 / 60.33±0.12 | 74.00±0.20 / 76.47±0.50 | 46.33±0.42 / 56.20±0.20 | 58.27±0.46 / 53.80±0.20 | 59.80
Gemini 1.5 Pro [11] | 53.73±0.46 / 62.40±1.31 | 69.60±1.31 / 72.27±0.83 | 44.07±1.01 / 52.60±1.00 | 58.27±2.14 / 54.00±1.00 | 58.37
Claude 3.5 Sonnet [1] | 40.33±0.42 / 50.60±0.20 | 59.13±0.90 / 62.00±0.53 | 44.13±1.03 / 49.40±1.25 | 50.73±3.19 / 41.20±0.60 | 49.69
Open-Source
InternVL3-14B [50] | 44.73±1.50 / 54.93±1.42 | 68.13±0.81 / 73.73±0.99 | 35.60±1.11 / 53.00±0.20 | 45.67±0.58 / 48.33±0.99 | 53.02
Qwen2.5-VL-7B [3] | 50.87±0.23 / 53.33±0.23 | 69.60±0.20 / 75.93±0.46 | 35.93±0.12 / 47.87±0.31 | 46.07±0.12 / 41.27±0.23 | 52.61
Qwen2-VL-7B [35] | 53.67±0.90 / 56.07±1.72 | 67.13±0.76 / 67.47±1.14 | 32.87±0.99 / 38.07±2.42 | 43.27±1.72 / 42.27±1.01 | 50.10
LLaVA-OneVision-7B [17] | 39.87±0.12 / 50.73±0.64 | 67.60±0.69 / 68.80±0.35 | 34.87±0.64 / 40.87±0.31 | 49.20±0.20 / 42.93±0.70 | 49.36
InternVL2-8B [37] | 42.20±0.53 / 44.20±0.53 | 61.67±0.31 / 64.67±0.23 | 33.40±0.53 / 38.53±0.76 | 43.67±0.50 / 41.13±0.31 | 46.18
LLaVA-NeXT-7B [18] | 34.67±0.92 / 33.87±0.46 | 57.33±0.12 / 62.27±0.23 | 30.27±0.23 / 39.07±0.46 | 47.20±0.35 / 40.67±0.58 | 43.17
Mantis-8B-Idefics2 [15] | 28.07±0.70 / 35.47±0.23 | 53.73±0.12 / 56.53±0.42 | 35.67±0.31 / 37.73±0.64 | 41.53±0.23 / 32.53±0.90 | 40.16
Deepseek-VL-Chat-7B [24] | 32.60±0.72 / 34.27±0.42 | 51.80±0.53 / 52.47±0.23 | 32.80±0.87 / 29.80±0.53 | 41.00±0.40 / 36.60±1.22 | 38.92
Qwen-VL-Chat-7B [2] | 25.20±1.04 / 26.60±0.92 | 33.60±1.11 / 36.80±1.22 | 21.07±2.69 / 21.73±1.03 | 29.47±1.70 / 30.53±0.76 | 28.13
Based on the constructed E3VQA benchmark, we conduct additional experiments to investigate how existing LVLMs respond differently to single-view and multi-view inputs (see Figure 5(c)). For questions where all relevant information appears in only one view, providing both images leads to a significant performance drop, suggesting that extra context may confuse the model. For questions
that require integrating clues from both egocentric and exocentric views, multi-view inputs improve accuracy compared to single-view setups; however, performance remains low, staying below 40%. For questions where each view alone contains all necessary information, providing both images yields a marginal accuracy gain, indicating that consistent cues across views can help reinforce the model's prediction of the correct answer.
4.2 Performance Evaluation of M3CoT
To demonstrate the effectiveness of our M3CoT prompting scheme, we compare it against three recent multimodal CoT techniques (DDCoT, CoCoT, and CCoT) on the E3VQA benchmark. Experiments are conducted using two leading LVLMs, GPT-4o and Gemini 2.0 Flash, both of which achieved the highest performance on E3VQA. Table 2 presents category-wise accuracy under egocentric and exocentric questions. M3CoT improves over CCoT by 4.84% on GPT-4o and 5.94% on Gemini 2.0 Flash. In addition, it surpasses DDCoT and CoCoT by 4.15% and 5.71% on GPT-4o, and by 5.03% and 5.81% on Gemini, respectively. The strong gains in Numerical Reasoning highlight M3CoT's ability to integrate multi-view information into a complete and accurate understanding. In contrast to these existing methods, which exhibit limited or inconsistent improvements and sometimes even show performance drops, M3CoT achieves consistent and substantial gains across all categories. These results confirm the effectiveness of our approach in overcoming limitations of current multimodal prompting techniques.
Table 2: Performance comparison of various methods on the best-performing models.
Methods | Pose & Action (Ego / Exo) | Object & Attribute (Ego / Exo) | Numerical (Ego / Exo) | Spatial (Ego / Exo) | Avg.
GPT-4o
Default | 49.87±1.10 / 63.47±0.81 | 72.47±0.12 / 77.00±0.40 | 48.47±1.14 / 57.67±0.12 | 61.13±0.99 / 57.13±0.12 | 60.90
DDCoT [49] | 55.20±0.72 / 69.53±0.31 | 73.80±0.92 / 78.80±0.40 | 48.13±0.64 / 57.87±0.58 | 67.27±0.76 / 64.87±0.50 | 64.43
CoCoT [46] | 50.93±0.31 / 66.80±0.69 | 72.20±0.69 / 76.33±0.64 | 49.93±1.75 / 60.93±0.90 | 65.07±1.10 / 60.80±0.40 | 62.87
CCoT [29] | 55.53±0.81 / 67.47±0.31 | 73.00±0.69 / 77.67±0.61 | 48.27±1.86 / 62.27±0.12 | 63.73±0.90 / 62.00±0.53 | 63.74
M3CoT (Ours) | 58.40±0.28 / 69.40±1.13 | 78.90±0.42 / 82.80±1.70 | 56.40±1.13 / 67.90±0.71 | 71.90±2.69 / 62.90±0.14 | 68.58
Gemini 2.0 Flash
Default | 53.27±0.12 / 60.33±0.12 | 74.00±0.20 / 76.47±0.50 | 46.33±0.42 / 56.20±0.20 | 58.27±0.46 / 53.80±0.20 | 59.80
DDCoT [49] | 55.60±0.72 / 62.60±1.04 | 75.53±0.95 / 81.13±0.95 | 46.13±0.95 / 54.67±1.29 | 57.47±0.42 / 55.60±1.22 | 61.09
CoCoT [46] | 55.40±0.40 / 61.67±0.50 | 73.80±1.25 / 77.93±0.12 | 45.27±1.21 / 56.20±0.35 | 58.67±0.42 / 53.53±1.01 | 60.31
CCoT [29] | 55.93±0.46 / 61.53±0.31 | 71.47±0.76 / 76.93±0.58 | 46.67±0.70 / 60.73±1.10 | 57.27±2.27 / 50.93±0.61 | 60.18
M3CoT (Ours) | 57.80±0.20 / 65.80±0.20 | 78.80±0.72 / 82.80±0.40 | 55.60±1.06 / 67.40±1.25 | 62.67±0.81 / 58.07±1.10 | 66.12
Figure 5: Analysis of the benchmark construction pipeline and model performance under varied input conditions: (a) error rate across option-generation strategies, (b) proportion of correctly answered questions between retained and excluded questions, and (c) performance across different visual input modalities.
5 Analysis
5.1 Analysis of Automated QA Generation Pipeline
To examine how the source of distractors affects question difficulty, we sample 160 questions (40 per category) and construct four alternative option sets. In each set, all four answer choices are drawn from a single source: text-only, ego view, exo view, or both views. This setup contrasts with our standard configuration, where each distractor is drawn from a different source (see the sketch below). As shown in Figure 5(a), the error rate increases in the following order: text-only, both-view, single-view, and our composite setting. This result highlights that constructing answer options from diverse input sources substantially raises question difficulty, providing a more rigorous evaluation of LVLMs' multi-view reasoning abilities.
To evaluate the impact of our auto-filtering process, we compare performance on two subsets: the filtered subset (68% of the data), which passed our automatic screening, and the unfiltered subset (the remaining 32%). As shown in Figure 5(b), 42% of questions in the unfiltered subset remained difficult (i.e., incorrectly answered even with full multi-view input), whereas only 12% of questions in the filtered subset were similarly challenging. This substantial reduction indicates that our filtering process effectively removes questions solvable by superficial cues, thus lowering annotation effort and improving the reliability of subsequent human verification. Although this reduction entails the removal of a significant portion of the data, it sharpens the challenge, making the remaining QA samples more diagnostically valuable.
Figure 6: Qualitative examples of answers and reasoning processes generated by different prompting methods. Blue/Red words indicate the key cues that lead to correct/wrong answers.
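For illustration, the difference between the single-source option sets and the standard composite configuration can be sketched as follows; the field names are hypothetical, and in the actual benchmark the final options are curated by human annotators (Section 2.3.3).

```python
# Hypothetical sketch contrasting single-source option sets with the composite
# configuration (one distractor per input source). Names are illustrative only.
import random

def composite_options(correct: str, candidates: dict[str, str]) -> list[str]:
    """Standard setting: each distractor comes from a different input source."""
    distractors = [candidates[src] for src in ("text", "ego", "exo")
                   if candidates[src] != correct][:3]
    options = [correct] + distractors
    random.shuffle(options)
    return options

def single_source_options(correct: str, pool: list[str]) -> list[str]:
    """Ablation setting: all distractors drawn from one source's answer pool."""
    distractors = [a for a in pool if a != correct][:3]
    options = [correct] + distractors
    random.shuffle(options)
    return options

# Candidate answers from the four Step-2 input conditions (illustrative values).
candidates = {"text": "One", "ego": "Two", "exo": "Three", "both": "Four"}
print(composite_options("Four", candidates))
print(single_source_options("Four", ["One", "Two", "Three", "Five"]))
```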
5.2 Analysis of Multi-Perspective Scene Graph Generation Strategies
To analyze the advantages of our three scene graph generation perspectives (Ego2Exo, Exo2Ego, and Ego&Exo), we partition the E3VQA dataset into four subsets (Any, Ego, Exo, and Both) according to which view(s) are necessary to answer each question. Specifically, Any subset questions can be answered with either the ego or exo image; Ego subset questions require only the ego image; Exo subset questions require only the exo image; Both subset questions require both images.
Table 3: Performance comparison of M3CoT perspectives across question subsets grouped by the image view(s) required to answer.
Perspective | Any | Ego | Exo | Both | Avg.
Default | 66.29 | 64.83 | 54.21 | 37.49 | 59.80
Ego&Exo | 66.29 | 67.44 | 56.67 | 50.87 | 63.65
Ego2Exo | 68.08 | 62.71 | 61.51 | 43.91 | 62.83
Exo2Ego | 66.94 | 68.02 | 59.92 | 39.13 | 62.98
M3CoT (ours) | 69.79 | 69.28 | 62.91 | 53.04 | 66.12
Table 3 shows that the Ego&Exo strategy achieves the largest accuracy gain in the Both subset, demonstrating its advantage in integrating complementary cues across viewpoints. In contrast, Ego2Exo performs best on the Exo subset, while Exo2Ego yields the highest improvement on the Ego subset, reflecting their specialization in inferring one view from the other. This highlights that different scene graph generation strategies can provide complementary benefits depending on where critical information is located within the given image sources. In addition, our scene graph refinement stage improves performance beyond any individual strategy by combining their strengths and filling their gaps. These findings confirm that fusing diverse scene graph perspectives produces more robust reasoning in ego–exo multi-image scenarios.
5.3 Qualitative Examples
Figure 6 shows qualitative examples of answers and reasoning processes generated by the DDCoT, CCoT, and M3CoT methods using Gemini 2.0 Flash. Although CCoT produces plausible scene graphs, it fails to integrate information across multiple views, frequently resulting in incorrect answers. Specifically, it often misidentifies the same object viewed from different perspectives as separate entities. In contrast, our method successfully aligns the observations across views, accurately identifies the same object, and extracts the critical information needed to answer the question correctly by leveraging multi-view scene graph reasoning.
6 Conclusion
In this work, we introduced E3VQA, the first benchmark that systematically assesses whether LVLMs can reason jointly over egocentric and exocentric views. By curating 4K high-quality question–answer pairs grounded in synchronized ego–exo images, E3VQA serves as a rigorous test bed for multi-view understanding. In addition, we propose M3CoT, a novel prompting strategy that merges scene graphs from diverse perspectives into a unified representation. Extensive experiments demonstrate that M3CoT delivers consistent gains over the strong CCoT baseline, highlighting the value of integrating diverse viewpoints to address user queries. By enhancing LVLMs' capacity for ego–exo reasoning, this work takes a step toward more context-aware visual assistants capable of operating in complex, real-world environments.
References
[1] Anthropic. Claude 3.5 sonnet model card addendum, 2024.
[2] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. arXiv preprint arXiv:2308.12966, 2023.
[3] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report, 2025.
[4] Sijie Cheng, Zhicheng Guo, Jingwen Wu, Kechen Fang, Peng Li, Huaping Liu, and Yang Liu. Egothink: Evaluating first-person perspective thinking capability of vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14291–14302, 2024.
[5] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–10, 2018.
[6] Zi-Yi Dou, Xitong Yang, Tushar Nagarajan, Huiyu Wang, Jing Huang, Nanyun Peng, Kris Kitani, and Fu-Jen Chu. Unlocking exocentric video-language data for egocentric video representation learning. arXiv preprint arXiv:2408.03567, 2024.
[7] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Sean Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. Faith and fate: Limits of transformers on compositionality. Advances in Neural Information Processing Systems, 36:70293–70332, 2023.
[8] Chenyou Fan. Egovqa: An egocentric video question answering benchmark dataset. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019.
[9] Chenyou Fan, Jangwon Lee, Mingze Xu, Krishna Kumar Singh, Yong Jae Lee, David J Crandall, and Michael S Ryoo. Identifying first-person camera wearers in third-person videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5125–5133, 2017.
[10] Yuqian Fu, Runze Wang, Yanwei Fu, Danda Pani Paudel, Xuanjing Huang, and Luc Van Gool. Objectrelator: Enabling cross-view object relation understanding in ego-centric and exo-centric videos. arXiv preprint arXiv:2411.19083, 2024.
[11] Gemini Team. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
[12] Kristen Grauman, Andrew Westbury, Lorenzo Torresani, Kris Kitani, Jitendra Malik, Triantafyllos Afouras, Kumar Ashutosh, Vijay Baiyya, Siddhant Bansal, Bikram Boote, et al. Ego-exo4d: Understanding skilled human activity from first- and third-person perspectives. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19383–19400, 2024.
[13] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[14] Baoxiong Jia, Yixin Chen, Siyuan Huang, Yixin Zhu, and Song-Chun Zhu. Lemma: A multi-view dataset for learning multi-agent multi-task activities. In European Conference on Computer Vision, pages 767–786. Springer, 2020.
[15] Dongfu Jiang, Xuan He, Huaye Zeng, Cong Wei, Max Ku, Qian Liu, and Wenhu Chen. Mantis: Interleaved multi-image instruction tuning. Transactions on Machine Learning Research, 2024.
[16] Andrew K Lampinen, Ishita Dasgupta, Stephanie CY Chan, Kory Matthewson, Michael Henry Tessler, Antonia Creswell, James L McClelland, Jane X Wang, and Felix Hill. Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329, 2022.
[17] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, et al. Llava-onevision: Easy visual task transfer. arXiv preprint arXiv:2408.03326, 2024.
[18] Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, and Chunyuan Li. Llava-next-interleave: Tackling multi-image, video, and 3d in large multimodal models.
arXiv preprint arXiv:2407.07895, 2024.
[19] Yanghao Li, Tushar Nagarajan, Bo Xiong, and Kristen Grauman. Ego-exo: Transferring visual representations from third-person to first-person videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6943–6953, 2021.
[20] Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z Xu, Difei Gao, Rong-Cheng Tu, Wenzhe Zhao, Weijie Kong, et al. Egocentric video-language pretraining. Advances in Neural Information Processing Systems, 35:7575–7586, 2022.
[21] Chao Liu, Chi San Cheung, Mingqing Xu, Zhongyue Zhang, Mingyang Su, and Mingming Fan. Toward facilitating search in vr with the assistance of vision large language models. In Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology, pages 1–14, 2024.
[22] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in neural information processing systems, 36, 2024.
[23] Jia-Wei Liu, Weijia Mao, Zhongcong Xu, Jussi Keppo, and Mike Zheng Shou. Exocentric-to-egocentric video generation. Advances in Neural Information Processing Systems, 37:136149–136172, 2024.
[24] Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, et al. Deepseek-vl: towards real-world vision-language understanding. arXiv preprint arXiv:2403.05525, 2024.
[25] Arjun Majumdar, Anurag Ajay, Xiaohan Zhang, Pranav Putta, Sriram Yenamandra, Mikael Henaff, Sneha Silwal, Paul Mcvay, Oleksandr Maksymets, Sergio Arnaud, et al. Openeqa: Embodied question answering in the era of foundation models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16488–16498, 2024.
[26] Sagnik Majumder, Tushar Nagarajan, Ziad Al-Halah, and Kristen Grauman. Switch-a-view: Few-shot view selection learned from edited videos. arXiv preprint arXiv:2412.18386, 2024.
[27] Sagnik Majumder, Tushar Nagarajan, Ziad Al-Halah, Reina Pradhan, and Kristen Grauman. Which viewpoint shows it best? language for weakly supervising view selection in multi-view videos. arXiv preprint arXiv:2411.08753, 2024.
[28] Karttikeya Mangalam, Raiymbek Akshulakov, and Jitendra Malik. Egoschema: A diagnostic benchmark for very long-form video language understanding. Advances in Neural Information Processing Systems, 36:46212–46244, 2023.
[29] Chancharik Mitra, Brandon Huang, Trevor Darrell, and Roei Herzig. Compositional chain-of-thought prompting for large multimodal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14420–14431, 2024.
[30] Shraman Pramanick, Yale Song, Sayan Nag, Kevin Qinghong Lin, Hardik Shah, Mike Zheng Shou, Rama Chellappa, and Pengchuan Zhang. Egovlpv2: Egocentric video-language pre-training with fusion in the backbone. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5285–5297, 2023.
[31] Dominick Reilly, Manish Kumar Govind, Le Xue, and Srijan Das. From my view to yours: Ego-augmented learning in large vision language models for understanding exocentric daily living activities. arXiv preprint arXiv:2501.05711, 2025.
[32] Gunnar A Sigurdsson, Abhinav Gupta, Cordelia Schmid, Ali Farhadi, and Karteek Alahari. Charades-ego: A large-scale dataset of paired third and first person videos. arXiv preprint arXiv:1804.09626, 2018.
[33] Alessandro Suglia, Claudio Greco, Katie Baker, Jose L. Part, Ioannis Papaioannou, Arash Eshghi, Ioannis Konstas, and Oliver Lemon. AlanaVLM: A multimodal embodied AI foundation model for egocentric video understanding. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 11101–11122. Association for Computational Linguistics, November 2024.
[34] Gemini Team. Gemini: A family of highly capable multimodal models, 2024.
[35] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 2024.
[36] Qitong Wang, Long Zhao, Liangzhe Yuan, Ting Liu, and Xi Peng. Learning from semantic alignment between unpaired multiviews for egocentric video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3307–3317, 2023.
[37] Weiyun Wang, Shuibo Zhang, Yiming Ren, Yuchen Duan, Tiantong Li, Shuo Liu, Mengkang Hu, Zhe Chen, Kaipeng Zhang, Lewei Lu, et al. Needle in a multimodal haystack. arXiv preprint arXiv:2406.07230, 2024.
[38] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:24824–24837, 2022.
[39] Yiqi Wu, Xiaodan Hu, Ziming Fu, Siling Zhou, and Jiangong Li. Gpt-4o: Visual perception performance of multimodal large language models in piglet activity understanding. arXiv preprint arXiv:2406.09781, 2024.
[40] Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. Video question answering via gradually refined attention over appearance and motion. In Proceedings of the 25th ACM international conference on Multimedia, pages 1645–1653, 2017.
[41] Jilan Xu, Yifei Huang, Junlin Hou, Guo Chen, Yuejie Zhang, Rui Feng, and Weidi Xie. Retrieval-augmented egocentric video captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13525–13536, 2024.
[42] Jilan Xu, Yifei Huang, Baoqi Pei, Junlin Hou, Qingqiu Li, Guo Chen, Yuejie Zhang, Rui Feng, and Weidi Xie. Egoexo-gen: Ego-centric video prediction by watching exo-centric videos. arXiv preprint arXiv:2504.11732, 2025.
[43] Zihui Sherry Xue and Kristen Grauman. Learning fine-grained view-invariant representations from unpaired ego-exo videos via temporal alignment. Advances in Neural Information Processing Systems, 36:53688–53710, 2023.
[44] Heeseung Yun, Youngjae Yu, Wonsuk Yang, Kangil Lee, and Gunhee Kim. Pano-avqa: Grounded audio-visual question answering on 360° videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2031–2041, 2021.
[45] Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, and Louis-Philippe Morency. Social-iq: A question answering benchmark for artificial social intelligence. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8807–8817, 2019.
[46] Daoan Zhang, Junming Yang, Hanjia Lyu, Zijian Jin, Yuan Yao, Mingkai Chen, and Jiebo Luo. Cocot: Contrastive chain-of-thought prompting for large multimodal models with multiple image inputs. arXiv preprint arXiv:2401.02582, 2024.
[47] Haoyu Zhang, Qiaohui Chu, Meng Liu, Yunxiao Wang, Bin Wen, Fan Yang, Tingting Gao, Di Zhang, Yaowei Wang, and Liqiang Nie. Exo2ego: Exocentric knowledge guided mllm for egocentric video understanding. arXiv preprint arXiv:2503.09143, 2025.
[48] Ziwei Zhao, Yuchen Wang, and Chuhua Wang. Fusing personal and environmental cues for identification and segmentation of first-person camera wearers in third-person views. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16477–16487, 2024.
[49] Ge Zheng, Bin Yang, Jiajin Tang, Hong-Yu Zhou, and Sibei Yang. Ddcot: Duty-distinct chain-of-thought prompting for multimodal reasoning in language models. Advances in Neural Information Processing Systems, 36:5168–5191, 2023.
[50] Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Shao, Zhangwei Gao, Erfei Cui, Xuehui Wang, Yue Cao, Yangzhou Liu, Xingguang Wei, Hongjie Zhang, Haomin Wang, Weiye Xu, Hao Li, Jiahao Wang, Nianchen Deng, Songze Li, Yinan He, Tan Jiang, Jiapeng Luo, Yi Wang, Conghui He, Botian Shi, Xingcheng Zhang, Wenqi Shao, Junjun He, Yingtong Xiong, Wenwen Qu, Peng Sun, Penglong Jiao, Han Lv, Lijun Wu, Kaipeng Zhang, Huipeng Deng, Jiaye Ge, Kai Chen, Limin Wang, Min Dou, Lewei Lu, Xizhou Zhu, Tong Lu, Dahua Lin, Yu Qiao, Jifeng Dai, and Wenhai Wang. Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models, 2025.
Table 4: Comparison of existing VQA benchmarks with E3VQA. Unlike existing benchmarks, E3VQA is designed to evaluate comprehensive scene understanding and reasoning by leveraging paired ego–exo images and diverse question perspectives.
Benchmark | Task Objective | Visual Perspective | Question Perspective | Answer Type | Evaluator | #Questions (test)
MSVD-QA [40] | General Understanding | Exo | Exo | Predefined-Label | Accuracy | 13K
MSRVTT-QA [40] | General Understanding | Exo | Exo | Predefined-Label | Accuracy | 72K
Social-IQ [45] | Social Understanding | Exo | Exo | Multi-Choice | Accuracy | 7.5K
Pano-AVQA [44] | Spatial / Audio-Visual Reasoning | Exo | Exo | Predefined-Label | Accuracy | 5.3K
EgoVQA [8] | Egocentric Visual Understanding | Ego | Ego or Exo | Multi-Choice | Accuracy | 120
EgoSchema [28] | Long-Term Reasoning | Ego | Ego | Multi-Choice | Accuracy | 5K
EgoThink [4] | First-Person Thinking | Ego | Ego | Open-Ended | LLMs | 700
EmbodiedQA [5] | Goal-Driven Scene Understanding | Ego | Exo | Predefined-Label | Accuracy | 529
OpenEQA [25] | Environment Understanding | Ego | Ego or Exo | Open-Ended | LLMs | 1.6K
E3VQA | Comprehensive Scene Understanding and Reasoning | Ego and Exo | Ego or Exo | Multi-Choice | Accuracy | 4K
A Related Work
A.1 Ego-Exo Datasets and Tasks
Egocentric and exocentric views offer complementary information for understanding users and their environments. Early datasets like Charades-Ego [32] and LEMMA [14] introduced paired ego-exo data, while EgoExo4D [12] further scaled this paired ego-exo data with large, synchronized videos capturing diverse real-world scenarios. To generalize semantic understanding across multiple perspectives, a body of work has focused on learning view-invariant representations [36, 43]. Furthermore, efforts to align ego-exo content have emerged, including object-level mappings [10] and techniques for identifying and segmenting camera wearers in exocentric scenes [9, 48]. In parallel, cross-view knowledge transfer has been actively explored, with each perspective leveraged to improve the understanding of the other [47, 19, 41, 31]. Several studies have addressed viewpoint selection across perspectives by proposing methods for dynamically selecting informative views over time [26, 27]. Others have explored generating egocentric video from exocentric inputs using diffusion-based models [23, 42] or cropping third-person frames to distill egocentric-relevant cues [6]. Despite these advances, a task that jointly reasons over synchronized egocentric and exocentric views within LVLMs remains underexplored, highlighting a promising direction for future research.
A.2 Visual Question Answering with LVLMs
Visual Question Answering (VQA) benchmarks test a model's ability to interpret and reason over diverse visual content. Most existing VQA datasets are constructed from large-scale web-crawled data, typically consisting of images captured from fixed third-person cameras.
MSVD-QA [40] and MSRVTT-QA [40] target general visual understanding through diverse question types, including what, how, when, where, and why. Pano-AVQA [44] evaluates spatial and audio-visual reasoning in panoramic 360° scenes, while Social-IQ [45] focuses on social understanding by inferring the intentions and interactions of people within a scene. To support scenarios that require understanding from the user's perspective, egocentric VQA datasets capturing first-person views have emerged. EgoVQA [8] evaluates first-person visual understanding by offering both egocentric and exocentric queries on first-person visual inputs. EgoSchema [28] evaluates long-form egocentric video understanding by assessing a model's ability to recall previously observed objects and events. EgoThink [4] evaluates first-person reasoning across diverse categories that reflect practical, real-world scenarios. Another line of work includes embodied QA benchmarks such as EmbodiedQA [5] and OpenEQA [25], where agents are required to navigate or interact with their environments to answer queries. Although numerous VQA datasets aim to evaluate LVLMs across diverse aspects, they cannot assess a model's ability to seamlessly combine complementary visual information from paired ego and exo views (see Table 4).
Figure 7: E3VQA statistics: (a) Distribution of correct answers across the four options (A–D), (b) Distribution of source video types used to construct E3VQA, and (c) Composition of question types within each category.
B E3VQA Benchmark Details
B.1 Categories and Challenges
In addition to the challenges described in Section 2.2, each of the following four categories highlights a distinct challenge in the ego-exo multi-image scenario:
•Pose & Action Perception focuses on recognizing a person's physical state and movement, such as how their body is positioned and what kinds of gestures or actions they are performing. The presence of multiple people, including the user and duplicated individuals across views, can confuse the model when identifying the question's target. The model must correctly identify the intended individuals and interpret their physical state and behavior.
•Object & Attribute Perception involves identifying objects and their attributes, such as color, pattern, or type. Objects may appear in only one view, be partially occluded, or look different due to variations in viewpoint and field of view. To answer correctly, models must resolve such ambiguities and ground the object consistently across views.
•Numerical Reasoning addresses tasks involving counting and comparing quantities, such as determining the number of people or objects in a scene. A single view may not include all instances necessary to answer the question, and the same object may appear redundantly across different views. To produce accurate counts, the model must integrate information from both views by handling overlapping objects and aggregating evidence across views.
•Spatial Reasoning focuses on understanding the spatial information of a scene, including how objects and people are positioned relative to one another and how they are arranged within the environment. In multi-view spatial reasoning, differences in viewpoint angle and field
of view can cause the same object to appear at varying positions in each image, become occluded in some views, or exhibit different spatial relationships with surrounding objects. To overcome these challenges, the model must align positional information from multiple views to construct a coherent understanding of spatial relationships within the scene.
B.2 Statistics
Figure 7 summarizes the statistics of the E3VQA benchmark. Figure 7(a) shows the distribution of correct answer choices across options (A–D) within each category. The uniform distribution of correct answers across the four options helps mitigate answer position bias. Figure 7(b) shows the distribution of source video types used to construct E3VQA, demonstrating the benchmark's broad coverage of real-world user-interaction scenarios. Finally, Figure 7(c) illustrates the detailed composition of question types within each category, underscoring E3VQA's broad scope of evaluation.
Figure 8: User interface provided for annotators during the human verification stage.
B.3 Details of Human Verification Stage
During the human verification stage, each of the four expert annotators is assigned to a specific question category and conducts verification using the user interface shown in Figure 8. Annotators initially utilized the view-specific responses generated in Section 2.3.2 to construct answer options. To supplement potentially redundant or low-quality responses, we generate two additional option sets, each containing four candidate options derived from the ego and exo images, respectively. Using these option sets, each annotator independently curates full question sets, including the question, correct answer, and distractor options, for their assigned category. After the initial construction, each annotator reviews subsets created by others to ensure consistency and clarity across the dataset, filtering out any ambiguous or low-quality instances.
C Experimental Details
C.1 LVLMs Overview and Evaluation Setup
Table 5 provides an overview of the LVLMs used in our experiments in terms of model architecture and the use of egocentric data in training. These models are selected based on their capability to process multi-image inputs. By evaluating models with diverse vision-language architectures, we examine how recent LVLMs respond to and reason through the challenges posed by the E3VQA benchmark. For evaluation, we use NVIDIA RTX A6000 GPUs. All evaluation results are reported as the mean and standard deviation over three independent runs, using each model's default generation settings.
C.2 CoT Baselines Overview
Building on its success in large language models (LLMs), CoT prompting has been extended to LVLMs to enhance inference-time reasoning. DDCoT [49] breaks down a question into a sequence of sub-questions and corresponding sub-answers, which are then used collectively to derive the final answer to the original question. CoCoT [46], introduced for multi-image input scenarios, compares the similarities and differences between images, guiding the model to answer questions based on the identified visual contrasts. CCoT [29] helps understand the overall context of an image through scene graphs, where a scene graph is first generated via the LVLM and then incorporated into the prompt to enable compositional reasoning over objects, relations, and attributes. Despite their successes, their applicability to ego-exo multi-image contexts remains unexplored, raising an open challenge for extending CoT methods to multi-image settings.
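For illustration, the reporting protocol of C.1 (per-category accuracy, averaged as mean and standard deviation over three independent runs) can be sketched as follows; the data layout and helper names are assumptions of this sketch, not the authors' actual evaluation code.

```python
# Hypothetical sketch of the accuracy aggregation behind Tables 1, 2, and 6:
# each run yields one predicted option per question; we report the per-category
# mean and standard deviation of accuracy over three runs.
from statistics import mean, stdev
from collections import defaultdict

def run_accuracy(predictions: dict[str, str], answers: dict[str, str],
                 categories: dict[str, str]) -> dict[str, float]:
    """Accuracy (%) per category (e.g., 'Numerical/Ego') for a single run."""
    hits, totals = defaultdict(int), defaultdict(int)
    for qid, gold in answers.items():
        cat = categories[qid]
        totals[cat] += 1
        hits[cat] += int(predictions.get(qid) == gold)
    return {cat: 100.0 * hits[cat] / totals[cat] for cat in totals}

def aggregate(runs: list[dict[str, float]]) -> dict[str, tuple[float, float]]:
    """Mean and standard deviation of per-category accuracy across runs."""
    cats = runs[0].keys()
    return {cat: (mean(r[cat] for r in runs), stdev(r[cat] for r in runs)) for cat in cats}

# Tiny example with three runs over two questions.
answers = {"q1": "B", "q2": "D"}
categories = {"q1": "Numerical/Ego", "q2": "Spatial/Exo"}
runs = [run_accuracy(p, answers, categories)
        for p in ({"q1": "B", "q2": "D"}, {"q1": "B", "q2": "A"}, {"q1": "C", "q2": "D"})]
print(aggregate(runs))
```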
Table 5: Comparison of open-source LVLMs: architecture (vision encoder and LLM) and egocentric data usage during training.
Model | Vision Encoder | LLM Backbone | Train w/ Ego Data
InternVL3-14B | InternViT-300M-448px-V2.5 | Qwen2.5-14B | Not Provided
Qwen2.5-VL-7B | ViT (customized) | Qwen2.5-7B | Not Provided
Qwen2-VL-7B | ViT-L | Qwen2-7B | ✗
LLaVA-NeXT-OneVision-7B | SigLIP-SO | Qwen2-7B | ✓
InternVL2-8B | InternViT-300M | Qwen2.5-7B | ✓
LLaVA-NeXT-Interleave-7B | SigLIP-SO | Qwen1.5-7B | ✗
MANTIS-Idefics2-8B | SigLIP | Mistral-7B-v0.1 | ✗
Deepseek-VL-chat-7B | SigLIP-L, SAM-B | DeepSeek-LLM-7B | ✗
Qwen-VL-Chat-7B | ViT-bigG | Qwen-7B | ✗
Table 6: Performance comparison of various methods on open-source models.
Methods | Pose & Action (Ego / Exo) | Object & Attribute (Ego / Exo) | Numerical (Ego / Exo) | Spatial (Ego / Exo) | Avg.
InternVL3-14B
Default | 44.73±1.50 / 54.93±1.42 | 68.13±0.81 / 73.73±0.99 | 35.60±1.11 / 53.00±0.20 | 45.67±0.58 / 48.33±0.99 | 53.02
DDCoT [49] | 47.87±0.83 / 58.33±2.64 | 68.47±0.50 / 72.67±1.42 | 35.33±2.53 / 46.80±2.12 | 50.67±1.10 / 45.93±0.95 | 53.26
CoCoT [46] | 49.53±0.81 / 57.27±0.50 | 68.27±1.14 / 72.53±1.14 | 34.87±1.55 / 47.93±0.64 | 49.20±1.91 / 46.27±0.95 | 53.23
CCoT [29] | 44.60±1.91 / 58.40±2.09 | 65.27±0.64 / 73.80±0.53 | 37.80±3.30 / 50.00±0.72 | 46.27±1.33 / 48.80±1.78 | 53.12
M3CoT (Ours) | 45.87±1.21 / 60.00±0.35 | 70.60±0.40 / 75.73±0.76 | 35.07±0.50 / 50.87±0.70 | 50.80±0.92 / 49.40±0.72 | 54.79
InternVL3-8B
Default | 43.70±3.25 / 54.90±0.42 | 64.80±0.85 / 70.30±0.71 | 35.90±2.12 / 45.20±1.13 | 42.10±2.97 / 46.60±3.11 | 50.44
DDCoT [49] | 48.10±0.99 / 59.20±4.24 | 67.20±0.28 / 68.80±1.13 | 34.60±0.85 / 47.00±0.00 | 46.30±2.69 / 45.80±1.41 | 52.13
CoCoT [46] | 43.90±0.14 / 58.20±1.13 | 65.10±0.42 / 68.40±1.13 | 37.10±1.84 / 48.70±0.42 | 43.50±2.97 / 43.40±0.57 | 51.04
CCoT [29] | 44.00±0.85 / 55.00±1.41 | 63.60±0.57 / 68.30±1.27 | 35.50±0.42 / 51.20±1.41 | 43.40±0.57 / 44.70±3.54 | 50.71
M3CoT (Ours) | 45.50±0.14 / 57.20±0.00 | 68.20±0.00 / 71.20±0.00 | 37.60±0.00 / 47.50±0.71 | 53.80±0.00 / 49.20±0.00 | 53.78
D Additional Experiments and Analysis of M3CoT
D.1 Evaluation on Open-Source Models
We present experimental results of our M3CoT prompting technique compared to existing CoT methods on open-source LVLMs. Specifically, we apply M3CoT on InternVL3-14B [50], the top-performing open-source model, and further evaluate the performance on InternVL3-8B. As shown in Table 6, most CoT methods result in only marginal performance gains, with several failing to improve accuracy and even causing degradation in certain categories. This aligns with prior findings suggesting that CoT methods are often ineffective in smaller models with limited reasoning capability [38, 16, 7]. Despite the limitations observed in smaller models, our M3CoT consistently achieves superior performance compared to other CoT methods, highlighting its robustness across model sizes.
D.2 Analysis of Iteration Steps in Multi-Agent Scene Graph Refinement
To analyze the effect of iteration steps in M3CoT, we report the accuracy of each individual perspective as well as the majority-voted answer derived from them at each iteration. As shown in Figure 9, without any information exchange across perspectives, all individual perspectives and the majority-voted answer achieve relatively low accuracy (iteration 0). As agents begin to exchange their scene graphs, we observe a steady improvement in the accuracy of each individual perspective, suggesting that iterative refinement facilitates mutual enhancement through shared contextual understanding. This process also leads to a corresponding increase in voting accuracy, reflecting not only the enhanced quality of individual predictions but also a stronger consensus across perspectives.
However, beyond the second iteration, we find that both individual accuracy and voting accuracy plateau. We attribute this saturation to the convergence of information across agents: while initial iterations benefit from the diversity of complementary perspectives, excessive alignment diminishes the gains from their integration. This observation highlights a trade-off in our multi-perspective refinement strategy between refining individual scene representations and preserving representational diversity. Note that all experiments and analyses in this paper are conducted with a fixed iteration count of 1, using Gemini 2.0 Flash unless otherwise specified.
Figure 9: Performance across different perspectives and majority voting results over iteration steps.
D.3 Qualitative Examples of Scene Graphs from Three Perspectives
To further examine how different perspectives in M3CoT contribute to capturing complementary information, we present additional qualitative examples of scene graphs derived from each perspective. As shown in Figures 10, 11, and 12, the scene graphs from the three perspectives exhibit complementary strengths depending on the question, particularly regarding which image should be referenced to answer it.
D.4 Additional Qualitative Examples of Different CoT Reasoning Processes
We provide additional examples that illustrate how our method improves reasoning compared to other CoT approaches (see Figure 13).
E Prompt Templates
E.1 Prompt Templates for E3VQA Construction
To guide LVLMs in understanding the question categories and tasks for generating meaningful question–answer pairs, we carefully design the prompts for each stage. To generate question–answer pairs from a single viewpoint, we use the prompts shown in Figures 14–23. For view-specific response generation, we apply the prompts in Figures 24–31. For response-based filtering, we use the prompts shown in Figures 32 and 33. Finally, to generate four candidate options from either the ego or exo image, we use the prompts illustrated in Figures 34 and 35.
E.2 Prompt Templates for Experiments
The default system prompt and question prompt for E3VQA are shown in Figure 36. In addition, we present the prompts employed in M3CoT. Scene graph generation prompts for each perspective are shown in Figures 37–39, and the prompts for scene graph refinement across agents are presented in Figure 40. For reference, the prompts used in other CoT methods are shown in Figures 41–43.
F Limitations
Despite its contributions, this work has several limitations. First, the E3VQA benchmark is solely based on the EgoExo4D dataset, which may exhibit dataset bias and limited generalizability in real-world visual assistant scenarios. Second, although the queries and answer options in E3VQA are carefully crafted, they may not fully capture the diversity of natural language expressions and user intents encountered in real-world interactions with visual AI assistants. Third, while recent AI APIs offer a solution for scaling the benchmark, their use entails substantial financial costs. Fourth, M3CoT introduces increased computational overhead due to its multi-step reasoning across multiple perspectives, which may limit its applicability in resource-constrained scenarios.
Finally, since E3VQA is constructed from images rather than videos, the benchmark may not fully assess
an LVLM's ability to capture temporal cues and motion dynamics, an aspect we leave for future work.

G Ethics Statement
This work has the potential to positively impact society by enhancing the capabilities of visual assistants and embodied AI systems, particularly in scenarios that require comprehensive scene understanding from both egocentric and exocentric views. Such advancements may enhance human–AI interaction and improve support in assistive technologies. However, the use of egocentric visual data may raise important privacy concerns, especially in sensitive environments. We acknowledge these risks and emphasize the importance of implementing safeguards and transparency mechanisms in future deployments. As part of our commitment to responsible data use, we have obtained the appropriate licenses from the contributing institutions for the use of the EgoExo4D dataset in this research.

Figure 10: Qualitative examples of answers and reasoning processes generated by different perspectives (question: "How many people are in the scene?"). The scene graph from the Ego&Exo perspective demonstrates a strong capability to capture the information necessary for answering questions grounded in both ego and exo views.
Scene graph:[{"objects": [{"id": "person_light_blue_shirt", "attributes": {"shirt_color": "light blue","arm_position": "hands resting by the sides”}}, … , {"id": "person_crossing_arms", …}]Ego2ExoFinal Answer : A)Scene graph:[{objects: [{”name": "person", "attributes": "wearing light blue shirt”}, … , {”name": "arms", "relation": "crossed", "target": "person"}]Exo2EgoFinal Answer : C)Scene graph:[{"objects": [{"id": "person1", "description": "person in light blue shirt", "attributes": {"shirt_color": "light blue", "position": "standing", "pose": "crossing arms"}, … }]}]Ego&ExoFinal Answer : C)Q: How is the person in the light blue shirt positioned?A) Hands resting by the sidesB) Sitting on a benchC) Crossing arms D) Leaning on a ladderEgo ViewExo ViewScene graph:[{"objects": [{"id": "person", "description": "person playing violin"}, … , {“id”: “tripod”, "relation": "to the right of”, ”target": "person"}, …]}]Ego2ExoFinal Answer : B)Scene graph:[{”objects”: [{"id": "person", "relation": "playing", "target": "violin"}, … , {"id": "tripod", "relation": "in front of", "target": "person"}]Exo2EgoFinal Answer : C)Scene graph:[{"objects": [{"id": "tripod", "relation: "to_the_right_of", "target": "person"}, … , {"id": "light_switch", "relation": "to_the_right_of", "target": "person"}]}]Ego&ExoFinal Answer : C)Q: What object is on the right of the person playing violin?A) Music stand B) TripodC) Light switch D) Trash binEgo ViewExo View Figure 11: Qualitative examples of answers and reasoning processes generated by different perspec- tives. The scene graph from the Ego2Exo perspective demonstrates
Figure 12: Qualitative examples of answers and reasoning processes generated by different perspectives (questions: "What is the man in a light green top holding in his left hand?" and "What is the object closest to the window?"). The scene graph from the Exo2Ego perspective demonstrates a strong capability to capture the information necessary for answering questions grounded in the ego view alone.
Figure 13: Qualitative examples of answers and reasoning processes generated by different prompting methods (questions: "What object is to the right of the sink, relative to the person?" and "Which cooking utensil is on the blue plate positioned on the kitchen island?").
Egocentric Single-View QA Generation Prompt
{Ego Image} You are given the visual input from the camera worn by the user (referred to as 'I'). Based on this visual input, generate three question-answer pairs. Ensure that the generated question-answer pairs are directly based on the visual input. {Category-wise Prompt}
Requirements: Each question must explicitly include the pronoun 'I' or 'me' to ensure the focus remains on the user. Each answer should be a single word or a short phrase. Ensure that all three question-answer pairs meet these criteria and are relevant to the visual input. Strictly adhere to the format of the provided examples.
Figure 14: Egocentric single-view QA generation prompt.

Egocentric Single-View QA Generation Prompt: Action & Pose
Instructions: Each question must focus on my actions, body posture, or gestures. The answer must be a verb or verb phrase (e.g., writing, stretching, crossing arms). Do not generate QA pairs with overly generic answers like 'standing' or 'reaching'.
Question Categories & Templates:
Actions (What am I doing?)
- What am I doing?
- What am I doing with my [body part]?
Body Posture (How am I positioned?)
- How is my body positioned?
- How am I sitting/standing/lying?
- What is my posture?
Gestures (What movement am I making?)
- What am I doing with my hands?
- What gesture am I making?
- How am I moving my arms/legs/head?
Examples:
Q: How is my body positioned? A: Sitting cross-legged
Q: What am I doing with my left hand? A: Holding a book
Q: What gesture am I making? A: Waving
Figure 15: Egocentric single-view QA generation prompt: Action & Pose.

Egocentric Single-View QA Generation Prompt: Object & Attribute
Instructions: Each question must focus on identifying
a specific object (e.g., mug cup, laptop) or describing an attribute of an object (e.g., navy blue, striped pattern) associated with me. The answer must be a noun or noun phrase, avoiding overly generic responses such as 'something' or 'object'.
Question Categories & Templates:
Object Identification (What am I interacting with?)
- What am I holding?
- What object is on the table beside me?
- Which item am I picking up?
Object Attributes (What does it look like?)
- What color is the shirt I am wearing?
- What pattern is on my jacket?
- What type of shoes am I wearing?
Examples:
Q: What color is the shirt I am wearing? A: Navy blue
Q: Which object am I holding in my right hand? A: A small notebook
Q: What pattern does my sweater have? A: Checkered pattern
Figure 16: Egocentric single-view QA generation prompt: Object & Attribute.

Egocentric Single-View QA Generation Prompt: Spatial
Instructions: Each question must focus on the spatial relationships between me and objects in my surroundings. The answer must be a specific object or location descriptor (e.g., coffee cup, bookshelf, under the table). Do not generate QA pairs with overly generic answers.
Question Categories & Templates:
Object Proximity (What is closest or farthest?)
- What object is closest to me?
- Which object is the farthest from me?
- What is the nearest object to my [body part]?
Relative Positioning (Where are objects located?)
- What object is to my left/right/front/behind?
- Which object is above/below me?
Spatial Relations (How are objects arranged?)
- Which object is between me and
[another object]?
Examples:
Q: What object is closest to my left hand? A: Coffee cup
Q: Which object is the farthest from me? A: Bookshelf
Q: What object is on my right side? A: Tissue
Figure 17: Egocentric single-view QA generation prompt: Spatial.

Egocentric Single-View QA Generation Prompt: Numerical
Instructions: Each question must focus on numerical reasoning by counting or quantifying specific elements directly related to me. This may include the number of people, objects, or other countable items present in my surroundings. The answer must be a numerical value that accurately represents the count of the indicated elements. Do not generate questions about overly generic objects (e.g., items, objects). All numerical answers must be within the range of 0 to 5.
Question Categories & Templates:
Counting People (How many people are around me?)
- How many people are in the image excluding me?
- How many individuals are facing the same direction as I am?
Counting Objects (How many things are near or with me?)
- How many [objects] am I holding?
- How many [items] are on the table beside me?
Quantitative Comparisons (How do the numbers compare to what I have?)
- How many more books are on my desk than on the shelf?
- By how much does the number of items in my hands exceed the number on the table?
Examples:
Q: How many people are in the image excluding me? A: 3
Q: How many more bowls are on my table compared to the table behind me? A: 2
Q: How many apples am I holding? A: 3
Figure 18: Egocentric single-view QA generation prompt: Numerical.

Exocentric Single-View QA Generation Prompt
{Exo Image} You are given with the visual input from a fixed-position camera capturing a scene. Based on this visual input, generate three question-answer pairs. Ensure that the generated question-answer pairs are directly based on
the visual input. {Category-wise Prompt}
Requirements: Each answer should be a single word or a short phrase. Ensure that all three question-answer pairs meet these criteria and are relevant to the visual input. Strictly adhere to the format of the provided examples.
Figure 19: Exocentric single-view QA generation prompt.

Exocentric Single-View QA Generation Prompt: Action & Pose
Instructions: Each question must focus on the actions, body posture, or gestures within the scene. The answer must be a verb or verb phrase (e.g., writing, stretching, crossing arms). Do not generate QA pairs with overly generic answers like 'standing' or 'reaching'.
Question Categories & Templates:
Actions (What is the person doing?)
- What is the [descriptive] person doing?
- What is the [descriptive] person doing with their [body part]?
Body Posture (How is the person positioned?)
- How is the [descriptive] person positioned?
- What is the posture of the [descriptive] person?
Gestures (What movements is the person making?)
- What kind of gesture is the [descriptive] person making?
- How is the [descriptive] person moving their arms/legs/head?
Examples:
Q: What is the man sitting in the chair doing? A: Watching a phone
Q: What is the posture of the person wearing a green shirt? A: Raising one arm
Q: What is the woman in the black jacket doing with their right hand? A: Holding a book
Figure 20: Exocentric single-view QA generation prompt: Action & Pose.

Exocentric Single-View QA Generation Prompt: Object & Attribute
Instructions: Each question must focus on identifying
a specific object in the scene (e.g., 'mug cup', 'laptop') or describing an attribute of an object (e.g., 'navy blue', 'striped pattern'). Questions should reference people or objects by descriptors (e.g., 'the woman in the white top', 'the man with the striped shirt'). The answer must be a noun or noun phrase, avoiding overly generic responses such as 'something' or 'object'.
Question Categories & Templates:
Object Identification (What is present?)
- What is the man with the striped shirt holding?
- What object is placed on the table?
- Which item is the woman wearing blue top picking up?
Object Attributes (What does it look like?)
- What color is the shirt worn by the man wearing a cap?
- What pattern is on the jacket worn by the woman carrying a handbag?
- What type of shoes is the man standing near the window wearing?
Examples:
Q: What color is the top worn by the woman holding the towel? A: White
Q: Which object is the man in the black shirt holding in his right hand? A: Smartphone
Q: What pattern does the sweater worn by the person holding a cup have? A: Checkered pattern
Figure 21: Exocentric single-view QA generation prompt: Object & Attribute.

Exocentric Single-View QA Generation Prompt: Spatial
Instructions: Each question must explicitly reference an object's or a person's spatial relationship within the scene. The answer must be a specific object or location descriptor (e.g., scissors, frying pan, under the table). Do not generate QA pairs with overly generic answers.
Question Categories & Templates:
Object Proximity (What is closest or farthest?)
- Which object is closest to the person wearing [specific item]?
- Which object is the farthest from [reference point]?
- What is the nearest object to [specific location or object]?
Relative Positioning (Where are objects located?)
- What object is to the left/right/front/behind of the man with [specific item]?
- What object is to the left/right/front/behind [reference object]?
- Which object is positioned above/below [reference object]?
Spatial Relations (How are objects arranged?)
- Which object is positioned between [object A] and [object B]?
- What item is placed underneath/inside [object]?
- Which object is located between the two people sitting on the bench?
Examples:
Q: What is the object on the far right of the desk? A: Scissors
Q: Which cookware is closest to the woman wearing a striped shirt? A: Frying pan
Q: What object is placed directly in front of the man wearing a cap? A: Backpack
Q: What object is placed underneath the table? A: Storage box
Figure 22: Exocentric single-view QA generation prompt: Spatial.

Exocentric Single-View QA Generation Prompt: Numerical
Instructions: Each question must focus on numerical reasoning by counting or quantifying specific elements within the scene. This may include the number of people, objects, or other countable items present in the image. The answer must be a numerical value that accurately represents the count of the indicated elements. Do not generate questions about overly generic objects (e.g., items, objects). All numerical answers must be within the range of 0 to 5.
Question Categories & Templates:
Counting People (How many are there?)
- How many people are in the scene?
- How many individuals are facing the camera?
Counting Objects (How many things are visible?)
- How many objects is [person descriptor] holding?
- How many items are on the table?
Quantitative Comparisons (How do the numbers compare?)
- How many more books are on the table than on the shelf?
- By how much does the number of items in the man's hands exceed the number on the table?
Examples:
Q: How many people are in the scene? A: 3
Q: How many objects is the woman in the striped shirt holding? A: 2
Q: How many oranges are placed on the table? A: 5
Figure 23: Exocentric single-view QA generation prompt: Numerical.

View-Specific Response Expansion Prompt: Ego View
{Ego Image} You are given a visual input from a camera worn by the user (referred to as 'I') along with a corresponding question. Based on the visual input, generate the best possible answer. {Category-wise Prompt}
Requirements: Each answer option should be a single word or a short phrase. Follow the provided format strictly.
Q: {Question}
Figure 24: View-specific response expansion prompt: Ego view.

View-Specific Response Expansion Prompt: Exo View
{Exo Image} You are given a visual input from a fixed-position camera capturing a scene along with a corresponding question. Based on the visual input, generate the best possible answer. {Category-wise Prompt}
Requirements: Each answer should be a single word or a short phrase. Follow the provided format strictly.
Q: {Question}
Figure 25: View-specific response expansion prompt: Exo view.

View-Specific Response Expansion Prompt: Both Views
{Ego Image} {Exo Image} You are provided with two visual inputs in sequence, each captured from a different perspective: 1. The view from the camera worn by the user ('I'). 2. The view captured by an external camera observing the user ('I'). These two images capture the same event at the same time.
Based on the visual inputs, generate the best possible answer. {Category-wise Prompt}
Requirements: Each answer should be a single word or a short phrase. Follow the provided format strictly.
Q: {Question}
Figure 26: View-specific response expansion prompt: Both views.

View-Specific Response Expansion Prompt: Text Only
Based on the question, generate the best possible answer. {Category-wise Prompt}
Requirements: Each answer should be a single word or a short phrase. Follow the provided format strictly.
Q: {Question}
Figure 27: View-specific response expansion prompt: Text only.

View-Specific Response Expansion Prompt: Action & Pose
Instructions: The answer must be a verb or verb phrase (e.g., writing, stretching, crossing arms). Do not generate overly generic answers like 'standing' or 'reaching'.
Output format:
Q: How is my body positioned? A: Sitting cross-legged
Q: What is the man sitting in the chair doing? A: Watching a phone
Figure 28: View-specific response expansion prompt: Action & Pose.

View-Specific Response Expansion Prompt: Object & Attribute
Instructions: The answer must be a noun or noun phrase, avoiding overly generic responses such as 'something' or 'object'.
Output format:
Q: What color is the shirt I am wearing? A: Navy blue
Q: What color is the top worn by the woman holding the towel? A: White
Figure 29: View-specific response expansion prompt: Object & Attribute.

View-Specific Response Expansion Prompt: Spatial
Instructions: The answer must be a specific object or location descriptor (e.g., coffee cup, bookshelf, under the table). Do not generate overly generic answers.
Output format:
Q: What object is closest to my left hand? A: Coffee cup
Q: What is the object on the far right of the desk? A: Scissors
Figure 30: View-specific response expansion prompt: Spatial.

View-Specific Response Expansion Prompt: Numerical
Instructions: The answer must be a numerical value that accurately represents the count of the
indicated elements. All numerical answers must be within the range of 0 to 5.
Output format:
Q: How many people are in the image excluding me? A: 3
Q: How many people are in the scene? A: 3
Figure 31: View-specific response expansion prompt: Numerical.

Response-Based Question Filtering Prompt 1
Here is the question: '{Question}'. The provided answer is {answer_both}, and the given label is {answer_init}. Do they convey the same meaning based on the question? Respond with a single word or phrase.
Figure 32: Response-based question filtering prompt (1).

Response-Based Question Filtering Prompt 2
Here is the question: '{Question}'. The provided answer is '{answer_text}', and the given label is '{answer_init}'. Do they convey the same meaning based on the question? Respond with a single word or phrase.
Figure 33: Response-based question filtering prompt (2).

Option Generation Prompt: Ego
{Ego Image} You are given a visual input from a camera worn by the user (referred to as 'I'). Based on the following question and answer, generate four multiple-choice options.
Question: {Question}
Answer: {answer_ego}
Ensure that each incorrect option is closely related to the visual content, making it challenging to easily identify the correct answer. Follow the format below exactly:
Options: [Option1] [Option2] [Option3] [Option4]
Figure 34: Option generation prompt: Ego.

Option Generation Prompt: Exo
{Exo Image} You are given a visual input from a fixed-position camera capturing a scene. Based on the following question and answer, generate four multiple-choice options.
Question: {Question}
Answer: {answer_exo}
Ensure that each incorrect option is closely related to the visual content, making it challenging to easily identify the correct answer. Follow the format below exactly:
Options: [Option1] [Option2] [Option3] [Option4]
Figure 35: Option generation prompt: Exo.

System Prompt & Question (Instruction) Prompt
System Prompt
You are a helpful assistant. You are provided with two visual inputs in sequence, each captured from a different perspective: 1. The view from the camera worn by the user ('I'). 2. The view captured by an external camera observing the user ('I'). The first image shows what the user ('I') sees from their perspective. The user's full body cannot be visible; you may only see parts of their body, like their hand, foot, or arm, or in some cases, none of the user's body at all. The second image shows both the user and the environment from a third-person perspective with a broad view. The user's full body is visible, but due to the fixed viewpoint, some parts may not be visible. These two images capture the same event at the same time. Your task is to analyze both images along with the question and provide the most accurate response based on the visual information from both perspectives.
Question (Instruction) Prompt
{Ego Image} {Exo Image} {Question} Only one option is correct. Present the answer in the form X).
Figure 36: System Prompt and Question (Instruction) Prompt.
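Before moving to the M3CoT prompts, the sketch below illustrates how the E3VQA construction prompts above (Figures 14–35) could be chained into one pipeline: single-view QA generation, view-specific response expansion, response-based filtering, and option generation. The `call_lvlm` API, the `prompts` dictionary keys, and the keep/discard rule in stage 3 are hypothetical assumptions for illustration, not the paper's released implementation.

```python
# Hypothetical sketch of chaining the E3VQA construction prompts (Figures 14-35).
# call_lvlm(images, prompt) stands in for any LVLM API that returns plain text;
# `prompts` is assumed to hold the template strings shown in the figures above.

def construct_e3vqa_examples(ego_img, exo_img, prompts, call_lvlm):
    examples = []
    # Stage 1: single-view QA generation for one category (Figures 14-23).
    qa_text = call_lvlm([ego_img], prompts["ego_qa_numerical"])
    qa_pairs = [chunk.split("A:", 1) for chunk in qa_text.split("Q:") if "A:" in chunk]
    for question, answer_init in ((q.strip(), a.strip()) for q, a in qa_pairs):
        # Stage 2: view-specific response expansion (Figures 24-31).
        answer_both = call_lvlm([ego_img, exo_img], prompts["expand_both"].format(Question=question))
        answer_text = call_lvlm([], prompts["expand_text"].format(Question=question))
        # Stage 3: response-based filtering (Figures 32-33): ask whether each
        # expanded answer conveys the same meaning as the initial label.
        agree_both = call_lvlm([], prompts["filter_1"].format(
            Question=question, answer_both=answer_both, answer_init=answer_init))
        agree_text = call_lvlm([], prompts["filter_2"].format(
            Question=question, answer_text=answer_text, answer_init=answer_init))
        # Assumed rule: keep questions answerable from both views but not from text alone.
        if "yes" not in agree_both.lower() or "yes" in agree_text.lower():
            continue
        # Stage 4: option generation from the ego image (Figure 34).
        options = call_lvlm([ego_img], prompts["options_ego"].format(
            Question=question, answer_ego=answer_init))
        examples.append({"question": question, "answer": answer_init, "options": options})
    return examples
```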
M3CoT Prompts - Ego2Exo Perspective
Scene graph generation phase (Ego2Exo)
Task: For the provided image and its associated question, generate a scene graph in JSON format that includes the following: 1. Objects that are relevant to answering the question. 2. Object attributes that are relevant to answering the question. 3. Object relationships that are relevant to answering the question. Just generate the scene graph in JSON format. Do not say extra words.
{Ego Image} {Question Prompt}
Scene graph refinement phase (Ego2Exo)
Task: For the provided image from a different view and the scene graph generated from the previous view, refine the scene graph in JSON format as follows: 1. Review and Update Existing Objects and Relationships: Examine the objects and relationships in the initial scene graph. Update their attributes or positions based on observations from both views. Remove only elements that are clearly erroneous (e.g., annotation errors or duplicates). 2. Incorporate New Information: Identify and add any new objects or relationships that appear in the new view. 3. Align and Reconcile Across Views: For overlapping objects and relationships, align them using spatial proximity and semantic similarity. If attribute discrepancies arise, select values that best reflect the combined observations. Ensure that the updated scene graph is logically and physically consistent, avoiding contradictions or impossible configurations. Just generate the refined scene graph in JSON format. Do not say extra words.
{Exo Image} {Question Prompt} {Assistant's response (Ego-only SG)}
Initial question response phase (Ego2Exo)
Use the images and the refined scene graph as context and answer the following question.
{Ego Image} {Exo Image} {Question Prompt} {Assistant's response (Refined SG)}
Figure 37: M3CoT prompt (1).

M3CoT Prompts - Exo2Ego Perspective
Scene graph generation phase (Exo2Ego)
Task: For the provided image and its associated question, generate a scene graph in JSON format that includes the following: 1. Objects that are relevant to answering the question. 2. Object attributes that are relevant to answering the question. 3. Object
relationships that are relevant to answering the question. Just generate the scene graph in JSON format. Do not say extra words.
{Ego Image} {Question Prompt}
Scene graph refinement phase (Exo2Ego)
Task: For the provided image from a different view and the scene graph generated from the previous view, refine the scene graph in JSON format as follows: 1. Review and Update Existing Objects and Relationships: Examine the objects and relationships in the initial scene graph. Update their attributes or positions based on observations from both views. Remove only elements that are clearly erroneous (e.g., annotation errors or duplicates). 2. Incorporate New Information: Identify and add any new objects or relationships that appear in the new view. 3. Align and Reconcile Across Views: For overlapping objects and relationships, align them using spatial proximity and semantic similarity. If attribute discrepancies arise, select values that best reflect the combined observations. Ensure that the updated scene graph is logically and physically consistent, avoiding contradictions or impossible configurations. Just generate the refined scene graph in JSON format. Do not say extra words.
{Exo Image} {Question Prompt} {Assistant's response (Exo-only SG)}
Initial question response phase (Exo2Ego)
Use the images and the refined scene graph as context and answer the following question.
{Ego Image} {Exo Image} {Question Prompt} {Assistant's response (Refined SG)}
Figure 38: M3CoT prompt (2).

M3CoT Prompts - Ego&Exo Perspective
Scene graph generation phase (Ego&Exo)
Task: Using the provided two images and their associated question,
generate a unified scene graph in JSON format that includes the following: 1. Objects that are relevant to answering the question. 2. Object attributes that are relevant to answering the question. 3. Object relationships that are relevant to answering the question. 4. Ensure that objects and relationships from both perspectives are appropriately aligned, integrated and refined to provide a complete scene representation. Just generate the unified scene graph in JSON format. Do not say extra words.
{Ego Image} {Exo Image} {Question Prompt}
Initial question response phase (Ego&Exo)
Use the images and the unified scene graph as context and answer the following question.
{Ego Image} {Exo Image} {Question Prompt} {Assistant's Response (Ego&Exo SG)}
Figure 39: M3CoT prompt (3).

M3CoT Prompts - SG Refinement between Agents
Scene graph cross-refinement phase (Ego&Exo / Ego2Exo / Exo2Ego)
Task: Below are different scene graphs generated using different reasoning methods:
One scene graph: {Ego2Exo SG} / {Exo2Ego SG} / {Exo&Ego SG}
One scene graph: {Exo2Ego SG} / {Exo&Ego SG} / {Ego2Exo SG}
Using the scene graphs generated from different methods as additional context, generate a refined scene graph in JSON format for the provided images and their associated question as follows: 1. Review the objects and relationships from the scene graphs and make any necessary adjustments to better align with both views. 2. Ensure that overlapping objects or relationships between the two views are appropriately aligned and refined, enhancing the accuracy of the scene graph. Just generate the refined scene graph in JSON format. Do not say extra words.
{Ego Image} {Exo Image} {Question Prompt}
Question response phase (Ego&Exo / Ego2Exo / Exo2Ego)
Use the images and
the unified scene graph as context and answer the following question:
{Ego Image} {Exo Image} {Question Prompt} {Assistant's response (Unified SG)}
Figure 40: M3CoT prompt (4).

Other CoT Prompts - DDCoT
For the provided images and their associated question, think step-by-step about the preliminary knowledge required to answer the question. Deconstruct the problem as completely as possible into necessary sub-questions. Then, with the aim of helping humans answer the original question, attempt to answer those sub-questions. The expected answering format is as follows:
Sub-questions: 1. <sub-question 1> 2. <sub-question 2> ...
Sub-answers: 1. <sub-answer 1> 2. <sub-answer 2> ...
{Question Prompt}
Context: {Assistant's response}
Give your answer of the question according to the sub-questions and sub-answers.
{Question Prompt}
Figure 41: DDCoT Prompt.

Other CoT Prompts - CoCoT
Please tell me the similarities and differences of these two images, and answer to the question.
{Question Prompt}
Figure 42: CoCoT Prompt.

Other CoT Prompts - CCoT
For the provided images and their associated question, generate a scene graph in JSON format that includes the following: 1. Objects that are relevant to answering the question. 2. Object attributes that are relevant to answering the question. 3. Object relationships that are relevant to answering the question. Just generate the scene graph in JSON format. Do not say extra words.
{Question Prompt}
Scene Graph: {Assistant's response}
Use the images and scene graph as context and answer the following question.
{Question Prompt}
Figure 43: CCoT Prompt.
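Taken together, the M3CoT prompts above form a simple multi-agent loop: each perspective agent (Ego2Exo, Exo2Ego, Ego&Exo) builds a scene graph, the graphs are cross-refined between agents (Figure 40), each agent answers with its refined graph, and the final prediction is taken by majority voting over the perspectives (Section D.2). The following is a minimal sketch of that flow, assuming a generic `call_lvlm(images, prompt)` chat API; the prompt keys and helper names are illustrative and not the authors' released code.

```python
# Illustrative M3CoT flow over the prompts in Figures 37-40, assuming a generic
# call_lvlm(images, prompt) -> str API. One refinement iteration, as in the paper.
from collections import Counter

def m3cot_answer(ego, exo, question, prompts, call_lvlm):
    # Per-perspective scene graph generation and within-agent refinement (Figures 37-39).
    sg = {
        "ego2exo": call_lvlm([exo], prompts["refine_ego2exo"].format(
            question=question,
            prev_sg=call_lvlm([ego], prompts["gen_sg"].format(question=question)))),
        "exo2ego": call_lvlm([ego], prompts["refine_exo2ego"].format(
            question=question,
            prev_sg=call_lvlm([exo], prompts["gen_sg"].format(question=question)))),
        "ego_exo": call_lvlm([ego, exo], prompts["gen_unified_sg"].format(question=question)),
    }
    # Cross-agent scene graph refinement (Figure 40): each agent sees the other two graphs.
    answers = []
    for name, own_sg in sg.items():
        others = [g for k, g in sg.items() if k != name]
        refined = call_lvlm([ego, exo], prompts["cross_refine"].format(
            question=question, sg_a=others[0], sg_b=others[1]))
        # Question answering with the refined scene graph as context.
        answers.append(call_lvlm([ego, exo], prompts["answer"].format(
            question=question, sg=refined)).strip())
    # Majority vote over the three perspective answers (ties broken arbitrarily).
    return Counter(answers).most_common(1)[0][0]
```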
arXiv:2505.21956v1 [cs.CV] 28 May 2025
Cross-modal RAG: Sub-dimensional Retrieval-Augmented Text-to-Image Generation
Mengdan Zhu1, Senhao Cheng2, Guangji Bai1, Yifei Zhang1, Liang Zhao1
1Department of Computer Science, Emory University
2Department of Electrical Engineering & Computer Science, University of Michigan

Abstract
Text-to-image generation increasingly demands access to domain-specific, fine-grained, and rapidly evolving knowledge that pretrained models cannot fully capture. Existing Retrieval-Augmented Generation (RAG) methods attempt to address this by retrieving globally relevant images, but they fail when no single image contains all desired elements from a complex user query. We propose Cross-modal RAG, a novel framework that decomposes both queries and images into sub-dimensional components, enabling subquery-aware retrieval and generation. Our method introduces a hybrid retrieval strategy, combining a sub-dimensional sparse retriever with a dense retriever, to identify a Pareto-optimal set of images, each contributing complementary aspects of the query. During generation, a multimodal large language model is guided to selectively condition on relevant visual features aligned to specific subqueries, ensuring subquery-aware image synthesis. Extensive experiments on MS-COCO, Flickr30K, WikiArt, CUB, and ImageNet-LT demonstrate that Cross-modal RAG significantly outperforms existing baselines in both retrieval and generation quality, while maintaining high efficiency.

1 Introduction
Figure 1: Visualization of retrieval and generation in Cross-modal RAG (ours) versus previous RAG, for the user query "A Cybertruck with a third-generation Labubu on the roof is parked in front of a Tesla store on Venus ground."
Text-to-Image Generation (T2I-G) has witnessed rapid progress in recent years, driven by advances in diffusion models [1,2,3] and multimodal large language models (MLLMs) [4,5,6,7,8], enabling the synthesis of increasingly realistic and diverse images from natural language descriptions. However, in many real-world applications, domain-specific image generation requires knowledge that is not readily encoded within pre-trained image generators, especially when such information is highly long-tailed, fast-updated, and proprietary. To address this limitation, Retrieval-Augmented Generation (RAG) has emerged as a promising paradigm by incorporating an external image database to supply factual reference during generation [9,10]. Notable RAG-based image generation approaches such as Re-Imagen [11], RDM [12], and KNN-Diffusion [13] integrate retrieved images with diffusion models to improve output fidelity. However, these existing RAG methods typically rely on off-the-shelf retrievers (e.g., those based on CLIP [14]) which compute global image-text similarities and retrieve whole images based on the full user query. This coarse-grained retrieval strategy often fails in complex scenarios where the query involves multiple fine-grained entities or attributes [15], especially when no single image contains all required components in the query. In practice, it is extremely common that single images in the retrieval database only satisfy a subset of the query.
As in Fig. 1, no single image in the retrieval database perfectly covers all four aspects in the query; instead, each covers different subsets of the query. Existing RAG methods often retrieve top-k images based on the entire query, so images that redundantly contain most aspects of the query tend to be retrieved (e.g., three images that all include “Cybertruck” and “Tesla store”), while some aspects may be underweighted (e.g., “Venus ground”)
or even missed (e.g., "third-generation Labubu"), leading to distortion of the missed aspects. Also, during generation, existing RAG is not precisely instructed about which aspects of each image should be leveraged, which results, for example, in the superfluous lightning that previous RAG carries into the generated image in Fig. 1. Therefore, instead of being restricted to considering whether each whole image is related to the entirety of the query, it is desirable to pinpoint which aspects (i.e., sub-dimensions) of which images can address which aspects of the query for image generation. This goal raises several open questions. First, how can we precisely gauge which parts of the query match which aspects of each image? Existing global embedding methods, like CLIP, do not naturally support sub-dimensional alignment [14], and current fine-grained vision-language matching is limited to region-level object patterns [15,16], which are computationally expensive and error-prone. Second, how can we retrieve the smallest number of images that cover all necessary information? It is desirable to retrieve an optimal set of images such that each covers different aspects of the query while avoiding redundancy, in order to maximize the amount of relevant information under a limited context window size. Third, how can we precisely inform the image generator of which aspect of each image to refer to when generating? Existing image generators typically take in the input query or images, but here fine-grained instructions about how to leverage the relevant aspects of images during generation are required, which is not well explored.
To address these open problems, we propose Cross-modal Sub-dimensional Retrieval Augmented Generation (Cross-modal RAG), a novel text-to-image generation framework that can identify, retrieve, and leverage image sub-dimensions to satisfy different query aspects. To decompose and identify key image sub-dimensions, we decompose the user query into subqueries and candidate images into sub-dimensional representations with respect to the subqueries, enabling accurate subquery-level alignment. To retrieve comprehensive and complementary image sub-dimensions, we formulate the retrieval goal as a multi-objective optimization problem and introduce an efficient hybrid retrieval strategy, combining a lightweight sub-dimensional sparse retriever with a sub-dimensional dense retriever, to retrieve a set of Pareto-optimal images that collectively cover all subqueries in the query, as in Fig. 1 (right). To effectively instruct image generators with the retrieved image sub-dimensions, we present a model-agnostic, subquery-aware generation procedure with MLLMs, which explicitly preserves and composes the subquery-aligned components from the retrieved images into a coherent final image. For instance, our method preserves only the "Venus ground" in the final image, while previous RAG can also preserve the irrelevant lightning in Fig. 1. Extensive experiments demonstrate that Cross-modal RAG achieves state-of-the-art performance in both text-to-image retrieval and text-to-image generation tasks across multiple fine-grained, domain-specific, and long-tailed image benchmarks, while maintaining excellent computational efficiency.

2 Related Work
2.1 Text-to-Image Generation
Text-to-Image Generation (T2I-G) has made significant strides, evolving through methodologies such as Generative Adversarial Networks (GANs) [17,18], auto-regressive models [19,20], and diffusion models [21,22].
Recent breakthroughs in diffusion models and multimodal large language models (MLLMs), driven by scaling laws [
23], have significantly advanced the capabilities of T2I-G. Notable examples include the DALL-E series [20,24], the Imagen series [2], and the Stable Diffusion (SD) series [1,25,26]. More recently, image generation functionalities have been integrated directly into advanced MLLMs such as GPT Image [4] and Gemini 2.0 Flash Image Generation [5]. However, despite these advancements, traditional T2I-G methods often struggle with knowledge-intensive, long-tailed, and fine-grained image-generation tasks. These scenarios typically require additional context to generate accurate images, necessitating RAG techniques.

2.2 Text-to-Image Retrieval
Text-to-Image Retrieval (T2I-R) has become a crucial subtask in supporting fine-grained image understanding and generation. CLIP [14] is currently the most widely adopted approach, mapping images and texts into a shared embedding space via contrastive learning. While CLIP excels at coarse-grained alignment, it underperforms in fine-grained text-to-image retrieval, especially in scenes involving multiple objects or nuanced attributes. ViLLA [15] explicitly highlights this limitation, demonstrating that CLIP fails to capture detailed correspondences between image regions and textual attributes. SigLIP [27], along with other refinements such as FILIP [28] and SLIP [29], improves CLIP's contrastive learning framework and achieves superior zero-shot classification performance. However, these methods still rely on global image-text embeddings, which are inadequate for resolving localized visual details required by fine-grained queries. To address this, recent works on fine-grained text-to-image retrieval (e.g., ViLLA [15], RegionCLIP [16]) have adopted region-based approaches that involve cropping image patches for localized alignment.

Figure 2: Overview of the Cross-modal RAG framework. The framework consists of four stages: (1) Sub-dimensional Sparse Retriever, where images are filtered based on lexical subquery matches; (2) Sub-dimensional Dense Retriever, where candidate images are re-ranked using the mean of pairwise cosine similarities between sub-dimensional vision embeddings and subquery embeddings; (3) Multi-objective Joint Retrieval, where a Pareto-optimal set of images is selected by Eq. 6 to collectively cover the subqueries, with the Pareto front Pf composed of three orange points (solid points are on the line, while dashed points are off the line); and (4) Generation, where an MLLM composes a final image by aligning subquery-level visual components from retrieved images.
In contrast, our vision-based sub-dimensional dense retriever bypasses the need for explicit cropping. By constructing sub-dimensional vision embeddings directly from the full image, we enable more efficient and
effective matching against subqueries.

2.3 Retrieval-Augmented Generation
Retrieval-Augmented Generation has demonstrated significant progress in improving factuality for both natural language generation [30,31] and image generation [11,32]. Most RAG-based approaches for image generation are built upon diffusion models (e.g., Re-Imagen [11], RDM [12], KNN-Diffusion [13]), but these methods largely overlook fine-grained semantic alignment. FineRAG [33] takes a step toward fine-grained image generation by decomposing the textual input into fine-grained entities; however, it does not incorporate fine-grained decomposition on the visual side. In contrast, our approach performs dual decomposition: (i) the query is parsed into subqueries that capture distinct semantic components, and (ii) the candidate images are decomposed into sub-dimensional vision embeddings aligned with the corresponding subqueries. Furthermore, while existing RAG-based image models typically rely on off-the-shelf retrievers, we introduce a novel retrieval method that combines a sub-dimensional sparse filtering stage with a sub-dimensional dense retriever. Finally, with the recent surge of MLLM-based image generation, we explore how our fine-grained retrieval information can be integrated to guide generation at the sub-dimensional level.

3 Proposed Method
We introduce Cross-modal RAG, as shown in Figure 2. The framework consists of four stages: (1) Sub-dimensional sparse retriever based on lexical match on subqueries in Sec. 3.1.2; (2) Sub-dimensional dense retriever based on semantic match on sub-dimensional vision embeddings and textual subquery embeddings in Sec. 3.1.1; (3) Multi-objective joint retrieval to select a set of Pareto-optimal images in Sec. 3.1.3; and (4) Subquery-aware image generation with retrieved images in Sec. 3.2. The framework of Cross-modal RAG focuses on: 1) how to retrieve the optimal images from the retrieval database given multiple subqueries, and 2) how to guide the generator to generate images preserving the satisfied subquery features in each retrieved image.

3.1 Multi-objective Retrieval for Image Generation
3.1.1 Sub-dimensional Dense Retriever
Given a user query $Q$ and a candidate image $I_j$, we decompose $Q$ into a set of subqueries $\{q_1, q_2, \dots, q_n\}$, where each subquery $q_i$ captures a specific aspect of $Q$, such as object categories or attributes, and we further compute the similarity scores between its normalized sub-dimensional vision embeddings and textual subquery embeddings as follows:
$$S(Q, I_j) = \frac{1}{n}\sum_{i=1}^{n} \mathrm{sim}(v_{ji}, t_i). \tag{1}$$
Here, the similarity score $\mathrm{sim}$ is cosine similarity. The similarity scores are aggregated across the $n$ subqueries to form an overall similarity metric $S(Q, I_j)$. Images are ranked based on their similarity $S(Q, I_j)$ for the given query, and the top-ranked images are retrieved. In terms of the sub-dimensional vision embeddings $v_{ji}$, after the image $I_j$ is fed into a pretrained CLIP vision encoder ($\Phi_{\text{clip-v}}(\cdot)$), a multi-head cross-attention module is introduced, functioning as the vision adapter $f_a$, to compute fine-grained sub-dimensional vision subembeddings:
$$v_{ji} = f_a(\Phi_{\text{clip-v}}(I_j), t_i). \tag{2}$$
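Concretely, Eq. (1) averages cosine similarities between the $n$ sub-dimensional vision embeddings of an image and the $n$ subquery embeddings. A minimal sketch of this scoring step, assuming the embeddings have already been computed (e.g., by the adapter described next), is:

```python
import torch
import torch.nn.functional as F

def subdim_similarity(v: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Eq. (1): S(Q, I_j) = (1/n) * sum_i sim(v_ji, t_i).

    v: (n, d) sub-dimensional vision embeddings v_ji for one image I_j
    t: (n, d) subquery text embeddings t_i
    Returns a scalar score used to rank image I_j against query Q.
    """
    v = F.normalize(v, dim=-1)
    t = F.normalize(t, dim=-1)
    return (v * t).sum(dim=-1).mean()
```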
The vision adapter consists of: 1) a multi-head vision cross-attention layer, where the learnable query tokens attend to the vision embeddings extracted from the frozen CLIP visual encoder; 2) a multi-head text cross-attention layer, where the output of the vision cross-attention further attends to the subquery embeddings extracted from the frozen CLIP text encoder; and 3) an MLP head that maps the attended features to a shared multimodal embedding space, followed by layer normalization.
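A minimal PyTorch sketch of such an adapter is given below; the embedding width, number of learnable query tokens, head count, and pooling choice are illustrative assumptions rather than the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisionAdapter(nn.Module):
    """Sketch of f_a: learnable queries -> vision cross-attention -> text cross-attention -> MLP."""

    def __init__(self, dim: int = 768, n_queries: int = 8, n_heads: int = 8):
        super().__init__()
        self.query_tokens = nn.Parameter(torch.randn(n_queries, dim) * 0.02)
        self.vis_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.txt_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, clip_vis_tokens: torch.Tensor, t_i: torch.Tensor) -> torch.Tensor:
        # clip_vis_tokens: (B, L, D) frozen CLIP visual features; t_i: (B, D) one subquery embedding.
        b = clip_vis_tokens.size(0)
        q = self.query_tokens.unsqueeze(0).expand(b, -1, -1)
        h, _ = self.vis_attn(q, clip_vis_tokens, clip_vis_tokens)    # queries attend to vision tokens
        h, _ = self.txt_attn(h, t_i.unsqueeze(1), t_i.unsqueeze(1))  # then attend to the subquery
        v_ji = self.norm(self.mlp(h.mean(dim=1)))                    # pool query tokens -> (B, D)
        return F.normalize(v_ji, dim=-1)                             # normalized v_ji of Eq. (2)
```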
The output $v_{ji}$ represents $I_j$'s $i$-th-dimensional vision embedding corresponding to the subquery $q_i$, which is decomposed from $Q$ and can be obtained by an off-the-shelf LLM (e.g., GPT-4o mini) using the structured prompt in Appendix A. The subquery embedding $t_i$ with respect to subquery $q_i$ can be computed as:
$$t_i = \Phi_{\text{clip-t}}(g(q_i)), \tag{3}$$
where $g(\cdot)$ denotes the tokenization and $\Phi_{\text{clip-t}}(\cdot)$ denotes the pre-trained CLIP text encoder. The vision adapter $f_a$ is optimized using the Info-NCE loss:
$$\mathcal{L}_{\text{Info-NCE}} = -\log \frac{\sum_{(v_{ji}, t_i)\in P} \exp\!\big(v_{ji}^{\top} t_i/\tau\big)}{\sum_{(v_{ji}, t_i)\in P} \exp\!\big(v_{ji}^{\top} t_i/\tau\big) + \sum_{(v'_{ji}, t'_i)\sim N} \exp\!\big(v_{ji}'^{\top} t'_i/\tau\big)}, \tag{4}$$
where $P$ is the set of positive pairs over all sub-dimensional vision embeddings and subquery embeddings, $N$ conversely refers to an associated set of negative pairs, and $\tau$ is a temperature parameter.

3.1.2 Sub-dimensional Sparse Retriever
Definition 3.1 (Sub-dimensional Sparse Retriever). For each retrieval candidate image $I_j$, we define a binary satisfaction score for the sub-dimensional sparse retriever:
$$s_i(I_j) = \begin{cases} 1, & \text{if the caption of } I_j \text{ contains } q_i \\ 0, & \text{otherwise.} \end{cases} \tag{5}$$
Hence, each image $I_j$ yields an $n$-dimensional satisfaction vector $[s_1(I_j), \dots, s_n(I_j)]$.
Definition 3.2 (Image Dominance). Consider two images $I_a$ and $I_b$ from the retrieval database $\mathcal{D}$, with corresponding subquery satisfaction vectors $s(I_a) = [s_1(I_a), \dots, s_n(I_a)]$ and $s(I_b) = [s_1(I_b), \dots, s_n(I_b)]$. We say $I_a$ dominates $I_b$, denoted $I_a \succ I_b$, if: (1) $s_i(I_a) \ge s_i(I_b)$ for all $i \in \{1, \dots, n\}$, and (2) there exists $j \in \{1, \dots, n\}$ such that $s_j(I_a) > s_j(I_b)$. That is, $I_a$ is never worse in any subquery's score and is strictly better in at least one subquery. $I_a \in \mathcal{D}$ is retrieved by the sub-dimensional sparse retriever if there exists no other image $I_b \in \mathcal{D}$ such that $I_b$ dominates $I_a$; formally, $\nexists I_b \in \mathcal{D}$ s.t. $I_b \succ I_a$.

Algorithm 1 Multi-objective Joint Retrieval Algorithm
Require: query $Q$ decomposed into subqueries $\{q_i\}_{i=1}^{n}$, image retrieval database $\mathcal{D}$, weights $\{\alpha_i\}_{i=1}^{n}$ with $\alpha_i > 0$ and $\sum_i \alpha_i = 1$, trade-off parameter $\beta$ with $0 < \beta < \beta_{\max}$
Ensure: the set of Pareto optimal images $\mathcal{P} = \{I_j^*\}$
1: for $I_j \in \mathcal{D}$ do
2:   compute $s(I_j) = [s_1(I_j), \dots, s_n(I_j)]$
3: end for
4: $\widetilde{\mathcal{D}} \leftarrow \{I_j \mid s(I_j) \text{ is not all zeros}\}$
5: $\mathcal{P} \leftarrow \emptyset$
6: for $\alpha$ in a discretized grid over the simplex do
7:   $\mathcal{P} \leftarrow \mathcal{P} \cup \arg\max_{I_j \in \widetilde{\mathcal{D}}} \sum_{i=1}^{n} \alpha_i s_i(I_j) + \beta \cdot n\, S(Q, I_j)$
8: end for

3.1.3 Multi-objective Optimization Formulation and Algorithm
Each subquery can be regarded as a distinct objective, giving rise to a multi-objective optimization problem for retrieval. Our primary goal is to select images that collectively maximize text-based subquery satisfaction (sub-dimensional sparse retrieval), while also maximizing fine-grained vision-based similarity (sub-dimensional dense retrieval). Thus, the overall objective is formalized as:
$$I_j^* = \arg\max_{I_j} \sum_{i=1}^{n} \alpha_i s_i(I_j) + \beta \cdot n\, S(Q, I_j), \quad \text{s.t. } \forall \alpha_i: \alpha_i > 0,\ \sum_{i=1}^{n}\alpha_i = 1,\ \beta \in (0, \beta_{\max}), \tag{6}$$
where $\alpha_i$ is the relative importance of each subquery $q_i$ in the sub-dimensional sparse retrieval, and the weight $\beta$ trades off between the sub-dimensional sparse and dense retrieval.
Definition 3.3 (Pareto Optimal Images). The solution to Eq. 6 is called the set of Pareto optimal images, such that $I_j^*$ is not dominated by any other image $I_k \in \mathcal{D}$ in terms of both $s(I)$ and $S(Q, I)$. Formally,
$$\mathcal{P} = \{ I_j^* \in \mathcal{D} \mid \nexists I_k \in \mathcal{D} \ \text{s.t.}\ F(I_k) > F(I_j^*) \}, \tag{7}$$
where $F(I_j) = \sum_{i=1}^{n} \alpha_i s_i(I_j) + \beta \cdot n\, S(Q, I_j)$, s.t. $\forall \alpha_i: \alpha_i > 0$, $\sum_{i=1}^{n}\alpha_i = 1$, $\beta \in (0, \beta_{\max})$.
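The sketch below illustrates the sparse satisfaction vector of Eq. (5) together with the grid search of Algorithm 1; the substring caption test and the `dense_score` callback standing in for $S(Q, I_j)$ are simplifying assumptions for illustration, not the paper's implementation.

```python
from itertools import product

def satisfaction_vector(caption: str, subqueries: list) -> list:
    # Eq. (5): s_i(I_j) = 1 if the caption of I_j contains q_i (simple substring test here).
    return [int(q.lower() in caption.lower()) for q in subqueries]

def joint_retrieval(images, subqueries, dense_score, beta=0.015, grid_steps=5):
    """Algorithm 1: multi-objective joint retrieval over a discretized alpha simplex.

    images: list of dicts {"id": ..., "caption": ...}
    dense_score(image): callback returning the dense similarity S(Q, I_j) of Eq. (1)
    """
    n = len(subqueries)
    scored = [(img, satisfaction_vector(img["caption"], subqueries)) for img in images]
    candidates = [(img, s) for img, s in scored if any(s)]  # line 4: drop all-zero images
    pareto = {}
    # Discretized grid over the simplex {alpha_i > 0, sum_i alpha_i = 1}; grows as grid_steps**n.
    for w in product(range(1, grid_steps + 1), repeat=n):
        alpha = [x / sum(w) for x in w]
        best_img, _ = max(
            candidates,
            key=lambda pair: sum(a * si for a, si in zip(alpha, pair[1]))
                             + beta * n * dense_score(pair[0]),
        )
        pareto[best_img["id"]] = best_img  # dedupe maximizers across grid points
    return list(pareto.values())
```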
Definition 3.4 (Pareto Front of the Pareto Optimal Images). $\mathcal{P}$ is sometimes referred to as the Pareto set in the decision space (here, the set of images). The Pareto front $\mathcal{P}_f$ of the Pareto optimal images is the corresponding set of non-dominated tuples in the objective space:
$$\mathcal{P}_f = \{ (s(I_j^*), S(Q, I_j^*)) : I_j^* \in \mathcal{P} \}. \tag{8}$$
Therefore, the Pareto optimal images in $\mathcal{P}$ represent the "best trade-offs" across all subqueries, since no single image in $\mathcal{D}$ can strictly improve the Pareto front $\mathcal{P}_f$ on every subquery dimension. We propose the multi-objective joint retrieval algorithm in Algorithm 1. If an image is Pareto optimal, there exists at least one choice of $\{\alpha_i\}$ for which it maximizes $\sum_{i=1}^{n}\alpha_i s_i(I_j)$. In particular, if multiple images share the same subquery satisfaction vector $s(I_j)$, we can use the sub-dimensional dense retriever to further distinguish among them.
Theorem 3.1 (Retrieval Efficiency). Let $N$ be the total number of images in $\mathcal{D}$, $\widetilde{N} \ll N$ be the number of images in $\widetilde{\mathcal{D}}$, $K$ be the size of the grid of $\alpha$-values, and $n$ be the number of subqueries. Also let $T_{\text{clip}}$ represent the cost of processing a single image with the CLIP vision encoder and $T_{\text{adaptor}}$ represent the cost of the adaptor. The time complexity of Algorithm 1 is $O(N) + O(K \times \widetilde{N}) + O(K \times \widetilde{N} \times n \times (T_{\text{clip}} + T_{\text{adaptor}}))$, while the time complexity of a pure sub-dimensional dense retriever is $O(N \times n \times (T_{\text{clip}} + T_{\text{adaptor}}))$.
Proof. The formal proof can be found in Appendix B.
Since $\widetilde{N} \ll N$ and $K$ is a relatively small constant, the dominant term of Algorithm 1 is far smaller than that of a pure sub-dimensional dense retriever. In terms of retrieval efficiency, we therefore adopt Algorithm 1, a hybrid of the sub-dimensional sparse and dense retrievers.
Theorem 3.2 (Algorithm Optimality). Let $\delta_{\min} = \min\{\alpha_i \mid \alpha_i > 0\}$ be the smallest nonzero subquery weight, and $C_{\max} = \max \sum_{i=1}^{n} \cos(v_{j,i}, t_i)$. For any $0 < \beta < \beta_{\max} = \delta_{\min}/C_{\max}$, Algorithm 1 returns all Pareto-optimal solutions to Eq. 6.
Proof. The formal proof can be found in Appendix C.

3.2 Image Generation with Retrieved Images
To generate an image from the user query $Q$ while ensuring that the satisfied subquery features in the retrieved images are preserved, we utilize a pretrained MLLM with subquery-aware instructions. Given the set of retrieved images $\mathcal{P} = \{I_j^*\}$, each retrieved image $I_j^*$ is associated with a subquery satisfaction vector $s(I_j^*)$. Let
$$Q_r = \{ q_i \mid s_i(I_j^*) = 1 \} \tag{9}$$
be the subset of subqueries from the user query $Q$ that $I_j^*$ actually satisfies. For each image $I_j^* \in \mathcal{P}$, we construct an in-context example of the form: $\langle I_j^* \rangle$ Use only [$Q_r$] in [$I_j^*$]. Here, $\langle I_j^* \rangle$ denotes the visual tokens of the $r$-th retrieved image, [$Q_r$] is the set of satisfied subqueries in $I_j^*$, and [$I_j^*$] is "the $r$-th retrieved image". Next, we feed the in-context examples to a pretrained MLLM together with the original query $Q$. The MLLM, which operates in an autoregressive manner, is thus guided to generate the final image $\hat{I}$ as:
$$p_\theta\big(\hat{I} \mid Q, \{I_j^*\}, \{Q_r\}\big) = \prod_{t=1}^{T} p_\theta\big(\hat{I}_t \mid \hat{I}_{<t}, Q, \{I_j^*\}, \{Q_r\}\big), \tag{10}$$
where $\hat{I}_t$ denotes the $t$-th visual token in the generated image representation and $\theta$ represents the parameters of the pretrained MLLM. By referencing the full prompt [$Q$] $\langle I_j^* \rangle$ Use only [$Q_r$] in [$I_j^*$], the MLLM learns to preserve the relevant subquery features that each retrieved image $I_j^*$ contributes.
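A small sketch of how the subquery-aware in-context prompt of this section could be assembled before calling the MLLM is shown below; the message structure and the `mllm.generate_image` call are hypothetical placeholders rather than a specific vendor API.

```python
def build_generation_prompt(query: str, retrieved: list, subqueries: list) -> list:
    """retrieved: [{"image": ..., "satisfied": [0, 1, 0, 1]}, ...] with s(I_j*) per image.

    Returns an interleaved text/image message list following
    [Q] <I_j*> "Use only [Q_r] in [I_j*]" for each retrieved image.
    """
    messages = [{"type": "text", "text": query}]
    for r, item in enumerate(retrieved, start=1):
        q_r = [q for q, s in zip(subqueries, item["satisfied"]) if s == 1]  # Eq. (9)
        messages.append({"type": "image", "image": item["image"]})
        messages.append({"type": "text",
                         "text": f"Use only [{', '.join(q_r)}] in the {r}-th retrieved image."})
    return messages

# Usage with some hypothetical multimodal client `mllm`:
# final_image = mllm.generate_image(build_generation_prompt(query, pareto_images, subqueries))
```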
I∗ j], the MLLM learns to preserve the relevant subquery features that each retrieved image I∗ jcontributes. 4 Experiments 4.1 Experiment Setup Baselines and Evaluation Metrics We compare our proposed method with several baselines on text-to-image retrieval and text-to-image generation. •Text-to-Image Retrieval Baselines: CLIP(ViT-L/14) [ 14] is a widely adopted dual-encoder model pretrained on large-scale image-text pairs and remains the most commonly used baseline for T2I retrieval. SigLIP(ViT-SO400M/14@384) [ 27] improves retrieval precision over CLIP by replacing the contrastive loss with a sigmoid-based loss. ViLLA [ 15] is a large-scale vision-language pretraining model with multi-granularity objectives. GRACE (Structured ID) [ 34] is a recent generative cross-modal retrieval model. IRGen [ 35] is a transformer-based image-text retriever. We report GRACE and IRGen results as cited from [ 36]. GILL [ 37] is a unified framework that combines generation and retrieval. •Text-to-Image Generation Baselines: SDXL [ 25] is a widely used high-quality T2I diffusion model. LaVIT [ 6] is a vision-language model that supports T2I generation. RDM [ 12] is a representative retrieval-augmented diffusion model. UniRAG [ 38] is a recent retrieval-augmented vision-language model. GILL [37] can perform both T2I-R and T2I-G. For T2I-R, we adopt the standard retrieval metric Recall at K(R@K, K=1, 5, and 10). For T2I-G, we evaluate the quality of generated images by computing the average pairwise cosine similarity of generated and ground-truth images with CLIP(ViT-L/14) [ 14], DINOv2(ViT-L/14) [ 39], and SigLIP(ViT-SO400M/14@384) [ 27] embeddings. We also employ style loss [ 40] to assess the artistic style transfer in the WikiArt dataset [41]. Dataset Construction We evaluate the text-to-image retrieval on the standard benchmark MS- COCO [ 42] and Flickr30K [ 43] test sets. As for the text-to-image generation, we evaluate the model’s image generation capabilities across different aspects and choose three datasets: artistic style transfer 6 Table 1: Evaluation of Text-to-Image Retrieval on MS-COCO and Flickr30K. MethodMS-COCO (5K) Flickr30K (1K) R@1 R@5 R@10 R@1 R@5 R@10 CLIP (ViT-L/14) 43.26 68.70 78.12 77.40 94.80 96.60 SigLIP (ViT-SO400M/14@384) 46.96 71.72 80.64 82.20 95.90 97.70 ViLLA 34.77 60.67 70.69 59.41 85.02 92.82 GRACE (Structured ID) 16.70 39.20 50.30 37.40 59.50 66.20 IRGen 29.60 50.70 56.30 49.00 68.90 72.50 GILL 32.12 57.73 66.55 55.41 81.94 89.77 Ours 80.78 97.00 99.16 97.50 100.00 100.00 Table 2: Evaluation of Text-to-Image Generation on WikiArt, CUB, and ImageNet-LT. MethodWikiArt CUB ImageNet-LT CLIP↑DINO ↑SigLIP ↑Style Loss ↓CLIP↑DINO ↑SigLIP ↑CLIP↑DINO ↑SigLIP ↑ SDXL 0.688 0.504 0.720 0.022 0.743 0.519 0.738 0.668 0.403 0.653 LaVIT 0.689 0.485 0.721 0.036 0.676 0.245 0.647 0.662 0.365 0.652 RDM 0.507 0.237 0.528 0.024 0.638 0.326 0.663 0.576 0.333 0.603 UniRAG 0.646 0.362 0.654 0.068 0.746 0.344 0.718 0.610 0.255 0.600 GILL 0.629 0.439 0.654 0.027 0.719 0.185 0.675 0.635 0.228 0.615 Ours 0.746 0.604 0.744 0.019 0.764 0.600 0.744 0.815 0.761 0.812 in the WikiArt [ 41], fine-grained image generation in the CUB [ 44], and long-tailed image generation in the ImageNet-LT [ 45]. For each genereation dataset, we select some test samples, and use the remaining as the retrieval database. More details in Appendix D. Implementation Details For T2I-R, our sub-dimensional
dense retriever is composed of a pretrained CLIP vision encoder (ViT-L/14) and an adaptor. We train the sub-dimensional dense retriever on the COCO training set using the InfoNCE loss with a temperature of 0.07. The adaptor is optimized using the Adam optimizer with an initial learning rate of 5e-5, and a StepLR scheduler with a step size of 3 epochs and a decay factor of 0.6. For T2I-G, we use gpt-image-1 as our MLLM backbone and set β= 0.015based on Therorem 3.2. The experiments1are conducted on a 64-bit machine with 24-core Intel 13th Gen Core i9-13900K@5.80GHz, 32GB memory and NVIDIA GeForce RTX 4090. 4.2 Quantitative Evaluation of Text-to-Image Retrieval We test our sub-dimensional dense retriever model with various types of T2I-R models. As shown in Tab. 1, our proposed sub-dimensional dense retriever achieves state-of-the-art performance across all metrics and outperforms all baselines by a substantial margin on both MS-COCO and Flickr30K datasets. On MS-COCO, our method achieves R@1 = 80.78%, R@5 = 97.00%, and R@10 = 99.16%, which are significantly higher than the best-performing baseline SigLIP. The relative improvements are 72% on R@1, 35% on R@5, and 20% on R@10, demonstrating our model’s superior capability in the text-to-image retrieval. Notably, it exhibits strong zero-shot T2I-R performance on Flickr30K, achieving near-perfect accuracy with R@1 = 97.50%, R@5 = 100.00%, and R@10 = 100.00%, and surpassing SigLIP’s R@1 = 82.20% by nearly 20%. These results confirm that our proposed sub- dimensional dense retriever significantly enhance fine-grained T2I-R compared to global embedding alignment such as CLIP and SigLIP, generative retrieval methods like GRACE and IRGen, region- based fine-grained match on ViLLA, and the retrieval and generation unified framework GILL. 4.3 Quantitative Evaluation of Text-to-Image Generation We benchmark our Cross-modal RAG method against state-of-the-art text-to-image generation models, including diffusion-based (SDXL), autoregressive (LaVIT), retrieval-augmented (RDM, UniRAG), and retrieval and generation unified (GILL) baselines. The evaluation is conducted on three datasets that span different generation challenges: WikiArt (artistic style transfer), CUB (fine-grained), and ImageNet-LT (long-tailed). As shown in Tab. 2, Cross-modal RAG achieves the highest scores across all models and datasets. On WikiArt, our method achieves the best performance in CLIP, DINO, and SigLIP, along with the lowest style loss, indicating it can capture the particular artistic style specified in the retrieved images effectively. On CUB, Cross-modal RAG also performs strongly across all 1The code is available at https://github.com/mengdanzhu/Cross-modal-RAG. 7 UserQueryDecomposedSubqueriesPareto Optimal Images with Satisfied SubqueriesRetrievedimage1 GenerationRetrievedimage2Retrievedimage3Draw a Hooded Warbler. This bird has a black crown with black throat and yellow belly. 
Figure 3: The retrieved Pareto optimal images with their corresponding satisfied subqueries in the (a) WikiArt, (b) CUB, and (c) ImageNet-LT datasets, and the model generation results of Cross-modal RAG.
three metrics, because it can localize and leverage the
specific visual details in the retrieved images to facilitate generation. On ImageNet-LT, Cross-modal RAG improves CLIP similarity by 22%, DINO by 89%, and SigLIP by 24% over the second-best SDXL. This indicates that our retrieval method can retrieve images that best match the query and only use the relevant entity for generation, which greatly benefits T2I-G in the long-tailed situation. 4.4 Qualitative Analysis To qualitatively illustrate the effectiveness of our Cross-modal RAG model, we visualize some examples of our retrieved pareto optimal images with their corresponding satisfied subqueries and generated outputs across all datasets in Fig. 3. The satisfied subqueries of each retrieved Pareto- optimal image are non-overlapping, and each retrieved image is optimal with respect to the sub- dimensions it satisfies. Therefore, we can guarantee that the Pareto set Pcollectively covers images with all satisfied subqueries in the retrieval database D. Moreover, since the model knows which subqueries are satisfied by each retrieved image, MLLM can be guided to condition on the relevant subquery-aligned parts of each retrieved image during generation. As shown Fig. 3(a), the model is capable of style transfer , learning the artistic style of a certain artist ( e.g., Theodore Rousseau) while preserving the details corresponding to each subquery ( e.g., road, forest, Fontainebleau). The model is also able to retrieve accurate fine-grained information and compose the entities in subqueries ( e.g., black crown, black throat, yellow belly) to perform fine-grained image generation on the CUB dataset in Fig. 3(b). Moreover, the model is good at long-tailed or knowledge-intensive image generation . In Fig. 3(c), ImageNet-LT is a long-tailed distribution dataset with many rare entities ( e.g., totem pole). Retrieving such correct images can help improve generation fidelity. Baseline models without retrieval capabilities tend to struggle in these scenarios. More comparisons of generated images with other baselines are provided in Appendix F. 8 Table 3: Evaluation of the retrieval efficiency on the COCO. Our methods are denoted in gray . Method GPU Memory (MB) # of Parameters (M) Query Latency (ms) CLIP (ViT-L/14) 2172.19 427.62 8.68 dense (all) 2195.26 433.55 14.22 dense (adaptor) 23.07 7.31 4.35 sparse 0 0 2.17 4.5 Efficiency Analysis As shown in Tab. 3, we compare the retrieval efficiency of CLIP (ViT-L/14) with our sub-dimensional dense and sparse retrievers on the COCO test set. Our sub-dimensional dense retriever is com- posed of a frozen CLIP encoder (ViT-L/14) and a lightweight adaptor. As reported in Table 1, the sub-dimensional dense retriever improves Recall@1 by +86.73% over CLIP on COCO. Despite the adaptor’s minimal overhead – only 0.01 ×CLIP’s GPU memory usage, 0.017 ×its number of parameters, and 0.5 ×its query latency – its performance gain is substantial. Our sub-dimensional sparse retriever is text-based and operates solely on the CPU, requiring no GPU memory consumption, no learnable parameters and achieving the lowest query latency. Our Cross-modal RAG method, a hybrid of our sub-dimensional sparse and dense retriever, can leverage the complementary strengths of both and achieve query latency that lies between the pure sparse and dense retrievers – closer to that of the sparse. These
results show Cross-modal RAG’s efficiency and scalability for large-scale text-to-image retrieval tasks without compromising effectiveness. 4.6 Ablation Study Ablation Study on Subquery Decomposition Figure 4: Ablation Study on Subquery De- composition on the WikiArt and CUB.We evaluate retrieval performance without subquery decomposition on multi-subquery datasets, WikiArt and CUB, by directly using a BM25 retriever to re- trieve the top-1, top-2, and top-3 images based on the full user query. Our multi-objective joint retrieval method achieves a higher subquery coverage rate compared to the conventional text-based BM25 re- trieval on both WikiArt and CUB in Fig. 4. This result indicates that our multi-objective joint retrieval method retrieves a set of images Pthat collectively cover the largest number of subqueries from D, demonstrating its superior ability to capture the full semantic intent of the user query. Ablation Study on the Sub-dimensional Dense Retriever We retain the sub-dimensional sparse retriever and replace the sub-dimensional dense retriever in Cross- modal RAG with a randomly selected image. The results in Tab. 4 show that our dense retriever is able to retrieve images that best match the entity in the query in the ImageNet-LT. Notably, our dense retriever, though trained only on the COCO, general- izes well to unseen entities on the ImageNet-LT.Table 4: Ablation Study of our Cross-modal RAG w/o dense retriever on the ImageNet-LT. Method CLIP ↑DINO ↑SigLIP ↑ Ours 0.815 0.761 0.812 w/o dense 0.773 0.607 0.752 5 Conclusion We proposed Cross-modal RAG, a novel sub-dimensional retrieval-augmented text-to-image gener- ation framework addressing domain-specific, fine-grained, and long-tailed image generation. Our method leverages a hybrid retrieval strategy combining sub-dimensional sparse filtering with dense retrieval to precisely align subqueries with visual elements, guiding a MLLM to generate coherent images on the subquery level. The Pareto-optimal image selection ensures the largest coverage of various aspects in the query. Extensive experiments demonstrated Cross-modal RAG’s superior performance over state-of-the-art baselines in T2I-R and T2I-G. The ablation study and efficiency analysis highlight the effectiveness of each component and efficiency in the Cross-modal RAG. 9 References [1]Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High- resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 10684–10695, 2022. [2]Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems , 35:36479–36494, 2022. [3]Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741 , 2021. [4]OpenAI. Addendum to gpt-4o system card: Native image generation, March 2025. Accessed: 2025-05-08. [5]Google. Experiment with gemini 2.0 flash native image generation, April 2025. Accessed: 2025-05-08. [6]Yang Jin, Kun Xu, Liwei Chen, Chao Liao, Jianchao Tan, Quzhe Huang, Bin Chen, Chenyi Lei, An Liu, Chengru Song, et al. 
Unified language-vision pretraining in llm with dynamic discrete visual tokenization. arXiv preprint arXiv:2309.04669 , 2023. [7]Mengdan Zhu, Raasikh Kanjiani,
Jiahui Lu, Andrew Choi, Qirui Ye, and Liang Zhao. La- tentexplainer: Explaining latent representations in deep generative models with multi-modal foundation models. arXiv preprint arXiv:2406.14862 , 2024. [8]Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yifei Zhang, et al. Beyond efficiency: A systematic survey of resource-efficient large language models. arXiv preprint arXiv:2401.00625 , 2024. [9]Xu Zheng, Ziqiao Weng, Yuanhuiyi Lyu, Lutao Jiang, Haiwei Xue, Bin Ren, Danda Paudel, Nicu Sebe, Luc Van Gool, and Xuming Hu. Retrieval augmented generation and understanding in vision: A survey and new outlook. arXiv preprint arXiv:2503.18016 , 2025. [10] Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. Retrieval-augmented generation for ai-generated content: A survey. arXiv preprint arXiv:2402.19473 , 2024. [11] Wenhu Chen, Hexiang Hu, Chitwan Saharia, and William W Cohen. Re-imagen: Retrieval- augmented text-to-image generator. arXiv preprint arXiv:2209.14491 , 2022. [12] Andreas Blattmann, Robin Rombach, Kaan Oktay, Jonas Müller, and Björn Ommer. Retrieval- augmented diffusion models. Advances in Neural Information Processing Systems , 35:15309– 15324, 2022. [13] Shelly Sheynin, Oron Ashual, Adam Polyak, Uriel Singer, Oran Gafni, Eliya Nachmani, and Yaniv Taigman. Knn-diffusion: Image generation via large-scale retrieval. arXiv preprint arXiv:2204.02849 , 2022. [14] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning , pages 8748–8763. PmLR, 2021. [15] Maya Varma, Jean-Benoit Delbrouck, Sarah Hooper, Akshay Chaudhari, and Curtis Langlotz. Villa: Fine-grained vision-language representation learning from real-world data. In Proceedings of the IEEE/CVF International Conference on Computer Vision , pages 22225–22235, 2023. [16] Yiwu Zhong, Jianwei Yang, Pengchuan Zhang, Chunyuan Li, Noel Codella, Liunian Harold Li, Luowei Zhou, Xiyang Dai, Lu Yuan, Yin Li, et al. Regionclip: Region-based language-image pretraining. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 16793–16803, 2022. 10 [17] Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in neural information processing systems , 27, 2014. [18] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096 , 2018. [19] Aäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International conference on machine learning , pages 1747–1756. PMLR, 2016. [20] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea V oss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International conference on machine learning , pages 8821–8831. Pmlr, 2021. [21] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in neural information processing systems , 33:6840–6851, 2020. [22] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. 
In International conference on machine learning , pages 8162–8171. PMLR, 2021. [23] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei.
Scaling laws for neural language models. arXiv preprint arXiv:2001.08361 , 2020. [24] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn. openai. com/papers/dall-e-3. pdf , 2(3):8, 2023. [25] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. arXiv preprint arXiv:2307.01952 , 2023. [26] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling rectified flow trans- formers for high-resolution image synthesis. In Forty-first international conference on machine learning , 2024. [27] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision , pages 11975–11986, 2023. [28] Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. Filip: Fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783 , 2021. [29] Norman Mu, Alexander Kirillov, David Wagner, and Saining Xie. Slip: Self-supervision meets language-image pre-training. In European conference on computer vision , pages 529–544. Springer, 2022. [30] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yixin Dai, Jiawei Sun, Haofen Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997 , 2:1, 2023. [31] Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Haonan Chen, Zheng Liu, Zhicheng Dou, and Ji-Rong Wen. Large language models for information retrieval: A survey. arXiv preprint arXiv:2308.07107 , 2023. [32] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. Retrieval-augmented multimodal language modeling. arXiv preprint arXiv:2211.12561 , 2022. [33] Huaying Yuan, Ziliang Zhao, Shuting Wang, Shitao Xiao, Minheng Ni, Zheng Liu, and Zhicheng Dou. Finerag: Fine-grained retrieval-augmented text-to-image generation. In Proceedings of the 31st International Conference on Computational Linguistics , pages 11196–11205, 2025. 11 [34] Yongqi Li, Wenjie Wang, Leigang Qu, Liqiang Nie, Wenjie Li, and Tat-Seng Chua. Generative cross-modal retrieval: Memorizing images in multimodal language models for retrieval and beyond. arXiv preprint arXiv:2402.10805 , 2024. [35] Yidan Zhang, Ting Zhang, Dong Chen, Yujing Wang, Qi Chen, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, et al. Irgen: Generative modeling for image retrieval. In European Conference on Computer Vision , pages 21–41. Springer, 2024. [36] Leigang Qu, Haochuan Li, Tan Wang, Wenjie Wang, Yongqi Li, Liqiang Nie, and Tat-Seng Chua. Tiger: Unifying text-to-image generation and retrieval with large multimodal models. In The Thirteenth International Conference on Learning Representations . [37] Jing Yu Koh, Daniel Fried, and Russ R Salakhutdinov. Generating images with multimodal language models. Advances in Neural Information Processing Systems , 36:21487–21506, 2023. [38] Sahel Sharifymoghaddam, Shivani Upadhyay, Wenhu Chen, and Jimmy Lin. 
Unirag: Universal retrieval augmentation for multi-modal large language models. arXiv preprint arXiv:2405.10311 , 2024. [39] George Stein, Jesse Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan
Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L Caterini, Eric Taylor, and Gabriel Loaiza-Ganem. Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. Advances in Neural Information Processing Systems , 36:3732–3784, 2023. [40] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576 , 2015. [41] Asahi Ushio. Wikiart general dataset. https://huggingface.co/datasets/asahi417/ wikiart-all , 2024. Accessed: 2025-05-08. [42] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325 , 2015. [43] Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the association for computational linguistics , 2:67–78, 2014. [44] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The caltech-ucsd birds-200-2011 dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011. [45] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X Yu. Large- scale long-tailed recognition in an open world. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition , pages 2537–2546, 2019. [46] Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE conference on computer vision and pattern recognition , pages 49–58, 2016. 12 A The Prompt Example to Decompose Queries into Subqueries Decomposing Queries into Subqueries Given an image caption, decompose the caption into an atomic entity. Each entity should preserve descriptive details (e.g., size, color, material, location) together with the entity in a natural, readable phrase. The entity should contain a noun and reserve noun modifiers in the caption. Please ignore the entities like ‘a photo of’, ‘an image of’, ‘an overhead shot’, ‘the window showing’ that are invisible in the image and ignore the entities like ’one’ and ’the other’ that have duplicate entities before. Caption: two cars are traveling on the road and waiting at the traffic light. Entity: cars, road, traffic light Caption: duplicate images of a girl with a blue tank top and black tennis skirt holding a tennis racquet and swinging at a ball. Entity: girl, blue tank top, black tennis skirt, tennis racqet, ball Caption: the window showing a traffic signal is covered in droplets of rainwater. Entity: traffic signal, droplets of rainwater Caption: an overhead shot captures an intersection with a "go colts" sign. Entity: intersection, "go colts" sign Caption: a van with a face painted on its hood driving through street in china. Entity: van, a face painted on its hood, street in china Caption: two men, one with a black shirt and the other with a white shirt, are kicking each other without making contact. Entity: men, black shirt, white shirt Caption: { caption } Entity: Figure 5: The Prompt Example for Decomposing User Queries into Subqueries on MS-COCO. B Proof of the Time Complexity in Retrieval Efficiency 1. Proof of the time complexity for Algorithm 1 LetNbe the total number of images in
D. We can score each image’s sparse textual match inO(N). We discard images that do not satisfy any subquery, leaving a reduced set eD⊆D of size eN. We then discretize the simplex of subquery weights αintoKpossible combinations. Each combination requires checkingP iαisi(Ij)inO(eN)time , thus O(K×eN)in total. Each adaptor pass handles both CLIP-based vision encoding (costing Tclipand the adaptor’s own cross-attention (costing Tadaptor )). If we assume one pass per subquery set (of size n), the total cost is eN×n×(Tclip+Tadaptor ). Multiplied by Kweight vectors, this yields O K×eN×n×(Tclip+Tadaptor ) . Combining the steps above, total time is: O(N) +O(K×eN) +O K×eN×n×(Tclip+Tadaptor) . 2. Proof of the time complexity for a pure sub-dimensional dense retriever If we skip the sparse filter, we must embed all Nimages for each subquery. Thus, the pure dense approach demands O(N×n×(Tclip+Tadaptor)). 13 Because eN≪NandKis small, this total is typically far lower than scanning all Nimages with the sub-dimensional dense retriever. C Proof of Algorithm Optimality Because si(Ij)∈ {0,1}andP iαi= 1,P iαisi(Ij)lies in [0,1]. Since cos (vj,i, ti)∈[0,1],P icos (vj,i, ti)≤n. Hence Cmax≤n. Suppose Iadominates Ib. Then ∆sparse=X iαisi(Ia)−X iαisi(Ib)>0. (11) Let ∆dense=X icos (va,i, ti)−X icos (vb,i, ti). (12) We want ∆sparse+β∆dense>0. In the worst case for Ia,∆dense<0, potentially as low as −Cmax. A sufficient condition for Iato stay preferred is ∆sparse−βCmax>0. Because ∆sparse≥δminifIaindeed satisfies at least one more subquery and β >0is assumed by definition, we obtain: 0< β <δmin Cmax=βmax. (13) We discretize the simplex {α:αi≥0,P iαi= 1}. Because subqueries are strictly enumerated by α, if an image satisfies a unique subquery set, it must appear as an arg maxP iαisi(Ij)for some α. Thus, no non-dominated s(Ij)is missed. ∆dense can be further used to find an optimal image among those sharing the same s(Ij). Therefore, all Pareto-optimal solutions are obtained and s(Ij)in the Pareto front Pfis unique. D Datasets For T2I-R on MS-COCO, we follow the Karpathy split using 82,783 training images, 5,000 validation images, and 5,000 test images. For Flickr30K, we only use 1,000 images in the test set for evaluation. Regarding T2I-G, WikiArt dataset is a comprehensive collection of fine art images sourced from the WikiArt online encyclopedia. Our implementation is based on the version provided in [ 41]. To construct the test set, we compare artwork titles across different images and identify pairs that differ by at most three tokens. From each matched pair, we retain one sample, resulting in 2,619 distinct test examples. The query for each test sample is formatted as: <title> in the style of <artistName>. The retrieval database is composed of the remaining WikiArt images after excluding all test samples, ensuring no overlap between ground-truth and retrieval candidates, as shown in Tab. 5. The Caltech- UCSD Birds-200-2011 (CUB-200-2011) [ 44] is a widely used benchmark for fine-grained image classification and generation tasks. It contains 11,788 images across 200 bird species. We use the CUB dataset with 10 single-sentence visual descriptions per image collected by [ 46]. Similarly, to construct the test set, we compare captions across different images and identify pairs that differ by one token, resulting in 5,485
distinct test samples. The query for each test sample is formatted as: Draw a <speciesName>. <caption>. For each test sample, the retrieval candidates consist of all remaining images in the CUB dataset, excluding that test image. The ImageNet-LT dataset [ 45] is a long-tailed version of the original ImageNet dataset. It contains 1,000 classes with 5 images per class. We randomly choose one image from each class to construct the test samples. The retrieval database is composed of the remaining ImageNet-LT images after excluding all test samples. The query for each test sample is formatted as: A photo of <className>. Table 5: Data construction of the T2I-G datasets Dataset # of image in the dataset # of test samples # of images in the retrieval database WikiArt 63,061 2,619 60,442 CUB 11,788 5,485 11,787 ImageNet-LT 50,000 1,000 49,000 14 E Limitations While our multi-objective joint retrieval combining sub-dimensional sparse and dense retrievers is efficient and achieves good granularity, when it comes to generation, the granularity of control is still bounded by the capabilities of the underlying MLLM. If the underlying MLLM lacks the granularity of control in generation, it becomes difficult to effectively leverage the subquery-level information obtained from our retrieval method. F More Visualizations of Our Method Compared with Other Baselines More visualizations of our proposed method Cross-modal RAG compared with other baselines can be found in Fig. 6 to 8. Across all three datasets, our method achieves superior image generation capability. For the CUB in Fig. 6, our Cross-modal RAG can generate realistic images that align with all subqueries, which are often ignored or distorted in GILL and UniRAG. Besides, SDXL, LaVIT, and RDM tend to generate the sketch-like images rather than photo-realistic birds. On the ImageNet-LT in Fig. 7, retrieving relevant images plays a crucial role in generating accurate long-tailed objects. Our method successfully generates all three long-tailed objects with high visual fidelity. In contrast, none of the baselines are able to generate all three correctly - UniRAG and GILL even fail to produce a single accurate image. For WikiArt in Fig. 8, creative image generation poses a unique challenge, as it is inherently difficult to reproduce the exact ground-truth image. However, our method explicitly retrieve the images with satisfied subqueries and can capture the particular artistic style specified in the query. As a result, all three generated images of Cross-modal RAG closely resemble the style of the target artist in the query. In contrast, other RAG baselines can not guarantee if the retrieved images grounded in the intended artist’s style. RDM even suffers from low visual fidelity when generating human faces. UserQueryPareto Optimal Images with Satisfied SubqueriesRetrievedimage1Retrievedimage2Retrievedimage3Draw a Hooded Warbler. This bird has a blackcrown with black throat and yellow belly. 1.Hooded Warbler2.black crown4.yellow belly1.Hooded Warbler2.black crown3.black throat1.Hooded Warbler3.black throat4.yellow belly Generation(Ours)Generation(SDXL)Generation(LaVIT)Generation(RDM)Generation(UniRAG)Generation(GILL) UserQueryPareto Optimal Images with Satisfied SubqueriesRetrievedimage1Retrievedimage2Retrievedimage3Draw a White throated Sparrow. 
Figure 6: Visualizations on CUB compared with other baselines.
Figure 7: Visualizations on ImageNet-LT compared with other baselines.
Figure 8: Visualizations on WikiArt compared with other baselines.
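To close, here is a compact Python sketch of the multi-objective joint retrieval described in Section 3.1 (Definitions 3.1–3.2 and Algorithm 1). It assumes subquery satisfaction is checked by simple substring matching on captions and that the dense similarities $S(Q, I_j)$ have been precomputed; the grid construction, the `grid_steps` parameter, and all function names are illustrative choices, not the released implementation.

```python
from itertools import product

def satisfaction_vector(caption, subqueries):
    # Binary s_i(I_j): 1 if the caption of I_j contains subquery q_i (Definition 3.1).
    return [1 if q.lower() in caption.lower() else 0 for q in subqueries]

def joint_retrieval(captions, dense_sim, subqueries, beta=0.015, grid_steps=4):
    """Sketch of Algorithm 1: sparse filtering, then scalarization over an alpha grid.

    captions  : list of candidate-image captions
    dense_sim : list of precomputed S(Q, I_j) scores, same length as captions
    """
    n = len(subqueries)
    s = [satisfaction_vector(c, subqueries) for c in captions]
    # Keep only images that satisfy at least one subquery (the reduced set D-tilde).
    keep = [j for j in range(len(captions)) if any(s[j])]
    pareto = set()
    # Enumerate a discretized grid over the weight simplex {alpha_i > 0, sum_i alpha_i = 1}.
    for w in product(range(1, grid_steps + 1), repeat=n):
        total = sum(w)
        alpha = [x / total for x in w]
        # Maximize sum_i alpha_i * s_i(I_j) + beta * n * S(Q, I_j) for this alpha.
        best = max(
            keep,
            key=lambda j: sum(a * v for a, v in zip(alpha, s[j])) + beta * n * dense_sim[j],
        )
        pareto.add(best)
    return sorted(pareto)
```

For example, `joint_retrieval(["a red bird on a branch", "a blue car"], [0.31, 0.12], ["red bird", "branch"])` filters out the second caption (no subquery matched) and returns `[0]`.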
LaMDAgent: An Autonomous Framework for Post-Training Pipeline Optimization via LLM Agents Taro Yano NEC Corporation taro_yano@nec.comYoichi Ishibashi NEC Corporation yoichi-ishibashi@nec.comMasafumi Oyamada NEC Corporation oyamada@nec.com Abstract Large Language Models (LLMs) have demon- strated exceptional performance across a wide range of tasks. To further tailor LLMs to specific domains or applications, post-training techniques such as Supervised Fine-Tuning (SFT), Preference Learning, and model merg- ing are commonly employed. While each of these methods has been extensively studied in isolation, the automated construction of com- plete post-training pipelines remains an under- explored area. Existing approaches typically rely on manual design or focus narrowly on op- timizing individual components, such as data ordering or merging strategies. In this work, we introduce LaMDAgent (short for Language Model Developing Agent), a novel framework that autonomously constructs and optimizes full post-training pipelines through the use of LLM-based agents. LaMDAgent systemati- cally explores diverse model generation tech- niques, datasets, and hyperparameter config- urations, leveraging task-based feedback to discover high-performing pipelines with min- imal human intervention. Our experiments show that LaMDAgent improves tool-use accu- racy by 9.0 points while preserving instruction- following capabilities. Moreover, it uncovers effective post-training strategies that are often overlooked by conventional human-driven ex- ploration. We further analyze the impact of data and model size scaling to reduce compu- tational costs on the exploration, finding that model size scalings introduces new challenges, whereas scaling data size enables cost-effective pipeline discovery. 1 Introduction Large Language Models (LLMs) have undergone rapid development, demonstrating exceptional per- formance across diverse tasks and significantly impacting both academic and industrial domains, with the rise of high-performing proprietary mod- els (OpenAI, 2023; Anthropic, 2024; Google, 2024)as well as open-sourced models (Dubey et al., 2024; Yang et al., 2024; DeepSeek-AI et al., 2024; Abdin et al., 2024). LLM development typically involves pre-training on large-scale web corpora followed by post-training with curated data (Ouyang et al., 2022), with this study focusing on the latter stage due to the increasing emphasis on post-training driven by the release of models and datasets tai- lored for domain and task adaptation (Tie et al., 2025). In post-training, widely adopted approaches in- clude Supervised Fine-Tuning (SFT) using human- created prompt-response pairs and Preference Learning based on preference labels for response pairs (Rafailov et al., 2023; Ethayarajh et al., 2024; Song et al., 2024; Munos et al., 2024). Further- more, innovative techniques are rapidly evolving, such as autonomous training data generation and “model merging” that creates new models through arithmetic operations on different model parame- ters (Wortsman et al., 2022; Ilharco et al., 2023; Yadav et al., 2023). To generate superior mod- els, existing studies either manually build pipelines or focus on optimizing specific steps such as fine- tuning data orderings (Chen et al., 2023; Kim and Lee, 2024; Pattnaik et al., 2024) or model merging strategies (Ishibashi et al., 2025; Akiba et al., 2025). 
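The model merging referred to above builds a new model by arithmetic on the parameters of existing models. A minimal sketch of that idea — plain weighted averaging of parameter tensors, which is simpler than the TIES-Merging variant used in our experiments; the function and tensor names are illustrative only:

```python
import torch

def merge_state_dicts(state_dicts, weights):
    """Weighted average of model parameters: theta = sum_k w_k * theta_k.

    state_dicts: list of dicts mapping parameter names to tensors (same architecture)
    weights:     list of floats, e.g. [0.5, 0.5]
    """
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged

# Toy example with two "models" that share a single parameter tensor.
sd_a = {"linear.weight": torch.ones(2, 2)}
sd_b = {"linear.weight": torch.zeros(2, 2)}
print(merge_state_dicts([sd_a, sd_b], [0.5, 0.5])["linear.weight"])  # tensor of 0.5s
```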
However, fully adapting a model to target tasks requires combining these methods into an integrated pipeline and optimizing that pipeline as a whole, yet automating this end-to-end process remains largely unexplored. In this paper, we propose Language Model Developing Agent (LaMDAgent), a method that autonomously constructs post-training
pipelines using LLM-based agents and continuously im- proves them based on feedback from the generated model’s performance on target tasks. LaMDAgent treats heterogeneous model improving methods such as supervised fine-tuning, preference learn- ing, or model merging in a unified manner and automates end-to-end post-training pipeline con- 1arXiv:2505.21963v1 [cs.CL] 28 May 2025 struction by exploring appropriate model genera- tion methods, datasets, hyperparameters, and their optimal application order, thereby reducing the spe- cialized knowledge and human costs required for pipeline construction. Additionally, to reduce computational costs for LaMDAgent’s exploration, we experimentally ver- ify data size scaling and model size scaling, where data size scaling and model size scaling respec- tively involve exploring pipelines with smaller data quantities and model sizes, then transferring the dis- covered efficient pipelines to larger data quantities and model sizes. The contributions of this paper are as follows: 1.We propose an LLM Agents-driven frame- work “LaMDAgent” that autonomously con- structs and optimizes post-training pipeline. LaMDAgent treats heterogeneous model im- proving methods in a unified framework to optimize the entire pipeline in post-training, reducing the specialized knowledge and hu- man costs required for pipeline construction. 2.In our experiments across two distinct settings, we show that LaMDAgent effectively im- proves mathematical capability by 3.7 points in average accuracy in Experiment 1 and en- hances tool utilization accuracy by 9.0 points in Experiment 2 compared to strong baselines, while maintaining general capabilities through the discovery of novel pipelines that are not easily identified by humans. 3.To reduce LaMDAgent’s exploration costs, we verify the effectiveness of data size scal- ing and model size scaling, finding that model size scalings introduces new chal- lenges, whereas scaling data size enables cost- effective pipeline discovery. 2 Methodology 2.1 Overview We propose a novel method called Language Model Developing Agent (LaMDAgent) that fully auto- mates the construction and optimization of lan- guage model post-training pipelines using LLM Agents. Figure 1 illustrates the overview of our proposed method. The proposed method aims to create better models by iteratively repeating the following four steps: 1. Action enumeration , 2.Action selection, 3. Model evaluation, and 4. Mem- ory update. Details are described in the following sections. 2.2 Action Enumeration For simplicity of explanation, we define the follow- ing terms: •Object: A concrete entity used in the model training pipeline, such as Llama 3 8B as a model or GSM8k as training data. •Action: An action is a model improving method that takes multiple objects, including models, as input and outputs a new model. An action is defined by an action type such as "SFT" and the objects used, such as specific data, models, or hyperparameters. We obtain possible actions by enumerating all com- binations of action types and objects. Specifically, we use predefined action types and objects that in- clude both pre-prepared datasets and models, as well as models and data obtained during the iter- ation. For example, if we have the action type "SFT" is defined to take (base model, training data) as input objects, and we have Gemma2 2B as a base model and GSM8k and MATH as training
data, then possible actions can be enumerated as (Gemma2 2B, GSM8k) and (Gemma2 2B, MATH). 2.3 Action Selection We use the agent to select one promising model improvement action from possible actions. Dur- ing action selection, we provide the agent with a prompt for action selection and parse its output to determine the action. In practice, rather than pro- viding all action candidates and having the agent output a single action index, we first have the agent select an action type in one inference step and iden- tify the required object types based on the action type. Then, we have the agent select objects in an another inference step to determine the final ac- tion. We first decide on the action type to avoid action parsing failures that might occur if we give the agent the complex task of selecting the action type, understanding what object types are needed for each action type, and selecting objects with- out excess or deficiency. We select all objects in a single inference step rather than multiple steps to minimize order dependency in the selection pro- cess. 2 Figure 1: Overview of our LaMDAgent framework. LaMDAgent first enumerates actions from predefined model improving action types and an object pool containing available data, models, parameters, and other objects (Step1. Action Eunumeration). Next, the agent selects an action based on memory acquired from previous trials and executes the selected action to generate a new model (Step2. Action Selection). Then evaluations on downstream tasks are conducted (Step3. Model Evaluation). Based on the evaluation results of the newly generated model, the agent considers promising future directions and insights, updating the accumulated memory (Step4. Memory Update). Action selection process can be written as atype =Agent (gtype(m, ltype)), (1) aobj=Agent (gobj(m, lobj, atype)), (2) where atype,aobj,gtypeandgobjare determined action type, objects, prompt templates for selecting action types and objects, respectively. The prompts include memory msummarizing experiences from past trials, candidate action types ltype, and objects lobj. The actual gtypeandgobjused in our experi- ments is provided in Appendix A. In preliminary tri- als, we observed mode collapse phenomena where the agent kept selecting the same action as steps progressed, so we explicitly included exploration directives in the prompt. We also added instruc- tions to remove bias after observing that models generated at intermediate step inamed as "Model i" tended to be selected less frequently compared to initial models like "Model GSM8k". 2.4 Model Evaluation We evaluate the selected actions based on the per- formance of the resulting model on target tasks and provide feedback to the agent through numerical scores. In single-task settings, the evaluation met- ric itself can be used as the score, but in multi-task settings, we need to aggregate metrics across tasks. To account for different scales of evaluation metricsacross tasks, we define the multi-task score smulti using the following formula: smulti =X kαk·sk single, (3) where sk singleis the single-task reward for task k, and 0≤αk≤1are scaling factors. In this re- search, we determine these factors so that the max- imum contribution of each ssingle is uniform. For example, MT-Bench
metrics range up to 10, while AceBench metrics reach 1, so we set the weights for each score as αMT= 1/10, αAce= 1. 2.5 Memory Update We update memories of the agent based on the feedback received for the selected action. A mem- ory is a text summarizing experiences from the latest and past trials, and next promising directions to explore. Memory at iteration tis derived from the action at t-th iteration (at type, at obj), score st, and template gmem as follows: mt=Agent (gmem ((at type, at obj, rt), {(at′ type, at′ obj, rt′)}t′<t,{mt′}t′<t)).(4) The memory updating template gmem used in our experiments is provided in Appendix A. 3 3Experiment 1: Teaching Multiple Skills to Base Models 3.1 Experimental Setup We use Gemma2 2B (Rivière et al., 2024)1as our base model, and we target the following tasks in a multi-task setting: the arithmetic rea- soning task GSM8k (Cobbe et al., 2021), the commonsense reasoning task Commonsense QA (CQA) (Talmor et al., 2019), and the reading com- prehension task Trivia QA (TriviaQA) (Joshi et al., 2017), all converted to 0-shot format. For out-of- distribution evaluation tasks, we use GSMSym- bolic (Mirzadeh et al., 2025), which is a more com- plex version of GSM8k with rewritten numbers in the questions, generative arithmetic reasoning tasks from NumGLUE (Mishra et al., 2022) Type1 (NumGLUE1) and Type2 (NumGLUE2), the so- cial common sense reasoning task SocialIQA (Sap et al., 2019), and the reading comprehension task Natural Questions (NQ) (Kwiatkowski et al., 2019). While our approach can handle any action that produces a single model, the actions used in this experiment and their required objects are: •TIES-Merging (TIES): Model 1, Model 2, merge weight (fixed), merge density (fixed) •Supervised Fine-Tuning (SFT): Model, SFT training data, learning rate (fixed) TIES is a representative model merging technique, while SFT is a standard training method using log- likelihood maximization loss. As initial objects, we prepared specialist models trained on 1,000 ex- amples each from GSM8k, CQA, and TriviaQA using Gemma2 2B as the base model, hereafter re- ferred to as GSM8k-specialist, CQA-specialist, and TriviaQA-specialist. We also use the training data same as specialist models along with an aggregated all data as initial objects for SFT. For hyperparam- eters, we fix merging weights of (0.5,0.5), density of 0.5, and learning rate for SFT as 1e−6. We use 100 examples from a different split as validation data, and test data was held out from both train- ing and validation data. To eliminate variability from randomness, temperature was set to 0 during both pipeline exploration and testing. For the agent LLM, we use gpt-4o-2024-08-06 and performed 100 iterations of action selection and feedback. 1https://huggingface.co/google/gemma-2-2bFor details of evaluation methods, GSM8k in- volves parsing the final numeric answer from the prediction and exact matching with the ground truth, CQA requires the answer choice to be the form of "[[choice]]" and checking if the parsed value is correct, and TriviaQA uses exact match between normalized predictions and ground truth. For out-of-distribution tasks, GSMSymbolic, NumGLUE1, and NumGLUE2 uses the same eval- uation method as GSM8k, SocialIQA uses
the same as CQA, and NQ uses the same as TriviaQA. For compared methods, in addition to the GSM8k, CQA, and TriviaQA specialists, we use TIES (Grid Search), which optimizes the weights of the three specialists through grid search, and Fully Fine-Tuned, which is trained on all avail- able training data. To evaluate the effectiveness of LLM-based action selection, we also compare with Policy=Random, Actions=(SFT, TIES)) which randomly selects actions for 100 iterations, and Pol- icy=LLM, Actions=(TIES) which removes the SFT from predefined action types. 3.2 Results The experimental results are shown in Table 1. LaMDAgent Top- irefers to the model with the i-th highest accuracy on validation set among those generated by LaMDAgent. Bold values indicate the best performance among comparison methods, and underlined values indicate the top three. LaMDAgent outperforms baselines, enhanc- ing math skills while preserving others. Com- pared to the best baseline, Fully Fine-Tuned, LaMDAgent Top-1 shows 1.9 point improvement in overall accuracy (Avg) on the test set, demon- strating the effectiveness of the discovered pipeline. The improvement is particularly notable in arith- metic reasoning tasks, with LaMDAgent Top- 1 overperforms Fully Fine-Tuned by 3.7 points on math-related tasks (GSM8k, GSMSymbolic, NumGLUE1, NumGLUE2) on average, while maintaining comparable performances on other tasks. These results suggest that, even with identi- cal training data, appropriately combining model merging and training sequences using LaMDAgent can incorporate multiple skills more effectively than simple SFT on all the data. Training is more effective than model merging for acquiring multiple skills. Interestingly, unlike findings in some previous works (Morrison et al., 2024; Kuroki et al., 2024), in our experimental set- ting, Fully Fine-Tuned, which was trained on all 4 Table 1: LaMDAgent effectively balances multiple skills and generalizes out-of-distribution: The main results of Experiment 1. LaMDAgent achieves the highest average performance (Avg) among compared methods. Notably, LaMDAgent Top-1 overperforms Fully Fine-Tuned by 3.7 points on math-related tasks (GSM8k, GSMSymbolic, NumGLUE1, NumGLUE2) on average, while maintaining performance on other tasks, demonstrating more effective multi-skill acquisition than simply mixing training data or merging specialist models. In-Distribution Out-of-Distribution Method GSM8k CQA TriviaQA GSMSymbolic NumGLUE1 NumGLUE2 SocialIQA NQ Avg Baselines GSM8k-specialist 0.320 0.001 0.000 0.132 0.425 0.395 0.030 0.000 0.163 CQA-specialist 0.018 0.671 0.002 0.007 0.050 0.034 1.000 0.008 0.224 TriviaQA-specialist 0.046 0.027 0.675 0.017 0.050 0.280 0.905 0.269 0.284 TIES (Grid Search) 0.105 0.559 0.562 0.017 0.175 0.265 0.999 0.219 0.363 Fully Fine-Tuned 0.254 0.622 0.672 0.142 0.325 0.238 1.000 0.256 0.439 Proposed LaMDAgent Top-1 0.284 0.628 0.670 0.145 0.375 0.302 1.000 0.259 0.458 LaMDAgent Top-2 0.306 0.627 0.673 0.140 0.350 0.361 1.000 0.248 0.463 LaMDAgent Top-3 0.267 0.674 0.658 0.146 0.300 0.256 1.000 0.250 0.444 Table 2: LLM-based action selection is effective: Ablation study results for LaMDAgent, with validation set scores shown in parentheses. Random action selection achieves only scores comparable to Fully Fine-Tuned, while LLM-based action selection achieves higher average performance. Additionally, the action space provided significantly impacts generated model performances. 
Method GSM8k CQA TriviaQA Avg Policy=LLM, Actions=(SFT, TIES)) 0.284 (0.350 )0.628 (0.710 )0.670 (0.750 )0.527 (0.603 ) Policy=Random, Actions=(SFT, TIES)) 0.257 (0.280) 0.594 (0.660)
0.674 (0.730) 0.508 (0.556) Policy=LLM, Actions=(TIES) 0.032 (0.030) 0.588 (0.670) 0.575 (0.670) 0.398 (0.456) data, outperformed TIES (Grid Search), which opti- mizes model merging weights, on all in-distribution tasks and 4 out of 5 out-of-distribution tasks, show- ing a 7.6 point higher average accuracy. Agent-based action selection is effective. The ablation results in Table 2 show that random action selection (Policy=Random, Actions=(SFT, TIES)) resulted in a 4.7 point decrease on the validation set and a 1.9 point decrease on the test set, demonstrat- ing the effectiveness of agent-based action selec- tion. This is likely because as iterations progress, random action selection tends to prioritize explor- ing combinations of model merging, which is less effective in this setting as shown in TIES results, over exploration of training data curricula. This occurs because as the number of models increases with iterations, the number of model merging ac- tion candidates grows quadratically, while the num- ber of training candidates grows linearly, making the former more likely to be selected randomly. The choice of action space significantly im- pacts performance. Removing SFT from the ac- tion space (Policy=LLM, Actions=(TIES)) led to decreases of 14.7 and 12.9 points on validation and test sets respectively, showing that the pre-definedaction space significantly affects the final achiev- able accuracy. Discovered pipelines. The highest-performing pipelines discovered by LaMDAgent are shown in Figure 3. The Top-1, 2, and 3 pipelines all have in common that they end with training on all data. For Top-1, the result is consistent with findings (Dong et al., 2024) that learning mathematical skills first before mixing with general skill data is beneficial for balancing mathematical skills like GSM8k with general skills like CQA and TriviaQA. Interest- ingly, while model merging is typically used as a final refinement stage after training, in our experi- ments, pipelines that merge before training (Top-2 and Top-3) also performs well. 4 Experiment 2: Enhancing Tool Usage Skills in Instruction-tuned Models 4.1 Experimental Setup In a more realistic setting, we test whether LaMDA- gent can enhance a specific skill (tool usage in this case) while maintaining the original instruction- following capabilities of a publicly available instruction-tuned model, Gemma2 2B Instruct2. 2https://huggingface.co/google/gemma-2-2b-it 5 Figure 3: Top-1, Top-2, and Top-3 pipelines discovered in experiment 1. Figure 4: LaMDAgent significantly improves tool us- age capability while maintaining instruction-following performance: The overall performance evaluation re- sults of Experiment 2 indicate that LaMDAgent im- proves AceBench accuracy by 9.0 points while preserv- ing the MT-Bench score. In contrast, naive fine-tuning approaches on either individual or full SFT datasets fail to enhance tool usage capabilities, suggesting that the task cannot be effectively addressed with such straight- forward methods. We use AceBench (Chen et al., 2025) to evaluate tool usage capabilities and the first turn of MT- Bench (Zheng et al., 2023) to evaluate instruction- following capabilities. For action types, we adopt TIES and SFT as in Experiment 1. Initial ob- jects include Gemma2 2B Instruct as the model, and for tool usage training data we use Agent- FLAN3(Chen et al., 2024), which includes Tool- bench react10p, Toolbench tflan 60p
r10r5u7, Tool- bench tflan cot 30p, Agent instruct react, Agent instruct tflan, Toolbench instruct j1s1 3k, and Tool- bench negative), ToolACE4(Liu et al., 2024b), and general instruction-following data of Wiz- 3https://huggingface.co/datasets/internlm/Agent-FLAN 4https://huggingface.co/datasets/Team-ACE/ToolACE Figure 5: LaMDAgent learns from feedback to exploit promising actions while exploring unseen pipelines: The graph shows the Average Score, Max Score, and Standard Deviation recorded every 15 iterations. The consistent increase in average score indicates that the agent continues to learn from past feedback to exploit promising actions. The non-zero standard deviation through all iterations and improving max score implies that the agent maintains exploration to discover further improvement opportunities alongside exploitation. ardLM5(Xu et al., 2024). We randomly selected up to 1,000 examples from each of the 7 Agent- FLAN subsets, ToolACE, and WizardLM. As in Experiment 1, we use gpt-4o-2024-08-06 as the LLM for action selection and performed 100 itera- tions. For evaluation, we use only turn 1 of MT- Bench for faster and more cost-effective assess- ment, with gpt-4o-2024-08-06 as the evaluator. For ACEBench, we report the accuracy in the Normal setting, which measures single-turn function call performance. The temperature parameter is set to 0 during both pipeline exploration and testing to eliminate randomness. Compared methods include the Gemma2 2B In- struct, Individually fine-tuned models trained sepa- rately on each of the 9 training datasets, and a Fully fine-tuned model trained on all data. All fine-tuned models use the same hyperparameters as the SFT in LaMDAgent. 4.2 Results The overall performance evaluation results of Ex- periment 2 are summarized in Figure 4. Also, fig- ure 5 plots the average score, maximum score, and standard deviation of scores for models created by LaMDAgent at 15-iteration intervals. LaMDAgent enhances tool usage capabili- ties of Gemma2 2B Instruct while preserv- 5https://huggingface.co/datasets/ WizardLMTeam/WizardLM_evol_instruct_V2_196k 6 ing instruction-following capabilities. The best model generated by LaMDAgent achieves an MT- Bench score of 0.810, comparable to Gemma2 2B Instruct (0.804), while improving AceBench accuracy from 0.410 to 0.500—a 9.0 point im- provement. This demonstrates successful enhance- ment of tool usage capabilities while maintaining instruction-following performance. In contrast, the all fine-tuned models, significantly degrades both instruction-following and tool usage capabil- ities. A possible explanation for this: Gemma2 2B Instruct may have already paid an "alignment tax" (Ouyang et al., 2022) through extensive in- struction tuning, and so unstable that additional tool-focused training could cause catastrophic for- getting of pre-training knowledge easily unless the training pipeline is carefully selected. To sup- port this hypothesis, as shown in Figure 5, while LaMDAgent occasionally takes destructive actions, it learns to avoid them over time through feedback from downstream task, allowing the agent to au- tomatically avoid such actions regardless of the cause. Exploiting from score-based feedback while exploring unseen pipelines. As shown in Figure 5, the average score continues to improve with itera- tions, confirming that the LaMDAgent framework effectively updates its memory to exploit promising actions. 
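For reference, the iterative procedure evaluated here (Sections 2.2–2.5) can be summarized in a short sketch. All helper callables below (`enumerate_actions`, `execute`, `evaluate`, and the `agent` object) are hypothetical stand-ins rather than a released API; the score aggregation follows the multi-task score of Section 2.4 with the MT-Bench/AceBench scales stated there.

```python
def multi_task_score(task_scores, scales=None):
    # Section 2.4: s_multi = sum_k alpha_k * s_k; the scales are chosen so each task's
    # maximum contribution is uniform (MT-Bench tops out at 10, AceBench at 1).
    scales = scales or {"mtbench": 1 / 10, "acebench": 1.0}
    return sum(scales[k] * v for k, v in task_scores.items())

def run_lamdagent(agent, enumerate_actions, execute, evaluate, iterations=100):
    """Schematic of the four-step loop: enumerate -> select -> evaluate -> update memory.

    `agent`, `enumerate_actions`, `execute`, and `evaluate` are hypothetical callables
    standing in for the LLM agent, the action enumerator, the trainer/merger, and the
    downstream-task evaluator.
    """
    memory, best = "", None
    for _ in range(iterations):
        actions = enumerate_actions()                         # Step 1: action enumeration
        action = agent.select(actions, memory)                # Step 2: action selection
        model = execute(action)
        score = multi_task_score(evaluate(model))             # Step 3: model evaluation
        memory = agent.update_memory(memory, action, score)   # Step 4: memory update
        if best is None or score > best[0]:
            best = (score, model)
    return best
```

In this sketch the memory is a free-form text summary updated by the agent, mirroring the trial summaries described in Section 2.5.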
The continuing improvement in the maximum score and the non-zero standard deviation of scores suggest that the agent maintains exploration alongside exploitation.
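For readers who prefer pseudocode, the exploration-exploitation loop described above can be summarized by the following minimal sketch. This is an illustrative summary rather than our exact implementation; the callable arguments are placeholders for the LLM-based action selection, SFT/TIES execution, downstream evaluation, and LLM-based memory-update steps.

# Minimal sketch of the iteration loop discussed above (ours, for illustration).
from typing import Any, Callable, List, Tuple

def run_agent_loop(
    initial_objects: List[Any],
    select_action: Callable[[str, List[Any]], Any],
    apply_action: Callable[[Any, List[Any]], Any],
    evaluate: Callable[[Any], float],
    update_memory: Callable[[str, List[Tuple[Any, float]]], str],
    iterations: int = 100,
) -> Tuple[Any, float]:
    memory = ""                         # textual self-reflections
    history: List[Tuple[Any, float]] = []
    objects = list(initial_objects)     # models and datasets usable as action inputs
    best_model, best_score = None, float("-inf")
    for _ in range(iterations):
        action = select_action(memory, objects)   # exploit memory, explore new pipelines
        model = apply_action(action, objects)     # e.g., SFT or TIES-merging
        score = evaluate(model)                   # score-based feedback from target tasks
        history.append((action, score))
        objects.append(model)                     # new models become future candidates
        memory = update_memory(memory, history)   # condense feedback for the next iteration
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score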
LaMDAgent is more effective when training and target distributions do not match. The score difference between the fully fine-tuned model and LaMDAgent was smaller in Experiment 1 than in Experiment 2, indicating that LaMDAgent provides greater benefits in Experiment 2. This is because when the training and target distributions are the same, simply minimizing the loss on the target tasks can already yield good performance, whereas when the distributions differ (as in Experiment 2), minimizing the loss on all training data does not necessarily minimize the loss on the target data. Such scenarios represent effective applications for LaMDAgent.

Discovered pipelines. Figure 6 shows the Top-1 and Top-2 pipelines with the highest scores. Both pipelines train on Agent instruct tflan followed by ToolACE, suggesting these datasets were effective for AceBench. However, since the fully fine-tuned model (which included these datasets) performs worse than the baseline Gemma2 2B Instruct, excluding unnecessary data and establishing an appropriate training ordering appear to be crucial. The Top-1 model's score evolution is 0.442 (SFT on Agent instruct tflan) → 0.592 (SFT on ToolACE) → 0.625 (SFT on ToolACE) → 0.655 (SFT on Toolbench tflan 60p r10r5u7), showing that similar-sized performance improvements at each step cumulatively contributed to the final score, a combination that cannot easily be identified by humans.

Figure 6: Top-1 and Top-2 pipelines discovered in Experiment 2.

5 Reducing Computational Cost

In this section, we investigate the effectiveness of data size scaling and model size scaling, inspired by pre-training scaling laws (Rivière et al., 2024), as methods to reduce the computational cost of LaMDAgent's pipeline exploration.

Data size scaling is effective. Data size scaling is based on the expectation that pipelines that score well at small data sizes will keep scoring well when the data size is increased. The approach is to explore effective pipelines with small data sizes and then scale up the data within those pipelines. For this to be effective, pipelines that outperform others at small data sizes must continue to outperform when the data sizes are increased (a check we sketch below). To verify this, we examine how scores change when the data in pipelines discovered in Experiment 1 is increased by factors of 2, 4, and 6 times the exploration size. The results are shown in Figure 7. The Top-1 pipeline maintains the highest accuracy across all data sizes, demonstrating that pipelines with high accuracy at small data sizes keep their advantage when scaled up and confirming the effectiveness of data size scaling for computational cost reduction.

Figure 7: Data size scaling is effective for computational cost reduction: overall score when scaling the number of training examples in the Top-k pipelines. The Top-1 pipeline consistently performs best, suggesting that effective pipelines at small scales remain effective with more data.
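The rank-preservation assumption behind data size scaling can be stated as the following check. This is our illustration only; the train_and_score callable is a hypothetical stand-in for building a pipeline at a given per-dataset example budget and returning its overall score.

# Illustrative check of the assumption behind data size scaling (ours).
from typing import Callable, List, Sequence

def top1_preserved(
    pipelines: List[str],
    train_and_score: Callable[[str, int], float],
    base_budget: int,
    scale_factors: Sequence[int] = (2, 4, 6),
) -> bool:
    # Pipeline that wins at the small exploration budget.
    best_small = max(pipelines, key=lambda p: train_and_score(p, base_budget))
    # Data size scaling pays off if the same pipeline also wins at every scaled-up budget.
    return all(
        max(pipelines, key=lambda p: train_and_score(p, base_budget * f)) == best_small
        for f in scale_factors
    )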
Model size scaling has limitations. Model size scaling is the model-size counterpart of the same idea: explore effective pipelines with small models, then scale up the models within those pipelines. To verify its effectiveness, we change the base model from Gemma2 2B to Gemma2 9B and transfer the pipelines discovered in Experiment 1. The results are shown in Table 3.

Table 3: Challenges exist in computational cost reduction via model size scaling: evaluation scores of transferred pipelines on Gemma2 9B, suggesting that although some performance gaps are maintained, they sometimes diminish with model size scaling.

Method     Top-1  Top-50  Top-80  Top-90  Top-100
2B-based   0.603  0.573   0.553   0.546   0.297
9B-based   0.797  0.803   0.783   0.783   0.200

When comparing Top-1 with Top-80, Top-90, and Top-100, which had score differences of more than 5 points with the 2B model, the Top-1 pipeline still achieves higher scores with 9B, showing that discovered pipelines remain effective when the base model size increases. However, the score difference between Top-1 and Top-50, which was about 3 points with the 2B model, reverses when scaling to the larger model, suggesting that small score gaps may disappear as model size increases. In practice, therefore, rather than pursuing small pipeline differences that risk disappearing, diversifying the action space to search for large score gaps may be the more effective use of LaMDAgent when model size scaling is expected.

6 Related Work

LLM Agents. LLMs have evolved beyond chatbots into agents capable of executing diverse actions (Wang et al., 2024; Xi et al., 2025). ReAct (Yao et al., 2023) enables iterative reasoning via thought-action-observation loops, while Reflexion (Shinn et al., 2023) introduces verbal learning from feedback on past trajectories. Key application areas include web automation (Ning et al., 2025; Zhou et al., 2024; Deng et al., 2023) and tool use (Patil et al., 2024; Qu et al., 2025; Qin et al., 2024). To our knowledge, this is the first work to automate post-training using LLM agents, treating improvement strategies as actions and model scores as rewards.

Curriculum Learning in Post-Training. Post-training performance is sensitive to the order of training data. SKILL-IT (Chen et al., 2023) prioritizes samples effective on validation sets. DMT (Dong et al., 2024) uses a two-stage process that moves from specialized to general tasks. Kim and Lee (2024) propose reordering based on attention scores, query length, and loss, while Curri-DPO (Pattnaik et al., 2024) begins with examples showing large preference gaps. Other domain-specific efforts exist (Zhao et al., 2021; Upadhyay et al., 2025; Qi et al., 2025). However, most rely on heuristics and expert knowledge. Our work aims to automate curriculum discovery via LLM agents.

Model Merging. Model merging combines parameters from multiple models via arithmetic operations. Wortsman et al. (2022) and Task Arithmetic (Ilharco et al., 2023) show that adding or subtracting parameters can enhance robustness or transfer task skills. Techniques such as TIES-Merging (Yadav et al., 2023), DARE (Yu et al., 2024), and many others (Huang et al., 2024; Jang et al., 2024a,b; Khalifa et al., 2024; Ortiz-Jiménez et al., 2023; Liu et al., 2024a) continue to expand the field. MergeKit (Goddard et al., 2024) facilitates the implementation of merging techniques. Evolutionary methods (Akiba et al., 2025) and skill-efficient merging (Morrison et al., 2024; Kuroki et al., 2024) optimize model merging parameters on target tasks. Since merging and training are not independent, optimizing both jointly is crucial. This paper is the first to propose a unified approach that automates both training and merging through LLM agents to construct optimal pipelines.
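As a point of reference for readers unfamiliar with these methods, the parameter-arithmetic idea underlying task-vector-based merging can be sketched as follows. This is an illustrative simplification of Task Arithmetic-style merging, not the TIES-Merging algorithm, which additionally trims parameters and resolves sign conflicts:

# Illustrative parameter-arithmetic sketch of task-vector merging (ours).
from typing import Dict, List
import torch

def merge_task_vectors(
    base: Dict[str, torch.Tensor],
    finetuned: List[Dict[str, torch.Tensor]],
    weights: List[float],
) -> Dict[str, torch.Tensor]:
    merged = {}
    for name, base_param in base.items():
        # Task vector of each fine-tuned model = its parameters minus the base's.
        delta = sum(w * (ft[name] - base_param) for w, ft in zip(weights, finetuned))
        merged[name] = base_param + delta
    return merged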
7 Conclusion

In this work, we propose LaMDAgent, an automated framework for constructing post-training pipelines via LLM-based agents. Empirical results across two experimental settings demonstrate that LaMDAgent substantially outperforms all baselines by autonomously identifying effective strategies that practitioners often overlook. To reduce the computational cost of pipeline exploration, we investigated scaling strategies and found that data-size scaling offers benefits, whereas model-size scaling poses nontrivial challenges. These findings position LaMDAgent as a promising direction toward automating and systematizing post-training pipeline design, thereby reducing reliance on domain expertise and facilitating broader accessibility in LLM adaptation.

8 Limitations

Our experiments were conducted using Gemma 2 as the base model, and it remains to be investigated how the outcomes change with different base models. Furthermore, we only used English-language datasets. While our method is not expected to be highly language-dependent, it remains unclear whether it performs adequately on minority or low-resource languages.

In principle, the proposed framework allows for arbitrary action types. However, in this study we focused on TIES-Merging and supervised fine-tuning. It would be highly interesting to explore what kinds of pipelines could be discovered by combining our method with other model merging techniques, preference learning approaches, or data generation strategies.

Our experiments did not yield positive results for model size scaling, so achieving positive transfer at larger scales may require further innovation in future work.

References

Marah I Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat S. Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Amit Garg, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Masahiro Tanaka, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, and Xiren Zhou. 2024. Phi-3 technical report: A highly capable language model locally on your phone. CoRR, abs/2404.14219.

Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, and David Ha. 2025. Evolutionary optimization of model merging recipes. Nat. Mach. Intell., 7(2):195-204.

Anthropic. 2024. Claude 3.5 sonnet.

Chen Chen, Xinlong Hao, Weiwen Liu, Xu Huang, Xingshan Zeng, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Yuefeng Huang, Wulong Liu, Xinzhi Wang, Defu
Lian, Baoqun Yin, Yasheng Wang, and Wu Liu. 2025. Acebench: Who wins the match point in tool usage? Preprint , arXiv:2501.12851. Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, and Christopher Ré. 2023. Skill-it! A data-driven skills framework for understanding and training language models. In Ad- vances in Neural Information Processing Systems 36: Annual Conference on Neural Information Process- ing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 . Zehui Chen, Kuikun Liu, Qiuchen Wang, Wenwei Zhang, Jiangning Liu, Dahua Lin, Kai Chen, and Feng Zhao. 2024. Agent-flan: Designing data and methods of effective agent tuning for large language models. In Findings of the Association for Compu- tational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024 , pages 9354– 9366. Association for Computational Linguistics. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word prob- lems. CoRR , abs/2110.14168. DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingx- uan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian Liang, Jianzhong Guo, Jiaqi Ni, Jiashi Li, Jiawei Wang, Jin Chen, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, Junxiao Song, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Lei Xu, Leyi Xia, Liang Zhao, Litong Wang, Liyue Zhang, Meng Li, Miaojun Wang, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S. Li, Shanghao 9 Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Shengfeng Ye, Shengfeng Ye, Shirong Ma, Shiyu Wang, Shuang Zhou, Shuiping Yu, Shunfeng Zhou, Shuting Pan, T. Wang, Tao Yun, Tian Pei, Tianyu Sun, W. L. Xiao, and Wangding Zeng. 2024. Deepseek-v3 technical report. CoRR , abs/2412.19437. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samual Stevens, Boshi Wang, Huan Sun, and Yu Su. 2023. Mind2web: Towards a generalist agent for the web. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Informa- tion Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 . Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. 2024. How abilities in large language models are affected by supervised fine-tuning data composition. In Proceed- ings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), ACL 2024, Bangkok, Thailand, August 11-16, 2024 , pages 177–198. Association for Computational Linguistics. Abhimanyu
Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru, Bap- tiste Rozière, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al- lonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Geor- gia Lewis Anderson, Graeme Nail, Grégoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Han- nah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel M. Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, and et al. 2024. The llama 3 herd of models. CoRR , abs/2407.21783. Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. KTO: model alignment as prospect theoretic optimization. CoRR , abs/2402.01306.Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vladimir Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz. 2024. Arcee’s MergeKit: A toolkit for merging large lan- guage models. In Proceedings of the 2024 Confer- ence on Empirical Methods in Natural Language Processing: Industry Track , pages 477–485, Miami, Florida, US. Association for Computational Linguis- tics. Google. 2024. Our next-generation model: Gemini 1.5. Shih-Cheng Huang, Pin-Zu Li, Yu-Chi Hsu, Kuang- Ming Chen, Yu-Tung Lin, Shih-Kai Hsiao, Richard Tzong-Han Tsai, and Hung-yi Lee. 2024. Chat vector: A simple approach to equip llms with instruction following and model alignment in new languages. In Proceedings of the 62nd Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024 , pages 10943–10959. Association for Computational Linguistics. Gabriel Ilharco, Marco Túlio Ribeiro, Mitchell Worts- man, Ludwig Schmidt, Hannaneh Hajishirzi, and Ali Farhadi. 2023. Editing models with task arithmetic. InThe Eleventh International Conference on Learn- ing Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023 . OpenReview.net. Yoichi Ishibashi, Taro Yano, and Masafumi Oyamada. 2025. Can large language models invent algorithms to improve themselves? In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, NAACL 2025 - Volume 1: Long Papers, Albuquerque, New Mexico, USA, April 29 - May 4, 2025 , pages 10332–10363. Associ- ation for Computational Linguistics. Dong-Hwan Jang, Sangdoo Yun, and Dongyoon Han. 2024a. Model stock: All we need is just a few fine- tuned models. 
In Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy,
September 29-October 4, 2024, Proceedings, Part XLIV , volume 15102 of Lecture Notes in Computer Science , pages 207–223. Springer. Young Kyun Jang, Dat Huynh, Ashish Shah, Wen- Kai Chen, and Ser-Nam Lim. 2024b. Spherical lin- ear interpolation and text-anchoring for zero-shot composed image retrieval. In Computer Vision - ECCV 2024 - 18th European Conference, Milan, Italy, September 29-October 4, 2024, Proceedings, Part XIX, volume 15077 of Lecture Notes in Computer Science , pages 239–254. Springer. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers , pages 1601–1611. Association for Computational Linguistics. 10 Muhammad Khalifa, Yi Chern Tan, Arash Ahmadian, Tom Hosking, Honglak Lee, Lu Wang, Ahmet Üstün, Tom Sherborne, and Matthias Gallé. 2024. If you can’t use them, recycle them: Optimizing merging at scale mitigates performance tradeoffs. CoRR , abs/2412.04144. Jisu Kim and Juhwan Lee. 2024. Strategic data order- ing: Enhancing large language model performance through curriculum learning. CoRR , abs/2405.07490. So Kuroki, Taishi Nakamura, Takuya Akiba, and Yujin Tang. 2024. Agent skill acquisition for large lan- guage models via cycleqd. CoRR , abs/2410.14735. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics , 7:452– 466. Tian Yu Liu, Aditya Golatkar, and Stefano Soatto. 2024a. Tangent transformers for composition, pri- vacy and removal. In The Twelfth International Con- ference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net. Weiwen Liu, Xu Huang, Xingshan Zeng, Xinlong Hao, Shuai Yu, Dexun Li, Shuai Wang, Weinan Gan, Zhengying Liu, Yuanqing Yu, Zezhong Wang, Yux- ian Wang, Wu Ning, Yutai Hou, Bin Wang, Chuhan Wu, Xinzhi Wang, Yong Liu, Yasheng Wang, Duyu Tang, Dandan Tu, Lifeng Shang, Xin Jiang, Ruiming Tang, Defu Lian, Qun Liu, and Enhong Chen. 2024b. Toolace: Winning the points of LLM function calling. CoRR , abs/2409.00920. Seyed-Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar. 2025. Gsm-symbolic: Understanding the limitations of mathematical reasoning in large lan- guage models. In The Thirteenth International Con- ference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025 . OpenReview.net. Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Singh Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. 2022. Numglue: A suite of fundamental yet challenging mathematical reasoning tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022 , pages 3505–3523. Association for Computational Linguistics. Jacob Morrison, Noah A. Smith, Hannaneh Hajishirzi, Pang Wei Koh, Jesse Dodge, and Pradeep Dasigi. 2024. Merge to learn: Efficiently adding skills to lan- guage models with model merging. In Findings of the Association
for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024 ,pages 15604–15621. Association for Computational Linguistics. Rémi Munos, Michal Valko, Daniele Calandriello, Mo- hammad Gheshlaghi Azar, Mark Rowland, Zhao- han Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, Andrea Michi, Marco Selvi, Sertan Girgin, Nikola Momchev, Olivier Bachem, Daniel J. Mankowitz, Doina Precup, and Bi- lal Piot. 2024. Nash learning from human feedback. InForty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 . OpenReview.net. Liangbo Ning, Ziran Liang, Zhuohang Jiang, Haohao Qu, Yujuan Ding, Wenqi Fan, Xiaoyong Wei, Shanru Lin, Hui Liu, Philip S. Yu, and Qing Li. 2025. A sur- vey of webagents: Towards next-generation AI agents for web automation with large foundation models. CoRR , abs/2503.23350. OpenAI. 2023. GPT-4 technical report. CoRR , abs/2303.08774. Guillermo Ortiz-Jiménez, Alessandro Favero, and Pas- cal Frossard. 2023. Task arithmetic in the tangent space: Improved editing of pre-trained models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Pro- cessing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 . Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welin- der, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instruc- tions with human feedback. In Advances in Neural Information Processing Systems 35: Annual Confer- ence on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022 . Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2024. Gorilla: Large language model connected with massive apis. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Sys- tems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024 . Pulkit Pattnaik, Rishabh Maheshwary, Kelechi Ogueji, Vikas Yadav, and Sathwik Tejaswi Madhusudhan. 2024. Enhancing alignment using curriculum learn- ing & ranked preferences. In Findings of the As- sociation for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024 , pages 12891–12907. Association for Computational Linguistics. Zehan Qi, Xiao Liu, Iat Long Iong, Hanyu Lai, Xueqiao Sun, Jiadai Sun, Xinyue Yang, Yu Yang, Shuntian 11 Yao, Wei Xu, Jie Tang, and Yuxiao Dong. 2025. We- brl: Training LLM web agents via self-evolving on- line curriculum reinforcement learning. In The Thir- teenth International Conference on Learning Repre- sentations, ICLR 2025, Singapore, April 24-28, 2025 . OpenReview.net. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2024. Toolllm: Fa- cilitating large language models to master 16000+ real-world apis. In The Twelfth International Con- ference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net. Changle Qu, Sunhao Dai, Xiaochi Wei, Hengyi Cai, Shuaiqiang Wang, Dawei Yin, Jun Xu, and Ji-Rong Wen. 2025. Tool learning with large language
mod- els: a survey. Frontiers Comput. Sci. , 19(8):198343. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christo- pher D. Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Sys- tems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 . Morgane Rivière, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, An- ton Tsitsulin, Nino Vieillard, Piotr Stanczyk, Sertan Girgin, Nikola Momchev, Matt Hoffman, Shantanu Thakoor, Jean-Bastien Grill, Behnam Neyshabur, Olivier Bachem, Alanna Walton, Aliaksei Severyn, Alicia Parrish, Aliya Ahmad, Allen Hutchison, Alvin Abdagic, Amanda Carl, Amy Shen, Andy Brock, Andy Coenen, Anthony Laforge, Antonia Pater- son, Ben Bastian, Bilal Piot, Bo Wu, Brandon Royal, Charlie Chen, Chintu Kumar, Chris Perry, Chris Welty, Christopher A. Choquette-Choo, Danila Sinopalnikov, David Weinberger, Dimple Vijayku- mar, Dominika Rogozinska, Dustin Herbison, Elisa Bandy, Emma Wang, Eric Noland, Erica Moreira, Evan Senter, Evgenii Eltyshev, Francesco Visin, Gabriel Rasskin, Gary Wei, Glenn Cameron, Gus Martins, Hadi Hashemi, Hanna Klimczak-Plucinska, Harleen Batra, Harsh Dhand, Ivan Nardini, Jacinda Mein, Jack Zhou, James Svensson, Jeff Stanway, Jetha Chan, Jin Peng Zhou, Joana Carrasqueira, Joana Iljazi, Jocelyn Becker, Joe Fernandez, Joost van Amersfoort, Josh Gordon, Josh Lipschultz, Josh Newlan, Ju-yeong Ji, Kareem Mohamed, Kar- tikeya Badola, Kat Black, Katie Millican, Keelin McDonell, Kelvin Nguyen, Kiranbir Sodhia, Kish Greene, Lars Lowe Sjösund, Lauren Usui, Laurent Sifre, Lena Heuermann, Leticia Lago, and Lilly Mc- Nealus. 2024. Gemma 2: Improving open language models at a practical size. CoRR , abs/2408.00118.Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Socialiqa: Common- sense reasoning about social interactions. CoRR , abs/1904.09728. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Re- flexion: language agents with verbal reinforcement learning. In Advances in Neural Information Pro- cessing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 . Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2024. Pref- erence ranking optimization for human alignment. In Thirty-Eighth AAAI Conference on Artificial Intelli- gence, AAAI 2024, Thirty-Sixth Conference on Inno- vative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Ad- vances in Artificial Intelligence, EAAI 2014, Febru- ary 20-27, 2024, Vancouver, Canada , pages 18990– 18998. AAAI Press. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowl- edge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers) , pages 4149–4158. Association for Computational Linguistics. 
Guiyao Tie, Zeli Zhao, Dingjie Song, Fuyang Wei, Rong Zhou, Yurou Dai, Wen Yin, Zhejian Yang, Jiangyue Yan, Yao Su, Zhenhan Dai, Yifeng Xie,
Yihan Cao, Lichao Sun, Pan Zhou, Lifang He, Hechang Chen, Yu Zhang, Qingsong Wen, Tianming Liu, Neil Zhen- qiang Gong, Jiliang Tang, Caiming Xiong, Heng Ji, Philip S. Yu, and Jianfeng Gao. 2025. A survey on post-training of large language models. CoRR , abs/2503.06072. Ojasw Upadhyay, Abishek Saravanakumar, and Ayman Ismail. 2025. Synlexlm: Scaling legal llms with synthetic data and curriculum learning. Preprint , arXiv:2504.18762. Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, Wayne Xin Zhao, Zhewei Wei, and Jirong Wen. 2024. A survey on large language model based autonomous agents. Frontiers Comput. Sci., 18(6):186345. Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and Ludwig Schmidt. 2022. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increas- ing inference time. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Bal- timore, Maryland, USA , volume 162 of Proceedings 12 of Machine Learning Research , pages 23965–23998. PMLR. Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuan- jing Huang, Qi Zhang, and Tao Gui. 2025. The rise and potential of large language model based agents: a survey. Sci. China Inf. Sci. , 68(2). Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. 2024. Wizardlm: Empow- ering large pre-trained language models to follow complex instructions. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net. Prateek Yadav, Derek Tam, Leshem Choshen, Colin A. Raffel, and Mohit Bansal. 2023. Ties-merging: Re- solving interference when merging models. In Ad- vances in Neural Information Processing Systems 36: Annual Conference on Neural Information Process- ing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 . An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayi- heng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2024. Qwen2.5 technical report. CoRR , abs/2412.15115. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023 . OpenReview.net. Le Yu, Bowen Yu, Haiyang Yu, Fei Huang, and Yongbin Li. 2024. Language models are super mario: Absorb- ing abilities from homologous models as a free
lunch. InForty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 . OpenReview.net. Yangyang Zhao, Zhenyu Wang, and Zhenhua Huang. 2021. Automatic curriculum learning with over- repetition penalty for dialogue policy learning. In Thirty-Fifth AAAI Conference on Artificial Intelli- gence, AAAI 2021, Thirty-Third Conference on In- novative Applications of Artificial Intelligence, IAAI2021, The Eleventh Symposium on Educational Ad- vances in Artificial Intelligence, EAAI 2021, Vir- tual Event, February 2-9, 2021 , pages 14540–14548. AAAI Press. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Pro- cessing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 . Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, and Gra- ham Neubig. 2024. Webarena: A realistic web en- vironment for building autonomous agents. In The Twelfth International Conference on Learning Rep- resentations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net. A Templates Table 8, 9, and 10 are prompt templates for action type selection, object selection, and memory gener- ation in our proposed LaMDAgent, respectively. Table 11 shows an example of configs for LaMDAgent. 13 Prompt template to select an action type You are a developer of Large Language Models ( LLMs ) who tests model improvement strategies based on a given hypothesis . You are provided with Self - Reflections obtained from analyzing the result of a previous trial conducted for model improvement . Based on the Self - Reflections , select one action type from the Action Type List to create a more performant model . Analyze the Self - Reflections to identify the most promising action type , and provide the number of the selected action type at the end . If the Self - Reflections are not provided , please select randomly . Self - Reflections : <reflection > Action List : <action_types > Selected Action Type NUMBER : Figure 8: Prompt template to select an action type. Prompt template to select objects You are a developer of Large Language Models ( LLMs ). Your task is to determine a configuration for creating an LLM . The configuration consists of multiple object types , and for each object type , you must select one object from a set of candidate objects . To aid in your selection , you are provided with introspective analysis based on past LLM configurations and their outcomes . Please output the selected objects in the order of the object types displayed , using comma separation and enclosed in [[ ]], e.g., [[1 , 0, 2]] at the end of the output . If the Self - Reflections are not provided , please select randomly and think of a combination that has not been tried in the past trials . Also , the k-th
model at step n is named in the format 0--n--k. Since such models also have promising potential, please include them in the search scope.

Self-Reflections:
<reflection>

Object Candidates:
<object_cands>

Selected Object NUMBERs:

Figure 9: Prompt template to select objects.

Prompt template to update memory

You are a developer of Large Language Models (LLMs) that can improve models based on self reflections. You will be given results and memories of the previous improving trials. The results consist of actions and scores, where the scores are out of 1 point. You will also be provided with newly acquired trials. In a few sentences, update your memories based on the previous trials, memories, and new results.

# Previous Results
<previous results>

# Previous Memories Acquired from Previous Trials
<previous memories>

# Newly Acquired Results
<new results>

Updated Memory:

Figure 10: Prompt template to update memory.

An example config of our proposed method

{
  "seed": 42,
  "total_timesteps": 100,
  "controller": "LaMDAgent_gpt",
  "controller_model": "gpt-4o-2024-11-20",
  "objects": {
    "base_models": ["models/gemma-2-2b"],
    "models": ["models/gemma-2-2b--gsm8k_1k",
               "models/gemma-2-2b--commonsense_qa_1k",
               "models/gemma-2-2b--trivia_qa_1k_w_context",
               "models/gemma-2-2b"],
    "sft_dataset": ["data/sft_formatted/gsm8k_1k",
                    "data/sft_formatted/commonsense_qa_1k",
                    "data/sft_formatted/trivia_qa_1k_w_context",
                    "data/sft_formatted/gsm1k_cqa1k_tqa1k"],
    "sft_lr": [0.000001],
    "ties_weights": [[0.5, 0.5]],
    "ties_density": [0.5]
  },
  "action_types": {
    "sft": ["models", "sft_dataset", "sft_lr"],
    "ties_merging": ["base_models", "models", "models", "ties_weights", "ties_density"]
  },
  "eval_tasks": [["gsm8k", "acc"], ["commonsenseqa", "acc"], ["trivia_qa_w_context", "acc"]],
  "score_aggregation": "mean"
}

Figure 11: An example config of our proposed method.
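To illustrate how a config like the one in Figure 11 defines the agent's action space, the following sketch reads the JSON and enumerates the candidate argument combinations for each action type. This is our illustration only, not the released implementation:

# Illustrative sketch (ours) of enumerating action candidates from the config above.
import itertools
import json

def enumerate_action_candidates(config_path: str) -> dict:
    with open(config_path) as f:
        cfg = json.load(f)
    candidates = {}
    for action_type, object_types in cfg["action_types"].items():
        pools = [cfg["objects"][t] for t in object_types]
        candidates[action_type] = list(itertools.product(*pools))
    return candidates

With the four models listed above, this yields 16 candidates per action type; as new models are appended to the objects over iterations, the merging candidates grow quadratically with the number of models while the SFT candidates grow only linearly, consistent with the ablation discussion in Experiment 1.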