arXiv:2505.21938v1 [cs.LG] 28 May 2025. Practical Adversarial Attacks on Stochastic Bandits via Fake Data Injection. Qirun Zeng 1, Eric He 2, Richard Hoffmann 2, Xuchuang Wang 3, Jinhang Zuo 4. 1 University of Science and Technology of China, 2 California Institute of Technology, 3 University of Massachusetts Amherst, 4 City University of Hong ...
https://arxiv.org/abs/2505.21938v1
(arm, reward) pairs into its interaction history. These fake samples must conform to valid feedback ranges (e.g., binary clicks or 1–5 star ratings), and their injection is subject to constraints such as system-level detection or resource limits. The learner processes these fake interactions indistinguishably from real...
https://arxiv.org/abs/2505.21938v1
Shroff [6] extended this to settings where the learning algorithm is unknown, and further to contextual bandits. Subsequent work has explored increasingly general and complex settings. Garcelon et al. [9] studied attacks on linear contextual bandits, where an adversary can perturb both the context vectors and rewards. Mo...
https://arxiv.org/abs/2505.21938v1
the standard threat model adopted in prior works on adversarial attacks against bandit algorithms [5, 6, 7, 8]. In each round $t$, the learner selects an arm $a_t$ to play, and the environment generates a pre-attack reward $r^0_t$ drawn from the underlying distribution of arm $a_t$. The attacker then observes the tuple $(a_t, r^0_t)$ and ...
https://arxiv.org/abs/2505.21938v1
In the Fake Data Injection model, the attacker does not interfere with the feedback received by the learner during normal interactions. Instead, the attacker is allowed to inject up to $N_F$ fake data samples, denoted by $\{(a^F_i, r^F_i)\}_{i=1}^{N_F}$, into the learner's history. Each fake data point mimics a legitimate user interac...
https://arxiv.org/abs/2505.21938v1
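To make the injection mechanism concrete, here is a minimal Python sketch of how fake (arm, reward) pairs could be appended to a standard UCB learner's history, which then treats them exactly like genuine feedback. The `UCBLearner` class and `inject_fake_samples` helper are illustrative stand-ins, not the paper's implementation.

```python
import math

class UCBLearner:
    """Standard UCB1 learner over K arms (illustrative stand-in)."""
    def __init__(self, K):
        self.counts = [0] * K      # N_i(t): number of samples recorded for arm i
        self.means = [0.0] * K     # empirical mean reward of arm i

    def update(self, arm, reward):
        # The learner cannot tell genuine feedback from injected samples.
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

    def select(self, t):
        # Pull each arm once, then maximize the UCB index.
        for arm, n in enumerate(self.counts):
            if n == 0:
                return arm
        return max(range(len(self.counts)),
                   key=lambda a: self.means[a] + math.sqrt(2 * math.log(t) / self.counts[a]))

def inject_fake_samples(learner, fake_samples, budget):
    """Append up to `budget` fake (arm, reward) pairs to the learner's history."""
    for arm, reward in fake_samples[:budget]:
        learner.update(arm, reward)
```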
lowest expected reward.¹ (¹This represents the most challenging case for the attacker and can be easily extended to target any other arm.) 4.1 Warm-up: Injection Attacks with Unbounded Feedback. We begin our study of fake data injection attacks by considering a relaxed setting in which the injected reward values $r^F_i$ can...
https://arxiv.org/abs/2505.21938v1
of Non-Target Arms). Suppose $T > 2K$, $\delta < 0.5$. With probability at least $1-\delta$, for any non-target arm $i \in [K-1]$ that has been pulled $N_i(t)$ times, if a fake data point is injected according to Line 5 of Algorithm 1, then arm $i$ will not be selected again until at least round $\exp(N_i(t)\delta_0^2)$. Proof Sketch. After the injection, th...
https://arxiv.org/abs/2505.21938v1
this version, the influence of a single unbounded fake reward is approximated by injecting a batch of bounded fake samples simultaneously for each non-target arm. In practice, however, attackers may face an additional constraint on the number of fake samples that can be injected at any given time, due to resource limit...
https://arxiv.org/abs/2505.21938v1
4.1, the SBI algorithm can also be extended to attack the Thompson Sampling algorithm. Due to space constraints, we defer the full algorithm and details to the appendix and present the main theorem below. Theorem 4.4. Suppose $T > 2K$, $\delta < 0.5$. With probability at least $1-2\delta$, the modified Simultaneous Bounded Injection fo...
https://arxiv.org/abs/2505.21938v1
is maintained across time, which is guaranteed by the following lemma. Lemma 4.2. The choice of $R_i$ in Algorithm 3 ensures that once a batch of $f$ fake data samples is injected into non-target arm $i$, the arm will not be selected again for at least the next $R_i$ rounds. Proof Sketch. This result builds on a modified version o...
https://arxiv.org/abs/2505.21938v1
on stochastic bandits. In contrast to prior models that assume per-round, unbounded reward perturbations, our framework captures real-world constraints such as bounded feedback, limited injection capability, and the attacker’s inability to modify genuine user data. Within this model, we develop a suite of effective att...
https://arxiv.org/abs/2505.21938v1
in Neural Information Processing Systems, volume 34, pages 22550–22561. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/be315e7f05e9f13629031915fe87ad44-Paper.pdf. [13] Zichen Wang, Rishab Balasubramanian, Hui Yuan, Mengdi Wang, Huazheng Wang, et al. Adversarial attacks ...
https://arxiv.org/abs/2505.21938v1
condition. For simplicity, we define $\hat{\ell}'_K(t) = \hat{\mu}_K(t) - 2\beta(N_K(t)) - \sqrt{8\log\frac{\pi^2 K}{3\delta}} - 4\sqrt{N_i(t)}\,\delta_0$. Lemma A.3. For each non-target arm $i \in [K-1]$, if $\hat{\mu}_i(t) \le \hat{\ell}'_K(t)$, then with probability at least $1-2\delta$, arm $i$ will not be selected again until at least round $\lfloor \exp(N_i(t)\delta_0^2) \rfloor$. Proof. Suppose that at round $t_1$, the following inequalit...
https://arxiv.org/abs/2505.21938v1
cumulative attack cost and the number of non-target arm pulls are sublinear in $T$, completing the proof. Algorithm 4: Least Injection Algorithm on Thompson Sampling. Input: attack parameter $\delta_0 > 0$. 1: for round $t = 1, 2, \ldots$ do 2: for each non-target arm $i \in [K-1]$ do 3: if arm $i$ has not been attacked and $N_i(t) = \lceil \log T / \delta_0^2 \rceil$ then 4: In...
https://arxiv.org/abs/2505.21938v1
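As a rough illustration of the trigger condition in Algorithm 4 (attack a non-target arm once its pull count reaches $\lceil \log T / \delta_0^2 \rceil$ and it has not been attacked before), here is a small Python sketch. The function name and data layout are hypothetical, and the injected reward value itself is not shown because the excerpt truncates before the body of Line 4.

```python
import math

def least_injection_schedule(counts, attacked, T, delta0):
    """Return the non-target arms whose injection is triggered this round.

    counts   -- list of N_i(t), pull counts of non-target arms i in [K-1]
    attacked -- list of booleans, whether arm i has already been attacked
    """
    threshold = math.ceil(math.log(T) / delta0 ** 2)   # ceil(log T / delta_0^2)
    return [i for i, (n, done) in enumerate(zip(counts, attacked))
            if not done and n == threshold]            # Line 4 would inject here
```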
$\cdots + \beta(N_i(t))\,N_i(t)\big) - \sum_{i=1}^{K-1}\Big(\hat{\mu}_K(t) - 2\beta(N_K(t)) - \sqrt{8\log\tfrac{\pi^2 K}{3\delta}} - 4\sqrt{N_i(t)}\,\delta_0\Big)\big(N_i(t) + \tilde{n}\big) \le \sum_{i=1}^{K-1}\Big(\beta\big(\tfrac{\log T}{\delta_0^2}\big)\tfrac{\log T}{\delta_0^2} + \mu_i\big(\hat{i}\,\tilde{k}\,\tfrac{\log T}{\delta_0^2}\big)\Big) - \sum_{i=1}^{K-1}\Big(\mu_K - 3\beta(1) - \sqrt{8\log\tfrac{\pi^2 K}{3\delta}} - 4\sqrt{N_i(t)}\,\delta_0 - \tilde{a} + \tilde{a}\Big)\tilde{i}\,\tilde{k}\,\tfrac{\log T}{\delta_0^2} = \sum_{i=1}^{K-1}\Big(\beta\big(\tfrac{\log T}{\delta_0^2}\big)\tfrac{\log T}{\delta_0^2} + (\mu_i - \tilde{a})\,\tilde{i}\,\tilde{k}\,\tfrac{\log T}{\delta_0^2}\Big) - \sum_{i=1}^{K-1}\tilde{i}\,\tfrac{\log T}{\delta_0^2} = \sum_{i=1}^{K-1}(\mu_i - \tilde{a}\ldots$
https://arxiv.org/abs/2505.21938v1
after each batch of $f$ fake samples, arm $i$ will not be pulled again until the next scheduled injection. Correction to Algorithm 3. In the original version of Algorithm 3, we set $R_i = R_i(c=1)$ according to Eq. (12). However, since $R_i(c=1)$ is not always the minimum over $c$, the corrected formulation should use the minimizatio...
https://arxiv.org/abs/2505.21938v1
dataset [16]. We considered a 10-armed stochastic bandit setup for all experiments. For the synthetic setting, each arm's reward distribution was modeled as a Gaussian with mean in the range $[0,1]$ and fixed standard deviation $\sigma = 1$. We designated the arm with the lowest mean as the target arm to be attacked. Simulat...
https://arxiv.org/abs/2505.21938v1
arXiv:2505.21954v1 [cs.CV] 28 May 2025. UniTalk: Towards Universal Active Speaker Detection in Real World Scenarios. Le Thien Phuc Nguyen 1*, Zhuoran Yu 1*, Khoa Quang Nhat Cao 1, Yuwei Guo 1, Tu Ho Manh Pham 1, Tuan Tai Nguyen 1, Toan Ngo Duc Vo 1, Lucas Poon 1, Soochahn Lee 2, Yong Jae Lee 1. 1 University of Wisconsin–Madison, 2 Kookmin University. Ab...
https://arxiv.org/abs/2505.21954v1
[Figure 1 panel labels: II) Dubbed Movies; English / Chinese / Korean Audio; Dubbed in French; High / Low Noise; Crowded / Uncrowded Scene] Figure 1: Comparison between AVA and UNITALK. AVA [20] primarily consists of movie content, often with clean audio and simple visual composition. It also includes dubbed videos, where the audio is artificia...
https://arxiv.org/abs/2505.21954v1
as VoxCeleb [19] and the Columbia Dataset [6], explored audio-visual speaker detection but focused primarily on constrained scenarios like monologue-style speech or interview settings. AVA-ActiveSpeaker [20] later emerged as the largest and most widely used benchmark, offering frame-level annotations but relying heav...
https://arxiv.org/abs/2505.21954v1
classifiers that use $f_a$ and $f_v$ respectively, and one main classifier that uses $f'_{av}$. All encoders and classifiers are trained jointly using the following loss function: $\mathcal{L}_{asd} = \lambda_{av}\mathcal{L}_{av} + \lambda_a\mathcal{L}_a + \lambda_v\mathcal{L}_v$, where $\mathcal{L}_{av}$, $\mathcal{L}_a$, and $\mathcal{L}_v$ are the cross-entropy losses computed between the ground truth $Y$ and the predictions $\hat{Y}$ from the embeddings $f'_{av}$,...
https://arxiv.org/abs/2505.21954v1
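A minimal PyTorch sketch of the joint objective $\mathcal{L}_{asd} = \lambda_{av}\mathcal{L}_{av} + \lambda_a\mathcal{L}_a + \lambda_v\mathcal{L}_v$ described above. The classifier heads, embedding dimension, and default lambda weights are illustrative assumptions, not the paper's reported configuration.

```python
import torch.nn as nn

class JointASDLoss(nn.Module):
    """Weighted sum of audio-visual, audio-only, and visual-only cross-entropy losses."""
    def __init__(self, dim=256, num_classes=2, lambda_av=1.0, lambda_a=0.4, lambda_v=0.4):
        super().__init__()
        # Lambda defaults and the embedding dimension are assumptions for illustration.
        self.cls_av = nn.Linear(dim, num_classes)  # main classifier on f'_av
        self.cls_a = nn.Linear(dim, num_classes)   # auxiliary classifier on f_a
        self.cls_v = nn.Linear(dim, num_classes)   # auxiliary classifier on f_v
        self.ce = nn.CrossEntropyLoss()
        self.lams = (lambda_av, lambda_a, lambda_v)

    def forward(self, f_av, f_a, f_v, y):
        losses = (self.ce(self.cls_av(f_av), y),
                  self.ce(self.cls_a(f_a), y),
                  self.ce(self.cls_v(f_v), y))
        return sum(lam * loss for lam, loss in zip(self.lams, losses))
```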
that are at least 1 second in duration to provide sufficient temporal context. Each retained track is paired with synchronized audio and video playback to facilitate accurate speaker labeling. Occasional tracking failures—such as identity switches or false-positive detections—are manually flagged and discarded by annot...
https://arxiv.org/abs/2505.21954v1
and face crops (4 million), indicating greater coverage of speaker appearances. Furthermore, UNITALK exhibits the highest speaker density, averaging 2.6 visible speakers per frame (compared to 2.3 for Talkies, 1.9 for ASW, and 1.5 for AVA), reflecting the increased interaction complexity in our benchmark. Demographic an...
https://arxiv.org/abs/2505.21954v1
test videos. As shown in Figure 4(c), the test set contains substantial coverage across all axes: 28.0% of samples feature underrepresented languages, 21.5% involve visually crowded scenes, 16.1% contain noisy audio, and 34.4% fall into the hard example category. Additionally, Figure 4(b) highlights the overall visual ...
https://arxiv.org/abs/2505.21954v1
shows that across a range of architectures, active speaker detection models trained on UNITALK achieve significantly lower mAP compared to their performance on AVA [20]. For example, LoCoNet [26] and TalkNCE [10], which report mAP scores above 95 on AVA, obtain only 82.2 and 83.2 mAP, respectively, on UNITALK. E...
https://arxiv.org/abs/2505.21954v1
benchmarks. The consistently strong performance across all benchmarks indicates that UNITALK provides transferable learning signals that support robust ASD model development.
Model | Architecture | In-domain: UNITALK | Out-of-domain: AVA [20] | Out-of-domain: Talkies [4] | Out-of-domain: ASW [13]
TalkNet [23] | ResNet/LIM | 75.7 | 78.4 | 89.2 | 88.9
LoCoNet [26] | TalkN...
https://arxiv.org/abs/2505.21954v1
Talkies [4] | 55.7 | 95.6 | 84.5 | 59.9
ASW [13] | 29.2 | 58.8 | 96.1 | 33.8
UNITALK | 88.0 | 91.4 | 90.4 | 83.2
Table 5: Fine-tuning a TalkNCE model [11] pretrained on UNITALK using AVA [20]. Each row reports mAP after fine-tuning on a different amount of AVA training data (measured in video hours). The model quickly adapts to AVA whi...
https://arxiv.org/abs/2505.21954v1
interaction. By focusing on real-world variability, it encourages progress beyond current benchmarks. However, as with any work in this area, there is potential for misuse, such as in surveillance or privacy-invading applications. These risks are not unique to our dataset, but are shared across the broader research d...
https://arxiv.org/abs/2505.21954v1
2021. [15] J. Liao, H. Duan, K. Feng, W. Zhao, Y. Yang, and L. Chen. A light weight model for active speaker detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22932–22941, 2023. [16] Q. Lin, R. Yin, M. Li, H. Bredin, and C. Barras. LSTM based similarity measurement...
https://arxiv.org/abs/2505.21954v1
[Figure A point labels: Group Discussion, Group Interview, Podcast, Singing Performance, Cockpit Video, Site Report, Sport Interview, Live Concert, Group Sing, Street Interview, Group Vlog; regions: Crowded Scenes, Noisy Background, Hard Examples with Mixed Difficulty] Figure A: Difficulty space of candidate video search terms. Each point represents a YouTube keyword q...
https://arxiv.org/abs/2505.21954v1
tool, then compute the Root Mean Square (RMS) energy over the remaining background audio. A threshold of 0.03 RMS distinguishes low and high noise levels. We plot each video search term in a 2D space (Figure A) using these metrics. The visualizations provide an interpretable overview of the diversity in audiovisual con...
https://arxiv.org/abs/2505.21954v1
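A short Python sketch of the RMS-energy measurement and the 0.03 threshold described above; the function names are illustrative and the input is assumed to be a normalized mono waveform with vocals already removed.

```python
import numpy as np

def rms_energy(waveform):
    """Root Mean Square energy of an audio signal with samples in [-1, 1]."""
    return float(np.sqrt(np.mean(np.square(waveform))))

def noise_level(background, threshold=0.03):
    """Label residual background audio as 'high' or 'low' noise (0.03 RMS cutoff)."""
    return "high" if rms_energy(background) >= threshold else "low"
```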
this setting, with predictions frequently incorrect across frames. Green and Red indicate ground truth speaking and non-speaking frames; Orange marks incorrect predictions. Background noise + visual crowding (First Row in Figure C). This configuration combines acoustic and visual challenges, while the spoken language r...
https://arxiv.org/abs/2505.21954v1
arXiv:2505.21955v1 [cs.CV] 28 May 2025. Towards Comprehensive Scene Understanding: Integrating First and Third-Person Views for LVLMs. Insu Lee 1*, Wooje Park 1*, Jaeyun Jang 1, Minyoung Noh 1, Kyuhong Shim 2, Byonghyo Shim 1. 1 Seoul National University, 2 Sungkyunkwan University. {islee, wjpark, jyjang, mynoh, bshim}@islab.snu.ac.kr,...
https://arxiv.org/abs/2505.21955v1
a joint understanding of egocentric (first-person) and exocentric (third-person) views. In each scenario, the first question can be answered using only the egocentric view, while the subsequent two questions require integrating information from both views. Yellow and gray overlays indicate egocentric and exocentric vie...
https://arxiv.org/abs/2505.21955v1
the noticeable gain (8.93%) on numerical reasoning questions, demonstrating our method's effectiveness at integrating dual-view information. In summary, our contributions are as follows: • We build the ego-exo multi-view VQA benchmark, E3VQA, consisting of 4K rigorously curated question–answer pairs with synchronized eg...
https://arxiv.org/abs/2505.21955v1
) that target typical failure patterns. These patterns include relying solely on one image (ego or exo), ignoring visual input altogether, or failing to merge complementary information from both views. These carefully crafted distractors enable E3VQA to precisely evaluate a model’s ability to reason across ego–exo imag...
https://arxiv.org/abs/2505.21955v1
entire process, we utilize GPT-4o [13], a powerful off-the-shelf LVLM. Figure 3 illustrates the overview of the pipeline. Step 1: Single-View QA Generation We begin by generating QA pairs independently from either the ego or exo image, under the assumption that recent LVLMs are capable of understanding a single image....
https://arxiv.org/abs/2505.21955v1
3 M3CoT: Multi-Perspective Scene Understanding 3.1 Multi-Perspective Scene Graph Generation In our proposed ego-exo multi-image question answering scenario, we expect the LVLM to generate the most appropriate answer given a query $Q$ and a pair of ego and exo images $I = \{I_{ego}, I_{exo}\}$. To help the model understand ego and exo...
https://arxiv.org/abs/2505.21955v1
is selected from the response of F1. This iterative loop yields progressively richer scene representations and promotes convergence among the agents' answers. 4 Experimental Results 4.1 LVLM Performance on E3VQA To assess ego–exo multi-image reasoning capabilities, we evaluate five closed-source and nine open-source L...
https://arxiv.org/abs/2505.21955v1
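A rough Python sketch of such an iterative ego-exo refinement loop with answer voting, assuming a hypothetical `ask_lvlm` wrapper that sends a prompt, images, and optional context to the LVLM and returns text. The prompts, iteration count, and voting rule are simplified placeholders, not the exact M3CoT procedure.

```python
from collections import Counter

def m3cot_loop(ask_lvlm, ego_img, exo_img, question, num_iters=3):
    """Refine a shared scene graph across views and vote over per-iteration answers."""
    scene_graph = ask_lvlm("Generate an initial scene graph relevant to the question.",
                           images=[ego_img], question=question, context=None)
    answers = []
    for _ in range(num_iters):
        # Ego2Exo and Exo2Ego refinement with the complementary view.
        scene_graph = ask_lvlm("Refine the scene graph using this view.",
                               images=[exo_img], question=question, context=scene_graph)
        scene_graph = ask_lvlm("Refine the scene graph using this view.",
                               images=[ego_img], question=question, context=scene_graph)
        # Answer with the unified scene graph as context.
        answers.append(ask_lvlm("Answer the question using the scene graph.",
                                images=[ego_img, exo_img], question=question,
                                context=scene_graph))
    # Majority vote over the iterations' answers promotes consensus.
    return Counter(answers).most_common(1)[0][0]
```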
that require integrating clues from both egocentric and exocentric views, multi-view inputs improve accuracy compared to single-view setups; however, performance remains low, staying below 40%. For questions where each view alone contains all necessary information, providing both images yields marginal accuracy gain, i...
https://arxiv.org/abs/2505.21955v1
QA Generation Pipeline To examine how the source of distractors affects the question difficulty, we sample 160 questions (40 per category) and construct four alternative option sets. In each set, all four answer choices are drawn from a single source: text-only, ego view, exo view, or both views. This setup contrasts w...
https://arxiv.org/abs/2505.21955v1
either the ego or exo image; Ego subset questions require only the ego image; Exo subset questions require only the exo image; Both subset questions require both images. Table 3 shows that the Ego&Exo strategy achieves the largest accuracy gain in the Both subset, demonstrating its advantage in integrating complement...
https://arxiv.org/abs/2505.21955v1
Wu, Kechen Fang, Peng Li, Huaping Liu, and Yang Liu. Egothink: Evaluating first-person perspective thinking capability of vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14291–14302, 2024. [5] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, D...
https://arxiv.org/abs/2505.21955v1
Kristen Grauman. Ego-exo: Transferring visual representations from third-person to first-person videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6943–6953, 2021. [20] Kevin Qinghong Lin, Jinpeng Wang, Mattia Soldan, Michael Wray, Rui Yan, Eric Z Xu, Difei Gao, Rong-...
https://arxiv.org/abs/2505.21955v1
Team. Gemini: A family of highly capable multimodal models, 2024. [35] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, et al. Qwen2-vl: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191, 202...
https://arxiv.org/abs/2505.21955v1
Tang, Hong-Yu Zhou, and Sibei Yang. DDCoT: Duty-distinct chain-of-thought prompting for multimodal reasoning in language models. Advances in Neural Information Processing Systems, 36:5168–5191, 2023. [50] Jinguo Zhu, Weiyun Wang, Zhe Chen, Zhaoyang Liu, Shenglong Ye, Lixin Gu, Hao Tian, Yuchen Duan, Weijie Su, Jie Sha...
https://arxiv.org/abs/2505.21955v1
third-person cameras. MSVD-QA [40] and MSRVTT-QA [40] target general visual understanding through diverse question types, including what, how, when, where, and why. Pano-AVQA [44] evaluates spatial and audio-visual reasoning in panoramic 360° scenes, while Social IQ [45] focuses on social understanding by inferrin...
https://arxiv.org/abs/2505.21955v1
of view can cause the same object to appear at varying positions in each image, become occluded in some views, or exhibit different spatial relationships with surrounding objects. To overcome these challenges, the model must align positional information from multiple views to construct a coherent understanding of spati...
https://arxiv.org/abs/2505.21955v1
Model | Vision Encoder | LLM Backbone | Train w/ Ego Data
InternVL3-14B | InternViT-300M-448px-V2.5 | Qwen2.5-14B | Not Provided
Qwen2.5-VL-7B | ViT (customized) | Qwen2.5-7B | Not Provided
Qwen2-VL-7B | ViT-L | Qwen2-7B | ✗
LLaVA-NeXT-OneVision-7B | SigLIP-SO | Qwen2-7B | ✓
InternVL2-8B | InternViT-300M | Qwen2.5-7B | ✓
LLaVA-NeXT-Interleave-7B | SigLIP-SO | Qw...
https://arxiv.org/abs/2505.21955v1
increase in voting accuracy, reflecting not only the enhanced quality of individual predictions but also a stronger consensus across perspectives. However, beyond the second iteration, we find that both individual accuracy and voting accuracy plateau. We [plot residue: accuracy axis 61.5–66.5 over Iteration 0, Iteration 1, Iteration 2, Iter...]
https://arxiv.org/abs/2505.21955v1
an LVLM's ability to capture temporal cues and motion dynamics, an aspect we leave for future work. G Ethics Statement This work has the potential to positively impact society by enhancing the capabilities of visual assistants and embodied AI systems, particularly in scenarios that require comprehensive scene underst...
https://arxiv.org/abs/2505.21955v1
a strong capability to capture the information necessary for answering questions grounded in the exo view alone. Scene graph: [{objects: [{"name": "man", "attributes": ["in light green shirt", "sitting"], "relation": "holding", "target": "swab", "hand": "left"}, {"name": "timer", "relation": "on", "target": "table"}]...
https://arxiv.org/abs/2505.21955v1
Generation Prompt {Ego Image} You are given the visual input from the camera worn by the user (referred to as 'I'). Based on this visual input, generate three question-answer pairs. Ensure that the generated question-answer pairs are directly based on the visual in...
https://arxiv.org/abs/2505.21955v1
cific object (e.g., mug cup, laptop) or describing an attribute of an object (e.g., navy blue, striped pattern) associated with me. The answer must be a noun or noun phrase, avoiding overly generic responses such as something or object. Question Categories & Templates: Object Iden...
https://arxiv.org/abs/2505.21955v1
[another object]? Examples: Q: What object is closest to my left hand? A: Coffee cup Q: Which object is the farthest from me? A: Bookshelf Q: What object is on my right side? A: Tissue Figure 17: Egocentric single-view QA generation prompt: Spatial. Egocentric Single-View QA Generation Prompt: Nu...
https://arxiv.org/abs/2505.21955v1
the visual input. {Category-wise Prompt} Requirements: Each answer should be a single word or a short phrase. Ensure that all three question-answer pairs meet these criteria and are relevant to the visual input. Strictly adhere to the format of the provided examples. F...
https://arxiv.org/abs/2505.21955v1
a specific object in the scene (e.g., 'mug cup', 'laptop') or describing an attribute of an object (e.g., 'navy blue', 'striped pattern'). Questions should reference people or objects by descriptors (e.g., 'the woman in the white top', 'the man with the s...
https://arxiv.org/abs/2505.21955v1
. Question Categories & Templates: Object Proximity (What is closest or farthest?) - Which object is closest to the person wearing [specific item]? - Which object is the farthest from [reference point]? - What is the nearest object to [specific location or object]? Rel...
https://arxiv.org/abs/2505.21955v1
: Counting People (How many are there?) - How many people are in the scene? - How many individuals are facing the camera? Counting Objects (How many things are visible?) - How many objects is [person descriptor] holding? - How many items are on the table? Quantitative Comp...
https://arxiv.org/abs/2505.21955v1
. Based on the visual inputs, generate the best possible answer. {Category-wise Prompt} Requirements: Each answer should be a single word or a short phrase. Follow the provided format strictly. Q: {Question} Figure 26: View-specific response expansion prompt: Both views. View-Specific Response Exp...
https://arxiv.org/abs/2505.21955v1
ted elements. All numerical answers must be within the range of 0 to 5. Output format: Q: How many people are in the image excluding me? A: 3 Q: How many people are in the scene? A: 3 Figure 31: View-specific response expansion prompt: Numerical. Response-Based Question Filtering Prompt 1 Here is the q...
https://arxiv.org/abs/2505.21955v1
Figure 35: Option generation prompt: Exo. System Prompt & Question (Instruction) Prompt System Prompt You are a helpful assistant. You are provided with two visual inputs in sequence, each captured from a different perspective: 1. The view from the camera worn by the user ('I'). 2...
https://arxiv.org/abs/2505.21955v1
scene graph in JSON format as follows: 1. Review and Update Existing Objects and Relationships: Examine the objects and relationships in the initial scene graph. Update their attributes or positions based on observations from both views. Remove only elements th...
https://arxiv.org/abs/2505.21955v1
s that are relevant to answering the question. Just generate the scene graph in JSON format. Do not say extra words. {Ego Image} {Question Prompt} Scene graph refinement phase (Exo2Ego) Task: For the provided image from a different view and the scene graph generated from the previous view, refine...
https://arxiv.org/abs/2505.21955v1
generate a unified scene graph in JSON format that includes the following: 1. Objects that are relevant to answering the question. 2. Object attributes that are relevant to answering the question. 3. Object relationships that are relevant to answering the question. ...
https://arxiv.org/abs/2505.21955v1
the unified scene graph as context and answer the following question: {Ego Image} {Exo Image} {Question Prompt} {Assistant's response (Unified SG)} Figure 40: M3CoT prompt (4). Other CoT Prompts - DDCoT For the provided images and their associated question, think step-by-step about the pre...
https://arxiv.org/abs/2505.21955v1
arXiv:2505.21956v1 [cs.CV] 28 May 2025. Cross-modal RAG: Sub-dimensional Retrieval-Augmented Text-to-Image Generation. Mengdan Zhu 1, Senhao Cheng 2, Guangji Bai 1, Yifei Zhang 1, Liang Zhao 1. 1 Department of Computer Science, Emory University; 2 Department of Electrical Engineering & Computer Science, University of Michigan. Abstract...
https://arxiv.org/abs/2505.21956v1
or even missed (e.g., “third-generation Labubu”), leading to distortion in the missed aspects. Also, during generation, existing RAG methods are not precisely instructed about which aspects of each image should be leveraged, resulting in the superfluous lightning in the image generated by previous RAG. Therefore, instea...
https://arxiv.org/abs/2505.21956v1
23], have significantly advanced the capabilities of T2I-G. Notable examples include the DALL-E series [20, 24], the Imagen series [2], and the Stable Diffusion (SD) series [1, 25, 26]. More recently, image generation functionalities have been integrated directly into advanced MLLMs such as GPT Image [4] and Gemini 2....
https://arxiv.org/abs/2505.21956v1
effective matching against subqueries. 2.3 Retrieval-Augmented Generation Retrieval-Augmented Generation has demonstrated significant progress in improving factuality for both natural language generation [30, 31] and image generation [11, 32]. Most RAG-based approaches for image generation are built upon diffusion mode...
https://arxiv.org/abs/2505.21956v1
to a shared multimodal embedding space, followed by layer normalization. The output $v_{ji}$ represents $I_j$'s $i$-th-dimensional vision embedding corresponding to the subquery $q_i$, which is decomposed from $Q$ and can be obtained by an off-the-shelf LLM (e.g., GPT-4o mini) using the structured prompt in Appendix A. The subquery embed...
https://arxiv.org/abs/2505.21956v1
$\sum_{i=1}^{n}\alpha_i s_i(I_j) + \beta \cdot n_S(Q, I_j)$, s.t. $\forall \alpha_i: \alpha_i > 0$, $\sum_{i=1}^{n}\alpha_i = 1$, $\beta \in (0, \beta_{\max})$. Definition 3.4 (Pareto Front of the Pareto Optimal Images). $P$ is sometimes referred to as the Pareto set in the decision space (here, the set of images). The Pareto front $P_f$ of the Pareto optimal images is the corresponding set of non-dominated tuples in the objec...
https://arxiv.org/abs/2505.21956v1
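To illustrate the scalarized objective and the Pareto-set notion in Definition 3.4, here is a small Python sketch; the dominance test and weighted score are generic implementations of these standard concepts, not the paper's code.

```python
import numpy as np

def pareto_optimal_images(scores):
    """Indices of non-dominated images, where scores[j, i] = s_i(I_j)."""
    keep = []
    for j, s_j in enumerate(scores):
        dominated = any(np.all(s_k >= s_j) and np.any(s_k > s_j)
                        for k, s_k in enumerate(scores) if k != j)
        if not dominated:
            keep.append(j)
    return keep

def weighted_score(s_j, alpha, beta, n_s):
    """Scalarization sum_i alpha_i * s_i(I_j) + beta * n_S(Q, I_j),
    with alpha on the simplex and beta in (0, beta_max)."""
    return float(np.dot(alpha, s_j) + beta * n_s)
```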
$I^*_j$], the MLLM learns to preserve the relevant subquery features that each retrieved image $I^*_j$ contributes. 4 Experiments 4.1 Experiment Setup Baselines and Evaluation Metrics We compare our proposed method with several baselines on text-to-image retrieval and text-to-image generation. • Text-to-Image Retrieval Baselin...
https://arxiv.org/abs/2505.21956v1
dense retriever is composed of a pretrained CLIP vision encoder (ViT-L/14) and an adaptor. We train the sub-dimensional dense retriever on the COCO training set using the InfoNCE loss with a temperature of 0.07. The adaptor is optimized using the Adam optimizer with an initial learning rate of 5e-5, and a StepLR schedu...
https://arxiv.org/abs/2505.21956v1
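A hedged PyTorch sketch of the retriever training setup described above: a symmetric InfoNCE loss with temperature 0.07 and Adam at learning rate 5e-5 with a StepLR schedule. The adaptor shape and the StepLR step size and gamma are assumptions, since the excerpt truncates before those details.

```python
import torch
import torch.nn.functional as F

def info_nce(query_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matched (subquery, image) pairs."""
    q = F.normalize(query_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = q @ v.t() / temperature                      # [B, B] similarity matrix
    targets = torch.arange(q.size(0), device=q.device)    # matched pairs on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Stand-in adaptor on top of the frozen CLIP ViT-L/14 encoder (dimensions assumed).
adaptor = torch.nn.Linear(768, 768)
optimizer = torch.optim.Adam(adaptor.parameters(), lr=5e-5)
# StepLR step size and gamma are assumptions; the excerpt does not report them.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
```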
specific visual details in the retrieved images to facilitate generation. On ImageNet-LT, Cross-modal RAG improves CLIP similarity by 22%, DINO by 89%, and SigLIP by 24% over the second-best SDXL. This indicates that our retrieval method can retrieve images that best match the query and only use the relevant entity for...
https://arxiv.org/abs/2505.21956v1
results show Cross-modal RAG's efficiency and scalability for large-scale text-to-image retrieval tasks without compromising effectiveness. 4.6 Ablation Study Ablation Study on Subquery Decomposition Figure 4: Ablation study on subquery decomposition on WikiArt and CUB. We evaluate retrieval performance without su...
https://arxiv.org/abs/2505.21956v1
Jiahui Lu, Andrew Choi, Qirui Ye, and Liang Zhao. LatentExplainer: Explaining latent representations in deep generative models with multi-modal foundation models. arXiv preprint arXiv:2406.14862, 2024. [8] Guangji Bai, Zheng Chai, Chen Ling, Shiyu Wang, Jiaying Lu, Nan Zhang, Tingwei Shi, Ziyang Yu, Mengdan Zhu, Yife...
https://arxiv.org/abs/2505.21956v1
Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. [24] James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dal...
https://arxiv.org/abs/2505.21956v1
Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L Caterini, Eric Taylor, and Gabriel Loaiza-Ganem. Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models. Advances in Neural Information Processing Systems, 36:3732–3784, 2023. [40] Leon A Gatys, Alexander S Ecker, and Matth...
https://arxiv.org/abs/2505.21956v1
D. We can score each image's sparse textual match in $O(N)$. We discard images that do not satisfy any subquery, leaving a reduced set $\tilde{D} \subseteq D$ of size $\tilde{N}$. We then discretize the simplex of subquery weights $\alpha$ into $K$ possible combinations. Each combination requires checking $\sum_i \alpha_i s_i(I_j)$ in $O(\tilde{N})$ time, thus $O(K \times \tilde{N})$ in total. Each adapt...
https://arxiv.org/abs/2505.21956v1
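The complexity argument above can be mirrored in a short Python sketch: enumerate K candidate weight vectors alpha on the simplex, then score the reduced image set once per candidate, for O(K × Ñ) work overall. The discretization granularity is an illustrative choice, not the paper's setting.

```python
import itertools
import numpy as np

def discretize_simplex(n, steps=5):
    """Weight vectors with positive entries summing to 1: compositions of
    `steps` into n positive parts (assumes 1 <= n <= steps)."""
    alphas = []
    for cuts in itertools.combinations(range(1, steps), n - 1):
        parts = np.diff((0,) + cuts + (steps,))
        alphas.append(parts / steps)
    return alphas

def best_weighted_images(scores, alphas, top_k=1):
    """Score the reduced set once per alpha (O(Ñ) each), O(K x Ñ) in total."""
    picks = set()
    for alpha in alphas:
        totals = scores @ alpha                      # one pass over the Ñ images
        picks.update(np.argsort(-totals)[:top_k].tolist())
    return sorted(picks)
```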
distinct test samples. The query for each test sample is formatted as: Draw a <speciesName>. <caption>. For each test sample, the retrieval candidates consist of all remaining images in the CUB dataset, excluding that test image. The ImageNet-LT dataset [45] is a long-tailed version of the original ImageNet dataset. I...
https://arxiv.org/abs/2505.21956v1
bird with black wings and a small black beak. [Figure 6 annotations: satisfied subqueries per retrieved image, e.g. 1. Vermilion Flycatcher, 2. small red bird, 3. black wings, 4. small black beak] Figure 6: Visualizations on CUB compared with other baselines. User Query | Pareto Optimal Images with Satisfie...
https://arxiv.org/abs/2505.21956v1
LaMDAgent: An Autonomous Framework for Post-Training Pipeline Optimization via LLM Agents. Taro Yano, NEC Corporation, taro_yano@nec.com; Yoichi Ishibashi, NEC Corporation, yoichi-ishibashi@nec.com; Masafumi Oyamada, NEC Corporation, oyamada@nec.com. Abstract. Large Language Models (LLMs) have demonstrated exceptional performance...
https://arxiv.org/abs/2505.21963v1
pipelines using LLM-based agents and continuously improves them based on feedback from the generated model's performance on target tasks. LaMDAgent treats heterogeneous model-improvement methods such as supervised fine-tuning, preference learning, or model merging in a unified manner and automates end-to-end post-tra...
https://arxiv.org/abs/2505.21963v1
data, then possible actions can be enumerated as (Gemma2 2B, GSM8k) and (Gemma2 2B, MATH). 2.3 Action Selection We use the agent to select one promising model improvement action from the possible actions. During action selection, we provide the agent with a prompt for action selection and parse its output to determine th...
https://arxiv.org/abs/2505.21963v1
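A minimal Python sketch of enumerating (model, dataset) actions and parsing the agent's choice, in the spirit of the action-selection step described above. The `llm_complete` wrapper, prompt wording, and parsing rule are hypothetical illustrations, not LaMDAgent's actual templates.

```python
import itertools
import re

def enumerate_actions(models, datasets):
    """All (model, dataset) improvement actions, e.g. ("Gemma2 2B", "GSM8k")."""
    return list(itertools.product(models, datasets))

def select_action(llm_complete, actions, memory):
    """Ask the agent to pick one action and parse the numeric choice from its reply."""
    listing = "\n".join(f"{i}: fine-tune {m} on {d}" for i, (m, d) in enumerate(actions))
    prompt = (f"Past experience:\n{memory}\n\nCandidate actions:\n{listing}\n\n"
              "Reply with the NUMBER of the most promising action.")
    match = re.search(r"\d+", llm_complete(prompt))
    idx = int(match.group()) if match else 0
    return actions[idx] if idx < len(actions) else actions[0]
```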
metrics range up to 10, while AceBench metrics reach 1, so we set the weights for each score as $\alpha_{MT} = 1/10$, $\alpha_{Ace} = 1$. 2.5 Memory Update We update the agent's memories based on the feedback received for the selected action. A memory is a text summarizing experiences from the latest and past trials, and next promising d...
https://arxiv.org/abs/2505.21963v1
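Sketch of the weighted score aggregation implied by the weights above, which rescales MT-Bench's 0-10 range to be comparable with AceBench's 0-1 range; a simple illustration rather than the paper's evaluation code.

```python
def combined_score(mt_bench, ace_bench, alpha_mt=1 / 10, alpha_ace=1.0):
    """Weighted sum of task scores with alpha_MT = 1/10 and alpha_Ace = 1."""
    return alpha_mt * mt_bench + alpha_ace * ace_bench
```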
the same as CQA, and NQ uses the same as TriviaQA. For compared methods, in addition to the GSM8k, CQA, and TriviaQA specialists, we use TIES (Grid Search), which optimizes the weights of the three specialists through grid search, and Fully Fine-Tuned, which is trained on all available training data. To evaluate the ...
https://arxiv.org/abs/2505.21963v1
0.674 (0.730) | 0.508 (0.556)
Policy=LLM, Actions=(TIES) | 0.032 (0.030) | 0.588 (0.670) | 0.575 (0.670) | 0.398 (0.456)
data, outperformed TIES (Grid Search), which optimizes model merging weights, on all in-distribution tasks and 4 out of 5 out-of-distribution tasks, showing a 7.6 point higher average accuracy. Agent-based...
https://arxiv.org/abs/2505.21963v1
r10r5u7, Toolbench tflan cot 30p, Agent instruct react, Agent instruct tflan, Toolbench instruct j1s1 3k, and Toolbench negative), ToolACE (Liu et al., 2024b), and general instruction-following data of Wiz- [footnotes: 3 https://huggingface.co/datasets/internlm/Agent-FLAN; 4 https://huggingface.co/datasets/Team-ACE/ToolACE] Figure...
https://arxiv.org/abs/2505.21963v1
not match. The score difference between Fully Fine-Tuned and LaMDAgent in Experiment 1 was smaller than in Experiment 2, indicating that LaMDAgent provides greater benefits in Experiment 2. This is because when training and target distributions are the same, simply minimizing the loss function on target tasks can...
https://arxiv.org/abs/2505.21963v1
of transferred pipelines on Gemma2 9B, suggesting that although some performance gaps are maintained, they sometimes diminish with model size scaling.
Method | Top-1 | Top-50 | Top-80 | Top-90 | Top-100
2B-based | 0.603 | 0.573 | 0.553 | 0.546 | 0.297
9B-based | 0.797 | 0.803 | 0.783 | 0.783 | 0.200
in experiment 1. The results are shown in Tab...
https://arxiv.org/abs/2505.21963v1
that automates both training and merging through LLM agents to construct optimal pipelines. 7 Conclusion In this work, we propose LaMDAgent, an automated framework for constructing post-training pipelines via LLM-based agents. Empirical results across two experimental settings demonstrate that LaMDAgent substantially...
https://arxiv.org/abs/2505.21963v1
Lian, Baoqun Yin, Yasheng Wang, and Wu Liu. 2025. AceBench: Who wins the match point in tool usage? Preprint, arXiv:2501.12851. Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, and Christopher Ré. 2023. Skill-it! A data-driven skills framework for understanding and training language mod...
https://arxiv.org/abs/2505.21963v1
Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru...
https://arxiv.org/abs/2505.21963v1
September 29-October 4, 2024, Proceedings, Part XLIV, volume 15102 of Lecture Notes in Computer Science, pages 207–223. Springer. Young Kyun Jang, Dat Huynh, Ashish Shah, Wen-Kai Chen, and Ser-Nam Lim. 2024b. Spherical linear interpolation and text-anchoring for zero-shot composed image retrieval. In Computer Visi...
https://arxiv.org/abs/2505.21963v1
for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12-16, 2024, pages 15604–15621. Association for Computational Linguistics. Rémi Munos, Michal Valko, Daniele Calandriello, Mohammad Gheshlaghi Azar, Mark Rowland, Zhaohan Daniel Guo, Yunhao Tang, Matthieu Geist, Thomas Mesnard, Côme Fiegel, An...
https://arxiv.org/abs/2505.21963v1
models: a survey. Frontiers Comput. Sci., 19(8):198343. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In Advances in Neural Information Processing Systems 36: Annual Confe...
https://arxiv.org/abs/2505.21963v1
Yihan Cao, Lichao Sun, Pan Zhou, Lifang He, Hechang Chen, Yu Zhang, Qingsong Wen, Tianming Liu, Neil Zhenqiang Gong, Jiliang Tang, Caiming Xiong, Heng Ji, Philip S. Yu, and Jianfeng Gao. 2025. A survey on post-training of large language models. CoRR, abs/2503.06072. Ojasw Upadhyay, Abishek Saravanakumar, and Ayman I...
https://arxiv.org/abs/2505.21963v1
lunch. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net. Yangyang Zhao, Zhenyu Wang, and Zhenhua Huang. 2021. Automatic curriculum learning with over-repetition penalty for dialogue policy learning. In Thirty-Fifth AAAI Conference on Artificial I...
https://arxiv.org/abs/2505.21963v1
model at step n is named in the format 0--n--k. Since such models also have promising potential, please include them in the search scope. Self-Reflections: <reflection> Object Candidates: <object_cands> Selected Object NUMBERs: Figure 9: Prompt template to select objects. Prompt template to update memory You a...
https://arxiv.org/abs/2505.21963v1