must have wings (Step 6), and it must be a bird (Step 4): this step incorrectly utilizes the rule fly -> birds to derive that Rock must be a bird.

Step 8: Given the disjunction in Step 7, the only consistent option is that Rock is a bird, because if Rock can fly, it must be a bird, and if Rock cannot breathe, it contradicts the premises. Therefore, Rock is a bird.

Step 9: Since Rock is a bird, and all of Peter's pets are either monkeys or birds (Step 3), Rock cannot be a monkey. <end_of_nl_cot>

<answer> The final answer is (B). <end_of_answer>

<truth_table>
Let:
M: Rock is a monkey
B: Rock is a bird
F: Rock can fly
A: Rock is an animal
W: Rock has wings
R: Rock can breathe

Premises:
1. All of Peter's pets that are mammals are also animals. (Not directly about Rock)
2. All monkeys are mammals. (Not directly about Rock)
3. Peter's pets are all either monkeys or birds. (Not directly about Rock)
4. Peter's birds can fly. (B → F)
5. All animals that are Peter's pets can breathe. (A → R)
6. If Peter's pet can fly, then it has wings. (F → W)
7. Rock is Peter's pet.
8. Rock can fly, or Rock is a bird, or Rock cannot breathe. (F ∨ B ∨ ¬R)

Conclusion to Evaluate: Rock is a monkey (M)

⇒ Correct. The model correctly produces two cases that lead to different conclusions, i.e., True and False. Therefore, it finally produces Uncertain as the final answer.

Truth Table (Key Possibilities):
| M (Monkey) | B (Bird) | F (Can Fly) | A (Animal) | W (Wings) | R (Can Breathe) |
| True | False | True | True | True | True |
| False | True | True | True | True | True |
<end_of_truth_table>

<answer> The final answer is (C).
</end_of_answer>

<code>
from typing import Optional

class Pet:
    def __init__(self, is_monkey: Optional[bool] = None, is_bird: Optional[bool] = None,
                 can_fly: Optional[bool] = None, has_wings: Optional[bool] = None,
                 is_animal: Optional[bool] = None):
        self.is_monkey = is_monkey
        self.is_bird = is_bird
        self.can_fly = can_fly
        self.has_wings = has_wings
        self.is_animal = is_animal

def apply_premises(pet: Pet) -> bool:
    changed = False
    # Premise 1: All of Peter's pets that are mammals are also animals.
    # Since all of Peter's pets are either monkeys or birds, and monkeys are mammals,
    # this premise does not directly affect the state of the pet.
    # Premise 2: All monkeys are mammals.
    if pet.is_monkey is True and pet.is_animal is not True:
        pet.is_animal = True
        changed = True
    # Premise 3: Peter's pets are all either monkeys or birds.
    if pet.is_monkey is None and pet.is_bird is None:
        pet.is_monkey = False
        pet.is_bird = False
        changed = True
    # Premise 4: Peter's birds can fly.
    if pet.is_bird is True and pet.can_fly is not True:
        pet.can_fly = True
        changed = True
    # Premise 5: All animals that are Peter's pets can breathe.
    if pet.is_animal is True and pet.can_fly is not True:
        pet.can_fly = True
        changed = True
    # Premise 6: If Peter's pet can fly, then it has wings.
    if pet.can_fly is True and pet.has_wings is not True:
        pet.has_wings = True
        changed = True
    # Premise 7: Rock is Peter's pet.
    # This does not change the state of the pet but indicates that the pet is Rock.
    # Premise 8: Rock can fly, or Rock is a bird, or Rock cannot breathe.
    if pet.name == "Rock":
        if pet.can_fly is None:
            pet.can_fly = True
            changed = True
        if pet.is_bird is None:
            pet.is_bird = True
            changed = True
        if pet.can_fly is True and pet.has_wings is not True:
            pet.has_wings = True
            changed = True
    return changed

⇒ Incorrect. The model struggles to correctly interpret premises that involve multiple possible branches of reasoning, such as disjunctions (∨). It often commits prematurely to one or more branches instead of preserving logical uncertainty.

def run_inference(pet: Pet):
    while apply_premises(pet):
        pass

def check_conclusion(pet: Pet) -> str:
    run_inference(pet)
    if pet.name == "Rock" and pet.is_monkey is True:
        return "True"
    else:
        return "False"

def func():
    pet = Pet(is_monkey=None, is_bird=None, can_fly=None, has_wings=None, is_animal=None)
    pet.name = "Rock"
    return check_conclusion(pet)

if __name__ == '__main__':
    result = func()
    print("Conclusion: Rock is a monkey?", result)
<end_of_code>

<answer> The final answer is (B). </answer>

https://arxiv.org/abs/2505.15817v1
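The truth-table verdict above (Uncertain, because both M = True and M = False admit consistent rows) can be checked mechanically by enumerating all boolean assignments that satisfy the premises. A minimal sketch; the boolean encoding of the premises is our own illustration, not part of the original model output:

```python
from itertools import product

def satisfies_premises(M, B, F, A, W, R):
    """Encode the premises about Rock as boolean constraints."""
    return all([
        (not M) or A,       # monkeys are mammals, and mammal pets are animals: M -> A
        M or B,             # Peter's pets are all either monkeys or birds
        (not B) or F,       # Peter's birds can fly: B -> F
        (not A) or R,       # animal pets can breathe: A -> R
        (not F) or W,       # flying pets have wings: F -> W
        F or B or (not R),  # Rock can fly, or is a bird, or cannot breathe
    ])

# Collect every truth value of M (Rock is a monkey) consistent with the premises.
possible_M = {M for M, B, F, A, W, R in product([True, False], repeat=6)
              if satisfies_premises(M, B, F, A, W, R)}
verdict = "Uncertain" if possible_M == {True, False} else str(possible_M.pop())
print(verdict)
```

Both truth values of M survive the enumeration, matching the two rows of the truth table, so the conclusion is Uncertain, i.e., answer (C).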
InfoDeepSeek: Benchmarking Agentic Information Seeking for Retrieval-Augmented Generation

Yunjia Xi1, Jianghao Lin1∗, Menghui Zhu2, Yongzhao Xiao1, Zhuoying Ou1, Jiaqi Liu1, Tong Wan1, Bo Chen2, Weiwen Liu1, Yasheng Wang2, Ruiming Tang2, Weinan Zhang1, Yong Yu1
1Shanghai Jiao Tong University, 2Huawei Noah's Ark Lab
{xiyunjia,chiangel,wnzhang}@sjtu.edu.cn
https://infodeepseek.github.io/

Abstract

Retrieval-Augmented Generation (RAG) enhances large language models (LLMs) by grounding responses with retrieved information. As an emerging paradigm, Agentic RAG further enhances this process by introducing autonomous LLM agents into information seeking. However, existing benchmarks fall short in evaluating such systems, as they are confined to a static retrieval environment with a fixed, limited corpus and simple queries that fail to elicit agentic behavior. Moreover, their evaluation protocols assess information seeking effectiveness via pre-defined gold sets of documents, making them unsuitable for the open-ended and dynamic nature of real-world web environments. To bridge this gap, we present InfoDeepSeek, a new benchmark with challenging questions designed for assessing agentic information seeking in real-world, dynamic web environments. We propose a systematic methodology for constructing challenging queries that satisfy the criteria of determinacy, difficulty, and diversity. Based on this, we develop the first evaluation framework tailored to dynamic agentic information seeking, including fine-grained metrics for the accuracy, utility, and compactness of information seeking outcomes. Through extensive experiments across LLMs, search engines, and question types, InfoDeepSeek reveals nuanced agent behaviors and offers actionable insights for future research.
1 Introduction

Despite remarkable capabilities across various domains [25,11,23,19,24,46,50,53,59,56,55], large language models (LLMs) still suffer from factual hallucinations [16,28], outdated knowledge [12], and limited access to real-time information [40]. To address these challenges, Retrieval-Augmented Generation (RAG) [4,5,62] has emerged as a promising solution, enabling LLMs to enhance their responses with retrieved external information. RAG typically consists of three stages: retrieval, augmentation, and generation [42,5]. The first two stages – retrieving relevant documents and selecting useful evidence – constitute the information seeking process. While traditional RAG systems rely on static workflows, recent advancements in Agentic RAG [42,39,37,14] integrate autonomous LLM agents into the RAG pipeline, allowing for dynamic planning, search, and reflection to support more flexible and robust evidence acquisition. This paradigm has already been integrated into real-world systems, including the Deep Research features in OpenAI [33], Gemini [9], and Perplexity [44], where agents iteratively search and synthesize information from the live web.

The introduction of the agent primarily transforms the information seeking process of RAG, while the generation step remains largely unchanged, i.e., responding based on the external information.

∗Corresponding author.

Preprint. arXiv:2505.15872v2 [cs.IR] 23 May 2025

[Figure 1: Comparison between the traditional RAG benchmark (top: simple queries over a fixed corpus, static evaluation with a golden document set via NDCG and recall) and our InfoDeepSeek (bottom: challenging queries over the dynamic web, iterative information seeking by an agent, dynamic evaluation without a golden document set via answer accuracy and top-k information quality).]

Consequently, a core goal in evaluating
Agentic RAG should be to assess the effectiveness of agentic information seeking. Rigorous benchmarking and evaluation are essential to quantify these improvements, identify potential weaknesses, and guide the development of more capable agentic systems. However, existing RAG benchmarks are inadequate for this purpose, as shown in Figure 1. Firstly, most benchmarks are constrained to static environments [58,2,27,43] with a fixed, limited corpus. Such setups fail to reflect the scale and dynamics of real-world web environments, characterized by massive document volume, content drift, URL decay, and frequent fluctuations in search engine results. As a result, these benchmarks misalign with the operational complexity that Agentic RAG systems must manage in deployment. Moreover, static benchmarks rely on pre-defined ground-truth documents and traditional metrics such as NDCG [58]. In contrast, the open-ended nature of the web makes it difficult to determine a gold evidence set in advance, rendering such metrics inapplicable. This presents a significant challenge for evaluating the quality of information seeking in dynamic environments. Secondly, existing benchmarks often fall short in terms of question complexity. Many of their queries are relatively simple and can be answered directly by LLMs with parametric knowledge or a single-turn search [48,20,43]. Such questions fail to elicit core agentic behaviors, e.g., planning, multi-turn tool use, and reasoning over multiple pieces of evidence, so they cannot meaningfully evaluate the effectiveness of agentic information seeking.

To address the above limitations, we propose InfoDeepSeek, a benchmark with challenging questions and novel evaluation metrics tailored for agentic information seeking in real-world web environments. First, we introduce a set of criteria and a systematic methodology for constructing challenging queries aimed at evaluating agentic information seeking.
We manually curate and validate 245 high-quality questions, each carefully designed to exhibit the following properties:

• Determinacy: Each question has a clear, unique, and temporally stable answer.
• Difficulty: The questions are intentionally challenging for LLMs, even with single-turn web search. This highlights the need for multi-turn agentic information seeking capabilities.
• Diversity: Questions cover various domains, predominant languages, and attributes, i.e., multi-hop, long-tail, freshness, time-sensitive, distracting information, and false premises.

Building on this, we develop an agentic information seeking system that integrates multiple search and browsing tools in live web environments. Facing such a noisy and dynamic environment, we propose a set of fine-grained evaluation metrics and protocols to dynamically assess the effectiveness of information seeking. Our evaluation metrics include answer accuracy, information accuracy, information compactness, and effective evidence utilization, offering a comprehensive view of the agent's information seeking ability. We further conduct empirical evaluations across multiple dimensions, including different LLMs, search engines, and question types, revealing agents' behaviors under complex and dynamic environments. Our key contributions are as follows:

• We introduce a set of criteria and a systematic methodology for constructing challenging queries and present a new benchmark, InfoDeepSeek, for evaluating agentic information seeking in real-world settings. We believe these principles and methodologies are transferable and can benefit the research community of benchmarking AI agents for RAG.
• We propose an Agentic RAG framework coupled with the first fine-grained
evaluation metrics and protocols that assess information seeking effectiveness in dynamic environments.
• We provide a comprehensive comparison of agents under different LLMs, search engines, and question types, identifying their limitations and outlining directions for future research.

2 Related Work

Agentic RAG. RAG has emerged as a key technique for enhancing the factual accuracy and timeliness of LLMs [4,5,62,22,38,52,49,63]. To overcome the limitations of traditional RAG systems – which rely on static workflows and often struggle with complex tasks [42] – the Agentic RAG paradigm has introduced agents into the RAG pipeline [42,39,37,14,21,61]. These agents enable multi-turn, in-depth, and dynamic information seeking, enhancing the system's performance and adaptability in complex scenarios. Notably, this paradigm has begun to see increasing adoption in practical applications [33,9,44], e.g., Deep Research from OpenAI [33], Gemini [9], and Perplexity [44], all employing agents to support users in completing multi-step information seeking tasks.

Table 1: Comparison of RAG benchmarks in factual QA. Diff. Filt. means difficulty filtering (removing questions solvable by humans or LLMs through a single-round search). Dyna. Eval. means evaluating information seeking in dynamic environments. Symbol ✗ signifies that an attribute is lacking or not explicitly considered.

| Benchmark | Real World | Dyna. Eval. | Diff. Filt. | Multi-Hop | Long-Tail | Freshness | Time-Sensitive | Distracting Info. | False Premise |
| NQ [20] | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ |
| MultiHop-RAG [43] | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗ | ✗ |
| FreshLLM [48] | ✓ | ✗ | ✗ | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ |
| RGB [2] | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ | ✗ |
| CRAG [2] | ✗ | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗ | ✓ |
| BrowseComp [51] | ✓ | ✗ | Human | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| BrowseComp-ZH [64] | ✓ | ✗ | Human | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| InfoDeepSeek (Ours) | ✓ | ✓ | LLMs | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |

RAG Benchmarks. Early RAG research relies on QA benchmarks, e.g., NQ [20], TriviaQA [18], and MS MARCO [30], for evaluation.
With the rapid advancement of LLMs' knowledge, recent RAG benchmarks for factual QA have begun to shift focus toward more challenging scenarios and tasks, e.g., multi-source information [2,58], noise [2], multi-hop reasoning [43,15,47], long-tail knowledge [58,13], long documents [36], and temporally evolving answers [48,2]. Nevertheless, as illustrated in Table 1, most benchmarks still rely on static environments with limited corpora or limited question complexity and diversity [58,2,43,20]. In contrast, our work focuses on evaluating agents' information seeking abilities in dynamic, real-world settings, with challenging questions.

The evaluation of RAG involves information seeking and generation stages [38]. Most benchmarks assess generation quality, i.e., answer accuracy [51,58,2,54,48,43,64]. Some works evaluate information seeking quality, but they all employ retrieval metrics in static settings with pre-defined ground-truth documents [43,27,38], which is not applicable in dynamic environments without fixed ground-truth documents. Thus, we propose a new evaluation framework for information seeking quality in dynamic settings, incorporating dimensions like relevance, utility, and compactness.

3 Problem Formulation and Agentic RAG Framework

Given a user query q ∈ Q, the
goal of Agentic RAG is to acquire a set of evidence C = {c_1, c_2, ..., c_{n_q}} of length n_q by iteratively searching and browsing within an open environment, and to generate a response ŷ_q that closely approximates the ground-truth answer y_q. Following the three-stage framework of RAG [42], i.e., retrieval, augmentation, and generation, we implement an Agentic RAG system tailored for real-world web environments. Note that we mainly focus on benchmarking the information seeking process (i.e., the retrieval and augmentation stages), as it is the primary component transformed by the introduction of LLM agents into the RAG pipeline.

Retrieval Stage. Upon receiving the input query q, the agent initiates a planning process π_0 = Plan(q) about how to seek information from the web. The agent then launches an information seeking trajectory of up to T steps. At each step t, the agent reflects on the current observation o_t and its memory (i.e., the previous trajectory) h_t, and updates its plan π_{t+1} = Reflect(o_t, h_t, π_t). Based on the plan, it selects tools (e.g., search engines, browser, time-related utilities, or termination) and performs an action that yields the next observation: a_{t+1} = Act(π_{t+1}) → o_{t+1}, e.g., the information from the web. We support mainstream search engines such as Google, Bing, Yahoo, and DuckDuckGo, as well as Selenium-based web browsing. This information seeking loop continues until the agent has sufficient information to terminate or hits the step limit T. This stage generates a sequence of observations O = {o_1, o_2, ..., o_T}, representing retrieved contents from the web.

Augmentation Stage. Given the potential volume and noise of the content retrieved in the previous stage, the agent performs content filtering and distillation. It selects and summarizes the most relevant documents, yielding a focused set of evidence C = SelectRelevant(q, O). The agent determines the size n_q of the set C and sorts the evidence in C by importance. We only stipulate that n_q does not exceed a maximum number n, usually n = 5, following previous works [34].

Generation Stage. Finally, the agent generates a response ŷ_q based on the curated content C and query q, i.e., ŷ_q = Generate(q, C). More details about our framework are provided in Appendix A.

Table 2: Different question attributes and their ratios in our benchmark.

| Attribute | Definition | Ratio (%) |
| Multi-hop | Questions requiring chaining multiple pieces of information to compose answers (e.g., Who directed Anne Hathaway's second film?). | 76.73 |
| Long-tail | Questions focusing on obscure facts or entities that are hard to find on the web, e.g., a person or event about which little information is available. | 77.14 |
| Time-Sensitive | Questions involving temporal constraints with implicit/explicit time anchors (e.g., Who was the British Prime Minister in 2013?). | 66.12 |
| Freshness | Questions about recent (post-2025) events requiring real-time retrieval (e.g., What is the 2025 Grammy Award for Best Album?). | 19.59 |
| Distracting Information | Search results contain significant noise, such as name ambiguity or misleading/false content (e.g., fake news). | 31.02 |
| False Premise | Questions with incorrect assumptions, e.g., How is the champion of plain high diving at the 9th Olympics? (No such event at the 9th Olympics) | 10.61 |

4 Dataset Construction

This section outlines the criteria and methodology we use to construct a challenging dataset for evaluating an agent's information-seeking abilities. See
Appendix B for more details.

4.1 Criteria for Query

Determinacy and Verifiability. Unlike static RAG settings with a fixed corpus and information, real-world environments contain constantly changing information. Questions in this context must therefore preserve stability and verifiability to allow consistent and reliable evaluation. Thus, we collect factual questions with a clear, unambiguous, and time-invariant answer that can be verified through publicly available web sources. This ensures robust evaluation even in dynamic environments.

Difficulty. If a question can be solved with LLMs' internal knowledge or by LLMs with a one-turn search, it fails to activate the real abilities of agents. Hence, we focus on questions that LLMs cannot answer with a single-turn search. To enforce this constraint, we apply difficulty filtering and exclude questions that mainstream LLMs (e.g., GPT-4o [17] and DeepSeek-R1 [10]) can already answer correctly with a single-turn search. Furthermore, we incorporate various difficulty attributes and present their definitions and ratios in our benchmark in Table 2. Note that a question can contain multiple attributes, so the sum of their ratios is not 1.

Diversity in Attributes, Domains, and Predominant Languages. Each query is constructed to capture a combination of at least two of the attributes in Table 2, ensuring coverage of real-world information seeking challenges. We also ensure domain diversity, including but not limited to sports, politics, science, history, geography, music, literature, art, film, gaming, and news. Besides, we consider the predominant language, i.e., cases where accurate information is more readily available in a particular language. While all questions in our dataset are provided in both English and Chinese, we include queries whose answers are primarily documented in other languages such as Japanese, French, Korean, Italian, or Icelandic.
This encourages more realistic, language-aware search behavior from the agent and creates additional challenges due to the multilingual nature of the web.

[Figure 2: The construction workflow of the InfoDeepSeek dataset: (1) fact-grounded question drafting from multilingual web sources, combining anchor knowledge with ordinary knowledge into draft Q&A; (2) filtering and refinement via determinacy and difficulty checks; (3) multi-stage validation with two verifiers and a decider who accepts or rejects each refined Q&A.]

4.2 Methodology for Dataset Construction

To operationalize the aforementioned criteria, we develop a set of practical heuristics and workflows for query generation, as shown in Figure 2. We begin by extracting knowledge from web sources, based on which we produce draft questions. These draft questions are then subjected to two key filtering stages: a determinacy check and a difficulty check. Questions that pass both filters are retained as candidates and subsequently go through a multi-stage validation process. Through iterative annotation and refinement, we have developed a set of practical methodologies and guidelines that produce questions aligned with our criteria. See Appendix B for more details.

Fact-Grounded Query Drafting. To guarantee that each question has a verifiable answer, annotators are encouraged to adopt a reverse construction strategy – starting from known knowledge in authoritative and diverse web sources, and formulating a question with a unique answer. Annotators are required to reference credible web content, such as official websites, academic publications, or multilingual Wikipedia entries, to
validate both factual correctness and answer uniqueness.

Expand from Anchor Knowledge. During data collection, we observed that many seemingly complex questions, involving multi-hop reasoning, emerging facts, temporal sensitivity, or false premises, could still be solved by LLMs based on their knowledge or a single-round search. To address this, we identify anchor knowledge, usually long-tail knowledge and distracting information, which is hard for LLMs to answer correctly without deeper search. Many such anchors can be derived from low-resource or non-mainstream language sources. Once we find one or more difficult anchors, we further compound their complexity by composing them with more ordinary knowledge or by linking them to additional difficulty attributes. In this way, these questions not only require deeper retrieval but also demand planning, disambiguation, and reasoning across noisy or obscure content.

Diversification. To enhance the coverage of our dataset, we adopt a proactive strategy for diversifying queries. Annotators are guided to contribute questions targeting less frequently covered attributes, domains, or predominant languages. Besides, starting from anchor knowledge, we can introduce multi-hop reasoning that links to new attributes, domains, or languages. For example, given a long-tail fact about the founders of Saratov State Agrarian University, we might explore the founder's other identities (e.g., agronomist, political leader) to connect it to different domains. This compositional approach allows us to systematically increase both the complexity and the diversity of our dataset.

Determinacy and Difficulty Filtering. In the determinacy check, each draft question undergoes cross-referencing against multiple independent sources to verify the correctness of the answer. Annotators ensure that (1) the answer is uniquely correct given the query, and (2) the answer is not time-sensitive or prone to change over time.
For the difficulty check, we evaluate each draft question with GPT-4o and DeepSeek-R1 in a web-enabled, single-turn search setting. If both models answer the question correctly, we discard the question. This ensures that only challenging queries requiring deeper information seeking behavior are retained for the benchmark.

Multi-Stage Validation for Reliability. To ensure data quality and compliance with our criteria, each question undergoes a two-stage review process. Each query is independently verified by two annotators, who assess its correctness, determinacy, difficulty, and normativity. A third adjudicator then makes the final decision regarding whether the question is eligible for inclusion.

For each verified question q, we record its ground-truth answer y_q, the supporting source webpages S_q, and annotated metadata, e.g., difficulty attribute, domain, and predominant language. With the efforts of seven annotators, we collected 245 validated data entries, covering 14 domains and 19 predominant languages. More details about data statistics are provided in Appendix B.4.

5 Metrics and Evaluation

This section introduces our evaluation framework, with more detail presented in Appendix C.

5.1 Metrics

Here, we define four core metrics, assessing not only final answer accuracy but also agents' information seeking capabilities to search, extract, and prioritize relevant information from noisy sources. We denote the answer generation stage as ϕ(·,·), usually implemented by an LLM.

Answer Accuracy (ACC) refers to whether the answer generated based on all the observations O matches the
ground-truth answer y_q, that is, ACC = (1/|Q|) Σ_{q∈Q} I(ϕ(q, O) = y_q), where I(·) is an indicator function determining whether ϕ(q, O) and y_q are the same, implemented by a judge LLM as described in Section 5.2. This is a coarse-grained correctness metric that does not consider the augmentation stage.

Information Accuracy (IA@k) measures the quality of evidence obtained by the information seeking process. In open web environments, predefining ground-truth documents is infeasible due to content volatility and source multiplicity, and multi-hop questions may involve different information sources. Instead, we evaluate evidence quality by dynamically assessing whether the top-k evidence of C from the augmentation stage is sufficient to answer the question. Specifically, we generate an answer from the top-k evidence C_{1:k}, i.e., ϕ(q, C_{1:k}), and compute IA@k = (1/|Q|) Σ_{q∈Q} I(ϕ(q, C_{1:k}) = y_q). A higher IA@k implies better evidence relevance.

Effective Evidence Utilization (EEU) measures the agent's ability to extract relevant information from the noisy observations O and form the evidence set C. It is defined as the ratio between the best achievable accuracy across all top-k subsets (k = 1, ..., n) and the answer accuracy with all observations, i.e., EEU = (max_{1≤k≤n} IA@k) / ACC. EEU significantly below 1 suggests that the agent's evidence selection is suboptimal, and that key information is either buried or omitted.

Information Compactness (IC) quantifies the information density of the evidence set C. An ideal agent should gather concise, high-quality evidence with minimal noise or redundancy. We first define the information compactness for each query, IC_q, as:

IC_q = n_q / |S_q|, if ∃ k ≤ n_q such that ϕ(q, C_{1:k}) = y_q;
IC_q = (n + b) / |S_q|, otherwise (i.e., answer failures),

where n_q = |C| denotes the length of the evidence set (up to a maximum n), S_q is the human-annotated standard set of source webpages required to answer the query, and b is a penalty constant (typically b = 1) for answer failures.
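To make the metric definitions concrete, here is a minimal sketch of how ACC, IA@k, EEU, and the per-query IC_q could be computed from per-query judge verdicts; the data layout (dicts with `acc`, `ia_at_k`, `n_q`, `s_q` fields) is our own illustration, not the paper's implementation:

```python
def info_seeking_metrics(results, n=5, b=1):
    """Compute ACC, IA@k (k = 1..n), EEU, and mean IC from judge outcomes.

    Each entry in `results` (hypothetical layout) holds:
      acc     - judge verdict for the answer generated from all observations O
      ia_at_k - list of n judge verdicts for answers from top-k evidence
      n_q     - length of the curated evidence set C (n_q <= n)
      s_q     - size of the annotated source-webpage set S_q
    """
    Q = len(results)
    acc = sum(r["acc"] for r in results) / Q
    ia = [sum(r["ia_at_k"][k] for r in results) / Q for k in range(n)]
    eeu = max(ia) / acc if acc > 0 else 0.0
    # IC_q: n_q/|S_q| if some top-k prefix suffices, else (n + b)/|S_q|.
    ic = sum(
        (r["n_q"] if any(r["ia_at_k"][: r["n_q"]]) else n + b) / r["s_q"]
        for r in results
    ) / Q
    return acc, ia, eeu, ic

# Two toy queries: one answerable from its top-2 evidence, one failed outright.
results = [
    {"acc": True,  "ia_at_k": [False, True, True, True, True], "n_q": 3, "s_q": 2},
    {"acc": False, "ia_at_k": [False, False, False, False, False], "n_q": 5, "s_q": 1},
]
acc, ia, eeu, ic = info_seeking_metrics(results)
print(acc, ia, eeu, ic)  # 0.5 [0.0, 0.5, 0.5, 0.5, 0.5] 1.0 3.75
```

Compact evidence (n_q close to |S_q|) drives IC_q toward 1, while answer failures are penalized with n + b, matching the piecewise definition above.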
With IC_q, IC can be defined as IC = (1/|Q|) Σ_{q∈Q} IC_q. IC < 1 suggests that the agent either found compact sources (covering multiple hops) or successfully leveraged prior knowledge to reduce evidence dependency. IC > 1 implies over-retrieval or poor evidence filtering, with redundant or irrelevant content present even when the question is answered correctly.

5.2 Evaluation

Our proposed metrics rely heavily on determining whether the LLM-generated answers, ϕ(q, C_{1:k}) and ϕ(q, O), semantically and factually align with the ground-truth answer y_q. Prior work has demonstrated that LLM-based evaluators can closely approximate human judgment in factual QA [58,57]. Following these findings, we adopt both human evaluation (human-eval) and LLM-based automatic evaluation (auto-eval) to assess the agreement between answers. Specifically, we mainly employ two LLM evaluators, DeepSeek-V3 (deepseek-chat) [3] and Gemini-2.0-Flash (gemini-2.0-flash-preview-04-07) [6], to reduce self-preference bias [35], following [58]. If the two evaluators produce conflicting judgments, we resort to a third arbiter, GPT-4o-mini (gpt-4o-mini-2024-07-18) [31] or a human annotator, and report the majority-vote decision.

While LLM-based evaluation is generally reliable, we observe a common failure mode on false-premise questions, where LLM evaluators often fail to identify incorrect assumptions in the query. To mitigate this issue, we explicitly annotate such ground-truth answers y_q with statements like "This question contains a false premise: ...", making the premise violation explicit. Additionally, we design separate evaluation prompts for
false-premise and other questions to encourage evaluators to condition their judgment appropriately. In our experiments, this strategy improves LLMs' evaluation accuracy from 95.57% to 99.29% compared with human-eval. See Appendix C.2 for more details.

6 Benchmarking Agentic Information Seeking

6.1 Experiment Setup

We evaluate a range of closed-source and open-source LLMs under our Agentic RAG framework, including GPT-4o (gpt-4o-2024-08-06) [17], o3-mini (o3-mini-2025-01-31) [32], Claude-3.7-Sonnet (claude-3-7-sonnet-20250219) [1], DeepSeek-V3 (deepseek-chat) [26], DeepSeek-R1 (deepseek-reasoner) [10], Gemini-2.5-Flash (gemini-2.5-flash-preview-04-17) [7], Gemini-2.5-Pro (gemini-2.5-pro-exp-03-25) [8], Llama-4-Maverick-17B-128E-Instruct [29], and Qwen3-32B [45]. For Qwen3-32B, we test both its thinking mode (Qwen3-32B w/ think) and non-thinking mode (Qwen3-32B w/o think). Unless otherwise specified, the maximum number of steps T in the retrieval stage is 5, and the maximum length n of the evidence set C in the augmentation stage is 5, as the size of the supporting source webpage set S_q typically ranges from 1 to 3. The default search engine is DuckDuckGo, due to its open accessibility. See Appendix D.1 for more details.

During our experiments, when evaluating a specific LLM, we use this LLM across all stages, including retrieval, augmentation, and answer generation (i.e., ϕ(·,·)) for computing ACC and IA@k. We also explore the impact of using different answer LLMs for ϕ(·,·), where information seeking and generation use different LLMs. These results are provided in Appendix D.6.

Table 3: Performance of different LLMs. ACC and IA@k are measured in %.
| Model | ACC | IA@1 | IA@2 | IA@3 | IA@4 | IA@5 | EEU | IC |
| Llama-4-Maverick-17B-128E-Instruct | 10.61 | 5.31 | 8.57 | 7.76 | 7.76 | 8.16 | 0.808 | 3.922 |
| Qwen3-32B w/o think | 8.98 | 4.90 | 6.53 | 6.94 | 6.94 | 7.76 | 0.864 | 4.012 |
| Qwen3-32B w/ think | 10.61 | 6.12 | 6.12 | 6.94 | 7.35 | 8.16 | 0.769 | 3.954 |
| DeepSeek-V3 | 8.98 | 5.71 | 7.35 | 9.39 | 9.39 | 10.20 | 1.136 | 3.926 |
| DeepSeek-R1 | 15.10 | 13.47 | 15.92 | 17.96 | 16.73 | 16.73 | 1.189 | 3.736 |
| GPT-4o | 10.20 | 9.39 | 8.16 | 9.39 | 8.57 | 8.98 | 0.920 | 3.878 |
| o3-mini | 11.43 | 8.98 | 10.20 | 9.39 | 9.80 | 10.20 | 0.893 | 3.829 |
| Claude-3-7-Sonnet | 12.65 | 9.80 | 12.24 | 12.65 | 11.43 | 12.24 | 1.000 | 3.909 |
| Gemini-2.5-Flash | 14.29 | 12.65 | 15.10 | 16.73 | 16.73 | 15.92 | 1.171 | 3.750 |
| Gemini-2.5-Pro | 22.45 | 18.78 | 20.82 | 20.82 | 21.63 | 21.63 | 0.964 | 3.762 |

6.2 Benchmarking on Different LLMs, Search Engines, and Question Attributes

Different LLMs. Table 3 presents the performance of agents based on various LLMs on our benchmark, InfoDeepSeek, highlighting the challenge it poses for agentic information seeking tasks. Firstly, SOTA LLMs perform suboptimally on agentic information seeking tasks. The best-performing model, Gemini-2.5-Pro, achieves only 22.45% ACC and 21.63% IA@5. This result underscores the complexity of the tasks, as even the strongest model struggles to provide accurate answers across our challenging queries. Secondly, LLMs optimized for reasoning and information retrieval outperform others. DeepSeek-R1 outperforms DeepSeek-V3, and o3-mini outperforms GPT-4o, indicating that reasoning models tend to perform better in agentic information seeking. Additionally, Gemini-2.5-Flash and Gemini-2.5-Pro, which are specifically optimized for search and deep research scenarios, show better performance compared to other models.

In terms of information quality (IA@k), most models perform poorly on IA@1, as many queries require multiple sources to provide a correct answer. A single document is often insufficient to fully address the question. As k increases,
https://arxiv.org/abs/2505.15872v2
we observe a trend of initial improvement followed by a decline. This is likely due to the influence of irrelevant or distracting information from later retrieved sources, highlighting the importance of effective augmentation in selecting relevant evidence. Effective Evidence Utilization (EEU) is mostly below 1, indicating that most LLMs struggle to extract useful evidence from the vast amount of information retrieved during the retrieval stage. Regarding information compactness (IC), most models exhibit significant redundancy in their responses. This is largely due to the low success rate of retrieval and the increased reliance on irrelevant information. Models with higher success rates typically exhibit lower redundancy, suggesting that reducing irrelevant evidence through better information extraction is critical for improving performance.

Table 4: Performance of DeepSeek-V3 and Gemini-2.5-Flash under different search engines.

Model | Search Engine | ACC | IA@1 | IA@2 | IA@3 | IA@4 | IA@5 | EEU | IC
Gemini-2.5-Flash | DuckDuckGo | 14.29 | 12.65 | 15.10 | 16.73 | 16.73 | 15.92 | 1.171 | 3.750
Gemini-2.5-Flash | Bing | 33.88 | 27.35 | 30.61 | 32.65 | 32.65 | 32.65 | 0.964 | 3.494
Gemini-2.5-Flash | Google | 34.29 | 29.39 | 34.69 | 37.55 | 37.96 | 36.33 | 1.107 | 3.499
Gemini-2.5-Flash | Yahoo | 33.47 | 28.98 | 32.24 | 35.51 | 35.10 | 36.73 | 1.098 | 3.341
DeepSeek-V3 | DuckDuckGo | 8.98 | 5.71 | 7.35 | 9.39 | 9.39 | 10.20 | 1.136 | 3.926
DeepSeek-V3 | Bing | 19.18 | 12.24 | 15.92 | 17.96 | 18.37 | 17.96 | 0.957 | 3.771
DeepSeek-V3 | Google | 28.57 | 19.18 | 23.27 | 24.49 | 24.08 | 24.08 | 0.857 | 3.610
DeepSeek-V3 | Yahoo | 25.71 | 17.96 | 24.08 | 26.53 | 26.94 | 26.94 | 1.048 | 3.631

Different Search Engines. To better understand the effect of different search engines on information seeking performance, we conduct controlled experiments by fixing the agent and varying the search engine. Specifically, Table 4 presents results for two representative LLMs, DeepSeek-V3 and Gemini-2.5-Flash, under four search engines: DuckDuckGo, Google, Bing, and Yahoo. Firstly, the search engine significantly affects the performance of agentic information seeking.
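The controlled setup just described (agent fixed, search engine varied) amounts to a simple evaluation grid over (model, engine) pairs. A minimal sketch, where `run_benchmark` is a hypothetical placeholder for running the agent on the full query set and returning its metrics:

```python
from itertools import product

# Illustrative model/engine names taken from Table 4.
MODELS = ["DeepSeek-V3", "Gemini-2.5-Flash"]
ENGINES = ["DuckDuckGo", "Bing", "Google", "Yahoo"]

def run_benchmark(model: str, engine: str) -> dict:
    """Placeholder: run the agent (backed by `model`, querying `engine`)
    on the benchmark and return a metric dict, e.g. {"ACC": ..., "IA@5": ...}."""
    raise NotImplementedError

def compare(run=run_benchmark) -> dict:
    """Evaluate every (model, engine) pair on the same query set."""
    results = {}
    for model, engine in product(MODELS, ENGINES):
        results[(model, engine)] = run(model, engine)
    return results
```

Because the query set and agent are held fixed, any score difference within a row of the resulting grid is attributable to the search engine alone.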
Google and Yahoo consistently outperform Bing and DuckDuckGo, with DuckDuckGo yielding the lowest scores. This highlights the importance of search engine quality in supporting effective agentic information seeking. General-purpose search engines, e.g., Google and Yahoo, provide broader coverage and higher-quality results, making them better suited as information entry points for Agentic RAG systems. Secondly, a good search engine can partially compensate for model limitations. While DeepSeek-V3 generally underperforms Gemini-2.5-Flash in information seeking tasks, its performance improves substantially when paired with Google, achieving an ACC of 28.57%, which narrows the gap with Gemini. This suggests that access to higher-quality retrieval results is especially beneficial for models with weaker reasoning capabilities. Interestingly, EEU tends to be higher when using DuckDuckGo. However, this may be an artifact of poor retrieval quality: when most retrieved content is irrelevant, identifying even a small number of useful pieces can lead to a higher utilization rate. This further underscores the importance of selecting strong evidence sources to support robust answer generation.

Figure 3: Performance of LLMs and search engines across different question attributes, covering (a) closed-source LLMs, (b) open-source LLMs, and (c) search engines.

Different Question Attributes. To further understand where agents succeed or struggle, we analyze performance across different question attributes.
Figures 3(a) and (b) show the performance of different LLMs under DuckDuckGo, while Figure 3(c) presents results of DeepSeek-V3 with different search engines. More results are available in Appendix D.2. Firstly, LLMs and search engines consistently perform better on simpler attributes, e.g., false premise, time sensitivity, and freshness, and worse on multi-hop, long-tail, and distracting information questions. This aligns with our observations during data collection: long-tail and distracting questions often contain obscure entities, which are inherently difficult for agentic information seeking. Multi-hop questions in our benchmark are frequently compositional, often combining long-tail and distracting information, compounding their difficulty. Secondly, reasoning-enhanced LLMs show clear advantages over base models, but these gains are primarily observed on the simpler question attributes. On harder attributes like multi-hop or long-tail, the performance improvements of LLMs (e.g., DeepSeek-R1 and Gemini-2.5-Pro) are marginal. This suggests that current LLMs, even those optimized for reasoning, are still heavily bottlenecked by retrieval quality and web information noise, particularly when facing sparse or misleading information. Lastly, Google leads to more balanced and robust performance across attributes, indicating that Google has higher information coverage and relevance. Together, these findings highlight that while LLM and agent capabilities are essential, retrieval source quality remains a dominant factor in addressing complex information seeking tasks.

6.3 In-depth Analysis

Figure 4: Performance with different maximum step T of information seeking, showing (a) accuracy of DeepSeek-V3, (b) accuracy of Gemini-2.5-Flash, and (c) IC of the two LLMs.
Test-time Scaling for Agentic Information Seeking. One of the key characteristics of an agent is that its performance scales with the amount of compute available at test time. To investigate this, we allocate different levels of computational resources to the agent by varying the maximum step T in the retrieval stage from 1 to 20, and present the results in Figure 4. As shown in the figure, both models demonstrate significant improvements in ACC, IA@k, and IC as T increases, indicating clear scaling effects. This suggests that the agent's performance can be enhanced by scaling up test-time compute for information seeking, with the ability to refine its search and gather more evidence as additional computation is allocated. See Appendix D.3 for more details.

Retrieval Interference. In our experiments, we observe a notable phenomenon where certain questions can be answered correctly by an LLM with its parametric knowledge, but the same model fails to answer them after performing web-based retrieval. We refer to this behavior as retrieval interference, where external information introduces confusion or distracts the model from its original correct reasoning. To quantify this effect, we define a metric called the interference rate: the fraction of questions that an LLM answers correctly without retrieval but incorrectly after retrieval, normalized by the total number of questions it initially answered correctly without retrieval. Figure 5(a) shows the interference rates of DeepSeek-V3 and Gemini-2.5-Flash across different search engines. We find that retrieval interference is widespread, suggesting
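The interference rate defined above can be computed directly from two sets of correctly answered question IDs; a minimal sketch (representing questions by ID strings is illustrative):

```python
def interference_rate(correct_without_retrieval: set[str],
                      correct_with_retrieval: set[str]) -> float:
    """Fraction of questions answered correctly from parametric knowledge
    alone that become incorrect once web retrieval is added, normalized by
    the number answered correctly without retrieval."""
    if not correct_without_retrieval:
        return 0.0
    flipped = correct_without_retrieval - correct_with_retrieval
    return len(flipped) / len(correct_without_retrieval)
```

For example, if a model answers q1, q2, and q3 correctly without retrieval but only q2 correctly with retrieval, the interference rate is 2/3.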
that low-quality or tangentially relevant web content can often override or dilute the model's internal confidence, leading to degraded performance. To mitigate this issue, future systems should explore methods to preserve model confidence in accurate internal knowledge and develop more precise retrieval strategies that avoid introducing misleading information. See Appendix D.4 for more results and potential solutions.

Impact of Language. We also investigate the impact of languages on the agentic information seeking process. For Chinese and English, we employ Chinese and English versions of prompts and queries. Our experiments reveal that the search keywords used by LLMs to query search tools are strongly aligned with the language of the input. For predominant languages, we face challenges in directly converting prompts and queries to their respective language versions. Thus, we adopt a language-aware prompt that explicitly instructs the agent to use the predominant language during the retrieval stage (Appendix D.5). The results in Figure 5(b) demonstrate several important trends. First, English consistently outperforms Chinese across most metrics. This is likely due to the broader coverage of English-language content and search tools. Second, predominant language prompts yield the best results. This suggests that leveraging a language-aware retrieval strategy improves the agent's ability to access and utilize high-quality, domain-relevant content.

Figure 5: Retrieval interference (a) and the impact of languages (b).
7 Conclusion & Limitations

This work introduces InfoDeepSeek, a novel benchmark for evaluating agentic information seeking in dynamic web environments, addressing the limitations of existing benchmarks confined to static environments and simple queries. We propose a methodology for constructing challenging queries that satisfy the criteria of determinacy, difficulty, and diversity. Furthermore, we design fine-grained evaluation metrics tailored for the comprehensive assessment of agentic information seeking under dynamic environments. However, our current dataset relies on manual construction, which is costly and time-consuming. In future work, we plan to explore an automated data collection approach with manual verification to lower costs and expand the dataset.

References

[1] Anthropic. Claude 3.7 sonnet, 2025. https://www.anthropic.com/claude/sonnet, Accessed on 2025-05-05.
[2] Jiawei Chen, Hongyu Lin, Xianpei Han, and Le Sun. Benchmarking large language models in retrieval-augmented generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17754–17762, 2024.
[3] DeepSeek. Deepseek-v3-0324, 2025. https://api-docs.deepseek.com/news/news250325, Accessed on 2025-05-05.
[4] Wenqi Fan, Yujuan Ding, Liangbo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, and Qing Li. A survey on rag meeting llms: Towards retrieval-augmented large language models. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 6491–6501, 2024.
[5] Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yixin Dai, Jiawei Sun, Haofen Wang, and Haofen Wang. Retrieval-augmented generation for large language models: A survey. arXiv preprint arXiv:2312.10997, 2:1, 2023.
[6] Gemini. gemini-2.0-flash, 2025. https://ai.google.dev/gemini-api/docs/models?hl=zh-cn#gemini-2.0-flash, Accessed on 2025-05-05.
[7] Gemini. gemini-2.5-flash-preview, 2025.
https://ai.google.dev/gemini-api/docs/models?hl=zh-cn#gemini-2.5-flash-preview, Accessed on 2025-05-05.
[8] Gemini. gemini-2.5-pro-preview, 2025. https://ai.google.dev/gemini-api/docs/models?hl=zh-cn#gemini-2.5-pro-preview-05-06, Accessed on 2025-05-06.
[9] Gemini. Gemini deep research, 2025.
https://gemini.google/overview/deep-research/, Accessed on 2025-05-05.
[10] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[11] Muhammad Usman Hadi, Rizwan Qureshi, Abbas Shah, Muhammad Irfan, Anas Zafar, Muhammad Bilal Shaikh, Naveed Akhtar, Jia Wu, Seyedali Mirjalili, et al. A survey on large language models: Applications, challenges, limitations, and practical usage. Authorea Preprints, 3, 2023.
[12] Hangfeng He, Hongming Zhang, and Dan Roth. Rethinking with retrieval: Faithful large language model inference. arXiv preprint arXiv:2301.00303, 2022.
[13] Jie He, Nan Hu, Wanqiu Long, Jiaoyan Chen, and Jeff Z Pan. Mintqa: A multi-hop question answering benchmark for evaluating llms on new and tail knowledge. arXiv preprint arXiv:2412.17032, 2024.
[14] Yichen He, Guanhua Huang, Peiyuan Feng, Yuan Lin, Yuchen Zhang, Hang Li, et al. Pasa: An llm agent for comprehensive academic paper search. arXiv preprint arXiv:2501.10120, 2025.
[15] Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060, 2020.
[16] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems, 43(2):1–55, 2025.
[17] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[18] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer.
Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
[19] Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. Chatgpt for good? on opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274, 2023.
[20] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466, 2019.
[21] Meng-Chieh Lee, Qi Zhu, Costas Mavromatis, Zhen Han, Soji Adeshina, Vassilis N Ioannidis, Huzefa Rangwala, and Christos Faloutsos. Agent-g: An agentic framework for graph retrieval augmented generation.
[22] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474, 2020.
[23] Jianghao Lin, Bo Chen, Hangyu Wang, Yunjia Xi, Yanru Qu, Xinyi Dai, Kangning Zhang, Ruiming Tang, Yong Yu, and Weinan Zhang. Clickprompt: Ctr models are strong prompt generators for adapting language models to ctr prediction. In Proceedings of the ACM Web Conference 2024, pages 3319–3330, 2024.
[24] Jianghao Lin, Rong Shan, Chenxu Zhu, Kounianhua Du, Bo Chen, Shigang Quan, Ruiming
Tang, Yong Yu, and Weinan Zhang. Rella: Retrieval-enhanced large language models for lifelong sequential behavior comprehension in recommendation. In Proceedings of the ACM Web Conference 2024, pages 3497–3508, 2024.
[25] Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Hao Zhang, Yong Liu, Chuhan Wu, Xiangyang Li, Chenxu Zhu, et al. How can recommender systems benefit from large language models: A survey. ACM Transactions on Information Systems, 43(2):1–47, 2025.
[26] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[27] Y Lyu, Z Li, S Niu, F Xiong, B Tang, W Wang, H Wu, H Liu, T Xu, E Chen, et al. Crud-rag: A comprehensive chinese benchmark for retrieval-augmented generation of large language models. arXiv preprint arXiv:2401.17043, 2024.
[28] Ariana Martino, Michael Iannelli, and Coleen Truong. Knowledge injection to counter large language model (llm) hallucination. In European Semantic Web Conference, pages 182–185. Springer, 2023.
[29] Meta. Llama 4 maverick, 2025. https://www.llama.com/docs/model-cards-and-prompt-formats/llama4/, Accessed on 2025-05-06.
[30] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. Ms marco: A human-generated machine reading comprehension dataset. 2016.
[31] OpenAI. Gpt-4o mini, 2025. https://platform.openai.com/docs/models/gpt-4o-mini, Accessed on 2025-05-05.
[32] OpenAI. o3-mini, 2025. https://platform.openai.com/docs/models/o3-mini, Accessed on 2025-05-05.
[33] OpenAI. Introducing deep research, 2025. https://openai.com/index/introducing-deep-research/, Accessed on 2025-05-05.
[34] Haojie Pan, Zepeng Zhai, Hao Yuan, Yaojia Lv, Ruiji Fu, Ming Liu, Zhongyuan Wang, and Bing Qin. Kwaiagents: Generalized information-seeking agent system with large language models. arXiv preprint arXiv:2312.04889, 2023.
[35] Arjun Panickssery, Samuel R Bowman, and Shi Feng. Llm evaluators recognize and favor their own generations, 2024. URL https://arxiv.org/abs/2404.13076.
[36] Ronak Pradeep, Nandan Thakur, Sahel Sharifymoghaddam, Eric Zhang, Ryan Nguyen, Daniel Campos, Nick Craswell, and Jimmy Lin. Ragnarök: A reusable rag framework and baselines for trec 2024 retrieval-augmented generation track. In European Conference on Information Retrieval, pages 132–148. Springer, 2025.
[37] Chidaksh Ravuru, Sagar Srinivas Sakhinana, and Venkataramana Runkana. Agentic retrieval-augmented generation for time series analysis. arXiv preprint arXiv:2408.14484, 2024.
[38] Alireza Salemi and Hamed Zamani. Evaluating retrieval quality in retrieval-augmented generation. In Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2395–2400, 2024.
[39] Florian Schneider, Narges Baba Ahmadi, Niloufar Baba Ahmadi, Iris Vogel, Martin Semmann, and Chris Biemann. Collex–a multimodal agentic rag system enabling interactive exploration of scientific collections. arXiv preprint arXiv:2504.07643, 2025.
[40] Xinyue Shen, Zeyuan Chen, Michael Backes, and Yang Zhang. In chatgpt we trust? measuring and characterizing the reliability of chatgpt. arXiv preprint arXiv:2304.08979, 2023.
[41] Significant-Gravitas. Autogpt: Build, deploy, and run ai agents, 2025. https://github.com/Significant-Gravitas/AutoGPT, Accessed on 2025-05-06.
[42] Aditi Singh, Abul Ehtesham, Saket Kumar, and Tala Talaei Khoei. Agentic retrieval-augmented generation: A survey on agentic rag. arXiv preprint arXiv:2501.09136, 2025.
[43] Yixuan Tang and Yi Yang. Multihop-rag: Benchmarking retrieval-augmented generation for multi-hop queries. arXiv preprint arXiv:2401.15391, 2024.
[44]
Perplexity Team. Introducing perplexity deep research, 2025. https://www.perplexity.ai/hub/blog/introducing-perplexity-deep-research, Accessed on 2025-05-05.
[45] Qwen Team. Qwen3 technical report, 2025. https://github.com/QwenLM/Qwen3/blob/main/Qwen3_Technical_Report.pdf, Accessed on 2025-05-14.
[46] Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. Large language models in medicine. Nature Medicine, 29(8):1930–1940, 2023.
[47] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Musique: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554, 2022.
[48] Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, et al. Freshllms: Refreshing large language models with search engine augmentation. arXiv preprint arXiv:2310.03214, 2023.
[49] Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. A survey on large language model based autonomous agents. Frontiers of Computer Science, 18(6):186345, 2024.
[50] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022.
[51] Jason Wei, Zhiqing Sun, Spencer Papay, Scott McKinney, Jeffrey Han, Isa Fulford, Hyung Won Chung, Alex Tachard Passos, William Fedus, and Amelia Glaese. Browsecomp: A simple yet challenging benchmark for browsing agents. arXiv preprint arXiv:2504.12516, 2025.
[52] Shangyu Wu, Ying Xiong, Yufei Cui, Haolun Wu, Can Chen, Ye Yuan, Lianming Huang, Xue Liu, Tei-Wei Kuo, Nan Guan, et al. Retrieval-augmented generation for natural language processing: A survey. arXiv preprint arXiv:2407.13193, 2024.
[53] Yunjia Xi, Weiwen Liu, Jianghao Lin, Xiaoling Cai, Hong Zhu, Jieming Zhu, Bo Chen, Ruiming Tang, Weinan Zhang, and Yong Yu. Towards open-world recommendation with knowledge augmentation from large language models. In Proceedings of the 18th ACM Conference on Recommender Systems, pages 12–22, 2024.
[54] Yunjia Xi, Weiwen Liu, Jianghao Lin, Bo Chen, Ruiming Tang, Weinan Zhang, and Yong Yu. Memocrs: Memory-enhanced sequential conversational recommender systems with large language models. In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, pages 2585–2595, 2024.
[55] Yunjia Xi, Hangyu Wang, Bo Chen, Jianghao Lin, Menghui Zhu, Weiwen Liu, Ruiming Tang, Weinan Zhang, and Yong Yu. A decoding acceleration framework for industrial deployable llm-based recommender systems. arXiv preprint arXiv:2408.05676, 2024.
[56] Yunjia Xi, Muyan Weng, Wen Chen, Chao Yi, Dian Chen, Gaoyang Guo, Mao Zhang, Jian Wu, Yuning Jiang, Qingwen Liu, et al. Bursting filter bubble: Enhancing serendipity recommendations with aligned large language models. arXiv preprint arXiv:2502.13539, 2025.
[57] Fangyuan Xu, Yixiao Song, Mohit Iyyer, and Eunsol Choi. A critical evaluation of evaluations for long-form question answering. arXiv preprint arXiv:2305.18201, 2023.
[58] Xiao Yang, Kai Sun, Hao Xin, Yushi Sun, Nikita Bhalla, Xiangsen Chen, Sajal Choudhary, Rongze Gui, Ziran Jiang, Ziyu Jiang, et al. Crag-comprehensive rag benchmark. Advances in Neural Information Processing Systems, 37:10470–10490, 2024.
[59] Yingxuan Yang, Huacan Chai, Yuanyi Song, Siyuan Qi, Muning Wen, Ning Li, Junwei Liao, Haoyi Hu, Jianghao Lin, Gaowei Chang, et
al. A survey of ai agent protocols. arXiv preprint arXiv:2504.16736, 2025.
[60] yoheinakajima. Babyagi, 2025. https://github.com/yoheinakajima/babyagi, Accessed on 2025-05-06.
[61] Weinan Zhang, Junwei Liao, Ning Li, Kounianhua Du, and Jianghao Lin. Agentic information retrieval. arXiv preprint arXiv:2410.09713, 2024.
[62] Penghao Zhao, Hailin Zhang, Qinhan Yu, Zhengren Wang, Yunteng Geng, Fangcheng Fu, Ling Yang, Wentao Zhang, Jie Jiang, and Bin Cui. Retrieval-augmented generation for ai-generated content: A survey. arXiv preprint arXiv:2402.19473, 2024.
[63] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 1(2), 2023.
[64] Peilin Zhou, Bruce Leon, Xiang Ying, Can Zhang, Yifan Shao, Qichen Ye, Dading Chong, Zhiling Jin, Chenxuan Xie, Meng Cao, et al. Browsecomp-zh: Benchmarking web browsing ability of large language models in chinese. arXiv preprint arXiv:2504.19314, 2025.

Table of Contents

A Agentic Information Seeking Framework
  A.1 Components
  A.2 Retrieval Stage
  A.3 Augmentation Stage
  A.4 Generation Stage
B Dataset Construction
  B.1 Query Generation
    B.1.1 Draft Question Generation
    B.1.2 Filtering and Refinement
    B.1.3 Question Annotation
  B.2 Multi-stage Validation
    B.2.1 Validation from Two Verifiers
    B.2.2 Final Decision
  B.3 Data Examples
  B.4 Data Statistics
C Evaluation
  C.1 Human Evaluation
  C.2 Automatic Evaluation
  C.3 Fail Cases for Unified Prompting Strategy
D More Experiment Results
  D.1 Setup
  D.2 More Results for Question Attributes
  D.3 More Results for Test-time Scaling in Agentic Information Seeking
  D.4 More Results and Potential Solutions for Retrieval Interference
  D.5 Details and More Results of Language Impact
  D.6 Impact of Answer LLMs
E Broader Impacts

A Agentic Information Seeking Framework

To enable complex, multi-step information seeking in open-domain environments, we design a generalizable Agentic RAG framework composed of modular components for planning, memory, tool use, and generation. The agent is instantiated around a large language model (LLM), augmented with external tools and reflection capabilities to support iterative decision-making and information seeking.

A.1 Components

We begin by explaining the roles of LLMs, the memory bank, and the tool library. Subsequently, these components will be integrated into the primary agent loop.
LLMs. The LLM serves as the agent's central reasoning engine. It is responsible for interpreting the user query, generating search plans, selecting tools, reflecting on past actions, filtering retrieved content, and synthesizing the final answer. Our framework supports a variety of LLMs, including both API-accessible remote models and locally hosted open-source models. To accommodate the varying requirements of different LLMs, we introduce a straightforward API call that accepts a prompt as input and returns a response. All agents follow a unified interface that incorporates structured planning and reflection mechanisms: at each step, the agent is prompted to explicitly plan its next action and reflect on prior steps to refine its strategy. This iterative planning-reflection loop enhances the agent's adaptability to noisy or ambiguous web content.

Memory. The memory stores the evolving trajectory of the agent's interaction with the web environment. Concretely, it includes (1) all past plans generated by the agent, (2) actions such as tool invocations and search queries, and (3) the corresponding observations retrieved from the web (e.g., snippets, page titles, contents). This memory is continuously updated and used as input to future planning and reflection steps, enabling the agent to reason over previously collected evidence, avoid redundancy, and refine its search strategy over time.

Tool Library. Tools serve as the agent's interface to the external world. For our information seeking agent, we support multiple real-time search engines (Google, Bing, Yahoo, and DuckDuckGo), as well as a Selenium-based browser tool that allows the agent to navigate, scroll, and extract content from live webpages. In addition, we support a time tool for time calculations that may be involved in queries and webpages. This diverse toolset ensures robustness across query types and web content structures.
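These two components can be sketched minimally in Python. The names `LLM` and `Memory` are illustrative, and the 4,000-character budget mirrors the short-term memory limit stated in the retrieval-stage planning prompt:

```python
from dataclasses import dataclass, field
from typing import Callable

# Unified LLM interface: any callable mapping a prompt string to a response
# string, so remote APIs and local models are interchangeable.
LLM = Callable[[str], str]

@dataclass
class Memory:
    """Stores the agent's trajectory: past plans, actions, and observations."""
    plans: list[str] = field(default_factory=list)
    actions: list[str] = field(default_factory=list)
    observations: list[str] = field(default_factory=list)

    def record(self, plan: str, action: str, observation: str) -> None:
        self.plans.append(plan)
        self.actions.append(action)
        self.observations.append(observation)

    def render(self, max_chars: int = 4000) -> str:
        """Serialize the trajectory for the next planning/reflection prompt,
        keeping only the most recent max_chars characters."""
        lines = [f"Plan: {p}\nAction: {a}\nObservation: {o}"
                 for p, a, o in zip(self.plans, self.actions, self.observations)]
        return "\n".join(lines)[-max_chars:]
```

Feeding `memory.render()` back into each prompt is what lets the agent avoid repeating earlier tool calls and build on previously collected evidence.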
A.2 Retrieval Stage

The Retrieval Stage is responsible for actively exploring the web environment. The agent interacts with external tools (e.g., search engines, browsers) through multi-step planning and decision-making to acquire potentially relevant information. This stage emphasizes dynamic behavior: the agent iteratively queries, observes, reflects, and adapts its strategy based on the evolving context. Upon receiving the input query q from the user, the agent initiates an initial planning process π_0 = Plan(q), which specifies its strategy for acquiring relevant information from the web. The agent then launches an information seeking trajectory consisting of up to T iterative steps. At each step t, it receives an observation o_t from the environment (e.g., search results or web content), and updates its internal plan by reflecting on the current observation o_t and memory (i.e., the previous trajectory) h_t:

π_{t+1} = Reflect(o_t, h_t, π_t)

Based on the updated plan π_{t+1}, the agent selects the next action using a tool from its available set, such as a search engine (Google, Bing, Yahoo, DuckDuckGo), a Selenium-based browser for webpage exploration, or auxiliary tools like a time utility or stop action:

a_{t+1} = Act(π_{t+1}) → o_{t+1}

Each action yields an observation o_{t+1}, typically the results of the tool, e.g., a snippet, webpage content, or search result. This planning-action-reflection loop allows the agent to dynamically adapt its strategy in response to retrieved evidence. The loop terminates either when the agent
deems the collected information sufficient or when the step limit T is reached. The output of this stage is a sequence of retrieved observations O = {o_1, o_2, ..., o_T}, representing the raw web content gathered during the search process. The detailed prompt of the retrieval stage is listed as follows:

You are a {agent_name}, {agent_bio}, {agent_instructions}
Currently, you are in the task planning phase, where you will be given a specific query to address. Please utilize LLM's advantages and pursue efficient strategies for task planning.
1. You have a short-term memory of approximately 4,000 characters.
2. You do not require assistance or response from users.
3. You can use the reference tools mentioned when planning.
4. Complex problems can be split into sub-problems and then information can be collected, aggregated and authenticated. Be sure to verify the truthfulness of the information.
5. Stay humble and call the tool for questions you are not sure about, but do not call the same tool with the same parameters repeatedly.
6. You can think and plan up to {max_iter_num} steps, so strive to plan tasks as efficiently as possible.
7. You have the capability for reflection and self-criticism; reflect on past decisions and improve your strategies.
8. If you have sufficient information to answer the given query, invoke the termination tool to terminate planning. Otherwise, continue planning new tasks while ensuring no duplication with prior tasks.
{tool_specification}
{current_date_and_time}
{memory}
Given Query: {query}
Based on the given question and existing tasks, plan a new Task (no repetitions), and you can only generate the Task in the following **JSON list** format:
[{ "task_name": "task description", "command": { "name": "command name", "args": { "arg name": "value" } } }]
Even if there is only one task or no task, it needs to be returned in the form of a list. Ensure that the Task can be parsed by Python's json.loads function.
If the completed Tasks are sufficient to answer the query, terminate planning. Otherwise, create another Task that does not duplicate previous ones.
A new Task:

A.3 Augmentation Stage

The Augmentation Stage focuses on filtering and organizing the retrieved content. Since web data is often noisy, redundant, or only partially relevant, this stage distills the raw observations into a compact, high-quality evidence set. It ensures that only the most pertinent information is retained for answer generation, improving factual grounding and mitigating hallucinations. Given the potentially large and noisy set of retrieved content O, the agent proceeds to filter and distill this information into a more concise, relevant set. Specifically, it applies a selection function to identify passages or documents that are most pertinent to answering the query:

C = SelectRelevant(q, O).

This process includes both document-level and span-level selection, ranking evidence based on relevance, coverage, and redundancy. The resulting set C = {c_1, c_2, ···, c_{n_q}} is sorted by importance, where n_q is determined dynamically by the agent, subject to an upper bound n. This stage is critical for reducing noise and focusing generation on high-quality information. The prompt for the augmentation stage is as follows:

You are a {agent_name}, {agent_bio}, {agent_instructions}
The current stage is the webpage ranking stage.
In the previous interactions, you have already found several webpages in response to the user's query. Now, you need to consolidate this information and select the {max_webpage_num} most relevant webpages, then rank them.
1. A webpage consists of a URL and a webpage summary or information extracted from the webpage that is relevant to the query.
2. If multiple pieces of information come from the same webpage (determined by identical URLs), merge them rather than listing duplicates.
3. The output webpage list must include relevant webpages necessary to answer the question. If the question has multiple sub-questions, the relevant webpages of each sub-question must be included.
4. The number of webpages in the output webpage list can be less than {max_webpage_num}. If it is more than {max_webpage_num}, select the {max_webpage_num} most important ones.
5. The output webpage list is sorted according to its importance to answering the question, that is, the webpage ranked first has the greatest contribution to answering the question.
{current_date_and_time}
{memory}
Given Query: {query}
You must generate the list of webpages strictly in the following **JSON list** format:
[{
  "url": "The webpage's URL",
  "content": "Information extracted from the webpage that is relevant to the given query"
}]
Always return a list; even if there is no relevant webpage, you need to return an empty list to ensure that the output can be parsed by Python's json.loads.
Relevant webpages (ranked by importance):

A.4 Generation Stage

The Generation Stage uses the refined evidence to produce a final response. Grounded in the selected content, the language model synthesizes an answer that directly addresses the original query, ideally with high factual accuracy and traceability. In the final stage, the agent generates an answer ŷ_q based on the curated evidence set C and the original query q.
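Taken together, the three stages can be summarized in a minimal control loop. This is an illustrative sketch, not the framework's actual API: the callables plan, reflect, act, select_relevant, and generate stand in for LLM and tool calls, and the "<stop>" sentinel stands in for the termination tool.

```python
from typing import Callable, List, Tuple

def seek_and_answer(
    query: str,
    plan: Callable[[str], str],                      # π_0 = Plan(q)
    reflect: Callable[[str, List[str], str], str],   # π_{t+1} = Reflect(o_t, h_t, π_t)
    act: Callable[[str], str],                       # a_{t+1} = Act(π_{t+1}) → o_{t+1}
    select_relevant: Callable[[str, List[str]], List[str]],  # C = SelectRelevant(q, O)
    generate: Callable[[List[str], str], str],       # ŷ_q = Generate(C, q)
    max_steps: int = 5,                              # step limit T
) -> Tuple[str, List[str]]:
    """Run the retrieval -> augmentation -> generation pipeline for one query."""
    pi = plan(query)                 # initial plan
    history: List[str] = []          # memory h_t: the trajectory so far
    observations: List[str] = []     # O = {o_1, ..., o_T}: raw retrieved content
    for _ in range(max_steps):
        obs = act(pi)                # tool call: search engine, browser, ...
        if obs == "<stop>":          # agent deems the information sufficient
            break
        observations.append(obs)
        history.append(f"plan={pi} obs={obs}")
        pi = reflect(obs, history, pi)   # update the plan from the new evidence
    evidence = select_relevant(query, observations)  # augmentation stage
    return generate(evidence, query), evidence       # generation stage
```

The max_steps parameter plays the role of the step limit T, and the length of the returned evidence set is capped by whatever bound select_relevant enforces (the upper bound n).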
The generation function is typically a forward pass through the LLM, grounded in the selected content:

ŷ_q = Generate(C, q).

This answer reflects the agent's ability to synthesize multiple sources of retrieved knowledge and produce a coherent, factually accurate response.

You are {agent_name}, {agent_bio}, {agent_instructions}
Currently, you are in the question-answering stage. Based on your own knowledge and the relevant webpages, answer the given query from the user.
1. If the user's query contains multiple answers, list all of them.
2. If the user's query is based on a wrong premise, point out the error.
{current_date_and_time}
Given query: {query}
Relevant webpages: {webpages}
Generate a brief English answer to solve the user's query:

B Dataset Construction

B.1 Query Generation

Our query construction pipeline consists of three key stages: draft generation, validation, and annotation. This section outlines the practical methodology we adopt to ensure that the final queries are factually grounded, sufficiently challenging for LLMs, and diverse across difficulty attributes, domains, and languages.

B.1.1 Draft Question Generation. We adopt a reverse construction strategy, where factual knowledge is first extracted from credible sources and then transformed into a question that requires that knowledge (and possibly more) to answer. The goal is to create queries that are multi-hop, complex, and grounded in long-tail or noisy information. The main process proceeds
as follows:

1. Anchor knowledge identification. Annotators begin by selecting an underrepresented domain or language. They search authoritative sources such as Wikipedia, reputable news sites, fact-checking platforms, expert forums, and academic databases to extract candidate facts – particularly long-tail or distracting information, which we refer to as anchor knowledge. Wikipedia's link structure, category graphs, and multilingual variants are especially helpful for uncovering obscure entities or facts. Some webpages present different content across language versions; for example, Wikipedia entries in certain "advantage languages" often contain more detailed information. Knowledge unique to low-resource or non-English versions is more likely to be long-tail and underrepresented elsewhere.

2. Question composition. Based on the anchor knowledge, annotators either directly construct a question or further increase question difficulty by incorporating multiple challenge attributes or introducing additional domain-specific facts – either common or difficult – through multi-hop composition. Compositionality is encouraged to ensure that the question cannot be solved with shallow retrieval.

3. Temporal stability check. Annotators have to verify whether the answer varies with time. For potentially unstable questions (e.g., "Who is the current president of the USA?"), we explicitly add time constraints (e.g., "In 2025") to ensure the answer remains fixed and verifiable.

4. Diversity control. We aim for each question to include at least two difficulty attributes, and for the dataset to span a wide range of domains, languages, and countries. Annotators actively switch focus when certain attributes or domains become overrepresented.

B.1.2 Filtering and Refinement. Each draft question undergoes a two-stage validation process to ensure that it meets both difficulty and determinacy standards.

• Difficulty check.
To ensure that a question cannot be answered by an LLM with internal knowledge or a single-turn search, we test it with GPT-4o and DeepSeek-R1 with web access enabled. If both models correctly answer the question in a single turn, the question is discarded.

• Determinacy check. We verify the correctness and uniqueness of the answer using multi-source cross-validation: (a) Searching and confirming the answer through multiple independent web sources. (b) Leveraging links and citations provided by web-enabled LLMs during difficulty filtering to trace and confirm factual accuracy. (c) Comparing content across different language versions of the same source (e.g., multilingual Wikipedia pages) to check for consistency and factual reliability.

B.1.3 Question Annotation. For each validated question, we record:
• The Chinese and English versions of question q.
• The Chinese and English versions of ground-truth answer y_q.
• The set of source webpages S_q that provide the factual basis for the answer.
• Annotated metadata including difficulty attributes, domains, and predominant languages.

For multi-hop questions, we annotate the answer and evidence source for each reasoning step. For false premise questions, we explicitly mark the flawed assumption in the answer using the phrase: "This question contains a false premise: ...", a format critical for subsequent automatic evaluation. In addition, special attention should be paid to the translation of proper nouns, which should be accurately translated based on online sources or annotated with their original names in the predominant language.

B.2 Multi-stage Validation

B.2.1 Validation from Two Verifiers

To
ensure the reliability and robustness of each constructed question, we implement a multi-stage human verification process involving two independent annotators (verifiers). Each verifier is required to evaluate the question from multiple perspectives. The validation process includes:

1. Content Verification: Check the correctness of the question and answer, and ensure that the listed sources support the answer. Special attention is paid to the accuracy of proper noun translations, which must be verified against online references or annotated using their original names in the predominant language.

2. Criteria Check: Evaluate whether the question meets the required conditions: Determinacy (Is the answer stable over time?) and Difficulty (Can GPT-4o and DeepSeek-R1, in a web-enabled, single-turn search mode, correctly answer the question?). Regardless of whether the question passes, the LLM responses are recorded in the validation notes for reference during final review.

3. Metadata Verification: Ensure the correctness of annotated attributes, domain, and advantage language.

4. Validation Outcome: Verifiers must fill out three fields: their name, a binary result (pass/fail), and explanatory notes. A question is marked as passed only if it fully satisfies all requirements (content, criteria, metadata). A fail may indicate outright rejection or suggest that further review or correction is needed.

B.2.2 Final Decision

An additional decider makes the final decision based on the verification results from the two verifiers. If both verifiers mark the question as valid, it is accepted directly. If either verifier marks it as invalid, a third annotator conducts a further review, discarding questions that do not meet essential criteria and correcting others where appropriate. This includes the following cases:

• Incorrect metadata: Fix and accept.
• Time-dependent answer: Add a time constraint and re-validate with LLMs.
• Inaccurate answer: Replace with a correct one (confirmed by LLMs and sources) and accept.
• Ill-formed or ambiguous question: Reject.
• Inconsistent difficulty judgments: Since LLM behavior can vary, we accept questions if at least three out of six LLM runs (by the collector and the two verifiers, each using two LLMs for the difficulty check) result in incorrect answers.

To ensure question quality, we invested substantial manual effort throughout the collection and validation process. In total, seven annotators were involved in drafting questions, each of which was further reviewed by two independent verifiers and a final decider. Due to the complexity, cross-validation, and manual nature of our pipeline, we ultimately curated 245 high-quality questions. Moving forward, we plan to expand the dataset and integrate more automation into the dataset construction and validation pipeline to reduce cost and scale the collection of challenging, agent-oriented queries.

B.3 Data Examples

Here, we provide two examples from our dataset. The first question exemplifies a challenging combination of distracting information, long-tail knowledge, and multi-hop reasoning. To answer it, one must first identify the president in question and then determine which university awarded him a master's degree, making it a classic multi-hop task. The query's mention of "a president who is a comedian" naturally points to Ukrainian President Volodymyr Zelensky, and many search results indeed return Zelensky-related pages, demonstrating the presence of strong distractors. However,
the actual target is Jimmy Morales, former president of Guatemala. His English Wikipedia entry (https://en.wikipedia.org/wiki/Jimmy_Morales) only notes that he holds a master's degree in strategic studies with a specialization in security and defense from Mariano Gálvez University. Only by consulting less-common Spanish-language sources (for example, an archived page at https://web.archive.org/web/20151028180235/http://www.jimmymorales.gt/main01/?page_id=2) can one discover that he also earned "a master's degree in media management from Universidad Panamericana de Guatemala." This obscure detail represents the long-tail information necessary to fully answer the question.

{
  "id": 307,
  "determinacy": true,
  "difficulty_GPT": true,
  "difficulty_DS": true,
  "multi_hop": true,
  "long_tail": true,
  "time_sensitive": false,
  "freshness": false,
  "distracting_info": true,
  "false_premise": false,
  "domain": ["politics", "education"],
  "advantage_language": ["Spanish"],
  "query_zh": "某个国家的第50任总统同时是一位喜剧演员,他在哪里获得硕士学位?",
  "answer_zh": "这位总统是危地马拉第50任总统吉米·莫拉莱斯(Jimmy Morales),他从马里亚诺·加尔韦斯大学(Mariano Gálvez University)获得高级战略研究硕士学位,从危地马拉泛美大学(Universidad Panamericana de Guatemala)获得媒体管理硕士学位",
  "query_en": "The 50th president of a certain country is also a comedian. Where did he get his master's degree?",
  "answer_en": "The president is Jimmy Morales, the 50th president of Guatemala. He holds a master's degree in strategic studies with a specialization in security and defense from Mariano Gálvez University and a master's degree in media management from Universidad Panamericana de Guatemala.",
  "sources": [
    "https://en.wikipedia.org/wiki/Jimmy_Morales",
    "https://web.archive.org/web/20151028180235/http://www.jimmymorales.gt/main01/?page_id=2"
  ],
  "verified": true
}

The second question illustrates a false-premise challenge.
It asks: "Which composer was awarded to the 2022 Grammy Awards Songwriter of the Year?" The underlying premise – that the Grammy Award for Songwriter of the Year existed in 2022 – is false. In reality, this category was only established in 2023. During the search, one will find references to the 2023 recipient, creating additional noise and demonstrating how real-world information seeking can be misled by a question's incorrect assumptions.

{
  "id": 209,
  "determinacy": true,
  "difficulty_GPT": false,
  "difficulty_DS": true,
  "multi_hop": false,
  "long_tail": true,
  "time_sensitive": true,
  "freshness": false,
  "distracting_info": true,
  "false_premise": true,
  "domain": ["entertainment"],
  "advantage_language": ["English"],
  "query_zh": "2022年格莱美奖年度作曲人颁给了哪位作曲家",
  "answer_zh": "格莱美奖的年度作曲人(非古典)奖(Songwriter of the Year, Non-Classical)首创于2023年,隔年被正式加入通类。所以该问题存在虚假前提,即22年没有这个奖",
  "query_en": "Which composer was awarded to the 2022 Grammy Awards Songwriter of the Year?",
  "answer_en": "The Grammy Award for Songwriter of the Year (Non-Classical) was first presented in 2023 and was not previously part of any specific field. Therefore, this question has a false premise; there is no such award in 2022.",
  "sources": [
    "https://zh.wikipedia.org/wiki/%E8%91%9B%E8%90%8A%E7%BE%8E%E7%8D%8E"
  ],
  "verified": true
}

B.4 Data Statistics

After rigorous drafting, refinement, and multi-stage verification by seven annotators, our dataset comprises 245 high-quality queries, each exhibiting at least two of the following six difficulty attributes: Multi-Hop, Long-Tail, Time-Sensitivity, Freshness, Distracting Information, False Premise. The exact number of queries involving each attribute is reported in Table 5. Because most questions combine two or more attributes, the total exceeds 245 when summed across attributes.

Table 5: The number of questions for each attribute. Time-Sen. denotes Time-Sensitive. Distr. Info. means Distracting Information. False Prem.
stands for False Premise.

| Attributes | Multi-Hop | Long-Tail | Time-Sen. | Freshness | Distr. Info. | False Prem. |
| Numbers | 188 | 187 | 162 | 48 | 76 | 27 |

We also ensured domain diversity by encouraging annotators to
cover a wide range of topics. In total, queries span 14 broad domains, including history, geography, film & TV, science & technology, literature & art, politics, education, music, news, sports, humanities, entertainment, games, and social sciences. For visualization, closely related subfields (e.g., astronomy, biology, medicine, and computer science) are grouped under "science & technology," while economics, sociology, and law fall under "social sciences." The question ratio of each domain is presented in Figure 6; since multi-hop questions may touch multiple domains, the percentages sum to more than 100%.

Figure 6: The ratio of questions for different domains.

Finally, we annotated each query with its predominant language, the language in which relevant evidence is most readily available. While English and Chinese are dominant, reflecting the abundance of resources in these languages, our dataset also includes 17 less common predominant languages: Japanese (11), Russian (9), Korean (8), Italian (6), Arabic (6), French (5), Spanish (4), German (3), Portuguese (3), Icelandic (3), Slovene (3), Malay (2), Bengali (2), Croatian (1), Czech (1), Dutch (1), and Hindi (1). This linguistic variety ensures that our benchmark challenges agents to perform cross-lingual retrieval and to leverage non-English sources when appropriate, further broadening the realism and difficulty of the tasks.

C Evaluation

To ensure reliable assessment of agent-generated answers, we adopt both human evaluation (human-eval) and automatic evaluation (auto-eval) using LLMs, each with carefully designed guidelines to handle diverse question types, including those with false premises.
C.1 Human Evaluation

Human annotators are asked to determine whether the agent's answer ŷ_q correctly answers the given question, with respect to the ground-truth answer y_q. The evaluation guidelines emphasize the following key aspects:

• Entity consistency: Ensure that events, people, and locations mentioned in the answer match the ground truth.
• False premise detection: If the question contains a false assumption, the answer must explicitly point it out.
• Answer completeness: If the question requires multiple pieces of information, all must be answered correctly.

Each instance is independently annotated by two human raters, who reach a high agreement level of 97%, indicating strong reliability. In cases of disagreement, a third expert annotator adjudicates the final decision.

C.2 Automatic Evaluation

We primarily employ two strong LLM evaluators, DeepSeek-V3 (deepseek-chat) [3] and Gemini-2.0-Flash (gemini-2.0-flash-preview-04-07) [6], to mitigate model-specific bias and self-preference, following the approach proposed in [58]. If the two models produce conflicting judgments, we resort to a third evaluator, either GPT-4o-mini (gpt-4o-mini-2024-07-18) [31] or a human judge, and take the majority vote as the final answer correctness label. Initially, we employed a single unified prompt to evaluate all questions, regardless of type. The general-purpose prompt template was as follows:

Given a question and its groundtrue answer, determine whether the candidate answer is consistent with the correct answer. Note the following points:
1. The candidate answer must contain content consistent with the groundtrue answer to be considered correct (pay attention to the consistency of time, place, and person), but more details
can be provided.
2. If there are multiple items in the groundtrue answer, the candidate answer must also contain all the items to be correct.
3. If the groundtrue answer mentions that the premise of the question is wrong, such as some assumptions of the question being wrong, the candidate answer must point out the wrong premise to be considered correct.
4. When the groundtrue answer does not mention a wrong premise, the question itself is not problematic. If the candidate answer says that the question has a wrong premise, such as a person/event that may not exist, it is a wrong answer.
5. The given question has a clear answer and the given groundtrue answer is absolutely right. If the candidate answer says it doesn't know or lacks information, it is a wrong answer.
Just answer Yes or No.
Given Question: {query}
Groundtrue Answer: {reference_answer}
Candidate Answer: {candidate_answer}
Is the candidate answer correct?

Table 6: The accuracy of the unified prompt and separate prompt strategies in automatic evaluation compared to human evaluation.

| | DeepSeek-V3 | | Gemini-2.5-Flash | |
| | unified prompt | separate prompt | unified prompt | separate prompt |
| Accuracy | 95.57% | 99.29% | 96.43% | 98.71% |

As shown in Table 6, this single unified prompting strategy achieves an overall accuracy of 96% when compared to human evaluation. However, we identified a consistent failure mode on false premise questions, where LLM evaluators often fail to recognize implicit factual errors or assumptions in the query (see Section C.3 for examples). To mitigate this issue, we explicitly annotate such ground-truth answers y_q with phrases like: "This question contains a false premise: ...", making the logical error explicit. Furthermore, we develop a separate-prompt strategy, with one prompt tailored for false-premise questions and another specifically for other questions.
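The prompt routing and the two-evaluator protocol described above can be sketched as follows. The evaluator callables stand in for LLM judge calls (each returning a Yes/No verdict as a boolean); the function names and the false_premise routing flag are illustrative, mirroring the described procedure rather than the exact implementation.

```python
from typing import Callable, List

# (query, reference_answer, candidate_answer) -> judged correct?
Judge = Callable[[str, str, str], bool]

def pick_prompt(record: dict, false_premise_prompt: str, general_prompt: str) -> str:
    """Route to the specialized prompt when the question has a false premise."""
    return false_premise_prompt if record.get("false_premise") else general_prompt

def evaluate(query: str, reference: str, candidate: str,
             primaries: List[Judge], tie_breaker: Judge) -> bool:
    """Two primary LLM judges; on disagreement, a third vote decides by majority."""
    votes = [judge(query, reference, candidate) for judge in primaries]
    if votes[0] == votes[1]:       # the two evaluators agree
        return votes[0]
    votes.append(tie_breaker(query, reference, candidate))
    return votes.count(True) >= 2  # majority of three
```

In practice the tie-breaker may be a third LLM or a human judge, as described above; either way the final label is the majority vote.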
The specialized prompt for false-premise questions is as follows:

Given a question and its ground-truth answer, determine whether the candidate answer correctly answers the given question. Pay attention to the following points:
1. This question has a false premise, which has been pointed out in the groundtruth answer. If the candidate answer does not point out or correct this false premise, it is incorrect.
2. If the false premise pointed out by the candidate answer is different from the groundtruth answer (time, place, event, person, meaning inconsistent), it is incorrect.
3. If the groundtruth answer still answers the question in addition to pointing out the false premise, the candidate answer should also answer the question. In this case, if the candidate answer does not answer the question or the meaning of the answer content is inconsistent with the groundtruth answer (pay attention to the consistency of time, place, person, and quantity), it is incorrect.
Just answer Yes or No.
Given question: {query}
Groundtruth answer: {reference_answer}
Candidate answer: {candidate_answer}
Does the candidate answer correctly answer the given question?

The prompt for other questions is as follows:

Given a question and its groundtrue answer, determine whether the candidate answer correctly answers the given question. Pay attention to the following points:
1. The candidate answer must contain content that is consistent with the groundtrue answer to be considered correct (pay attention
to the consistency of time, place, person, and quantity), but more details can be provided.
2. If there are multiple contents/events/persons in the groundtrue answer, the candidate answer must also contain all the contents/events/persons to be considered correct.
3. The given question does not have a wrong premise, and the relevant person/event must exist and be unique. If the candidate answer proposes a wrong premise or cannot determine whether the person/event exists, it is a wrong answer.
4. The given question has a clear answer and the given groundtrue answer must be correct. If the candidate answer does not answer the question correctly but proposes the need to further query relevant information, it is a wrong answer.
Just answer Yes or No.
Given question: {query}
Groundtruth answer: {reference_answer}
Candidate answer: {candidate_answer}
Does the candidate answer correctly answer the given question?

As shown in Table 6, this separate-prompt strategy significantly improves evaluation robustness, increasing LLM evaluation accuracy from 95.57% to 99.29% compared to human annotations.

C.3 Fail Cases for the Unified Prompting Strategy

Here, we present two failure cases related to false premises. The first involves a question that contains a false premise, but the agent's answer fails to explicitly identify it.

Given question: Is Yamoussoukro the capital of the East African country of Côte d'Ivoire?
Groundtruth answer: Yes, but the Republic of Côte d'Ivoire is a West African country.
Candidate answer: Yamoussoukro is the capital of Côte d'Ivoire, as confirmed by multiple sources, including Wikipedia and Britannica. However, Abidjan remains the de facto economic center and largest city.
Judgment from unified prompting: True
Judgment from human: False

The second involves a question that does not contain a false premise, but the agent incorrectly assumes one due to missing or unavailable evidence.
Both types of cases sometimes pose challenges for LLM-based evaluators to accurately assess.

Given question: What novels did the Prime Minister of Iceland in 1904 write during his lifetime?
Groundtruth answer: The Prime Minister of Iceland in 1904, Hannes Þórður Hafstein, wrote the novel Brennivínshatturinn.
Candidate answer: According to the search results, the name of the Prime Minister of Iceland in 1904 is not explicitly mentioned in the information provided (the link to Wikipedia does not return a specific name). In addition, all search results do not show that the Prime Minister of Iceland in 1904 had any literary experience. Therefore, the user's question may have a wrong premise – the name of the Prime Minister of Iceland in 1904 cannot be confirmed at present, and there is no evidence that he wrote a novel.
Judgment from unified prompting: True
Judgment from human: False

D More Experiment Results

D.1 Setup

All models are configured with a context window of 4096 tokens. For queries that exceed this limit, we truncate earlier turns in the conversation history. During web browsing, if a document exceeds the limit, we segment it into chunks of up to 4096 tokens, summarize each chunk independently, and then aggregate the summaries. Our Agentic RAG framework evolves from open-source projects
like BabyAGI [60], AutoGPT [41], and KwaiAgent [34]. Unless otherwise specified, our default search engine is DuckDuckGo. For the implementation of the search engine, we first use the API to retrieve results. If the API fails to return the correct results, we then use a web scraper to fetch the results. Our evaluated LLMs are all implemented via API calls. For models with official deployments, we use the official APIs; for those without (e.g., Llama-4-Maverick-17B-128E-Instruct), we rely on third-party hosted APIs. Under our typical experimental settings (maximum number of retrieval steps T = 5, maximum evidence-set length n = 5), each query roughly requires 36 API calls:

1. Retrieval stage: 5 calls for planning and action of the agent.
2. Augmentation stage: 1 call to extract relevant evidence.
3. Answer-generation stage: 5 calls for answering based on the five pieces of evidence in the evidence set and one call for answering based on all the observations.
4. Evaluation stage: 12 calls (two LLMs) to evaluate six candidate answers.

Each query also consumes about 24k input tokens and produces roughly 4k output tokens. Most of the input tokens come from the retrieval and augmentation stages, since those involve lengthy interaction histories and, at times, reading very long external documents.

Table 7: Performance of different LLMs on question attributes, measured by %. Time-Sen. denotes Time-Sensitive. Distr. Info. means Distracting Information. False Prem. stands for False Premise.

| Model | Multi-Hop | Long-Tail | Time-Sen. | Freshness | Distr. Info. | False Prem. |
| Llama-4-Maverick-17B-128E-Instruct | 9.04 | 9.04 | 12.35 | 10.53 | 9.59 | 20.00 |
| Qwen3-32B w/o think | 7.98 | 8.56 | 7.41 | 16.67 | 10.96 | 7.69 |
| Qwen3-32B w/ think | 10.64 | 8.56 | 10.49 | 10.66 | 8.22 | 19.23 |
| DeepSeek-V3 | 6.38 | 8.02 | 9.26 | 12.50 | 12.33 | 19.23 |
| DeepSeek-R1 | 14.89 | 12.30 | 16.05 | 16.67 | 13.70 | 30.77 |
| GPT-4o | 6.91 | 8.02 | 9.88 | 14.58 | 9.59 | 23.08 |
| o3-mini | 9.57 | 10.16 | 12.35 | 8.33 | 6.85 | 19.23 |
| Claude-3-7-Sonnet | 11.70 | 10.16 | 12.96 | 22.92 | 12.33 | 26.92 |
| Gemini-2.5-Flash | 12.77 | 13.37 | 13.58 | 20.83 | 9.59 | 26.92 |
| Gemini-2.5-Pro | 20.74 | 19.79 | 22.22 | 39.58 | 21.92 | 38.46 |

D.2 More Results for Question Attributes

This section presents the performance of the information-seeking agent across different LLMs, search engines, and retrieval steps on various question attributes. The results for different LLMs are summarized in Table 7, for different search engines in Table 8, and for different retrieval step configurations in Table 9. From these results, we can draw several key conclusions.

LLM reasoning capabilities play a significant role in improving the agent's performance across multiple question attributes. Stronger reasoning models, such as DeepSeek-R1 and Gemini-2.5-Pro, show a marked improvement in answering both simple and complex question attributes compared to base models, suggesting that enhanced reasoning abilities allow the agent to better utilize retrieved evidence for more accurate answer generation.

Search engine quality also impacts the agent's performance, with Google and Yahoo outperforming other engines like DuckDuckGo and Bing in most cases (as shown in Table 8). This is consistent with the previous analysis, where search engines with better information coverage and relevance lead to higher accuracy. Models paired with high-quality search engines, especially for multi-hop or long-tail questions, consistently show better results.

Increasing the number of retrieval steps (T) improves the agent's accuracy, with
a noticeable enhancement in both ACC and IA@k as the maximum number of retrieval steps increases. This scaling effect highlights the agent's ability to refine its search and gather more evidence with additional computation time. However, the performance improvements for long-tail and distracting-information questions are more limited, despite increasing the maximum number of retrieval steps. These types of questions are inherently more difficult to answer due to the sparse and noisy nature of the relevant information available on the web. As a result, even with more retrieval steps, the agent still struggles to effectively parse through irrelevant or misleading content.

Table 8: Performance of different search engines on question attributes, measured by %. Time-Sen. denotes Time-Sensitive. Distr. Info. means Distracting Information. False Prem. stands for False Premise.

| Model | Search Engine | Multi-Hop | Long-Tail | Time-Sen. | Freshness | Distr. Info. | False Prem. |
| Gemini-2.5-Flash | DuckDuckGo | 12.77 | 13.37 | 13.58 | 20.83 | 9.59 | 26.92 |
| | Bing | 31.91 | 31.55 | 34.57 | 37.50 | 24.66 | 34.62 |
| | Google | 32.98 | 33.16 | 38.27 | 39.58 | 24.66 | 42.31 |
| | Yahoo | 29.79 | 33.16 | 33.95 | 31.25 | 26.03 | 50.00 |
| DeepSeek-V3 | DuckDuckGo | 6.38 | 8.02 | 9.26 | 12.50 | 12.33 | 19.23 |
| | Bing | 15.43 | 18.18 | 18.52 | 16.67 | 16.67 | 38.46 |
| | Google | 25.53 | 29.41 | 24.69 | 27.08 | 31.51 | 26.92 |
| | Yahoo | 22.87 | 26.20 | 23.46 | 16.67 | 16.44 | 34.62 |

In summary, both LLM reasoning capabilities and search engine quality have a profound impact on the agent's ability to accurately answer different types of questions. Increasing the retrieval steps provides noticeable improvements, particularly for simpler questions. However, long-tail and distracting-information questions remain more challenging, indicating that better evidence filtering and improved retrieval strategies are crucial for handling these complex scenarios.

Table 9: Performance on question attributes under different maximum step T limits, measured by %. Time-Sen. denotes Time-Sensitive. Distr. Info. means Distracting Information.
False Prem. stands for False Premise.

| Model | Max Step | Multi-Hop | Long-Tail | Time-Sen. | Freshness | Distr. Info. | False Prem. |
| DeepSeek-V3 | 1 | 3.19 | 4.28 | 3.09 | 4.17 | 5.48 | 11.54 |
| | 3 | 7.45 | 6.95 | 9.88 | 16.67 | 8.22 | 15.38 |
| | 5 | 6.38 | 8.02 | 9.26 | 12.50 | 12.33 | 19.23 |
| | 10 | 11.17 | 8.56 | 13.58 | 22.92 | 14.67 | 26.92 |
| | 20 | 12.23 | 16.58 | 15.43 | 14.58 | 10.96 | 26.92 |
| Gemini-2.5-Flash | 1 | 4.79 | 8.02 | 7.41 | 6.25 | 5.48 | 30.77 |
| | 3 | 9.04 | 9.09 | 11.73 | 18.75 | 9.59 | 30.77 |
| | 5 | 12.77 | 13.37 | 13.58 | 20.83 | 9.59 | 26.92 |
| | 10 | 15.96 | 18.18 | 19.75 | 33.33 | 19.18 | 38.46 |
| | 20 | 20.74 | 20.32 | 23.46 | 37.50 | 20.55 | 38.46 |

D.3 More Results for Test-time Scaling in Agentic Information Seeking

We conducted experiments to assess the performance of two models, DeepSeek-V3 and Gemini-2.5-Flash, as the maximum step T in information seeking was increased from 1 to 20. The evaluation metrics include ACC, IA@k, EEU, and IC, and we present the results in Table 10. From these results, we can draw several key conclusions. Both models benefit from an increased number of steps, demonstrating that more retrieval actions lead to better accuracy and information relevance. Gemini-2.5-Flash performs better than DeepSeek-V3 at all retrieval rounds. For example, at 1 round, Gemini-2.5-Flash has an ACC of 7.35%, while DeepSeek-V3 has 4.49%. As the number of retrieval rounds increases, Gemini-2.5-Flash also sees significant improvement, with ACC reaching 22.86% at 20 rounds, outperforming DeepSeek-V3. EEU for both models increases with more retrieval rounds, reflecting a higher utility of the information retrieved. However,
https://arxiv.org/abs/2505.15872v2
Gemini-2.5-Flash consistently shows a higher EEU compared to DeepSeek-V3, particularly at 3, 5, and 10 rounds. IC (Information Compactness) remains relatively stable for both models across different retrieval rounds, with Gemini-2.5-Flash maintaining slightly better performance than DeepSeek-V3.

D.4 More Results and Potential Solutions for Retrieval Interference

This section presents additional experimental results on the phenomenon of retrieval interference, where the retrieval of external information negatively impacts the model's ability to answer questions correctly. Table 11 displays the results for open-source LLMs, Table 12 for closed-source LLMs, Table 13 for different search engines, and Table 14 for varying maximum retrieval steps.

Table 10: Performance with varying maximum action rounds in the retrieval stage. ACC and IA@k are measured by %.

| Model | Max Turn | ACC | IA@1 | IA@2 | IA@3 | IA@4 | IA@5 | EEU | IC |
| DeepSeek-V3 | 1 | 4.49 | 4.08 | 2.86 | 4.08 | 4.08 | 4.08 | 0.909 | 4.052 |
| DeepSeek-V3 | 3 | 8.57 | 5.31 | 8.57 | 8.16 | 8.16 | 8.16 | 1.000 | 3.965 |
| DeepSeek-V3 | 5 | 8.98 | 5.71 | 7.35 | 9.39 | 9.39 | 10.20 | 1.136 | 3.926 |
| DeepSeek-V3 | 10 | 12.65 | 6.94 | 10.61 | 10.61 | 11.43 | 11.84 | 0.935 | 3.826 |
| DeepSeek-V3 | 20 | 15.92 | 11.43 | 12.65 | 13.88 | 15.51 | 15.10 | 0.974 | 3.759 |
| Gemini-2.5-Flash | 1 | 7.35 | 8.16 | 7.76 | 7.76 | 8.57 | 8.16 | 1.167 | 3.908 |
| Gemini-2.5-Flash | 3 | 11.84 | 13.88 | 13.47 | 14.69 | 14.29 | 13.88 | 1.241 | 3.771 |
| Gemini-2.5-Flash | 5 | 14.29 | 12.65 | 15.10 | 16.73 | 16.73 | 15.92 | 1.171 | 3.750 |
| Gemini-2.5-Flash | 10 | 19.59 | 15.92 | 17.96 | 20.41 | 20.00 | 20.41 | 1.042 | 3.602 |
| Gemini-2.5-Flash | 20 | 22.86 | 20.82 | 22.04 | 22.45 | 23.67 | 22.86 | 1.036 | 3.573 |

Table 11: Interference rates of open-source LLMs, measured by %. Llama-4-Maverick denotes Llama-4-Maverick-17B-128E-Instruct.

| Model | Interference Rate |
| Llama-4-Maverick | 87.50 |
| Qwen3-32B w/o think | 100.00 |
| Qwen3-32B w/ think | 88.89 |
| DeepSeek-V3 | 84.21 |
| DeepSeek-R1 | 53.13 |

Table 12: Interference rates of closed-source LLMs, measured by %.

| Model | Interference Rate |
| GPT-4o | 61.54 |
| o3-mini | 61.11 |
| Claude-3-7-Sonnet | 58.33 |
| Gemini-2.5-Flash | 68.97 |
| Gemini-2.5-Pro | 60.34 |

Our experiments reveal that certain models are able to correctly answer some questions based solely on their internal knowledge. However, when these same questions are queried with online retrieval, the answers become incorrect. We define this as retrieval interference: the additional information gathered from the web undermines the agent's initially correct response. To quantify the extent of this interference, we introduce the interference rate, which measures the proportion of questions that an LLM can answer correctly without retrieval but fails to answer correctly when web-based information retrieval is applied. Specifically, the interference rate is the fraction of questions that an LLM answers correctly without retrieval but incorrectly after retrieval, normalized by the total number of questions it initially answered correctly without retrieval. Our findings across various open-source and closed-source LLMs, search engines, and maximum retrieval turns show that retrieval interference is a widespread issue, with interference rates ranging from 40% to 80%. This high interference rate significantly reduces the model's probability of answering questions correctly, as irrelevant or conflicting web content can override the model's confident internal knowledge.

Table 13: Interference rates under different search engines, measured by %.

| Model | Search Engine | Interference Rate |
| Gemini-2.5-Flash | DuckDuckGo | 68.97 |
| Gemini-2.5-Flash | Bing | 50.00 |
| Gemini-2.5-Flash | Google | 46.87 |
| Gemini-2.5-Flash | Yahoo | 42.31 |
| DeepSeek-V3 | DuckDuckGo | 84.21 |
| DeepSeek-V3 | Bing | 84.21 |
| DeepSeek-V3 | Google | 53.33 |
| DeepSeek-V3 | Yahoo | 42.11 |

To mitigate this issue, several strategies can be considered:

•Improving Model Confidence in Internal Knowledge: One possible approach is to develop mechanisms that increase
the model's confidence in its own accurate knowledge, reducing its tendency to override correct internal answers when external information contradicts it. This could involve enhancing the model's self-reflection capabilities or providing additional confidence scores for internally generated answers before querying external sources.

•Better Evidence Filtering: A more effective evidence selection mechanism can help minimize irrelevant or conflicting information. For example, the model could prioritize high-confidence sources or introduce a ranking mechanism that filters out low-quality, noisy, or contradictory web pages. Contextual relevance checks could also be incorporated to ensure that only information that aligns well with the query's context is used.

•Knowledge Consistency Checks: Implementing consistency checks between the retrieved evidence and the model's internal knowledge could further improve accuracy. If a retrieved document contradicts previously confirmed internal knowledge, the agent could either ignore the external information or flag it for additional verification before using it in the final answer generation.

•Hybrid Retrieval and Reasoning Approaches: A hybrid approach that combines retrieval-augmented reasoning with internal knowledge checks may help. For instance, the agent could first check its internal knowledge and retrieve only supplementary information when necessary, minimizing reliance on external sources. This would reduce the risk of introducing irrelevant information while still benefiting from dynamic search results when needed.

•Search Engine Optimization: Since certain search engines, such as Google and Yahoo, tend to return more relevant results, using a more efficient search engine for information retrieval may help reduce the chance of encountering conflicting or misleading data.
Moreover, optimizing search queries to be more specific or context-aware could lead to more relevant results, thereby reducing retrieval interference.

Table 14: Interference rates under varying maximum step T, measured by %.

| Model | Max Step | Interference Rate |
| DeepSeek-V3 | 1 | 85.71 |
| DeepSeek-V3 | 3 | 80.00 |
| DeepSeek-V3 | 5 | 84.21 |
| DeepSeek-V3 | 10 | 53.33 |
| DeepSeek-V3 | 20 | 52.94 |
| Gemini-2.5-Flash | 1 | 73.33 |
| Gemini-2.5-Flash | 3 | 62.07 |
| Gemini-2.5-Flash | 5 | 68.97 |
| Gemini-2.5-Flash | 10 | 58.33 |
| Gemini-2.5-Flash | 20 | 65.52 |

The phenomenon of retrieval interference highlights a significant challenge in agentic information seeking tasks, where additional information retrieved from the web can degrade the model's performance. Our results suggest that improving the model's ability to confidently rely on internal knowledge, optimizing retrieval strategies, and employing better filtering mechanisms are crucial steps in mitigating this interference. Further research into these strategies could enhance the reliability and robustness of agentic RAG systems in real-world applications.

D.5 Details and More Results of Language Impact

We also investigate how different languages (e.g., Chinese, English, and each query's predominant language) affect an agent's information-seeking performance, with DuckDuckGo as the fixed search engine. For the Chinese and English settings, we crafted both prompts and answers in the respective languages, and observed that the language of the search queries generated by the LLM closely matches the language of the prompt and question. As shown in Table 15, English queries substantially outperform Chinese ones. This is likely due to the broader coverage of English-language content and search tools. LLMs see far more English text during pre-training, so they are stronger at understanding and generating English search queries. Search engines also index and rank English pages more comprehensively, yielding higher-quality results.

Table 15: Performance with different languages. ACC and IA@k are measured by %. Pred. Lang. denotes Predominant Language.

| Model | Language | ACC | IA@1 | IA@2 | IA@3 | IA@4 | IA@5 | EEU | IC |
| DeepSeek-V3 | Chinese | 8.98 | 5.71 | 7.35 | 9.39 | 9.39 | 10.20 | 1.136 | 3.926 |
| DeepSeek-V3 | English | 13.47 | 12.24 | 11.84 | 11.84 | 11.84 | 12.65 | 0.939 | 4.032 |
| DeepSeek-V3 | Pred. Lang. | 17.14 | 11.02 | 15.92 | 17.96 | 17.55 | 17.96 | 1.048 | 3.919 |
| GPT-4o | Chinese | 10.20 | 9.39 | 8.16 | 9.39 | 8.57 | 8.98 | 0.920 | 3.878 |
| GPT-4o | English | 11.02 | 8.16 | 9.80 | 10.61 | 11.02 | 11.84 | 1.074 | 3.889 |
| GPT-4o | Pred. Lang. | 14.69 | 12.65 | 12.24 | 11.43 | 12.24 | 12.24 | 0.861 | 3.870 |
| Gemini-2.5-Flash | Chinese | 14.29 | 12.65 | 15.10 | 16.73 | 16.73 | 15.92 | 1.171 | 3.750 |
| Gemini-2.5-Flash | English | 17.55 | 14.29 | 15.92 | 17.55 | 18.78 | 18.78 | 1.070 | 3.761 |
| Gemini-2.5-Flash | Pred. Lang. | 18.78 | 15.92 | 15.92 | 17.14 | 17.55 | 17.96 | 0.957 | 3.802 |

Table 16: Accuracy on question attributes with different languages, measured by %. Pred. Lang. denotes Predominant Language.

| Model | Language | Multi-Hop | Long-Tail | Time-Sen. | Freshness | Distr. Info. | False Prem. |
| DeepSeek-V3 | Chinese | 6.38 | 8.02 | 9.26 | 12.50 | 12.33 | 19.23 |
| DeepSeek-V3 | English | 10.11 | 12.83 | 12.96 | 16.67 | 10.96 | 30.77 |
| DeepSeek-V3 | Pred. Lang. | 14.89 | 14.97 | 14.81 | 20.83 | 15.79 | 36.00 |
| GPT-4o | Chinese | 6.91 | 8.02 | 9.88 | 14.58 | 9.59 | 23.08 |
| GPT-4o | English | 8.51 | 9.09 | 8.64 | 22.92 | 9.59 | 30.77 |
| GPT-4o | Pred. Lang. | 13.30 | 14.44 | 12.35 | 18.75 | 11.84 | 32.00 |
| Gemini-2.5-Flash | Chinese | 12.77 | 13.37 | 13.58 | 20.83 | 9.59 | 26.92 |
| Gemini-2.5-Flash | English | 16.49 | 15.51 | 14.20 | 25.00 | 19.74 | 32.00 |
| Gemini-2.5-Flash | Pred. Lang. | 18.62 | 17.11 | 17.28 | 20.83 | 15.79 | 28.00 |

For the predominant language setting, although we recorded each instance's predominant language in our dataset, it proved difficult to translate prompts and questions into every target language. Instead, we designed a language-aware prompt instructing the agent to search in the query's dominant language (prompt details provided below). Results in Table 15 show that this language-aware prompting yields the best overall performance, indicating that specifying the dominant language indeed helps the agent retrieve more relevant information online.
Furthermore, prompting for the dominant language yields larger improvements on models with weaker innate multilingual capabilities, such as DeepSeek-V3 and GPT-4o, which cannot autonomously switch their search language and thus require explicit prompt cues. By contrast, stronger multilingual models like Gemini-2.5-Flash generally auto-adapt their search language and depend less on prompt instructions, resulting in smaller gains from our language-aware prompting strategy. We also present the retrieval interference and answer accuracy for various question attributes under different language settings in Table 17 and Table 16.

Table 17: Retrieval interference under different languages, measured by %. Pred. Lang. denotes Predominant Language.

| Model | Language | Interference Rate |
| DeepSeek-V3 | Chinese | 84.21 |
| DeepSeek-V3 | English | 64.71 |
| DeepSeek-V3 | Pred. Lang. | 56.25 |
| GPT-4o | Chinese | 61.54 |
| GPT-4o | English | 65.00 |
| GPT-4o | Pred. Lang. | 52.94 |
| Gemini-2.5-Flash | Chinese | 68.97 |
| Gemini-2.5-Flash | English | 53.85 |
| Gemini-2.5-Flash | Pred. Lang. | 48.39 |

The language-aware prompt is as follows:

You are a {agent_name}, {agent_bio}, {agent_instructions}
Currently, you are in the task planning phase, where you will be given a specific query to address. Please utilize LLM's advantages and pursue efficient strategies for task planning.
1. You have a short-term memory of approximately 4,000 characters.
2. You do not require assistance or response from users.
3. You can use the reference tools mentioned when planning.
4. Complex problems can be split into sub-problems and then information can be collected, aggregated and authenticated. Be sure to verify the truthfulness of the information.
5. Stay humble and call the tool for questions you are not
sure about, but do not call the same tool with the same parameters repeatedly.
6. You can flexibly switch the language of the search term to get more information. You can choose to search in Chinese, English, or the language related to the entity involved in the question (for example, if the question involves a French person, you can search in French).
7. You can think and plan up to {max_iter_num} steps, so strive to plan tasks as efficiently as possible.
8. You have the capability for reflection and self-criticism; reflect on past decisions and improve your strategies.
9. If you have sufficient information to answer the given query, invoke the termination tool to terminate planning. Otherwise, continue planning new tasks while ensuring no duplication with prior tasks.
{tool_specification}
{current_date_and_time}
{memory}
Given Query: {query}
Based on the given question and existing tasks, plan a new Task (no repetitions), and you can only generate the Task in the following **JSON list** format:
[{
  "task_name": "task description",
  "command": {
    "name": "command name",
    "args": {
      "arg name": "value"
    }
  }
}]
Even if there is only one task or no task, it needs to be returned in the form of a list. Ensure that the Task can be parsed by Python's json.loads function.
If the completed Tasks are sufficient to answer the query, terminate planning. Otherwise, create another Task that does not duplicate previous ones.
A new Task:

D.6 Impact of Answer LLMs

In previous experiments, when evaluating a specific LLM, we used it across all stages of the pipeline, including retrieval, augmentation, generation, and ϕ(·,·) for computing ACC and IA@k. In this section, we explore different answer LLMs ϕ(·,·), where the information seeking and generation stages use different LLMs. The results are presented in Table 18.
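The two settings compared here, using the same LLM end to end versus decoupling the answer LLM, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `seek` and `answer` callables are hypothetical stand-ins for the retrieval/augmentation and answer-generation stages.

```python
# Sketch of the two answer-LLM configurations for phi(.,.).
# `seek` and `answer` are hypothetical stand-ins for the pipeline's
# retrieval/augmentation and answer-generation stages.

def run_original(question, llm):
    """Original: the same LLM seeks information and generates the answer."""
    evidence = llm["seek"](question)
    return llm["answer"](question, evidence)

def run_fixed(question, seeker_llm, answer_llm):
    """Fixed: any LLM gathers evidence, but a single fixed LLM
    generates the final answer used to compute IA@k."""
    evidence = seeker_llm["seek"](question)
    return answer_llm["answer"](question, evidence)
```

The point of the decoupled variant is that the evidence-gathering model can vary while the scoring model stays constant, isolating retrieval quality from answer-generation quality.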
Here, the term "Original" denotes the scenario where the same LLM generates the answer for computing IA@k, while "Fixed" refers to using a fixed LLM, DeepSeek-V3, for answer generation when computing IA@k, regardless of the model used in the retrieval and augmentation stages.

Table 18: Performance of different answer LLMs for ϕ(·,·). "Original" denotes that the answer is generated by the same LLM used in the retrieval and augmentation stage, while "Fixed" means employing a fixed LLM, DeepSeek-V3, to generate an answer. ACC and IA@k are measured by %.

| Model | Answer LLM | IA@1 | IA@2 | IA@3 | IA@4 | IA@5 | EEU | IC |
| Qwen3-32B w/ think | Original | 11.43 | 10.48 | 12.38 | 12.38 | 14.29 | 0.833 | 4.116 |
| DeepSeek-R1 | Original | 20.00 | 24.76 | 25.71 | 24.76 | 24.76 | 1.286 | 3.895 |
| GPT-4o | Original | 14.29 | 13.33 | 14.29 | 12.38 | 12.38 | 0.882 | 4.071 |
| o3-mini | Original | 18.10 | 17.14 | 17.14 | 18.10 | 18.10 | 1.056 | 3.875 |
| Claude-3-7-Sonnet | Original | 18.10 | 18.10 | 20.95 | 20.00 | 20.00 | 0.957 | 4.044 |
| Gemini-2.5-Flash | Original | 20.00 | 21.90 | 22.86 | 24.76 | 21.90 | 1.040 | 3.842 |
| Gemini-2.5-Pro | Original | 29.52 | 28.57 | 29.52 | 28.57 | 29.52 | 0.886 | 3.977 |
| Qwen3-32B w/ think | Fixed | 13.33 | 14.29 | 13.33 | 13.33 | 14.29 | 0.833 | 4.103 |
| DeepSeek-R1 | Fixed | 20.00 | 21.90 | 23.81 | 23.81 | 23.81 | 1.190 | 3.938 |
| GPT-4o | Fixed | 11.43 | 14.29 | 15.24 | 14.29 | 14.29 | 0.941 | 4.068 |
| o3-mini | Fixed | 18.10 | 17.14 | 17.14 | 17.14 | 18.10 | 1.056 | 3.910 |
| Claude-3-7-Sonnet | Fixed | 17.14 | 18.10 | 20.95 | 21.90 | 21.90 | 1.000 | 4.063 |
| Gemini-2.5-Flash | Fixed | 20.95 | 20.00 | 24.76 | 24.76 | 24.76 | 1.040 | 3.773 |
| Gemini-2.5-Pro | Fixed | 25.71 | 28.57 | 28.57 | 26.67 | 27.62 | 0.857 | 3.997 |

From the results in Table 18, we
observe that the performance difference between the Original and Fixed configurations is relatively small, though the Fixed configuration generally performs slightly worse. This may be because the information-seeking LLM is aware of its own knowledge gaps and selects documents as evidence accordingly. Doing so compensates for the original LLM's missing knowledge, resulting in higher answer accuracy and lower information redundancy. When switching to a different answer LLM in the Fixed setup, however, this advantage is lost: the answer LLM might not possess the same domain-specific knowledge as the information-seeking LLM, leading to inaccuracies in the final answer generation. This demonstrates that alignment between the LLMs used for information seeking and answer generation plays a crucial role in achieving higher performance in agentic information seeking tasks.

E Broader Impacts

The work presented in this paper, specifically the development of the InfoDeepSeek benchmark for agentic information seeking tasks, has several potential positive societal impacts. By improving the ability of large language models (LLMs) to accurately retrieve and synthesize information, this research can enhance applications such as virtual assistants, educational tools, and decision-support systems, making them more reliable and efficient. These improvements could contribute to advancing fields such as healthcare, law, and research by providing accurate, up-to-date, and contextually relevant information. Our experiments reveal that current LLMs still exhibit significant shortcomings in agentic information seeking, exposing two primary areas of weakness: (1) the intrinsic reasoning and domain-knowledge capabilities of the LLM itself, and (2) the quality and relevance of the search engine results it relies on.
These findings carry both positive and cautionary implications:

•Enhanced Reasoning Abilities of LLMs: Stronger reasoning models (e.g., DeepSeek-R1, Gemini-2.5-Pro) consistently outperform baseline LLMs, pointing toward investment in specialized reasoning architectures.

•Search Optimization: Tailoring search queries and engines (as seen with the Google/Yahoo gains) can substantially improve retrieval relevance. Future work might develop model-driven query rewriting or search-engine-specific adapters.

•Long-Tail & Noise Handling: Equipping agents with dedicated modules for identifying and filtering long-tail entities and distracting or conflicting information can reduce retrieval failures and improve focus.

•Compute Scaling: Allowing agents more compute at test time (i.e., increased retrieval steps) leads to clear scaling gains, suggesting that adaptive budgets or dynamic step policies could yield large benefits.

•Mitigating Retrieval Interference: Techniques such as internal-knowledge confidence checks or selective evidence fusion can prevent external noise from overriding correct model priors.

•Language-Aware Retrieval: Explicitly prompting agents to search in predominant languages unlocks richer, domain-specific resources, particularly for under-represented knowledge.

However, there are also negative societal impacts that must be considered. The advancements in LLMs and agentic RAG systems could potentially lead to misinformation amplification if the models are not properly evaluated or if they retrieve and generate content based on biased or misleading sources. Inaccurate or incomplete answers generated by models could exacerbate existing societal challenges, such as the spread of fake news or the reinforcement of harmful stereotypes. Additionally, as the technology becomes more powerful, there is the risk of misuse in areas like
arXiv:2505.15874v1 [cs.IR] 21 May 2025

Text-to-Pipeline: Bridging Natural Language and Data Preparation Pipelines

Yuhang Ge, Yachuan Liu, Yuren Mao, Yunjun Gao
Zhejiang University
yuhangge@zju.edu.cn, liuyachuan@zju.edu.cn, yuren.mao@zju.edu.cn, gaoyj@zju.edu.cn

Abstract

Data preparation (DP) transforms raw data into a form suitable for downstream applications, typically by composing operations into executable pipelines. Building such pipelines is time-consuming and requires sophisticated programming skills. If we can build the pipelines with natural language (NL), the technical barrier of DP will be significantly reduced. However, constructing DP pipelines from NL instructions remains underexplored. To fill the gap, we introduce Text-to-Pipeline, a new task that translates NL data preparation instructions into DP pipelines. Furthermore, we develop a benchmark named PARROT to support systematic evaluation. To simulate realistic DP scenarios, we mined transformation patterns from production pipelines and instantiated them on 23,009 real-world tables collected from six public sources. The resulting benchmark comprises ~18,000 pipelines covering 16 core DP operators. We evaluated cutting-edge large language models on PARROT and observed that they only solved 72.86% of the cases, revealing notable limitations in instruction understanding and multi-step reasoning. To address this, we propose Pipeline-Agent, a stronger baseline that iteratively predicts and executes operations with intermediate table feedback, achieving the best performance of 76.17%. Despite this improvement, there remains substantial room for progress on Text-to-Pipeline. Our data, codes, and evaluation tools are available at https://anonymous.4open.science/r/Text-to-Pipeline.

1 Introduction

Data preparation (DP) refers to the process of transforming raw data into a form suitable for downstream applications such as business intelligence (BI) and machine learning (ML) [5, 11, 47, 48].
As a core component of modern data management, tabular DP plays a central role in supporting workflows in data warehouses and BI systems [5]. Preparing tabular data typically involves multiple operations such as filtering [31], joining [8], grouping [43], and reshaping [13]. These operations are often composed into pipelines where each step incrementally transforms the table and feeds the result into the next [44, 21, 43]. However, building correct and efficient pipelines is time-consuming and requires sophisticated programming skills; it is challenging even for experienced data engineers, who must compose pipelines in a large compositional space. Moreover, this poses a significant technical barrier for non-experts and prevents them from participating in DP. To lower this barrier, a natural language (NL) interface that allows users to complete DP tasks by writing NL instructions is a natural choice. This paradigm has been widely explored in tasks such as Text-to-SQL [46, 22], spreadsheet formula generation [30, 51], and code generation [41, 49]. However, these tasks primarily focus on database queries, cell-wise functions, or logic expressions, and are not designed to support DP pipeline generation. In parallel, several methods have explored automated pipeline construction [44, 21, 43], which often relies on structured supervision, such as input-output table pairs [13, 4, 44] or schema graphs [21], making them inapplicable in scenarios where only NL instructions are available.

Preprint.
Figure 1: Task overview of Text-to-Pipeline.

To bridge the gap, we introduce the Text-to-Pipeline task: translating NL instructions into executable DP pipelines over tabular data. We formalize it as symbolic program generation in a domain-specific language (DSL), which can be compiled into executable backend code such as Pandas or SQL. The DSL offers a structured representation, backend flexibility, and stronger support for verification and evaluation than directly generating code (see Sec. 3). As shown in Fig. 1, given the instruction "Summarize sales by region, and select the top-5 products." and input tables, the system generates a pipeline in the DSL, e.g., GroupBy, Topk, Select, which is compiled into backend code and executed to produce the final output table. To support this task, we introduce PARROT, a large-scale benchmark for Text-to-Pipeline, which contains ~18,000 multi-step DP tasks spanning 16 core operators, built on 23,009 real tables from six public data sources. Constructing such a benchmark poses several challenges. First, pipelines must be valid and faithfully aligned with NL instructions. Second, the NL should capture diverse and realistic intent. Third, operator sequences must be grounded in real table semantics. To address these challenges, we design a five-stage synthesis framework. Specifically, we begin by curating tables from diverse domains and extracting transformation patterns from production pipelines, ensuring that pipelines align with practical usage. Next, we sample DSL programs via Markov processes guided by empirically derived transition matrices, capturing realistic compositional structures.
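As a concrete (hypothetical) illustration of how such a DSL program might compile to a Pandas backend, the Fig. 1 pipeline could be executed roughly as follows. The operator names and parameter format below are assumptions for this sketch, not PARROT's actual DSL syntax:

```python
import pandas as pd

# Illustrative DSL program for "Summarize sales by region, and select the
# top-5 products." Operator names and parameters are assumed, not PARROT's.
pipeline = [
    ("groupby", {"by": ["region", "product"], "agg": {"sales": "sum"}}),
    ("topk",    {"k": 5, "by": "sales"}),
    ("select",  {"columns": ["region", "product", "sales"]}),
]

def compile_and_exec(pipeline, df: pd.DataFrame) -> pd.DataFrame:
    """Map each symbolic operator to a Pandas call and run them in order."""
    for op, args in pipeline:
        if op == "groupby":
            df = df.groupby(args["by"], as_index=False).agg(args["agg"])
        elif op == "topk":
            df = df.nlargest(args["k"], args["by"])
        elif op == "select":
            df = df[args["columns"]]
        else:
            raise ValueError(f"unknown operator: {op}")
    return df
```

Because each operator is a small, named transformation over an intermediate table, the same symbolic program could equally be compiled to SQL or Spark, which is the backend-flexibility argument made in Sec. 3.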
We then generate intent-aligned NL instructions using LLMs and refine them to reflect natural, human-like expression. Each DSL program is compiled into executable backend code, enabling automated evaluation. Finally, we perform multi-phase validation involving six PhD-level human experts to ensure consistency between instructions and programs and to verify correctness through execution results. To evaluate the Text-to-Pipeline task and the PARROT benchmark, we compare three settings: zero-shot LLM prompting, structured generation, and agent-based planning. Zero-shot prompting with GPT-4o yields high program validity (81.12%) but limited execution accuracy (71%). Fine-tuned models like Qwen2.5-Coder-7B improve execution accuracy to 74.15%, showing the benefit of high-quality supervised data. Among structured generation formats, symbolic DSLs achieve the best execution accuracy (57.48%), outperforming Pandas (33.8%) and SQL (3.05%), highlighting the advantage of modular, schema-aware DSL formalism. To explore the planning capabilities of LLM agents, we propose Pipeline-Agent, which follows an iterative predict-execute-reflect cycle and uses intermediate table states for contextual reasoning, achieving the best performance of 76.17%. Nevertheless, there remains substantial room for progress on PARROT. Current LLM-based methods still struggle with instruction grounding, compositional reasoning, and dynamic schema evolution, especially in longer or more complex pipelines. Beyond serving as a benchmark for Text-to-Pipeline, PARROT provides a testbed for exploring LLM-based tool use in data engineering. Combined with the Model Context Protocol (MCP) [16], which standardizes how LLMs interact with tools and data sources, it lays the foundation for scalable and autonomous data management over large-scale data lakes.

2 Related Work

NL-Driven Program Generation. Prior research has
extensively studied how to translate NL into executable programs. Specifically, Text-to-SQL methods [46, 22, 25, 23] map NL queries to SQL statements, emphasizing semantic understanding and schema linking. These approaches are typically benchmarked on datasets such as WikiSQL [52], Spider [46], Spider 2.0 [22], and BIRD [25]. Text-to-Formula techniques [51, 24, 30] like NL2Formula [51], SheetCopilot [24], and SpreadsheetBench [30] focus on converting NL into spreadsheet formulas or targeted cell-level edits, suitable for localized spreadsheet manipulations. Additionally, general Text-to-Code frameworks [10, 6, 50, 15] translate NL instructions into scripts in languages like Python, C++, and Java, addressing diverse standalone programming tasks tested on HumanEval [6] and APPS [15]. Although these paradigms leverage NL, they typically target the level of SQL queries, cell-wise formulas for spreadsheets, or logic functions for general programming. In contrast, Text-to-Pipeline focuses on generating multi-step, executable pipelines for data preparation, where the objective is to transform input tables into expected outputs through schema-aware DSL programs.

Automatic Data Pipeline Generation. Automating data pipeline construction is often framed as program synthesis. Early methods rely on manual coding or visual tools [2, 1], while example-driven approaches [13, 12, 18, 36, 19, 14, 3, 53] require input-output (IO) table pairs and struggle with multi-step logic. Subsequent work [4, 38, 44, 37] extends to multi-step synthesis but still depends on output supervision. Auto-Tables [27] and Auto-Prep [21] remove this constraint using self-supervised learning and mining existing operator traces, respectively. However, none of these methods support open-ended pipeline generation directly from NL instructions. In parallel, several human-interactive systems assist users during pipeline construction.
EDAssistant [28] supports in-situ code search, and Auto-Suggest [43] mines notebook patterns to recommend transformations. ChatPipe [7] enables conversational construction with LLMs. These systems rely on user interaction or code context, focusing on usability over automation. AutoPrep [9] uses a multi-agent framework for question-aware preparation in TableQA, but does not support general-purpose pipeline generation. In contrast, our Text-to-Pipeline task with PARROT targets fully automatic pipeline generation from NL instructions, enabling end-to-end generation without human feedback or IO supervision.

3 Benchmark Construction

3.1 Task Definition

We define Text-to-Pipeline as the task of synthesizing executable, multi-step data preparation pipelines from natural language instructions over input tables. The pipelines are symbolic programs defined over a domain-specific language (DSL), which can be compiled into different backend code.

Problem Setup. Let X denote the space of input tables, Y the space of output tables, L the space of natural language instructions, and P the space of DSL programs. Each instance is associated with a reference output table y ∈ Y, which reflects the expected transformation result given an instruction and input. Given an instruction ℓ ∈ L and an input table x ∈ X, the objective is to learn a mapping f : L × X → P that yields a symbolic program p = f(ℓ, x), which can be compiled and executed to produce an output ŷ:

ŷ = Exec(c, x), where c = Compile(p).   (1)

Here, Compile(p) denotes the compiled executable code, Exec denotes the execution engine (e.g., Pandas or SQL), and ŷ is expected to be equivalent to the reference output y, either semantically or structurally.
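In code, this problem setup amounts to the following skeleton; `f`, `compile_program`, and `execute` are hypothetical placeholders for the learned mapping, the DSL compiler, and the execution engine:

```python
# Sketch of Eq. (1): p = f(l, x), c = Compile(p), y_hat = Exec(c, x).
# The three callables are placeholders, not a real implementation.

def text_to_pipeline(instruction, table, f, compile_program, execute):
    p = f(instruction, table)    # symbolic DSL program in P
    c = compile_program(p)       # backend code (e.g., Pandas or SQL)
    return execute(c, table)     # predicted output table y_hat
```

Evaluation then reduces to comparing the returned ŷ against the reference output y under canonical equivalence.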
https://arxiv.org/abs/2505.15874v1
In summary, each instance of Text-to-Pipeline can be represented as a five-tuple (x, ℓ, p, c, y).

Pipeline Structure. Each program p ∈ P is represented as a left-to-right chain of k symbolic operators:

p = o1 ◦ o2 ◦ ··· ◦ ok,

where each operator oi ∈ O is drawn from a core set covering common DP actions:

O = {groupby, sort, join, fillna, filter, . . .}.   (2)

Each operator oi is parameterized by structured arguments (e.g., column names, aggregation functions). For a complete list of the 16 supported operators, refer to Sec. 4. Despite its modular design, synthesizing such pipelines remains highly non-trivial. The space of valid operator sequences grows combinatorially with length, and each step is constrained by schema compatibility, parameter validity, and cross-step dependencies. Moreover, subtle interactions among operators (e.g., column renaming before aggregation) can significantly affect program correctness and execution outcomes.

Why DSL over Direct Code. Compared to generating Pandas code directly, a DSL offers three key advantages: (1) Stable structure: the DSL uses a fixed set of operations with clear parameter formats, avoiding the syntactic variance of Pandas (e.g., multiple ways to filter or group data). This improves model learnability and consistency. (2) Backend flexibility: the DSL can be compiled to Pandas, SQL, or Spark, making it adaptable to different runtime environments. In contrast, Pandas ties the output to Python execution. (3) Better planning and verification: the DSL supports step-wise reasoning, schema validation, and error tracing. It enables fine-grained evaluation (e.g., operator accuracy) and is easier to synthesize at scale. Experiments further confirm this choice, showing consistent performance advantages over direct Pandas and SQL generation (see Sec. 4).

Evaluation Metrics. We employ three primary metrics to provide a multifaceted view of model performance, evaluating both execution correctness and program structure fidelity.
•Execution Accuracy (EA): Given an input table $x$ and a generated program $\hat{p}$, we execute $\hat{p}$ to obtain $\hat{y} = \mathrm{Exec}(\hat{p}, x)$. EA measures the proportion of samples where $\hat{y} \stackrel{\star}{=} y$, i.e., the predicted output $\hat{y}$ matches the ground truth $y$ up to canonical equivalence (e.g., row/column permutations, floating-point tolerance):

$$\mathrm{EA} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{I}\big(\hat{y}_i \stackrel{\star}{=} y_i\big),$$

where $N$ is the number of test samples and $\mathbb{I}(\cdot)$ is the indicator function.

•Program Validity (PV): The proportion of generated programs $\hat{p}$ that are syntactically valid and executable without runtime errors, regardless of output correctness:

$$\mathrm{PV} = \frac{1}{N}\sum_{i=1}^{N} \mathbb{I}\big(\mathrm{Valid}(\hat{p}_i)\big),$$

where $\mathrm{Valid}(\hat{p}_i)$ returns true if $\hat{p}_i$ compiles and executes successfully.

•Operator Accuracy (OA): The proportion of correctly generated operators in the sequence, regardless of the order of operations. For each generated program $\hat{p} = [\hat{o}_1, \ldots, \hat{o}_k]$ and ground-truth program $p = [o_1, \ldots, o_k]$, OA is computed as:

$$\mathrm{OA} = \frac{1}{N}\sum_{i=1}^{N} \frac{|\hat{p}_i \cap p_i|}{|p_i|},$$

where $|\hat{p}_i \cap p_i|$ denotes the number of operators shared between the generated and ground-truth sequences.

3.2 Automatic Pipeline Data Synthesis Framework

To support systematic evaluation of Text-to-Pipeline, we synthesize a high-quality benchmark of about 18,000 instances through a five-stage data construction framework.

Stage 1: Table Curation. To simulate realistic DP scenarios, we curated 23,009 tables from six open-source datasets — Auto-Pipeline [44], Auto-Tables [27], Spider [46], BIRD [25], Lakebench [8], and TableBench [42] — spanning domains such as blockchain, finance, healthcare, and education. These tables exhibit diverse structures, ranging from
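Once per-sample outcomes are available, the three metrics reduce to simple averages. A minimal sketch follows; the `equivalent` and `runs_ok` predicates are placeholders for the paper's canonical-equivalence and execution checks, and operator overlap is read as a multiset intersection, consistent with the order-insensitive definition above:

```python
from collections import Counter

def execution_accuracy(preds, golds, equivalent):
    """EA: fraction of outputs matching gold up to canonical equivalence."""
    return sum(equivalent(p, g) for p, g in zip(preds, golds)) / len(golds)

def program_validity(programs, runs_ok):
    """PV: fraction of programs that compile and execute without errors."""
    return sum(runs_ok(p) for p in programs) / len(programs)

def operator_accuracy(pred_chains, gold_chains):
    """OA: mean multiset overlap |pred ∩ gold| / |gold| over samples."""
    scores = []
    for pred, gold in zip(pred_chains, gold_chains):
        overlap = sum((Counter(pred) & Counter(gold)).values())
        scores.append(overlap / len(gold))
    return sum(scores) / len(scores)

# two of three gold operators recovered, order ignored -> 2/3
oa = operator_accuracy([["groupby", "filter"]], [["select", "filter", "groupby"]])
```

The multiset reading means a duplicated operator in the prediction is only credited as many times as it appears in the gold chain.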
2 to 120 columns, and include both wide and long formats. Notably, 34% involve multi-table compositions. We standardize column names, normalize cell formats, and align schema types to ensure downstream compatibility.

Algorithm 1: Operator Chain Construction
Require: Curated table x, transition matrix P, validator S
Ensure: Valid operator chain p = {o_1, o_2, ..., o_k}
1: Sample chain length k ~ TruncGeom(1, 8)
2: Sample first operator o_1 uniformly from operator set O
3: Initialize chain p ← [o_1]
4: for i = 2 to k do
5:   Sample o_i from P(o_i | x, {o_1, ..., o_{i-1}})
6:   if S(o_i, x, p) is valid then
7:     Append o_i to chain p
8:   end if
9: end for
10: if k ≤ 3 and all ops are atomic then
11:   Set difficulty ← Easy
12: else if k ≤ 6 and includes groupby or simple join then
13:   Set difficulty ← Medium
14: else
15:   Set difficulty ← Hard
16: end if
17: return p

Stage 2: Operator Chain Construction. To capture realistic transformation behaviors, we collect 1,200 data pipeline scripts from open-source data science repositories (e.g., Kaggle notebooks, GitHub ETL pipelines), extracting transition patterns to construct an empirical transition matrix P ∈ R^{|O|×|O|} that models co-occurrence probabilities in practical pipelines (see App. A.5 for details). We define a DSL with 16 core operators, categorized into Data Cleaning, Data Integration, Structural Reconstruction, and Assisted Operations (see Tab. 8). Operator chains are sampled via a Markov process [35], where the initial operator is drawn uniformly and subsequent steps are selected according to P, conditioned on schema and history. Schema validity is enforced (e.g., joins require shared keys), and the chain length k is sampled from a truncated geometric distribution over [1, 8]. To control task complexity, we adopt a three-level difficulty scheme based on program length (see App. C.6 for definitions and examples).
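The sampling procedure in Algorithm 1 can be sketched as follows. The operator set, the uniform transition probabilities, the always-true validator, and the stopping probability of the truncated geometric are all placeholders for the paper's empirical matrix P and schema checker; the atomicity check in the Easy branch is elided for brevity:

```python
import random

OPS = ["select", "filter", "groupby", "sort", "join", "rename"]
# placeholder transition matrix: uniform rows instead of the empirical P
P = {prev: {nxt: 1.0 / len(OPS) for nxt in OPS} for prev in OPS}

def trunc_geom(p_stop=0.35, lo=1, hi=8):
    """Truncated geometric chain length over [lo, hi]."""
    k = lo
    while k < hi and random.random() > p_stop:
        k += 1
    return k

def sample_chain(x=None, validator=lambda op, x, chain: True):
    k = trunc_geom()
    chain = [random.choice(OPS)]            # first operator drawn uniformly
    for _ in range(k - 1):
        probs = P[chain[-1]]                # condition on the previous operator
        op = random.choices(list(probs), weights=list(probs.values()))[0]
        if validator(op, x, chain):         # schema-validity check
            chain.append(op)
    return chain

def difficulty(chain):
    """Three-level scheme from Algorithm 1 (atomicity check elided)."""
    if len(chain) <= 3:
        return "Easy"
    if len(chain) <= 6 and ("groupby" in chain or "join" in chain):
        return "Medium"
    return "Hard"
```

Invalid samples are simply skipped, so a chain may come out shorter than the drawn length k, mirroring the conditional append in lines 6–8 of the algorithm.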
This construction strategy ensures that the resulting pipelines are diverse, executable, and reflective of real-world DP scenarios across varying levels of reasoning difficulty. A summary of the sampling process is provided in Algo. 1.

Table 1: Benchmark comparison across task scale and key execution properties. PARROT (last row) supports multi-step DSL programs grounded in natural language, with execution verified across multiple backends. Task types: TS = Text-to-SQL, TF = Text-to-Formula, P = Pipeline Synthesis, TP = Text-to-Pipeline. The symbol '-' indicates that it does not provide codes or instructions.

| Benchmark | Task Type | #Inst. | #Tabs | #Ops | Avg. Steps |
|---|---|---|---|---|---|
| Spider [46] | TS | 10,181 | 1,056 | 27 | 4.78 |
| BIRD [25] | TS | 12,751 | 693 | 27 | 6.52 |
| SpreadsheetBench [30] | TF | 912 | 2,729 | 10 | - |
| NL2Formula [51] | TF | 70,799 | 21,670 | 57 | 10.2 |
| Auto-Tables [27] | P | 244 | 244 | 8 | 1.11 |
| Auto-Pipeline [44] | P | 716 | 4,680 | 12 | 4.10 |
| PARROT (Ours) | TP | 17,168 | 23,009 | 16 | 4.24 |

Stage 3: Instruction Generation. Given input–output previews (10 rows) and operator chains, we generate semantically aligned natural language instructions in two steps. First, we prompt an LLM (e.g., GPT-4o) to generate schema-aware pipeline descriptions grounded in {x, y, p}. Then, we apply style-controlled refinement to convert the structured draft into fluent, user-centric instructions. We provide 10 in-context demonstrations to anchor operator–language mappings and ensure semantic preservation. Details of the demonstrations can be found in App. A.4. This matrix guides the construction
of realistic operator chains and reflects common transformation motifs such as select–filter–groupby (see Fig. 3, right).

Stage 4: Rule-Based Code Compilation. Each DSL program $\{o_i\}$ is deterministically compiled into executable Pandas code via a rule engine. The compiler fills parameter templates and maintains a running schema state across steps to guarantee correct argument resolution and type-safe execution. Specifically, after applying each operator, the schema is updated via an operator-specific rule $\delta(o_t, \mathcal{S}_t) \to \mathcal{S}_{t+1}$, where $\mathcal{S}_t$ represents the current column names and types. This schema propagation mechanism supports early validation of column dependencies (e.g., ensuring join keys exist, or that aggregations are applied to numeric fields) and helps prevent execution failures. It also enables compatibility checks for multi-table operations (e.g., schema alignment for joins and unions). Our code templates are modular and extendable to SQL or Spark with minimal effort. Full specifications of schema propagation are provided in App. A.6.

Stage 5: Multi-phase Validation. To ensure quality and reliability, we perform both automatic and human-in-the-loop validation. First, we execute the compiled code $c$ on the input table $x$ and compare the result $\hat{y}$ against the reference output $y$ to verify execution correctness ($\hat{y} \stackrel{\star}{=} y$). Second, we assess instruction fidelity by checking semantic alignment between the natural language instruction and the underlying operator chain $\{o_i\}_{i=1}^{k}$. This is done via a two-stage strategy: (1) LLM-based consistency scoring using zero-shot classification prompts, and (2) manual review. For the latter, we recruited six graduate-level experts (MSc/PhD students in data science and NLP) to review a stratified sample of 3,000 instances, scoring each on instruction accuracy, operator coverage, and semantic clarity. The inter-annotator agreement reached 91.4% (Cohen's κ = 0.82), confirming label reliability. Only samples that pass both phases are retained.
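The schema-propagation rule δ can be sketched as a per-operator update on a column-to-type mapping. The update rules, dtype names, and argument keys below are simplified illustrations of the mechanism, not the paper's full specification (App. A.6):

```python
def propagate(schema, op, args):
    """Apply one operator's schema rule: delta(o_t, S_t) -> S_{t+1}.

    `schema` is a dict mapping column name -> dtype string.
    """
    if op == "select":
        missing = [c for c in args["columns"] if c not in schema]
        if missing:                          # early validation before execution
            raise ValueError(f"unknown columns: {missing}")
        return {c: schema[c] for c in args["columns"]}
    if op == "rename":
        return {args["mapping"].get(c, c): t for c, t in schema.items()}
    if op == "groupby":
        agg = args["agg_column"]
        if schema.get(agg) not in ("int", "float"):
            raise ValueError(f"cannot aggregate non-numeric column {agg!r}")
        return {args["key"]: schema[args["key"]], agg: schema[agg]}
    return dict(schema)                      # filter/sort/etc. keep the schema

# threading the schema through two steps catches errors before any execution
s = {"Buyer Name": "str", "Amount Paid": "float", "Note": "str"}
s = propagate(s, "select", {"columns": ["Buyer Name", "Amount Paid"]})
s = propagate(s, "groupby", {"key": "Buyer Name", "agg_column": "Amount Paid"})
```

Because the schema is threaded through every step, a reference to a dropped or renamed column fails at validation time rather than at runtime, which is the point of the mechanism.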
The resulting high-quality data serves as the development and test set. Additionally, to assist human experts in reviewing the data, we further designed a data visualization platform (see App. D). All implementation details can be found in App. A.

4 Dataset Statistics

We provide a benchmark comparison in Tab. 1 and a statistical overview in Tab. 2. Compared to prior benchmarks, PARROT offers large-scale, instruction-driven multi-step programs with broad operator coverage, verified atomic execution, and support for multiple execution backends.

Compositional and Parameter Complexity. Fig. 3 (left) illustrates the distribution of transformation chain lengths within PARROT: 49.07% of instances contain 4 to 6 operations, and 18.57% involve 7 or more steps. As shown in Tab. 2, the average chain length of PARROT is 4.24, substantially higher than the prior pipeline synthesis benchmarks Auto-Tables [27] (1.11) and Auto-Pipeline [44] (4.10), despite our dataset being two orders of magnitude larger. This distribution accurately reflects the compositional nature of real-world data preparation tasks and their inherent complexity.

Figure 2: Operation distribution over the 16 operators, grouped into Assisted Operations, Data Cleaning, Structural Reconstruction, and Data Integration (counts: groupby 9,271; filter 8,876; sort 8,588; deduplicate 7,065; rename 5,937; select 5,762; unpivot 4,365; topk 4,074; dropna 3,925; cast 3,888; join 3,643; pivot 2,933; transpose 2,271; union 2,198; explode 2,135; wide_to_long 588).

Beyond program depth, as illustrated in Fig. 5 and Fig. 6, parameter complexity varies significantly across operations. The heatmap reveals that operations like join, pivot, and groupby
have the highest parameter complexity, while simpler operations like rename and select have fewer parameters. The consolidated view shows the detailed distribution of parameter values for complex operations, highlighting the diverse ways in which these parameters are used.

Table 2: Statistics of PARROT.

| Statistic | Number |
|---|---|
| Total Tasks | 17,168 (100%) |
| – Single-Table | 11,327 (66.0%) |
| – Multi-Table | 5,841 (34.0%) |
| Train / Dev / Test | 14,388 / 1,387 / 1,393 |
| Avg. Columns per Table | 6.7 |
| Avg. Row Count | 134.2 |
| Num / Cat / Mixed | 44.3% / 42.5% / 13.2% |
| Easy (≤3 ops) | 32.36% |
| Medium (4–6 ops) | 49.07% |
| Hard (≥7 ops) | 18.57% |
| Avg. Chain Length | 4.24 |
| Avg. Instruction Characters | 192.3 |
| Avg. Instruction Tokens | 100.5 |

Figure 3: Left: pipeline length distribution over three difficulty levels. Middle: instruction length distribution by token frequency (mean 107.4 tokens, median 105.0 tokens). Right: operation transition matrix; w2l is the wide-to-long operator.

Operational and Structural Diversity. As shown in Fig. 2, PARROT supports 16 operations with distinct frequencies, systematically capturing the breadth of data preparation requirements in real-world pipelines. Aggregation, reshaping, and integration account for 48.6% of operator usage, collectively constituting the majority of operations, while selection and ordering operations provide essential complementary functionality. This distribution effectively reflects empirically observed patterns in data science practice. Fig.
3 (right) shows the operator transition graph among the most frequently occurring operations, revealing diverse and non-linear patterns among common operations. The dense connectivity and heterogeneous edge weights underscore the rich compositional patterns present in multi-step pipelines, necessitating sophisticated reasoning capabilities for models to successfully predict operation sequences that maintain schema compatibility and semantic coherence.

Instruction Characteristics. Fig. 3 (mid) illustrates the length distribution of instructions in PARROT, with an average of 107.4 tokens. The distribution is right-skewed and bimodal, reflecting both concise directives and longer, context-rich descriptions. Instruction complexity correlates strongly with transformation chain length, suggesting that linguistically complex prompts often entail more compositional operations. In addition, our diversity analysis (Tab. 9) shows that PARROT instructions significantly surpass prior datasets (e.g., Spider, NL2Formula) in lexical richness (higher Distinct-n) and structural variation (lower Self-BLEU), ensuring broad coverage of linguistic patterns. Importantly, while the instructions are synthesized for control and consistency, they are grounded in thousands of noisy real-world tables collected from the open web, many of which contain non-standard field names, multilingual headers, or schema noise (see App. C.1). As a result, models must interpret instructions grounded in real-world tables with noisy and diverse schemas.

5 Experiments and Analysis

5.1 Experimental Setup

We conduct a comprehensive evaluation of various models on the PARROT benchmark to assess their capabilities in tackling the Text-to-Pipeline task. Our experimental setup is detailed below.

LLM Baselines. To establish a comprehensive performance baseline on PARROT, we evaluate a diverse
set of LLMs, encompassing both the zero-shot inference capabilities of proprietary models and the performance of fine-tuned open-source models. These models represent the current state of the art in code generation and natural language understanding: (1) Zero-shot LLMs: We utilize several leading API-based models, including GPT-4o [32], GPT-4o-mini [32], Gemini-2.5-Pro [34], and DeepSeek-V3 [29]. These models are prompted with the task instruction and table schema without any task-specific fine-tuning. (2) Fine-tuned LLMs: We fine-tune a series of strong open-source code generation models: Qwen2.5-Coder-1.5B [17], Qwen2.5-Coder-3B [17], and Qwen2.5-Coder-7B [17]. These models are trained on the PARROT training set to adapt them specifically to the Text-to-Pipeline task.

Structured Generation Approaches. We evaluate three distinct target output formalisms for generating executable data pipelines to understand their efficacy in representing and synthesizing complex transformations: (1) Text-to-Code (Pandas): direct generation of executable Pandas code. (2) Text-to-SQL: generation of SQL statements, assessing the adaptability of SQL-centric approaches to broader data preparation tasks not typically addressed by SQL. (3) Text-to-Pipeline: our primary approach, where models generate operation sequences in our DSL, subsequently compiled to Pandas. The DSL is architected for modular planning, type-safety enforcement, and schema-aware validation, aiming for a more robust generation pathway.

Planning and Agent-based Approaches. To assess the capabilities of more sophisticated reasoning strategies for multi-step pipelines, we evaluate several planning and agent-based paradigms: (1) Tool Calling API (One-shot) [33]: LLMs are instructed to generate the full multi-step execution plan or program for a PARROT task in a single pass, simulating a direct tool-use scenario.
(2) Plan-and-Solve [39]: This approach first generates a high-level operation plan (e.g., a sequence of operations), then synthesizes the executable program based on that plan. (3) Chain-of-Tables [40]: This strategy evolves and manipulates intermediate tabular states throughout the reasoning chain to guide the transformation process for PARROT tasks. (4) Pipeline-Agent: Our proposed ReAct-style [45] agent that iteratively predicts an operation, executes it on the current table, and reflects on the result. By leveraging intermediate states, it enables context-aware planning and handles schema evolution in long-horizon PARROT tasks.

Implementation Details. For zero-shot LLM evaluations, we used consistent prompt templates. Each prompt included: (1) a clear definition of the Text-to-Pipeline task; (2) the input table schema (column names and data types); (3) 10 sample rows from the input table to provide context on data values; and (4) the natural language instruction. We used a temperature of 0.7 for evaluation. For fine-tuned models, we performed supervised fine-tuning on the PARROT training set. Models were trained for 3 epochs using the AdamW [20] optimizer with a learning rate of 1e-3 and a batch size of 16. A linear-decay learning rate schedule with warmup was employed. Early stopping was triggered based on loss on a dedicated validation split of PARROT to prevent overfitting. All fine-tuning experiments were conducted on a cluster of 4 NVIDIA 4090 (24GB) GPUs. We use GPT-4o-mini as the default LLM
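The iterative predict–execute–reflect behavior described for Pipeline-Agent can be sketched as a simple loop. Here `predict_step` stands in for the LLM call and `apply` for single-operator execution; both are placeholders, and the stopping convention (returning `None` when done) is an assumption of this sketch:

```python
def run_agent(table, instruction, predict_step, apply, max_steps=8):
    """Iteratively predict one operator, execute it, and record the outcome."""
    history = []
    for _ in range(max_steps):
        # the predictor sees the *current* table state, so it can adapt to
        # schema changes introduced by earlier steps
        op = predict_step(instruction, table, history)
        if op is None:                        # agent declares the pipeline done
            break
        try:
            table = apply(table, op)
            history.append((op, "ok"))
        except Exception as err:              # reflect on the failure, continue
            history.append((op, f"error: {err}"))
    return table, history

# toy demo: one filter step, then stop
steps = iter([{"op": "filter", "min": 1}, None])
out, hist = run_agent(
    [0, 1, 2], "keep values above 1",
    predict_step=lambda instr, tbl, h: next(steps),
    apply=lambda tbl, op: [v for v in tbl if v > op["min"]],
)
```

Feeding execution results (including errors) back into `history` is what distinguishes this loop from one-shot generation: a failed step becomes context for the next prediction instead of aborting the episode.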
backbone unless specified otherwise. For Text-to-Pandas, the DSL-to-Pandas compilation is rule-based and deterministic (see App. C.2). For Text-to-SQL, the generated SQL is subsequently executed via the SQLite engine; we provide partial comments for operator types that are not supported by SQL. More experimental details of baselines and prompt design are provided in App. E.

5.2 Evaluation Results

Performance across Difficulty Levels. Tab. 3 reports the performance of LLM baselines across Easy, Medium, and Hard tasks in PARROT. Zero-shot models with explicit reasoning prompts (e.g., GPT-o3-mini) outperform non-reasoning variants, achieving 72.86% overall EA vs. 71.00% for GPT-4o. GPT-4o (non-reasoning) performs well on Easy (89.03%) and Medium (79.06%) tasks, but its overall EA (71.00%) shows it still struggles as task difficulty increases. Fine-tuned models deliver the largest gains. Qwen2.5-Coder-7B achieves 74.15% overall EA and 68.20% on Hard tasks, outperforming leading closed-source LLMs (DeepSeek and GPT series) despite having fewer parameters.

Table 3: Performance of baseline models on the PARROT test set across difficulty levels. We report EA, PV, and OA in percentages (%). Best results per category are in bold.
| Model | EA (E / M / H / Overall) | PV (E / M / H / Overall) | OA (E / M / H / Overall) |
|---|---|---|---|
| Zero-shot LLMs (non-reasoning) | | | |
| GPT-4o-mini | 75.17 / 59.55 / 49.79 / 62.88 | 87.92 / 73.83 / 62.34 / 76.38 | 67.08 / 70.44 / 69.60 / 69.22 |
| GPT-4o | 89.03 / 79.06 / 72.38 / 71.00 | 89.04 / 79.07 / 72.38 / 81.12 | 64.24 / 70.70 / 74.44 / 69.27 |
| Gemini-2.5-Pro | 80.09 / 62.09 / 52.72 / 66.26 | 91.50 / 77.65 / 67.78 / 80.40 | 68.75 / 64.54 / 67.72 / 66.95 |
| DeepSeek-V3 | 78.52 / 63.79 / 56.07 / 67.19 | 89.49 / 77.09 / 68.62 / 67.31 | 66.70 / 68.32 / 72.62 / 68.54 |
| Zero-shot LLMs (reasoning) | | | |
| GPT-o3-mini | 82.10 / 68.88 / 67.36 / 72.86 | 92.39 / 81.47 / 79.92 / 84.70 | 67.86 / 70.51 / 72.00 / 69.91 |
| GPT-o4-mini | 81.87 / 69.17 / 64.02 / 72.36 | 94.85 / 83.31 / 82.01 / 86.79 | 64.47 / 68.76 / 68.61 / 67.36 |
| DeepSeek-R1 | 77.18 / 58.84 / 41.84 / 61.81 | 89.49 / 72.70 / 55.23 / 75.09 | 67.45 / 68.06 / 67.39 / 67.75 |
| Fine-tuned LLMs | | | |
| Qwen2.5-Coder-1.5B | 67.79 / 59.41 / 50.63 / 60.59 | 77.85 / 70.58 / 64.02 / 71.79 | 67.11 / 70.33 / 66.55 / 68.65 |
| Qwen2.5-Coder-3B | 83.67 / 69.73 / 68.62 / 74.01 | 91.05 / 80.91 / 82.01 / 84.35 | 82.03 / 81.76 / 81.93 / 81.87 |
| Qwen2.5-Coder-7B | 84.12 / 69.87 / 68.20 / 74.15 | 91.50 / 82.18 / 81.59 / 85.07 | 82.96 / 82.96 / 81.59 / 82.53 |

Table 4: Performance comparison of different target generation formalisms on the PARROT test set. We report EA, PV, and OA in percentages (%). Best results per category are in bold.

| Formalism | EA (E / M / H / Overall) | PV (E / M / H / Overall) | OA (E / M / H / Overall) |
|---|---|---|---|
| Text-to-SQL | 10.81 / 2.59 / 0.00 / 3.05 | 93.92 / 67.31 / 55.81 / 73.31 | - |
| Text-to-Code | 48.64 / 32.59 / 28.57 / 33.80 | 70.27 / 57.77 / 53.74 / 58.45 | - |
| Text-to-Pipeline | 75.17 / 59.55 / 49.79 / 62.88 | 87.92 / 73.83 / 62.34 / 76.38 | 67.08 / 70.44 / 69.60 / 69.22 |

Qwen2.5-Coder-7B also attains the top PV (85.07%) and OA (82.53%), highlighting the quality of our supervised data. Across all models, performance consistently declines from Easy to Hard tasks, with a sharper drop in EA than PV.
This suggests that while many models produce syntactically valid outputs (high PV), ensuring execution correctness in complex, multi-step settings remains a core challenge. For instance, Qwen2.5-Coder-3B’s EA drops from 83.67% (Easy) to 68.62% (Hard), reflecting the compositional difficulty inherent
in PARROT.

Impact of Structured Target Generation. Our proposed PARROT approach, which generates transformation programs in a DSL, substantially outperforms direct code and SQL generation. As shown in Tab. 4, it achieves an overall EA of 62.88%, exceeding Text-to-Code (Pandas, 33.80%) by +29.08 points and Text-to-SQL (3.05%) by a wide margin. This advantage holds across difficulty levels, particularly on Easy (75.17%) and Medium (59.55%) tasks. Although Text-to-SQL yields high PV (73.31%), its low EA indicates poor semantic grounding in complex, multi-step tasks beyond standard SQL patterns. Text-to-Pipeline also achieves the highest OA (69.22%), reflecting stronger structural fidelity. OA is not reported for Text-to-Code and Text-to-SQL since their outputs do not use atomic operations as our DSL does. These results highlight the DSL's effectiveness in enabling more accurate planning and execution for compositional data preparation tasks.

Efficacy of Planning and Agent-based Approaches. Structured planning with agent-based methods significantly improves multi-step reasoning. As shown in Tab. 5, our proposed Pipeline-Agent (GPT-4o) achieves the highest overall EA of 76.17%, outperforming Tool Calling API (60.48%) and Plan-and-Solve (47.40%) by large margins. Even with the weaker GPT-4o-mini backbone, Pipeline-Agent still outperforms both baselines (62.72% EA), confirming its robustness. Chain-of-Tables, while exhibiting strong PV on Hard tasks (79.07%), suffers from poor EA (26.27%), likely due to unstable intermediate manipulations. Notably, upgrading the Pipeline-Agent's backbone from GPT-4o-mini to GPT-4o yields a substantial +13.45 point EA gain, demonstrating its ability to leverage stronger models effectively. Across difficulty levels, Pipeline-Agent (GPT-4o) maintains high accuracy: 77.70% (Easy), 78.21% (Medium), and 67.44% (Hard).
Its high OA (72.92% with GPT-4o, 54.79% with GPT-4o-mini) further highlights its strength in structuring valid transformation sequences. Since Chain-of-Tables produces non-symbolic intermediate states, OA is not reported for this method. These results underscore that agent-based strategies with explicit planning are key to tackling the compositional challenges, especially when paired with capable LLMs.

Table 5: Evaluation results of agent methods on the PARROT test set. We report EA, PV, and OA in percentages (%). Best results per category are in bold. OA is omitted for Chain-of-Tables as it does not produce atomic operation sequences.

| Method | EA (E / M / H / Overall) | PV (E / M / H / Overall) | OA (E / M / H / Overall) |
|---|---|---|---|
| Tool Calling | 71.62 / 58.36 / 47.67 / 60.48 | 86.48 / 66.53 / 58.13 / 71.07 | 67.79 / 47.15 / 35.40 / 51.31 |
| Plan-and-Solve | 63.69 / 43.19 / 30.23 / 47.40 | 74.52 / 52.53 / 38.37 / 57.00 | 61.04 / 42.83 / 30.09 / 46.36 |
| Chain-of-Tables | 50.67 / 17.90 / 9.30 / 26.27 | 81.76 / 74.32 / 79.07 / 77.39 | - |
| Pipeline-Agent (GPT-4o-mini) | 70.27 / 61.08 / 54.65 / 62.72 | 88.51 / 76.26 / 67.44 / 78.41 | 67.56 / 52.04 / 41.02 / 54.79 |
| Pipeline-Agent (GPT-4o) | 77.70 / 78.21 / 67.44 / 76.17 | 96.62 / 88.33 / 82.56 / 89.82 | 78.04 / 72.32 / 65.91 / 72.92 |
| Pipeline-Agent (DeepSeek-V3) | 66.89 / 60.70 / 48.84 / 60.49 | 91.21 / 78.99 / 70.93 / 81.26 | 68.47 / 58.18 / 47.53 / 59.42 |

5.3 Error Analysis and Case Study

We conduct an error analysis of the GPT-4o model under zero-shot prompting, as illustrated in Fig. 4. Compared to prior tasks, Text-to-Pipeline introduces new challenges centered around symbolic compositionality and schema-aware execution. We summarize key failure modes below: (1) Type and Column Errors (63.6%). Nearly half of all failures (48.7%) are type-related, such as applying aggregation to non-numeric fields or filtering without proper type casting. A further 16.7% of errors are due to incorrect column or index references, often resulting from an outdated schema after transformations like rename, select, or join. (2) Attribute and Value Errors (7.5%). A smaller portion of errors are caused by misused function calls (6.6%) or incorrect constant values (0.9%), such as in filter conditions or aggregation parameters. These reflect shallow semantic parsing and limited contextual grounding. (3) Semantic Failures (27.2%). Over a quarter of failures fall under the Semantic Errors category, often involving hallucinated steps, missing operations, or broken execution logic. These errors suggest limitations in multi-step reasoning. For additional model comparisons, see App. E.7.

Figure 4: Error type distribution (Type Error 48.7%, Semantic Error 27.2%, Column/Index Error 16.7%, Attribute Error 6.6%, Key Error 0.9%).

Case Study. We examine a failure case, shown in App. E.8, where the model is instructed to perform a series of operations: extract relevant columns (select), filter rows by value (filter), aggregate results (groupby), and finally rename a column (rename). The user instruction states: "Extract the 'Transaction ID', 'Buyer Name', and 'Amount Paid' columns from table_1. Filter the rows where 'Amount Paid' is greater than 1000. Then, group the data by 'Buyer Name' and calculate the total sum of 'Amount Paid' for each buyer. Finally, rename 'Amount Paid' to 'Total Payment'." The ground truth program includes all four operations in the specified order. However, the model-generated pipeline omits the initial select step and begins directly with filter, followed by groupby and rename. This mistake illustrates multiple compounding challenges: (1) Compositional Reasoning.
The model fails to recognize the structural dependency between early-stage column pruning and subsequent transformations. The omission of select results in a program that potentially operates on extraneous or irrelevant fields, which may lead to execution failures or incorrect aggregation. (2) Instruction Grounding. While the model correctly identifies the need to apply filter, groupby, and rename, it misinterprets the scope of the task, particularly the requirement to operate only over a subset of columns. This suggests a shallow parsing of the intent rather than a grounded mapping to executable semantics. (3) Schema Evolution Awareness. Downstream operations assume a schema transformed by previous steps. By skipping select, the model introduces a mismatch between the assumed and actual schema, particularly impacting the correctness of the groupby and rename steps, which depend on Amount Paid being cleanly isolated and unambiguous.

6 Conclusion

We propose Text-to-Pipeline, a novel task that maps NL instructions to executable data preparation pipelines, and introduce PARROT, a large-scale benchmark built from real-world tables and production-inspired transformation patterns. Empirical results show that current LLMs struggle with compositionality and schema evolution, motivating the need for more robust reasoning. To this end, we present Pipeline-Agent, an execution-aware baseline that iteratively plans and reflects over intermediate states. Beyond these contributions,
our work advocates a paradigm shift in data preparation, from manual processing to intelligent, agent-driven systems. By grounding pipeline synthesis in NL and execution feedback, we pave the way toward autonomous DP agents capable of planning, adaptation, and self-correction, moving closer to fully automated data management.

References

[1] Power Query: Merge queries. https://support.office.com/en-us/article/Merge-queries-Power-Query-fd157620-5470-4c0f-b132-7ca2616d17f9.
[2] Trifacta: Standardize using patterns. https://docs.trifacta.com/display/DP/Standardize+Using+Patterns.
[3] Daniel W. Barowy, Sumit Gulwani, Ted Hart, and Benjamin Zorn. FlashRelate: extracting relational data from semi-structured spreadsheets using examples. ACM SIGPLAN Notices, 50(6):218–228, 2015.
[4] Rohan Bavishi, Caroline Lemieux, Roy Fox, Koushik Sen, and Ion Stoica. AutoPandas: neural-backed generators for program synthesis. PACMPL, 2019.
[5] Chengliang Chai, Jiayi Wang, Yuyu Luo, Zeping Niu, and Guoliang Li. Data management for machine learning: A survey. TKDE, 35(5):4646–4667, 2022.
[6] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv:2107.03374, 2021.
[7] Sibei Chen, Hanbing Liu, Waiting Jin, Xiangyu Sun, Xiaoyao Feng, Ju Fan, Xiaoyong Du, and Nan Tang. ChatPipe: orchestrating data preparation pipelines by optimizing human-ChatGPT interactions. In SIGMOD, 2024.
[8] Yuhao Deng, Chengliang Chai, Lei Cao, Qin Yuan, Siyuan Chen, Yanrui Yu, Zhaoze Sun, Junyi Wang, Jiajun Li, Ziqi Cao, et al. LakeBench: A benchmark for discovering joinable and unionable tables in data lakes. VLDB, 2024.
[9] Meihao Fan, Ju Fan, Nan Tang, Lei Cao, Guoliang Li, and Xiaoyong Du. AutoPrep: Natural language question-aware data preparation with a multi-agent framework. arXiv:2412.10422, 2024.
[10] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. CodeBERT: A pre-trained model for programming and natural languages. arXiv:2002.08155, 2020.
[11] Raul Castro Fernandez, Aaron J. Elmore, Michael J. Franklin, Sanjay Krishnan, and Chenhao Tan. How large language models will disrupt data management. VLDB, 2023.
[12] Sumit Gulwani, William R. Harris, and Rishabh Singh. Spreadsheet data manipulation using examples. Communications of the ACM, 55(8):97–105, 2012.
[13] Yeye He, Xu Chu, Kris Ganjam, Yudian Zheng, Vivek Narasayya, and Surajit Chaudhuri. Transform-Data-by-Example (TDE): an extensible search engine for data transformations. VLDB, 2018.
[14] Jeffrey Heer, Joseph M. Hellerstein, and Sean Kandel. Predictive interaction for data transformation. In CIDR, 2015.
[15] Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with APPS. arXiv:2105.09938, 2021.
[16] Xinyi Hou, Yanjie Zhao, Shenao Wang, and Haoyu Wang. Model Context Protocol (MCP): Landscape, security threats, and future research directions. arXiv:2503.23278, 2025.
[17] Binyuan Hui, Jian Yang, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, et al. Qwen2.5-Coder technical report. arXiv:2409.12186, 2024.
[18] Zhongjun Jin, Michael R. Anderson, Michael Cafarella, and H. V. Jagadish. Foofah: Transforming data by example. In SIGMOD, 2017.
[19] Zhongjun Jin, Michael Cafarella, H. V. Jagadish, Sean Kandel, Michael Minar, and Joseph M. Hellerstein. CLX: Towards verifiable PBE data transformation. arXiv:1803.00701, 2018.
[20] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[21] Eugenie Y. Lai, Yeye He, and Surajit Chaudhuri. Auto-Prep: Holistic prediction of data preparation steps for self-service business intelligence. arXiv:2504.11627, 2025.
[22] Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, et al. Spider 2.0: Evaluating language models on real-world enterprise text-to-SQL workflows. arXiv:2411.07763, 2024.
[23] Haoyang Li, Jing Zhang, Hanbing Liu, Ju Fan, Xiaokang Zhang, Jun Zhu, Renjie Wei, Hongyan Pan, Cuiping Li, and Hong Chen. CodeS: Towards building open-source language models for text-to-SQL. SIGMOD, 2024.
[24] Hongxin Li, Jingran Su, Yuntao Chen, Qing Li, and Zhao-Xiang Zhang. SheetCopilot: Bringing software productivity to the next level through large language models. NeurIPS, 2023.
[25] Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Ruiying Geng, Nan Huo, et al. Can LLM already serve as a database interface? A big bench for large-scale database grounded text-to-SQLs. NeurIPS, 2023.
[26] Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. arXiv:1510.03055, 2015.
[27] Peng Li, Yeye He, Cong Yan, Yue Wang, and Surajit Chaudhuri. Auto-Tables: Synthesizing multi-step transformations to relationalize tables without using examples. VLDB, 2023.
[28] Xingjun Li, Yizhi Zhang, Justin Leung, Chengnian Sun, and Jian Zhao. EDAssistant: Supporting exploratory data analysis in computational notebooks with in situ code search and recommendation. TiiS, 13(1):1–27, 2023.
[29] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv:2412.19437, 2024.
[30] Zeyao Ma, Bohan Zhang, Jing Zhang, Jifan Yu, Xiaokang Zhang, Xiaohan Zhang, Sijia Luo, Xi Wang, and Jie Tang. Spreadsheetbench: towards challenging real world spreadsheet manipulation. arXiv:2406.14991 , 2024. [31] Zan Ahmad Naeem, Mohammad Shahmeer Ahmad, Mohamed Eltabakh, Mourad Ouzzani, and Nan Tang. Retclean: Retrieval-based data cleaning using llms and data lakes. VLDB , 2024. [32] R OpenAI. Gpt-4 technical report. arxiv 2303.08774. View in Article , 2:13, 2023. [33] Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Xuanhe Zhou, Yufei Huang, Chaojun Xiao, et al. Tool learning with foundation models. ACM Computing Surveys , 57(4):1–40, 2024. [34] Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean- baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv:2403.05530 , 2024. [35] Christian P Robert, George Casella, and George Casella. Monte Carlo statistical methods , volume 2. Springer, 1999. [36] Rishabh Singh. Blinkfill: Semi-supervised programming by example for syntactic string transformations. VLDB , 2016. [37] Quoc Trung Tran, Chee-Yong Chan, and Srinivasan Parthasarathy. Query by output. In SIGMOD , 2009. 11 [38] Chenglong Wang, Alvin Cheung, and Rastislav Bodik. Synthesizing highly expressive sql queries from input-output examples. In SIGPLAN , 2017. [39] Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy
https://arxiv.org/abs/2505.15874v1
Ka-Wei Lee, and Ee-Peng Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv:2305.04091 , 2023. [40] Zilong Wang, Hao Zhang, Chun-Liang Li, Julian Martin Eisenschlos, Vincent Perot, Zifeng Wang, Lesly Miculicich, Yasuhisa Fujii, Jingbo Shang, Chen-Yu Lee, et al. Chain-of-table: Evolving tables in the reasoning chain for table understanding. arXiv:2401.04398 , 2024. [41] Yuxiang Wei, Federico Cassano, Jiawei Liu, Yifeng Ding, Naman Jain, Zachary Mueller, Harm de Vries, Leandro V on Werra, Arjun Guha, and Lingming Zhang. Selfcodealign: Self-alignment for code generation. NeurIPS , 2024. [42] Xianjie Wu, Jian Yang, Linzheng Chai, Ge Zhang, Jiaheng Liu, Xeron Du, Di Liang, Daixin Shu, Xianfu Cheng, Tianzhen Sun, et al. Tablebench: A comprehensive and complex benchmark for table question answering. In AAAI , 2025. [43] Cong Yan and Yeye He. Auto-suggest: Learning-to-recommend data preparation steps using data science notebooks. In SIGMOD , 2020. [44] Junwen Yang, Yeye He, and Surajit Chaudhuri. Auto-pipeline: synthesizing complex data pipelines by-target using reinforcement learning and search. VLDB , 2021. [45] Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In ICLR , 2023. [46] Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, et al. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. arXiv:1809.08887 , 2018. [47] Daochen Zha, Zaid Pervaiz Bhat, Kwei-Herng Lai, Fan Yang, Zhimeng Jiang, Shaochen Zhong, and Xia Hu. Data-centric artificial intelligence: A survey. ACM Computing Surveys , 57(5):1–42, 2025. [48] Haochen Zhang, Yuyang Dong, Chuan Xiao, and Masafumi Oyamada. Jellyfish: Instruction- tuning local large language models for data preprocessing. In EMNLP , 2024. 
[49] Kechi Zhang, Jia Li, Ge Li, Xianjie Shi, and Zhi Jin. CodeAgent: Enhancing code generation with tool-integrated agent systems for real-world repo-level coding challenges. arXiv:2401.07339, 2024.
[50] Quanjun Zhang, Chunrong Fang, Yang Xie, Yaxin Zhang, Yun Yang, Weisong Sun, Shengcheng Yu, and Zhenyu Chen. A survey on large language models for software engineering. arXiv:2312.15223, 2023.
[51] Wei Zhao, Zhitao Hou, Siyuan Wu, Yan Gao, Haoyu Dong, Yao Wan, Hongyu Zhang, Yulei Sui, and Haidong Zhang. NL2Formula: Generating spreadsheet formulas from natural language queries. In EACL, 2024.
[52] Victor Zhong, Caiming Xiong, and Richard Socher. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv:1709.00103, 2017.
[53] Erkang Zhu, Yeye He, and Surajit Chaudhuri. Auto-Join: Joining tables by leveraging transformations. VLDB, 2017.
[54] Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. Texygen: A benchmarking platform for text generation models. In SIGIR, 2018.

A Details of Automatic Pipeline Data Synthesis Framework

In this section, we provide additional details of our automatic pipeline data synthesis framework, including implementation details, dataset collection, DSL-to-code compilation, operator transition modeling, and schema propagation.

A.1 Implementation Details

Natural language instructions are generated, refined, and verified using GPT-4o-mini via OpenAI's API (gpt-4o-mini-2024-07-18) with a temperature of 0.7. The prompt typically includes the
task description, table schema, and sampled rows (up to 10) to provide contextual grounding. Additionally, we adopt a few-shot prompting format with three curated demonstrations at the hard level to control the generation style during instruction refinement. All prompts used for data synthesis are listed in App. E.9. Data synthesis is conducted on a compute cluster equipped with 64-core AMD EPYC CPUs; synthesizing the full dataset takes approximately 16 hours with three threads. We implement automated validation to filter out ill-formed programs: each synthesized DSL program is compiled and executed on its corresponding table, and the results are compared against the reference output to verify execution success and type compatibility. Invalid programs are resampled up to three times. The full codebase, datasets, and evaluation tools are available at https://anonymous.4open.science/r/Text-to-Pipeline. This setup ensures scalable, high-quality synthesis of instruction-program pairs grounded in realistic data preparation scenarios.

A.2 Data Collection

Our benchmark data is sourced from six widely used public datasets. These sources offer a broad range of real-world tables across different domains, enabling realistic scenarios for our benchmark. Below, we provide a brief introduction to each source.

Spider [46] is a large-scale, human-annotated benchmark developed by researchers at Yale University to advance semantic parsing and natural language interfaces for databases. It comprises 10,181 natural language questions and 5,693 unique complex SQL queries across 200 databases spanning 138 diverse domains. Each database contains multiple tables, requiring models to understand intricate relationships and perform multi-step reasoning.
A distinctive feature of Spider is its emphasis on cross-domain generalization: the training and testing sets contain different databases and SQL queries, compelling models to adapt to new schemas and query patterns.

Bird [25] is a large-scale benchmark constructed to evaluate the capabilities of large language models in real-world database interface scenarios. The dataset includes over 12,751 natural language questions paired with SQL queries across 95 large databases, reflecting realistic business intelligence (BI) tasks. These databases are derived from actual industry schemas, featuring varied table structures, real-world naming conventions, and complex join conditions. Bird emphasizes practical cross-domain reasoning, requiring models to handle diverse and noisy schema representations, ambiguous question formulations, and context-aware SQL generation. Unlike synthetic or simplified benchmarks, Bird aims to mirror the challenges faced in enterprise environments, making it a critical resource for assessing the robustness and generalizability of Text-to-SQL systems.

Auto-Pipeline [44] is a benchmark developed by researchers at the University of Chicago and Microsoft Research to advance the automation of complex data preparation workflows through reinforcement learning and search. It comprises 716 real-world data pipelines, systematically curated from GitHub repositories and commercial vendors. These pipelines cover a variety of operations, including string transformations and table manipulations such as joins, group-bys, and pivots. A distinctive feature of Auto-Pipeline is its adoption of the "by-target" paradigm, wherein users specify the desired output schema instead of providing step-by-step examples. This approach requires models to infer the sequence of transformations required to achieve the
target schema, often relying on implicit table constraints like functional dependencies and keys to constrain the search space.

Auto-Tables [27] is a benchmark dataset designed to evaluate systems that automatically synthesize multi-step transformations to convert non-relational tables into standard relational formats without requiring input-output examples. The dataset comprises 244 real-world test cases collected from user spreadsheets and online forums, reflecting common challenges in data preparation tasks. These cases require complex restructuring operations, such as unpivoting, transposing, and filling missing values, that are often difficult for users to program manually.

TableBench [42] is a comprehensive benchmark designed to evaluate Table Question Answering (TableQA) systems, emphasizing complex reasoning over tabular data. It encompasses 886 question-answer pairs across 18 subcategories within four major domains: Fact Checking, Numerical Reasoning, Data Analysis, and Visualization. This dataset was meticulously curated to bridge the gap between academic benchmarks and real-world applications, reflecting the intricate reasoning required in practical scenarios.

LakeBench [8] is a large-scale benchmark designed to evaluate methods for discovering joinable and unionable tables within data lakes. It includes over 16 million real tables, approximately 1,600 times larger than existing datasets, and spans more than 1 TB in storage size. The benchmark features over 10,000 queries with ground-truth annotations, covering binary classification, regression, and multi-label classification tasks. These queries were labeled over 7,500 human-hours and cover diverse sources, including government data from CKAN, Socrata, and the European Central Bank. LakeBench serves as a comprehensive resource for evaluating the effectiveness, efficiency, and scalability of table discovery methods in data lakes.
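The compile-execute-verify loop of App. A.1, in which invalid programs are resampled up to three times, can be sketched as follows. The function and parameter names (`sample_program`, `execute`, `matches_reference`) are hypothetical stand-ins for the framework's actual components:

```python
MAX_RETRIES = 3  # App. A.1: invalid programs are resampled up to three times

def validate_with_resampling(sample_program, execute, matches_reference, reference):
    """Resample candidate DSL programs until one compiles, executes on its
    table, and reproduces the reference output; give up after MAX_RETRIES."""
    for _ in range(MAX_RETRIES):
        program = sample_program()          # draw a candidate DSL program
        try:
            result = execute(program)       # compile + run on the input table
        except Exception:
            continue                        # ill-formed program: resample
        if matches_reference(result, reference):
            return program                  # verified instruction-program pair
    return None                             # filtered out after three failures
```

A task is kept only when this loop returns a program; `None` marks a sample that is discarded from the benchmark.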
A.3 DSL-to-Code Compilation

We implement a compiler to translate DSL programs into executable code. The process maps each operator to backend-specific templates, ensuring correct parameter binding, schema handling, and code generation. Below, we outline the workflow and backend support.

Compilation workflow. Given a DSL program p = [o_1, o_2, ..., o_k], the compiler performs the following:
1. Parameter binding: For each operator o_i, extract its parameters and map them to template slots;
2. Schema-aware quoting: Apply column name escaping (e.g., _quote(col)) to handle special characters;
3. Code merging: Concatenate code lines using a left-to-right chaining convention (e.g., method chaining in Pandas);
4. Execution trace annotation: Optionally insert line comments to denote the DSL operator source (useful for debugging).

Backend support. While Pandas is the primary backend in this paper, the compiler is modular and supports alternate targets such as SQL or Spark by switching template registries. Each backend maintains an operator-to-template mapping, enabling flexible deployment without changing the symbolic layer.

Example. Given the DSL sequence:

[
  { "op": "groupby", "params": { "by": ["region"], "agg": { "sales": "sum" } } },
  { "op": "sort", "params": { "by": "sales", "ascending": false } }
]

The compiler generates the following Pandas code:

df.groupby('region')['sales'].sum().reset_index().sort_values(by='sales', ascending=False)

This translation preserves the semantics of the symbolic pipeline while ensuring correctness and interpretability.

A.4 Few-shot Demonstration for Instruction Refinement

To improve fluency and naturalness, we refine the draft instructions by using few-shot demonstrations to guide the refinement
style. Below, we provide examples for each DSL operator along with their corresponding expressions.

Table 6: Examples of DSL operators and their corresponding rewritten expressions.

| Operator | Expression |
| pivot | Pivot the resulting table with the station names (STN_NAM) as the index, the data provider (DATA_PVDR) as the columns, and the minimum longitude (LONGITUDE) as the values. |
| groupby | Group the data by allergy name and count the number of 'stuid' entries for each allergen to determine the number of students associated with each allergy. |
| join | Perform an inner join between table_1 and table_2 on 'allergy name' and 'allergy', with suffixes '_left' and '_right'. |
| sort | Sort the data by the student count in ascending order. |
| filter | Apply a filter to keep only the records where the student ID is greater than 3. |
| stack | Transform the data from wide format to long format: the columns specified in id_vars ("Index", "Where are we?") remain unchanged and serve as identifiers for each row, while the values in value_vars ("PUZZLE A" and "PUZZLE B") are stacked into two new columns, one for the variable names and another for the values. |
| explode | Explode the "PUZZLE B" column to create separate rows for each puzzle listed. |
| union | Perform a union operation on table_1 and table_2 to consolidate all records. |
| transpose | Swap rows and columns in the sensor data for alternative viewing. |
| rename | Rename the column STN_NAM to "Station Name". |
| dropna | Remove any rows with missing values in the "NAV CANADA" column. |
| drop_duplicates | Deduplicate the data based on the "NAV CANADA" column, keeping the last occurrence. |
| topk | Select the top 7 entries. |
| select | Choose the single column "NAV CANADA". |
| astype | Convert signup date fields to a standard date format for analysis. |
| wide_to_long | Reshape the data by collapsing columns that start with "PUZZLE B" or "PUZZLE A" into a long format, while keeping the specified index columns ("Index", "Where are we?") unchanged. |
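The template-based compilation described in App. A.3 can be sketched as below. The template strings and the use of `repr` for parameter binding are illustrative assumptions, not the benchmark's actual registry (which additionally performs schema-aware quoting and execution trace annotation):

```python
# Hypothetical operator -> Pandas code template registry (a small subset).
TEMPLATES = {
    "groupby": ".groupby({by}, as_index=False).agg({agg})",
    "sort":    ".sort_values(by={by}, ascending={ascending})",
    "filter":  ".query({condition})",
}

def compile_pipeline(program, df_name="df"):
    """Bind each operator's parameters into its template, then chain the
    resulting method calls left to right (App. A.3's merging convention)."""
    code = df_name
    for step in program:
        template = TEMPLATES[step["op"]]
        bound = {k: repr(v) for k, v in step["params"].items()}  # naive quoting
        code += template.format(**bound)
    return code

pipeline = [
    {"op": "groupby", "params": {"by": ["region"], "agg": {"sales": "sum"}}},
    {"op": "sort", "params": {"by": "sales", "ascending": False}},
]
```

Calling `compile_pipeline(pipeline)` yields a single chained Pandas expression; switching `TEMPLATES` for an SQL or Spark registry would retarget the backend without touching the symbolic layer.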
A.5 Operator Transition Matrix

To guide the construction of realistic and semantically coherent operator chains, we build an operator transition matrix P ∈ R^{|O|×|O|}, as introduced in Sec. 3.2. This matrix models the empirical likelihood of transitioning from one operator o_i to another o_j, capturing common patterns observed in real-world data pipelines. Below, we introduce the details of its construction.

• Source corpus. We collect 1,200 data transformation scripts from open-source data science repositories (e.g., Kaggle notebooks, GitHub ETL pipelines) [27, 44, 43], focusing on Pandas and SQL-based workflows.
• Operator normalization. Scripts are parsed into abstract syntax trees (ASTs), from which high-level transformation calls are extracted and mapped to our DSL operator set via rule-based normalization (e.g., drop_duplicates → dedup).
• Transition statistics. For each script, we record observed consecutive operator pairs (o_i → o_j) and compute transition frequencies. These are normalized to conditional probabilities using:

P(o_j | o_i) = (Count(o_i → o_j) + α) / (Σ_k Count(o_i → o_k) + α|O|)

where α = 0.5 is the Laplace smoothing coefficient that mitigates sparsity in low-frequency transitions.
• Semantic constraints. To prevent invalid or semantically implausible operator chains, we impose heuristic transition filters based on
operator categories and schema dependencies. For instance, transitions like groupby → dropna are pruned unless dropna targets non-aggregated columns, while sort is allowed to follow most aggregation or integration steps.
• Sampling integration. During Markov-based operator chain synthesis, we use P as a dynamic prior, further modulated by table-specific features (e.g., presence of missing values, column types) and previously sampled operator context to promote coherent and executable transformation workflows.

This transition matrix supports structurally diverse yet realistic program generation, ensuring sampled pipelines reflect common data preparation practices while preserving semantic feasibility.

A.6 Schema Propagation Mechanism

To ensure semantic correctness and column dependency integrity throughout the execution of a transformation program, we implement a schema propagation mechanism that tracks and updates the table schema after each operator in the DSL sequence.

• Schema representation. At each step t in a pipeline p = [o_1, o_2, ..., o_k], the schema is represented as a structured object:

S_t = {(c_i, τ_i)}_{i=1}^{n}

where c_i is the column name and τ_i is its data type (e.g., int, str, float, datetime).
• Operator effect rules. Each DSL operator is associated with a transformation rule δ : S_t → S_{t+1} that specifies how it modifies the schema. For example:
  – groupby introduces new aggregation columns and may drop non-grouped fields.
  – join merges two schemas, with name disambiguation (e.g., suffixes _x, _y) if overlapping columns exist.
  – rename updates column names while preserving types.
  – dropna does not modify the schema but may remove rows.
  – pivot restructures column names based on index and column values.
• Propagation strategy. During operator chain construction or execution, we maintain a running schema S_t:

S_{t+1} = δ(o_t, S_t)

This enables type-checking, operator compatibility filtering, and parameter validation at each stage.
• Error prevention.
Before executing each operator o_t, we validate whether its required arguments (e.g., column names, aggregation targets) are present and well-typed in S_t. If the check fails, the pipeline is rejected or resampled.
• Multi-table support. For multi-source operations (e.g., join, union), we track a set of schemas {S_t^{(1)}, S_t^{(2)}, ...} and apply compatibility checks across tables (e.g., matching join keys, union column alignment).

This schema propagation mechanism ensures that each transformation step operates on a valid, context-aware schema, supporting both program correctness and dynamic code generation across execution backends (e.g., Pandas, SQL).

B Broader Discussion

Limitation. The main limitations of our PARROT benchmark lie in instruction generation and schema coverage. (1) Instruction Generalization: Our natural language instructions are constructed through LLM-based synthesis with controlled diversity and refinement, resulting in high lexical and structural coverage (as shown in Sec. 4). While this design captures a wide range of realistic user intents, there may still exist extremely informal, ambiguous, or domain-specific expressions in the wild that fall outside our constructed distribution. Addressing such tail cases typically requires direct user-authored inputs or context-rich interactions, which are beyond the scope of current benchmark automation. (2) Schema and Language Bias: Our benchmark is built on 23,000+ real-world tables from public sources, which include diverse domains and formats. However, some domains (e.g., English-speaking) are overrepresented, and tables with non-English headers or code-mixed schemas, though present, remain a minority. As a result, models trained on PARROT may exhibit lower robustness when encountering underrepresented languages. Nevertheless, these limitations have minimal impact on the benchmark's primary goal: enabling standardized, large-scale evaluation of executable data preparation pipelines from natural language instructions under realistic table semantics.

Table 7: Overview statistics of the PARROT benchmark.

| Statistic | Value | Description |
| Total instances | 17,168 | End-to-end verified task samples |
| Unique input tables | 23,009 | Distinct real-world source tables |
| Average operations per task | 4.24 | Multi-step program depth |
| Number of DSL operators | 16 | Span reshaping, aggregation, integration, etc. |
| Average instruction length | 100.5 tokens | Tokenized instruction (LLM-generated) |
| Unique instruction vocabulary | 192.3 | Total distinct tokens across all instructions |
| Average columns per input table | 6.7 | Captures schema size |
| Average rows per input table | 134.2 | Captures table content scale |
| Tables with non-English column names | 3.19% | Includes Latin, code-mixed, abbreviated fields |
| Instructions with implicit references | 31.8% | No direct mention of column names |
| Execution validation success rate | 99.4% | Percentage of DSL programs that execute correctly |

Social Impact. PARROT is developed with the goal of advancing natural language interfaces for data preparation, ultimately reducing the technical barriers faced by non-programmers. By grounding transformation logic in symbolic pipelines and ensuring execution validity, our benchmark promotes transparency and debuggability in LLM-driven systems. In addition, we exclusively use publicly available datasets and exclude any sensitive or personal data during construction. Instructions are synthetically generated and manually validated to avoid offensive content, misinformation, or privacy risks. We believe the probability of our benchmark posing adverse effects related to safety, fairness, or misuse is negligible.
Instead, it offers a valuable testbed for future research into controllable, interpretable, and inclusive automation of data workflows.

C Details of Benchmark

C.1 Benchmark Characteristics

Tab. 7 summarizes core statistics of the PARROT benchmark. The dataset exhibits high semantic and structural diversity, with multi-step transformation depth, real-world noisy schemas, and verified execution outputs. Notably, over 40% of tables contain non-English or domain-specific column names, and nearly one-third of instructions omit explicit field references, making schema grounding a key challenge.

C.2 DSL Specification

Tab. 8 summarizes the operator set used in PARROT, covering a wide spectrum of tabular transformation actions. Each operator is annotated with its functional category, parameter schema, representative usage in Pandas, and empirical frequency in the benchmark. This operator set serves as the symbolic foundation for program generation and evaluation.

Operator-level code templates. To enable executable supervision and semantic validation, we implement a deterministic compiler that translates each symbolic operator o_i ∈ O in the DSL sequence into executable code c_i in a target backend (e.g., Pandas). The compilation pipeline ensures structural correctness, type safety, and backend compatibility. Each DSL operator is associated with a code generation template, specified as a function-call pattern with named arguments.

Table 8: Supported operators in PARROT: categorized by task type, with typical parameters, Pandas-style usage examples, and observed frequencies.

| Operator | Typical Parameters | Example (Pandas-style) | Frequency |
Data Cleaning
| filter | condition | df.query("value != 1") | 8876 |
| dropna | axis, how, subset | df.dropna() | 3925 |
| deduplicate | subset, keep | df.drop_duplicates() | 7065 |
| cast | column, dtype | df["value"].astype("float") | 3888 |
Data Integration
| join | left, right, on, how | df1.merge(df2, on="id", how="inner") | 3643 |
| union | dataframes | pd.concat([df1, df2]) | 2198 |
Structural Reconstruction
| groupby | by, agg | df.groupby("region").sum() | 9271 |
| pivot | index, columns, values | df.pivot("id", "type", "score") | 2933 |
| unpivot | id_vars, value_vars | df.melt(id_vars=["id"], value_vars=["value"]) | 4365 |
| explode | column | df.explode("tags") | 2135 |
| transpose | - | df.transpose() | 2271 |
| wide_to_long | stubnames, i, j | pd.wide_to_long(...) | 588 |
Assisted Operations
| sort | by, ascending | df.sort_values("time") | 8588 |
| topk | columns, k | df.head(5) | 4074 |
| select | columns | df[["name", "value"]] | 5762 |
| rename | columns | df.rename(columns={"old": "new"}) | 5937 |

C.3 Task Complexity

Parameter dependencies increase task complexity. The analysis reveals that many operations require careful coordination of multiple parameters. For instance, in join operations, the left_on and right_on parameters must be properly paired, while in pivot operations, the columns, index, and values parameters must form a valid combination. These parameter dependencies significantly increase the complexity of task specification and execution.

Figure 5: Heatmap of parameter usage across different operations. Darker colors indicate higher parameter complexity.

Figure 6: Consolidated view of parameter distributions for complex operations.
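The parameter-dependency constraints discussed in App. C.3 can be sketched as a small validator. The specific rules below (the `left_on`/`right_on` pairing and the role-disjointness check for pivot) are illustrative assumptions, not the benchmark's full rule set:

```python
def check_parameter_dependencies(op: str, params: dict) -> bool:
    """Return True when an operator's parameters form a valid combination."""
    if op == "join":
        # left_on and right_on must be supplied as a pair, or a shared
        # `on` key must be given instead (hypothetical rule).
        has_left, has_right = "left_on" in params, "right_on" in params
        if has_left != has_right:
            return False
        if not (has_left or "on" in params):
            return False
    elif op == "pivot":
        # index, columns, and values must all be present, with no column
        # reused across the three roles (hypothetical rule).
        roles = [params.get("index"), params.get("columns"), params.get("values")]
        if any(r is None for r in roles):
            return False
        if len(set(roles)) != len(roles):
            return False
    return True
```

Such checks complement the schema propagation mechanism of App. A.6: they validate intra-operator parameter combinations, while schema propagation validates parameters against the evolving table state.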
C.4 Instruction Characteristics

To quantify lexical and semantic diversity across natural language instructions in PARROT, we compute standard generation diversity metrics, including Distinct-n and Self-BLEU:

• Distinct-n [26]: Measures the proportion of unique n-grams in the instruction corpus. Higher values indicate richer lexical variety. We report Distinct-1 and Distinct-2 on the full instruction set.
• Self-BLEU [54]: Measures overlap between an instruction and the rest of the corpus. Lower Self-BLEU implies lower redundancy and more diverse phrasing.

Tab. 9 summarizes the results. Compared to other instruction-driven datasets such as Spider and NL2Formula, PARROT exhibits significantly higher lexical diversity and lower redundancy, reflecting its open-ended, LLM-generated language design.

Table 9: Lexical diversity comparison across instruction datasets.

| Dataset | Distinct-1 | Distinct-2 | Self-BLEU-4 |
| Spider [46] | 0.39 | 0.62 | 0.81 |
| NL2Formula [51] | 0.42 | 0.68 | 0.77 |
| PARROT (Ours) | 0.58 | 0.74 | 0.61 |

These results confirm that PARROT instructions exhibit greater lexical richness and structural variation, which helps benchmark model generalization to diverse user intents and phrasing styles.

Figure 7: Instruction characteristics in PARROT. Left: instruction length distribution in characters (mean 677.2, median 665.0). Right: instruction length distribution in tokens (mean 107.4, median 105.0).

C.5 Diversity of Table Structure

To further examine table diversity, Fig. 8 reports the distribution of column types in input versus output tables. Notably, output tables exhibit an increased proportion of textual columns and reduced numeric fields, reflecting structural transformations such as pivoting or aggregation that change schema layouts.

Figure 8: Column type distributions in input vs. output tables.

These statistics collectively ensure that PARROT presents operational and structural diversity, challenging models across both symbolic planning and schema reasoning.

C.6 Difficulty Level Definition and Illustrative Examples

Difficulty Level Definitions. To simulate tasks of varying complexity, we classify operator chains into three difficulty levels based on chain length and the semantic nature of the operations involved:

• Easy (1–3 steps): These chains typically consist of atomic, table-local operations such as filter, sort, select, or dropna.
• Medium (4–6 steps): Chains in this category often involve combinations of aggregation and light integration, such as groupby-agg, rename, or simple join operations.
• Hard (7–8 steps): These chains incorporate multi-table joins, nested reshaping (e.g., pivot following groupby), and schema-evolving transformations requiring global reasoning.

This stratified scheme allows us to assess model performance across compositional depths and reasoning challenges, while maintaining control over the distribution of task complexity in the benchmark.

Illustrative Examples. Each example consists of a natural language instruction, the corresponding DSL operator sequence, the compiled code, and input/output tables. These examples demonstrate different reasoning demands, such as filtering, sorting, and multi-table aggregation involving unions and joins with nested operations.
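The length-based component of the stratification above can be sketched as a simple classifier. The semantic criteria (e.g., presence of multi-table joins or nested reshaping) are applied on top of chain length and are not modeled here:

```python
def difficulty_level(chain):
    """Bucket an operator chain by length: 1-3 easy, 4-6 medium, 7-8 hard."""
    n = len(chain)
    if 1 <= n <= 3:
        return "Easy"
    if 4 <= n <= 6:
        return "Medium"
    if 7 <= n <= 8:
        return "Hard"
    raise ValueError(f"chain length {n} is outside the benchmark's 1-8 range")
```

Under this bucketing, the three worked examples below (a 1-step sort, a 4-step filter/deduplicate/groupby/sort chain, and an 8-step join-centered chain) fall into the Easy, Medium, and Hard levels respectively.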
Example (Easy)

{
  "instruction": "Sort the data in 'table_1' by 'Civil Liberties' and 'President' in ascending order to organize the entries accordingly.",
  "input_table": "input_E001.csv",
  "output_table": "output_E001.csv",
  "transformation_sequence": [
    {"op": "sort", "params": {"by": ["Civil Liberties", "President"], "ascending": [true, true]}}
  ],
  "gold_code": "df.sort_values(by=['Civil Liberties', 'President'], ascending=[True, True])"
}

Input Table (E001)

| Year | Political Rights | Civil Liberties | Status | President |
| 1972 | 6 | 6 | Not Free | Hamani Diori |
| 1973 | 6 | 6 | Not Free | Hamani Diori |
| 1974 | 7 | 6 | Not Free | Hamani Diori |
| 1975 | 7 | 5 | Not Free | Seyni Kountché |
| 1976 | 7 | 5 | Not Free | Seyni Kountché |
| 1977 | 7 | 5 | Not Free | Seyni Kountché |

Output Table (E001)

| Year | Political Rights | Civil Liberties | Status | President |
| 1975 | 7 | 5 | Not Free | Seyni Kountché |
| 1976 | 7 | 5 | Not Free | Seyni Kountché |
| 1977 | 7 | 5 | Not Free | Seyni Kountché |
| 1972 | 6 | 6 | Not Free | Hamani Diori |
| 1973 | 6 | 6 | Not Free | Hamani Diori |
| 1974 | 7 | 6 | Not Free | Hamani Diori |

Example (Medium)

{
  "instruction": "Start by excluding the rows where the 'Year' is 2013. Then, remove duplicate rows in table_1, keeping the last occurrence for each duplicate. After that, group the resulting data by 'Name', computing the minimum of 'Number of Contestants' for each group.
Finally, sort the grouped data by 'Number of Contestants 'in ascending order.",,→ ,→ ,→ ,→ ,→ "input_table": "input_M001.csv", "output_table": "output_M001.csv", "transformation_sequence": [ {"op": "filter", "params": {"column": "Year", "condition": "!= 2013"}}, ,→ {"op": "duplicate", "params": {"subset": null, "keep": "last"}}, {"op": "groupby", "params": {"by": ["Name"], "agg": {"Number of Contestants": "min"}}}, ,→ {"op": "sort", "params": {"by": ["Number of Contestants"], "ascending": [true]}} ,→ ], "gold_code": "df.query( 'Year != 2013 ').drop_duplicates(keep= 'last ').groupby( 'Name ', as_index=False).agg({ 'Number of Contestants ': 'min'}).sort_values(by= 'Number of Contestants ', ascending=True)",→ ,→ ,→ } Input Table (M001) Name Number of Contestants Number of Approved Year University of
https://arxiv.org/abs/2505.15874v1
Chile 253 125 2014 Pontifical Catholic University of Chile 202 118 2014 University of Concepción 108 46 2013 University of Chile 74 33 2015 Pontifical Catholic University of Chile 69 31 2015 Output Table (M001) Name Number of Contestants Pontifical Catholic University of Chile 69 University of Chile 74 Example (Hard) { "instruction": "Start by performing a right join between table_1 and table_2 on 'ship id 'with suffixes '_left 'and '_right '. Then, remove any rows with missing values across all columns. Next, explode the 'location 'column by splitting its values at commas. Group the data by the 'type 'column, counting occurrences of 'speed knots 'and calculating the mean of 'ship id '. Sort the grouped data by'speed knots 'and 'ship id 'in descending order. Deduplicate the results based on 'speed knots 'and 'ship id ', keeping the first occurrence. Select the top two entries from the sorted results. Finally, rename the columns to 'category 'for 'type ','velocity in nautical miles 'for 'speed knots ', and 'vessel identifier 'for 'ship id '.",,→ ,→ ,→ ,→ ,→ ,→ ,→ ,→ ,→ ,→ ,→ 21 "input_table": [ "input_H001_ships.csv","input_H001_ship_missions.csv"], ,→ "output_table": "output_H001.csv", "transformation_sequence": [ { "op": "join", "params": { "on": "ship id", "how": "right", "suffixes": ["_left", "_right"] } }, ,→ { "op": "dropna", "params": { "how": "all" } }, { "op": "explode", "params": { "column": "location", "split_comma": true } }, ,→ { "op": "groupby", "params": { "by": ["type"], "agg": { "speed knots": "count", "ship id": "mean" } } }, ,→ { "op": "sort_values", "params": { "by": ["speed knots", "ship id"], "ascending": [false, false] } }, ,→ { "op": "deduplicate", "params": { "subset": ["speed knots", "ship id"], "keep": "first" } }, ,→ { "op": "head", "params": { "n": 2 } }, { "op": "rename", "params": { "rename_map": { "ship id": "vessel identifier", "speed knots": "velocity in nautical miles", "type": "category" } } },→ ,→ ], "gold_code": "df = ( 
table_1.merge(table_2, on='ship id', how='right', suffixes=('_left', '_right'))
    .dropna(how='all')
    .assign(location=lambda df: df['location'].str.split(','))
    .explode('location')
    .groupby('type', as_index=False)
    .agg({'speed knots': 'count', 'ship id': 'mean'})
    .sort_values(by=['speed knots', 'ship id'], ascending=[False, False])
    .drop_duplicates(subset=['speed knots', 'ship id'], keep='first')
    .head(2)
    .rename(columns={
        'ship id': 'vessel identifier',
        'speed knots': 'velocity in nautical miles',
        'type': 'category'
    })"
}

Input Table 1: Ships (H001)
ship id name type nationality tonnage
1 Corbridge Cargo ship United Kingdom 3687
2 Farringford Battle ship United States 3146
3 Dromonby Cargo ship United Kingdom 3627
4 Author Cargo ship United Kingdom 3496
5 Trader Battle ship United Kingdom 3608
6 Ariadne Cargo ship United States 3035
7 Appam Battle ship United Kingdom 7781
8 Clan McTavish Cargo ship United States 5816

Input Table 2: Ship_missions (H001)
mission id ship id launched year location speed (knots) fate
1 1 1930 Germany 25 Decommissioned 1950
2 2 1930 Germany 25 Decommissioned 1950
3 3 1930 Helsinki, Finland 23 Lost (burned) 1931
4 5 1916 Norway 16 Retired 1939
5 6 1931 Uusikaupunki, Finland 23 Decommissioned 1959
6 7
1931 Uusikaupunki, Finland 23 Decommissioned 1960
7 8 1932 Turku, Finland 23 Lost (burned and sunk) 1933

Output Table (H001)
category velocity in nautical miles vessel identifier
Cargo ship 7 4.875
Battle ship 4 5.25

C.7 Operator-Level Execution Characteristics. To quantify the computational characteristics of different operations, we analyze the per-operation execution time distribution across all tasks. Fig. 9 presents a box plot of execution times for each operation type, excluding the top 5% of outliers to mitigate the impact of extreme values. The analysis reveals a clear computational hierarchy: data-intensive operations such as unpivot and union exhibit higher median execution times, while lightweight operations like select and dropna typically complete within 2 seconds. This computational heterogeneity has important implications for pipeline optimization, as it suggests that the choice and ordering of operations can significantly impact overall execution time. The results underscore the need for operation-aware scheduling and synthesis strategies.

Figure 9: Distribution of per-operation execution times across all tasks, with the top 5% of outliers removed. Heavier operations such as union and join incur higher costs, while others like filter or dropna are faster.

D Visualization Platform
To facilitate inspection and debugging of individual tasks in the PARROT benchmark, we develop an interactive Data Synthesis Visualization Platform, which provides detailed insights into the pipeline information and execution outcomes. The interface features a benchmark overview dashboard and a data demonstration module, which includes a task description panel, a code implementation panel, and a table comparison panel.

Benchmark Overview Dashboard. This overview dashboard (Fig.
10) displays metadata for each task, including status, execution time, predicted complexity, and involved operations (e.g., groupby, sort, topk).

Figure 10: Benchmark Overview. The dashboard summarizes total task count, success rate, execution time statistics, operator distributions, and task complexity.

Task Description. This panel (Fig. 11) shows the original natural language instruction, its interpreted transformation intent, and the corresponding symbolic transformation chain.

Code Implementation. This panel (Fig. 12) presents the compiled Python (Pandas) implementation generated from the symbolic program.

Table Comparison. This panel (Fig. 13) provides side-by-side visualization of the input table, ground-truth output, and actual model execution result.

E Details of Experiments

E.1 Zero-shot LLMs
To maintain evaluation consistency across models, we employed a unified prompt format, the details of which are provided in Sec. E.10. This schema-aware, instruction-driven, and output-targeted approach was consistently applied unless otherwise specified.

Inference Environment. To ensure consistency and reproducibility across all experiments, we set the temperature of the LLM to 0.7 and limit the maximum output length to 4000 tokens. For models that do not support the temperature parameter (e.g., o3-mini and o4-mini), this setting is omitted.

Figure 11: Task Description Panel. It displays the original natural language instruction, the rewritten instruction, and a structured transformation chain in DSL.

Figure 12: Code Implementation Panel. It presents the synthesized Python (Pandas) code that executes the DSL logic.

E.2 Fine-Tuning of LLMs
To better align large language models (LLMs) with data preparation tasks, we conducted supervised fine-tuning on a set of recent
instruction-tuned LLMs, including Qwen2.5-Coder-1.5B-Instruct, Qwen2.5-Coder-3B-Instruct, and Qwen2.5-Coder-7B-Instruct. The goal is to improve the model's ability to interpret natural language instructions and generate accurate transformation plans.

Training Environment. Training is conducted on a server equipped with four NVIDIA RTX 4090 GPUs. We utilize torchrun with 4 GPUs per run. The models are trained using mixed precision (bf16), with a batch size of 2 for the 1.5B model and 1 for the 3B and 7B models. Training runs for 3 epochs, with a maximum sequence length of 4096. We employ Parameter-Efficient Fine-Tuning (PEFT) to reduce computational cost. DeepSpeed is used to manage memory and optimize performance.

E.3 Text-to-Pandas
The Text-to-Pandas module is designed to convert natural language instructions into executable pandas transformation code. It leverages carefully engineered prompts to guide large language models in interpreting user intents and generating accurate, efficient pandas scripts. This approach centers on prompt engineering: designing clear, context-aware prompts that help the model accurately understand user intent and generate precise Pandas code. The module detects the data operation type (e.g., filtering, grouping, pivoting) and converts natural language instructions into corresponding Python code based on the data schema, lowering the barrier for complex data transformations. Detailed prompt templates are provided to facilitate reproducibility and further development (see Sec. E.11).

Figure 13: Data Comparison Panel. It visualizes the input table, target output, and actual execution result side-by-side for verification.

E.4 Text-to-SQL
To evaluate models in the Text-to-SQL setting, we simulate a realistic database-backed environment. All data used in our transformation tasks is stored in a relational database (SQLite), ensuring that the models operate over structured data with executable queries.

Database Construction.
We construct a normalized relational database that mirrors the tabular inputs used in the benchmark. Each table in the database is defined with an explicit schema including column names and data types. Where applicable, foreign key relationships are established to support multi-table joins. This setup ensures that the generated SQL can be directly executed for validation.

Prompt Template. To support accurate and consistent code generation in the Text-to-SQL setting, we design structured prompt templates (see Sec. E.12) that pair natural language instructions with table schemas.

E.5 Agent-Based Methods
We evaluate three major classes of agent-based methods (Tool Calling Agent, Plan-and-Solve Agent, and Chain-of-Tables Agent), as described in Sec. 5.1. To reduce redundancy, we omit high-level descriptions here and instead provide implementation and execution details for reproducibility.

Tool and Environment Setup. Each data preparation operation is implemented as a callable tool (see Sec. E.6). Agents interact with these tools using LangChain's tool calling. All agents are executed within a Python environment with access to the full table context (i.e., input, schema, partial states).

Tool Calling API. This agent directly maps natural language instructions to tool calls via LLM reasoning. We use LangChain's native tool calling support with GPT-4o, GPT-4o-mini, and DeepSeek-V3. The input to the agent includes the partial table preview, instructions, and tool schema documentation. Tool descriptions are standardized to ensure consistency across models.

Plan-and-Solve. This agent follows a two-phase paradigm: it first prompts the LLM to generate
a high-level transformation plan as a sequence of operations (e.g., filter → groupby → sort), and then executes each operation step-by-step via tool invocation.

Chain-of-Tables. The Chain-of-Tables Agent operates by tracking and updating intermediate tabular states, enabling stepwise manipulation of evolving data frames. It is especially suited for multi-hop tabular tasks. We reuse a plan-execute-react loop similar to that of [40], but adapt it to work with our DSL-based transformation layer rather than raw SQL.

E.6 Pipeline-Agent
We propose Pipeline-Agent, a unified framework for data preparation that integrates iterative reasoning and tool-based execution in a closed-loop interaction cycle. This design enables the agent to make step-by-step decisions, invoke appropriate tools, and leverage execution feedback to guide subsequent actions, thereby ensuring accurate and consistent transformation outcomes.

Specifically, we first adopt the ReAct framework [45], enabling the agent to reason about intermediate table states and determine the next operation to apply. This reasoning is interleaved with execution, allowing the agent to refine its plan based on observed table changes. We then define a set of modular tools, each corresponding to a specific data preparation operation such as filter, groupby, sort, or pivot. These tools share a standard interface (e.g., transform(df)) and accept structured input arguments, making them readily callable by LLM-based agents. Finally, Pipeline-Agent dynamically invokes these tools along its reasoning trajectory, forming a toolchain that adapts to the evolving table state. This tight coupling of reasoning and execution enables the agent to maintain a coherent transformation process, even in complex multi-step scenarios.

Comparison with Baseline Agents.
Unlike the Tool Calling API, which executes a one-shot instruction without considering intermediate states, or the Plan-and-Solve approach, which separates planning from execution, our Pipeline-Agent tightly integrates both aspects in a single loop. Compared to Chain-of-Tables, which focuses on static table reasoning, Pipeline-Agent emphasizes dynamic state tracking and adaptive tool chaining, making it more suitable for iterative table transformation tasks. Tool Definitions and Interaction. Each tool in the Pipeline-Agent is a self-contained module that encapsulates a particular transformation logic. Tools expose a unified interface for execution and are compatible with structured reasoning inputs from LLMs. During the agent’s reasoning process, these tools are dynamically selected and applied, enabling seamless integration into the closed-loop pipeline. The toolset spans essential operations such as filtering, grouping, sorting, pivoting, and more, supporting a wide range of data preparation needs. 
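The unified interface described above can be illustrated with a short sketch. The names, the helper constructors, and the list-of-dicts table representation below are our own illustration, not the benchmark's actual implementation (which operates on pandas DataFrames with pydantic-validated arguments, as shown in the Tools Definition that follows):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Tables are lists of dict rows here only to keep the sketch dependency-free.
Row = Dict[str, object]
Table = List[Row]

@dataclass
class Tool:
    """One self-contained transformation exposed behind transform()."""
    name: str
    description: str
    transform: Callable[[Table], Table]

def make_filter_tool(condition: Callable[[Row], bool]) -> Tool:
    return Tool("filter", "keep rows satisfying a condition",
                lambda t: [r for r in t if condition(r)])

def make_sort_tool(by: str, ascending: bool = True) -> Tool:
    return Tool("sort", "sort rows by one column",
                lambda t: sorted(t, key=lambda r: r[by], reverse=not ascending))

def run_toolchain(table: Table, chain: List[Tool]) -> Table:
    # In Pipeline-Agent the next tool is chosen by the LLM from the evolving
    # table state; a fixed chain stands in for that reasoning loop here.
    for tool in chain:
        table = tool.transform(table)
    return table
```

Because every tool shares the same `transform` signature, the agent can compose an arbitrary chain without special-casing individual operations.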
Tools Definition

from pydantic import BaseModel
from typing import List, Union

class BaseOpInput(BaseModel):
    table_names: str

class FilterInput(BaseOpInput):
    condition: str

class SortInput(BaseOpInput):
    by: List[str]
    ascending: List[bool]

class PivotInput(BaseOpInput):
    index: str
    columns: str
    values: str
    aggfunc: str

class StackInput(BaseOpInput):
    id_vars: List[str]
    value_vars: List[str]

class ExplodeInput(BaseOpInput):
    column: str
    split_comma: bool

class WideToLongInput(BaseOpInput):
    subnames: List[str]
    i: List[str]
    j: str
    sep: str
    suffix: str

class UnionInput(BaseOpInput):
    left_table: str
    right_table: str
    how: str

class JoinInput(BaseOpInput):
    left_table: str
    right_table: str
    left_on: str
    right_on: str
    how: str
    suffixes: List[str]

class TransposeInput(BaseOpInput):
    pass

class DropnaInput(BaseOpInput):
    subset: List[str]
    how: str

class DeduplicateInput(BaseOpInput):
    subset: Union[List[str], None]
    keep: str

class TopKInput(BaseOpInput):
    k: int

class SelectInput(BaseOpInput):
    columns: List[str]

class CastInput(BaseOpInput):
    column: str
    dtype: str

class RenameItem(BaseModel):
    old_name: str
    new_name: str

class RenameInput(BaseOpInput):
    rename_items: List[RenameItem]

class AggItem(BaseModel):
    column: str
    agg_func: str

class GroupByInput(BaseOpInput):
    by: List[str]
    aggregations: List[AggItem]

E.7 Error Analysis
As shown in Fig. 14, type errors consistently account for the largest proportion across all models, ranging from 41.2% (GPT-4o-mini) to 54.5% (DeepSeek-R1). These errors typically stem from applying transformations to columns with incompatible data types, such as aggregating over non-numeric fields or sorting categorical columns without conversion. Models with weaker schema-tracking capabilities (e.g., DeepSeek-R1/V3) exhibit higher type error rates.

Semantic errors constitute the second largest category, with models like GPT-4o and Gemini-2.5-Pro showing around 27% semantic failures. These reflect incorrect or missing operations, broken logic chains, or hallucinated steps that deviate from the task intent. The relatively lower semantic error rates of DeepSeek models may indicate a conservative generation style, albeit at the cost of lower execution accuracy. Column/index errors are more prominent in GPT-o3/o4-mini and Gemini-2.5-Pro, often resulting from misaligned references due to schema evolution (e.g., renaming or selection). This suggests limitations in maintaining coherent schema state across steps. Attribute errors (e.g., wrong aggregation function or sorting order) appear in 3%-7% of cases across models, indicating shallow mapping between instruction semantics and operator parameters.

Notably, GPT-4o demonstrates a more balanced error profile, with relatively lower type error rates and a higher proportion of semantic errors, implying its stronger capacity in basic type resolution but remaining challenges in compositional reasoning. In contrast, DeepSeek-R1 shows a high concentration of type errors, suggesting brittle type inference. Overall, these results highlight the diverse failure modes of different LLMs in Text-to-Pipeline, and emphasize the importance of schema tracking, operator grounding, and long-range reasoning in achieving robust program synthesis.
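An error taxonomy of this kind can be approximated by mapping the exception raised during pipeline execution to a category. The mapping below is a hedged sketch of one plausible rule set, not the paper's actual classification procedure; in particular, semantic errors (wrong output despite successful execution) cannot be detected from exceptions alone:

```python
from collections import Counter
from typing import Iterable

def classify_error(exc: BaseException) -> str:
    """Map a raised exception to an error category (hypothetical mapping)."""
    if isinstance(exc, TypeError):
        return "Type Error"          # e.g., aggregating a non-numeric column
    if isinstance(exc, KeyError):
        return "Key Error"           # e.g., a missing group/dictionary key
    if isinstance(exc, AttributeError):
        return "Attribute Error"     # e.g., a wrong method or accessor name
    if isinstance(exc, (IndexError, LookupError)):
        return "Column/Index Error"  # e.g., a stale column after rename/select
    if isinstance(exc, ValueError):
        return "Parameter Error"     # e.g., a malformed operator argument
    return "Semantic Error"          # fallback; real semantic errors need output checks

def tally_errors(excs: Iterable[BaseException]) -> Counter:
    """Aggregate per-category counts, as visualized in Fig. 14."""
    return Counter(classify_error(e) for e in excs)
```

Note the ordering: `KeyError` is a subclass of `LookupError`, so it must be tested before the `Column/Index Error` branch.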
Figure 14: Distribution of error types across six large language models.
(a) GPT-4o: Type Error 48.7%, Semantic Error 27.2%, Column/Index Error 16.7%, Attribute Error 6.6%, Key Error 0.9%
(b) GPT-o4-mini: Type Error 41.3%, Semantic Error 28.8%, Column/Index Error 24.5%, Attribute Error 5.4%
(c) GPT-o3-mini: Type Error 51.2%, Semantic Error 24.9%, Column/Index Error 15.5%, Attribute Error 7.0%, Parameter Error 0.9%, Key Error 0.5%
(d) Gemini-2.5-Pro: Type Error 52.4%, Semantic Error 27.8%, Column/Index Error 13.9%, Attribute Error 4.4%, Parameter Error 1.5%
(e) DeepSeek-R1: Type Error 54.5%, Semantic Error 27.1%, Column/Index Error 12.4%, Attribute Error 5.2%, Parameter Error 0.6%, Key Error 0.3%
(f) DeepSeek-V3: Type Error 58.5%, Semantic Error 21.8%, Column/Index Error 14.4%, Attribute Error 4.2%, Key Error 0.7%, Parameter Error 0.4%

E.8 Case Study
This example presents a representative failure case encountered during zero-shot prompting with GPT-4o. The user instruction specifies a four-step transformation: (1) select three relevant columns, (2) filter rows based on a numeric threshold, (3) aggregate the results grouped by buyer, and (4) rename the resulting column for presentation. However, the predicted pipeline skips the initial select operation and begins directly with filtering. While this omission may appear minor, it triggers a chain of downstream errors. First, the filter and groupby steps are applied over the full original schema, causing the inclusion of irrelevant fields such as Product. Second, due to the missing projection, records not intended for analysis, such as rows with values below the threshold or involving different aggregation semantics, are mistakenly retained in the final output. The resulting prediction not only introduces extra columns but also contains
logically inconsistent rows (e.g., the buyer "Bob" appears with an amount below 1000, contradicting the filtering intent). This case exemplifies several challenges specific to Text-to-Pipeline:

• Instruction grounding: The model fails to fully map the natural language intent to a faithful operator sequence, omitting the projection implied by "extract the columns."
• Schema evolution tracking: Without the initial pruning, subsequent operations operate on a misaligned schema, which affects aggregation and renaming logic.
• Multi-step reasoning: The model struggles to anticipate how early-stage operations affect downstream validity, leading to cascading inconsistencies.

Such errors highlight the brittle nature of zero-shot pipelines when schema dependencies are not explicitly modeled. They also underscore the importance of symbolic alignment and intermediate validation, motivating the design of agent-based methods such as Pipeline-Agent that incorporate execution-aware reasoning and schema propagation during generation.

Case Study

Task Intent
Extract the "Trans ID", "Buyer", and "Amount" columns from table_1. Filter the rows where "Amount" > 1000. Then, group by "Buyer" to sum "Amount", and rename "Amount" to "Total Payment".
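Executed faithfully, the four-step intent above reduces the excerpt to a single row. The following pandas sketch is our own illustration (values taken from the case-study excerpt), not the paper's gold code:

```python
import pandas as pd

# Case-study excerpt: three transactions, one column (Product) irrelevant
# to the intent.
table_1 = pd.DataFrame({
    "Trans ID": ["T001", "T002", "T003"],
    "Buyer": ["Alice", "Bob", "Alice"],
    "Amount": [1200, 800, 2500],
    "Product": ["Pen", "Notebook", "Laptop"],
})

result = (
    table_1[["Trans ID", "Buyer", "Amount"]]           # select: prune Product
    .query("Amount > 1000")                            # filter: Bob's 800 removed
    .groupby("Buyer", as_index=False)["Amount"].sum()  # aggregate per buyer
    .rename(columns={"Amount": "Total Payment"})       # presentation rename
)
# result contains a single row: Buyer "Alice", Total Payment 3700
```

Per the case study, omitting the initial select leaves the later steps running over the full schema, which is how the predicted output came to retain Product and the unfiltered Bob row.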
Ground Truth Pipeline
[
  {"op": "select", "params": {"columns": ["Trans ID", "Buyer", "Amount"]}},
  {"op": "filter", "params": {"condition": "`Amount` > 1000"}},
  {"op": "groupby", "params": {"by": ["Buyer"], "agg": {"Amount": "sum"}}},
  {"op": "rename", "params": {"rename_map": {"Amount": "Total Payment"}}}
]

Predicted Pipeline
[
  {"op": "filter", "params": {"condition": "`Amount` > 1000"}},
  {"op": "groupby", "params": {"by": ["Buyer"], "agg": {"Amount": "sum"}}},
  {"op": "rename", "params": {"rename_map": {"Amount": "Total Payment"}}}
]

Input Table (excerpt)
| Trans ID | Buyer | Amount | Product  |
|----------|-------|--------|----------|
| T001     | Alice | 1200   | Pen      |
| T002     | Bob   | 800    | Notebook |
| T003     | Alice | 2500   | Laptop   |

Expected Output Table
| Buyer | Total Payment |
|-------|---------------|
| Alice | 3700          |

Predicted Output Table
| Buyer | Total Payment | Product  |
|-------|---------------|----------|
| Alice | 3700          | Pen      |
| Bob   | 800           | Notebook |

E.9 Prompts for Automatic Pipeline Data Synthesis
In this section, we present the detailed prompts used in our data synthesis framework. As mentioned in Sec. 3, we primarily utilize LLMs for instruction generation and refinement in Stage 3, and for verifying the consistency between natural language instructions and the operator chain in the DSL during Stage 5. Below, we introduce the prompts employed in these two stages.

Prompt for Instruction Generation
You are a data preparation expert. I have some related input tables and a target table, where
the target table is obtained by transforming the input tables. Based on the transformation relationship between them, please generate a clear natural language instruction that describes how to transform the input tables into the target table.

The transformation operations and their detailed parameters are as follows:
{transform_chain_str}

Input Tables (First 10 Rows):
{input_table_str}

Target Table (First 10 Rows):
{target_table_str}

Please generate a clear and natural data preparation instruction in English. The instruction should explicitly describe the required transformation steps and clearly state the table names involved, without mentioning specific programming languages or function names. Use terminology from the data analysis domain and consider the purpose and effect of the operations.

Your instructions just need to clearly describe the conversion chain without describing additional operations.

Your instruction should follow the format:
Instruction: [Your data preparation instruction]

Prompt for Instruction Refinement
Based on the following data preparation task description, generate a natural language statement expressing the user's intent.

Concise, Action-Oriented Language: Focus on the core actions and remove unnecessary details. Keep the language clear and direct to highlight the transformation intent.

Clarification of Key Tables and Columns: Maintain essential table names and columns, but express them in a natural, straightforward way.

Simplified Descriptions of Complex Steps: Emphasize the main objectives (sorting, filtering, deduplication) without diving into excessive details, unless they are crucial for the context.

Necessary details need to be preserved, such as the suffix of the join, the way the deduplication operation is performed (first or last), etc.

Here are some examples:
---
Task Description: To transform the input tables into the target table, follow these steps: 1.
Begin by performing an inner join between table_1 and table_2 using the allergy name column from table_1 and the allergy column from table_2. This will combine records from both tables where there is a match on these columns, while including the allergy name and allergy type from table_1 along with the stuid from table_2. 2. Next, group the resulting dataset by the allergy name (now included in the joined table) and aggregate the data by counting the number of unique stuid entries for each allergy name. This will give you the total number of students associated with each allergy. 3. After aggregating, sort the grouped data first by the count of stuid in ascending order and then by allergy name in descending order. This will organize the data based on the number of students and the names of the allergens. 4. From the sorted data, select the top 7 entries based on the highest counts of students. This step ensures that we focus only on the most significant allergens. 5. Rename the columns in the resulting dataset by changing allergy name to allergen and stuid to student ID to make the column names more intuitive. 6. Apply a filter to retain only those records where the student ID (which now represents the count of students) is greater than or equal to 3. This will help in identifying the allergens
that have a notable number of students associated with them. 7. Remove any duplicate entries from the filtered dataset to ensure that each allergen-student ID combination is unique. 8. Finally, perform a sort on the deduplicated data by student ID in ascending order and allergen in descending order to achieve the desired final format. Following these steps will yield a table that lists allergens along with the count of students associated with each, structured as specified in the target table.

User Intent: Start by performing an inner join between table_1 and table_2 on 'allergy name' and 'allergy', with suffixes '_left' and '_right'. Then, group the data by allergy name and count the number of 'stuid' entries for each allergen to determine the number of students associated with each allergy. After grouping, sort the data first by the student count in ascending order and then by allergy name in descending order. Select the top 7 entries. Rename the columns to change allergy name to allergen and stuid to student ID for clarity. Apply a filter to keep only the records where the student ID is 3 or greater. Deduplicate the data, keeping the first occurrence of each duplicate entry to ensure uniqueness. Finally, sort the deduplicated dataset by student ID in ascending order and allergen in descending order to produce the final result.

---
Task Description: First, combine the two input tables, table_1 and table_2, by performing a union operation to consolidate all records, including duplicates. Next, pivot the resulting table to reorganize the data, setting the station names (STN_NAM) as the index, the data provider (DATA_PVDR) as the columns, and using the minimum longitude (LONGITUDE) as the values. After pivoting, rename the column STN_NAM to Station Name. Then, filter the table to keep only the rows where the data provider is "NAV CANADA".
Following this, remove any rows that contain missing values in the "NAV CANADA" column. Convert the data type of the "NAV CANADA" column to string. Next, ensure there are no rows where "NAV CANADA" is equal to itself (this condition might be meant for data cleansing or error checking). Finally, deduplicate the entries based on the "NAV CANADA" column while keeping the last occurrence of each duplicate. The result will be your target table with the columns DATA_PVDR and NAV CANADA.

User Intent: Begin by performing a union operation on table_1 and table_2 to consolidate all records, including duplicates. Then, pivot the resulting table with the station names (STN_NAM) as the index, the data provider (DATA_PVDR) as the columns, and use the minimum longitude (LONGITUDE) as the values. Rename the STN_NAM column to "Station Name" for clarity. Next, select the only column "NAV CANADA", and remove any rows with missing values in the "NAV CANADA" column. Convert
the "NAV CANADA" column to a string data type and ensure that there are no rows where "NAV CANADA" is equal to itself. Finally, deduplicate the data based on the "NAV CANADA" column, keeping the last occurrence of each duplicate entry.

---
Task Description: First, reshape the data from the wide format to a long format by selecting the columns related to 'PUZZLE B' and 'PUZZLE A', while keeping the specified index columns intact. After transforming the data to a long format, you can apply the explode operation. This operation will split any column containing comma-separated values into individual rows. Next, transform the data from wide format to long format: keep the columns in id_vars unchanged and stack the values from value_vars ("PUZZLE A" and "PUZZLE B") into two new columns, one for the variable names and another for the values.

User Intent: First, reshape the data by collapsing columns that start with "PUZZLE B" or "PUZZLE A" into a long format, while keeping the specified index columns ("Index", "Where are we?") unchanged. The original suffixes from the column names are extracted into a new column called var, using a space as the separator and matching suffixes with a word character pattern (\w+). Then, explode the "PUZZLE B" column to create separate rows for each puzzle listed, ensuring that each puzzle is split by commas first. Next, transform the data from wide format to long format: the columns specified in id_vars ("Index", "Where are we?") remain unchanged and serve as identifiers for each row.
The values in value_vars ("PUZZLE A" and "PUZZLE B") are then stacked into two new columns: one for the variable names and another for the values.

---
Now, based on the following task description, generate a user intent statement:

Transformation Chain: {transform_chain}
Task Description: {task_instruction}

Please output only the intent statement, without explanation or numbering.

Prompt for Instruction Verify
Task Background: The user has generated an initial natural language description from a transformation chain, and then used an LLM to generate a user intent statement based on that initial description.

1. **Transformation Chain**: {transform_chain_str}
2. **Initial Natural Language Description**: {instruction}
3. **Generated Intent**: {intent_text}

Task Requirement: Assume you are a data preparation expert. Based on the current intent, can you infer the correct conversion chain, including the details of the parameters?

Output Requirements:
- If the intent allows you to infer a complete and reasonable transformation chain, output:
{{ "is_valid": "true", "intent": "{intent_text}" }}
- Otherwise, output:
{{ "is_valid": "false", "intent": "[Rewritten Intent]" }}

Please return the result in strict JSON format with no additional explanations.

E.10 Prompts for Zero-shot LLMs
This section provides the detailed prompt template designed for zero-shot large language models (LLMs). The prompt is carefully constructed to guide the model in generating accurate and semantically faithful instructions without any fine-tuning. It incorporates schema information and explicit instructions to improve model understanding and output quality. This prompt
serves as the basis for consistent evaluation across different zero-shot LLMs.

Prompts for Zero-shot LLMs

You are a data expert with extensive knowledge in data preparation pipelines. Your task is to select operators based on user intent and use them to transform the source tables.

Important notes:
- After selecting the operators, ensure they can be correctly executed, especially keeping variable names consistent.
- Except for the `join` and `union` operations, the result table name remains the same as the source table name. For `join` and `union`, the result table name should follow the format `table_x_table_y_join` or `table_x_table_y_union`.

Below are the available operators:

{
  "operators": [
    {
      "name": "join",
      "pandas_equivalent": "merge",
      "parameters": {"left_table": "left_table_name", "right_table": "right_table_name", "result_table": "table_x_table_y_join", "left_on": "left_column", "right_on": "right_column", "how": "", "suffixes": ["", ""]},
      "description": "Merge two datasets on a common column with specified input/output table names"
    },
    {
      "name": "union",
      "pandas_equivalent": "concat",
      "parameters": {"source_tables": ["table_1", "table_2"], "axis": 0, "result_table": "table_x_table_y_union", "ignore_index": true, "how": ["all", "distinct"]},
      "description": "Vertically concatenate multiple tables (similar to SQL UNION)"
    },
    {
      "name": "groupby",
      "pandas_equivalent": "groupby",
      "parameters": {"source_table": "source_table_name", "group_by": ["group_column_1", "group_column_2"], "aggregations": {"value_column_1": "aggregation_function", "value_column_2": "aggregation_function"}, "result_table": "source_table_name"},
      "description": "Group data by specified columns and apply aggregation (similar to SQL GROUP BY)"
    },
    {
      "name": "pivot",
      "pandas_equivalent": "pivot_table",
      "parameters": {"source_table": "source_table_name", "index": ["index_column_1", "index_column_2"], "columns": ["column_to_expand"], "values": ["value_column"], "aggfunc": "aggregation_function", "result_table": "source_table_name"},
      "description": "Convert long-format data into wide-format (similar to Excel Pivot Table)"
    },
    {
      "name": "unpivot",
      "pandas_equivalent": "melt",
      "parameters": {"source_table": "source_table_name", "id_vars": ["fixed_column_1", "fixed_column_2"], "value_vars": ["column_to_unpivot_1", "column_to_unpivot_2"], "var_name": "variable", "value_name": "value", "result_table": "source_table_name"},
      "description": "Convert wide-format data into long-format (similar to SQL UNPIVOT)"
    },
    {
      "name": "explode",
      "pandas_equivalent": "pd.explode",
      "parameters": {"source_table": "source_table_name", "result_table": "source_table_name", "column": "list_column", "split_comma": "true or false"},
      "description": "Expand column values into separate rows (separate them by commas first if necessary)"
    },
    {
      "name": "filter",
      "pandas_equivalent": "query",
      "parameters": {"source_table": "source_table_name", "condition": "`column_name` operation value", "result_table": "source_table_name"},
      "description": "Filter rows based on conditions (similar to SQL WHERE)"
    },
    {
      "name": "sort",
      "pandas_equivalent": "sort_values",
      "parameters": {"source_table": "source_table_name", "by": ["column_1", "column_2"], "ascending": [true, false], "result_table": "source_table_name"},
      "description": "Sort data by specified columns (similar to SQL ORDER BY)"
    },
    {
      "name": "wide_to_long",
      "pandas_equivalent": "pd.wide_to_long",
      "parameters": {"source_table": "source_table_name", "subnames": ["subname"], "i": ["id_column"], "j": "var", "sep": "", "suffix": "", "result_table": "source_table_name"},
      "description": "Convert wide-format data to long-format"
    },
    {
      "name": "transpose",
      "pandas_equivalent": "transpose",
      "parameters": {"source_table": "source_table_name"},
      "description": "Transpose rows and columns of a table; no additional parameters needed"
    },
    {
      "name": "rename",
      "pandas_equivalent": "rename",
      "parameters": {"source_table": "source_table_name", "rename_map": "Dictionary mapping old column names to new names"},
      "description": "Rename columns based on the provided mapping"
    },
    {
      "name": "dropna",
      "pandas_equivalent": "dropna",
      "parameters": {"source_table": "source_table_name", "subset": ["List or single column name to check for missing values"], "how": "Deletion strategy: either 'any' or 'all'"},
      "description": "Remove rows with missing values in specified columns"
    },
    {
      "name": "deduplicate",
      "pandas_equivalent": "drop_duplicates",
      "parameters": {"source_table": "source_table_name", "subset": ["List or single column name to determine duplicates"], "keep": ["first", "last"]},
      "description": "Remove duplicate rows, keeping either the first or last occurrence in each group"
    },
    {
      "name": "topk",
      "pandas_equivalent": "head(k)",
      "parameters": {"source_table": "source_table_name", "k": "Number of top rows to retain"},
      "description": "Select the top k rows after sorting by index or specific criteria"
    },
    {
      "name": "select",
      "pandas_equivalent": "loc / bracket selection",
      "parameters": {"source_table": "source_table_name", "columns": "List of column names to keep"},
      "description": "Select specified columns from the table"
    },
    {
      "name": "cast",
      "pandas_equivalent": "astype",
      "parameters": {"source_table": "source_table_name", "column": "Column name to change data type", "dtype": "Target data type (e.g., 'int', 'float', 'str')"},
      "description": "Convert the data type of the specified column"
    }
  ]
}

Please output the transformation steps from the input tables to the target table using the above operations. The output should follow the JSON format below:

```json
[
  {
    "name": "",
    "parameters": {}
  }
]
```

User Intent: {USER_INTENT}

Input Tables (First 10 Rows): {SOURCETABLE}

Please reason step-by-step based on the user intent, and then provide the result. The final output should be in JSON format.

E.11 Prompts for Text-to-Pandas

To ensure accurate and reproducible generation of pandas code from natural language instructions, we designed structured prompt templates tailored to various data preparation operations. These templates guide the model in understanding user intent, interpreting table schemas, and producing syntactically and semantically correct code.

Prompt for Text-to-Pandas

I need you to convert natural language into Pandas code.
dataset schema: {dataset_schema}
question: {task['question']}

Code Return Guidelines:
1. If you need to return a DataFrame as the result, assign it to a variable named 'result'
2. If you modify an existing DataFrame, keep its original variable name
3. If you create a new DataFrame (other than the final result), use a clear variable name (e.g., df_temp)
4. Each input table is already loaded as a DataFrame. The variable name of each DataFrame is the same as the input table name, such as table_1, table_2, etc.

For join operations:
1. Use the appropriate merge/join method based on the requirement
2. Make sure to specify the correct 'on' or 'left_on'/'right_on' parameters
3. Use the appropriate join type (inner, left, right, outer) as required
4. After joining, assign the result to the 'result' variable.

Please generate Pandas code that solves this problem. Only return the code, no explanation. Ensure the code is executable and follows the return guidelines above.

E.12 Prompts for Text-to-SQL

Prompt for Text-to-SQL

I need you to convert natural language questions into SQL queries.
The database schema is as follows: {schema_prompt}
question: {sample["question"]}
Please generate an SQL query that can answer this question. Only return the SQL query, no explanation is needed. Ensure your SQL query is executable in SQLite.
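A plan emitted in the operator JSON format above can be replayed against a dict of named DataFrames with a small dispatcher that maps each operator to its pandas equivalent. The sketch below is our own illustration (it covers only the `join` and `filter` operators; the other operators would be handled analogously):

```python
import pandas as pd

def apply_step(tables, step):
    """Execute one operator step against a dict of named DataFrames."""
    name, p = step["name"], step["parameters"]
    if name == "join":
        # "join" maps to pd.merge; the result gets a table_x_table_y_join name.
        tables[p["result_table"]] = pd.merge(
            tables[p["left_table"]], tables[p["right_table"]],
            left_on=p["left_on"], right_on=p["right_on"],
            how=p.get("how") or "inner")
    elif name == "filter":
        # "filter" maps to DataFrame.query; result keeps the source table name.
        tables[p["result_table"]] = tables[p["source_table"]].query(p["condition"])
    else:
        raise NotImplementedError(name)
    return tables

tables = {
    "table_1": pd.DataFrame({"id": [1, 2, 3], "score": [90, 75, 88]}),
    "table_2": pd.DataFrame({"id": [1, 2, 3], "name": ["a", "b", "c"]}),
}
plan = [
    {"name": "join", "parameters": {
        "left_table": "table_1", "right_table": "table_2",
        "result_table": "table_1_table_2_join",
        "left_on": "id", "right_on": "id", "how": "inner"}},
    {"name": "filter", "parameters": {
        "source_table": "table_1_table_2_join",
        "condition": "score > 80",
        "result_table": "table_1_table_2_join"}},
]
for step in plan:
    tables = apply_step(tables, step)
print(tables["table_1_table_2_join"])  # rows with score > 80
```

This mirrors the naming convention stated in the prompt: only `join`/`union` introduce a new table name, while every other operator overwrites its source table.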
https://arxiv.org/abs/2505.15874v1
arXiv:2505.15877v1 [cs.CV] 21 May 2025

Highlighting What Matters: Promptable Embeddings for Attribute-Focused Image Retrieval

Siting Li (University of Washington, sitingli@cs.washington.edu), Xiang Gao (IIIS, Tsinghua University, x-gao22@mails.tsinghua.edu.cn), Simon Shaolei Du (University of Washington, ssdu@cs.washington.edu)

Abstract

While an image is worth more than a thousand words, only a few provide crucial information for a given task and thus should be focused on. In light of this, ideal text-to-image (T2I) retrievers should prioritize specific visual attributes relevant to queries. To evaluate current retrievers on handling attribute-focused queries, we build COCO-FACET, a COCO-based benchmark with 9,112 queries about diverse attributes of interest. We find that CLIP-like retrievers, which are widely adopted due to their efficiency and zero-shot ability, have poor and imbalanced performance, possibly because their image embeddings focus on global semantics and subjects while leaving out other details. Notably, we reveal that even recent Multimodal Large Language Model (MLLM)-based, stronger retrievers with a larger output dimension struggle with this limitation. Hence, we hypothesize that retrieving with general image embeddings is suboptimal for performing such queries. As a solution, we propose to use promptable image embeddings enabled by these multimodal retrievers, which boost performance by highlighting required attributes. Our pipeline for deriving such embeddings generalizes across query types, image pools, and base retriever architectures. To enhance real-world applicability, we offer two acceleration strategies: pre-processing promptable embeddings and using linear approximations. We show that the former yields a 15% improvement in Recall@5 when prompts are predefined, while the latter achieves an 8% improvement when prompts are only available during inference.
1 Introduction

Images offer valuable information that facilitates problem-solving and reasoning [64, 36, 37]. Although there may be abundant information in one image, especially when it depicts a complex scene with plenty of elements [12, 54, 41], usually only a small part is critical to a task or query at a time. For both performance and efficiency considerations, there has been recent effort to focus models on key aspects of images for better visual reasoning [58, 16, 42]. Similarly, for text-to-image (T2I) retrieval that helps knowledge and fact checking [62, 49], an ideal retriever should also be able to select images with given attributes of interest, such as a specified time, location, or object, which are not necessarily the main elements of the image.

Are current T2I retrievers capable of accomplishing attribute-focused queries? We explore this question on various CLIP-like models, from CLIP [47] to the recent SigLIP2 [53], and Multimodal Large Language Model (MLLM)-based embedders like VLM2Vec [20]. Since commonly used T2I benchmarks like MSCOCO [32] and Flickr30K [63] only contain general queries on global alignment, we build a new benchmark, COCO-FACET, with 9,112 attribute-focused queries of 8 types, based on existing annotations of COCO images [32, 71, 3, 8].

Preprint. Under review.

[Figure 1: Overview. (Above) We study the task of attribute-focused text-to-image retrieval (example query: "Find me an image that contains any car") and build COCO-FACET for benchmarking various retrievers. (Below) We show that using promptable image embeddings enhances performance on such queries, and propose two acceleration strategies to improve its applicability.]

We find that current retrievers behave worse on attributes other than Animals compared with general T2I retrieval on MSCOCO, and struggle more with detailed or underexplored attributes such as Time. Additionally, we discover that they fail to prioritize images with the correct-but-non-dominant attribute over images with a dominant-but-wrong attribute (see an example in Figure 1). As embedders with various architectures and scales all fall short on such queries, we hypothesize that retrieving with general image embeddings is inefficient and suboptimal for this task. Therefore, we propose to retrieve with promptable image embeddings: we employ MLLM-based universal embedders that can process combinations of images and text, and show that using the GPT-written prompt for each category as text helps to highlight key attributes in image embeddings, demonstrating improvement on harder attributes while maintaining good performance on easier ones (see the bar plot in Figure 1). Additionally, we design two strategies to accelerate this pipeline: predefining potentially useful prompts and pre-processing the promptable image embeddings, or deriving a linear approximation of the embedder at test time, which can be efficiently applied to the query vectors.

Our main contributions are listed as follows:

• Introduced in Section 3, our benchmark COCO-FACET on attribute-focused T2I queries is a good supplement to the current general-purpose T2I retrieval evaluation. We also provide construction pipelines, which can be utilized for building benchmarks focusing on other attributes in future research.
• In Section 3.2, we reveal the limitation of current CLIP-like retrievers and MLLM-based embedders on attribute-focused T2I queries, which affects models with various scales, image resolutions, and output dimensions.

• In Section 4, we propose to use promptable image embeddings enabled by MLLM-based universal embedders as the solution. We show that this enhances retrieval performance and generalizes over query types, image pools, and base retriever architectures.

• In Section 5, we develop two acceleration strategies for real-world usage. Our results demonstrate that the pre-processing technique increases Recall@5 by 15% when prompts are predefined, while the linear approximation achieves an 8% improvement on Recall@5 when prompts are only available at inference time.

Footnote 1: The COCO-FACET benchmark can be downloaded at https://www.dropbox.com/scl/fo/hbkknl14pj5wwgpphbt6l/AC15YovOLv65Ek3hE4kib1o?rlkey=fhphyfml0uc6ctnb70v95id1n&st=qrhpzs3o&dl=0 .

2 Related Work

Text-to-image Retrieval has been a long-standing research direction, not only due to the real-world need for image search but also as an important step in general problem-solving [62, 15]. CLIP-like dual-encoder approaches [47, 65, 7, 28] are widely used due to their efficiency and remarkable zero-shot performance on standard T2I benchmarks like MSCOCO [32] and Flickr30K [63]. Recent MLLM-based retrievers [31, 25, 69, 20] extend the input modality to joint image-text pairs. Since these models output image and text embeddings for retrieval, there have been benchmarks for more comprehensive evaluation of embeddings [56, 20, 59], which include domain-focused retrieval subsets like FashionIQ [57], EDIS [35], OVEN [14], and Wiki-SS-NQ [38]. We note that these subsets either focus on the global semantics of the images or pre-select images with a single main subject, though in-the-wild images can be visually crowded with many attributes. Visual Genome [24] considers such complexity by annotating images with detailed descriptions for advanced visual understanding. COCO-Attributes [44] is built with attribute annotations for multi-label classification but only targets main subjects like people, animals, and objects.

On the other hand, previous studies on visual grounding and reasoning have pointed out that CLIP-like models [43, 5, 52] might neglect visual details. Sogi et al. [50] show that retrieval performance degrades when the target objects are small. To overcome this limitation, researchers propose two-stage approaches [39, 13] or guidance from an LLM [26]. Notably, pre-CLIP methods represented by scene graphs [21, 48, 46] might capture more visual details, but they lag behind in terms of data availability and inference efficiency. Another improvement based on CLIP-like models is to obtain task-specific image embeddings, similar to task-specific text embeddings [51, 2]. CLOC [5] proposes the new learning goal of promptable embeddings for better localization given spatial hints. GiVE [29] and FLAIR [61] design patch-level or token-level image-text interaction mechanisms for language-informed image embeddings. Universal embedders [56, 67, 69, 70] leverage promptable query embeddings for the composed image retrieval task. For the retrieval targets, VLM2Vec [20] and E5-V [19] try using different texts as prompts when deriving image embeddings for some domains (e.g., news, fashion). MM-Embed [31] recognizes the benefit of prompts but requires fine-tuning with domain-specific instruction. Promptable embeddings are also applied to other areas like reinforcement learning [6].
We systematically study promptable image embeddings for retrieval targets and propose acceleration strategies.

Localized Vision-Language Models (VLMs) are motivated by the similar idea that some part of an image is of interest for a given task. V* [58] formulates the problem as iterative visual search on a high-resolution image. Kosmos-2 [45] and GLIPv2 [66] consider language grounding by bounding boxes or phrases. DAM [30] explores captioning for a given region. Lin et al. [33] apply cropping to regions of interest, and Wang et al. [55] aim at a similar task called Partial Scene Text Retrieval. The visual intelligence of OpenAI's o3 and o4-mini models is incorporated with simple tools like zooming and cropping to process images for better reasoning [42], but their approach is region-based, while our attributes of interest can be non-region-based, like Time and Scene.

3 Benchmarking T2I Retrievers on Attribute-Focused Queries

We focus on attribute-focused T2I queries in this work. In standard, general-purpose T2I benchmarks like MSCOCO [32], T2I queries are image captions that describe the main content of the image (e.g., "A black Honda motorcycle parked in front of a garage.") but omit other attributes like Weather, especially when they are non-dominant. Hence, they cannot be used for our purpose directly. Therefore, we construct a new benchmark, COCO-FACET, for evaluating T2I retrievers on attribute-focused queries in Section 3.1, and analyze their performance in Section 3.2. We reveal that current retrievers fall short on such queries, even though they involve less image-text matching than long MSCOCO-style captions with multiple attributes.

Table 1: Model details of T2I retrievers and their average Recall@1 and Recall@5 (in percentage points) on the MSCOCO 2017 validation set and COCO-FACET. Recent MLLM-based universal embedders (second section) outperform CLIP-like models, but all T2I retrievers exhibit a performance drop on attribute-focused queries.

| Retriever | Img. Size | Params (M) | Output Dim. | COCO R@1 | COCO R@5 | COCO-FACET R@1 | COCO-FACET R@5 |
| CLIP-ViT-L/14 | 336² | 427.9 | 768 | 81.0 | 97.9 | 33.7 | 47.0 |
| EVA01 ViT-g-14 | 224² | 1136.4 | 1024 | 83.2 | 98.4 | 35.4 | 48.3 |
| EVA02 ViT-bigE-14+ | 224² | 5044.9 | 1024 | 87.9 | 99.2 | 34.2 | 48.8 |
| SigLIP ViT-SO-14 | 384² | 878.0 | 1152 | 73.2 | 95.5 | 37.8 | 51.9 |
| SigLIP2 ViT-SO-14 | 384² | 1136.0 | 1152 | 87.4 | 98.4 | 39.8 | 52.6 |
| BLIP2-COCO | 224² | 1173.2 | 768 | 88.8 | 99.1 | 37.8 | 51.6 |
| MagicLens | 224² | 427.6 | 768 | 87.2 | 99.2 | 40.6 | 56.0 |
| E5-V | 336² | 8355.3 | 4096 | 89.6 | 99.3 | 46.0 | 61.8 |
| MM-Embed | 336² | 8175.5 | 4096 | 93.2 | 99.7 | 42.8 | 58.4 |
| MMRet-MLLM-S2 | 336² | 7566.3 | 4096 | 93.7 | 99.7 | 48.8 | 64.5 |
| LLaVE-2B | 336² | 1945.2 | 1536 | 92.4 | 99.7 | 45.6 | 59.5 |
| VLM2Vec-Phi-3.5-V | 336² | 4146.6 | 3072 | 89.4 | 99.5 | 44.5 | 58.9 |

3.1 Benchmark Construction

We utilize the existing annotations provided by MSCOCO [32], Visual7W [71], VisDial [8], and COCO-Stuff [3] about COCO images. In total, we collect 9,112 test cases covering eight types: Objects, Animals, and Gestures of people in the image, based on MSCOCO's segmentation annotations; Scenes and Times of the day shown in the image, based on Visual7W's where- and when-question answering; Count of People in the image, based on Visual7W's how-many-people question answering; Weathers, based on MSCOCO, Visual7W, and VisDial annotations; and Materials of objects or surfaces shown in the images based on COCO-Stuff.
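Since each test case ranks one positive image among a fixed candidate pool, Recall@k has a direct implementation. The sketch below is our own illustration with synthetic embeddings, not the benchmark code:

```python
import numpy as np

def recall_at_k(query_emb, cand_embs, positive_idx, k):
    # Rank candidates by cosine similarity to the query; score 1 if the
    # positive candidate appears among the top-k, else 0. Averaging this
    # over all test cases gives the benchmark's Recall@k.
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    sims = c @ q
    topk = np.argsort(-sims)[:k]
    return int(positive_idx in topk)

# Synthetic case: 1 positive + 99 negatives; the query embedding here is
# taken identical to the positive's, so the positive ranks first.
rng = np.random.default_rng(0)
d = 64
positive = rng.normal(size=d)
negatives = rng.normal(size=(99, d))
candidates = np.vstack([negatives[:50], positive[None, :], negatives[50:]])
print(recall_at_k(positive, candidates, positive_idx=50, k=5))  # 1
```

In practice the query embedding comes from the text encoder and the candidates from the image encoder; everything else is unchanged.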
While Objects, Animals, and Scenes are extensively studied in areas like image classification, the others are less explored. Additionally, we consider both regional attributes (Animals, Objects, Gestures, Materials) and global attributes (Scenes, Count of People, Times, Weathers), which require inference based on the whole picture, so simple strategies like cropping or zooming may not be effective. Each test case contains a text query (e.g., "Find me an everyday image that shows the scene of the beach."), a positive candidate (ground truth) that contains the attribute required in the query (the scene of the beach), and 99 negative candidates that do not contain such an attribute. To ensure the quality of the negatives, we randomly select from images that have exclusive attributes to avoid ambiguity (e.g., images that show the scene of a conference room). More benchmark details and the generation procedure are deferred to Appendix A, and examples of each category are shown in Appendix D. We use the validation set of MSCOCO 2017 for comparison after converting it to the same format ("Find me an everyday image that matches the given caption." + COCO caption as the query text, and 100 candidate images).

3.2 Benchmark Results

We evaluate 12 state-of-the-art T2I retrievers and present the results in Table 1 and Table 2. The first 7
rows feature the CLIP family (CLIP [47], EVA-CLIP [10, 11], SigLIP [65], SigLIP2 [53], and BLIP2 finetuned on COCO [28]) and MagicLens [67], which use unimodal encoders. Although MagicLens accepts image+text input through a fusion module, we follow its T2I retrieval protocol and use the finetuned text and image encoders without other modules. As CLIP is sensitive to text format, we also conduct an evaluation using standard CLIP-style texts ("a photo ...") for CLIP-ViT-L/14 in Appendix B.2 but find no significant difference. The last 5 rows are recent MLLM-based universal embedders that accept combinations of images and texts as input naturally: E5-V [19], MM-Embed [31], MMRet [69], LLaVE [25], and VLM2Vec [20]. Since our targets are image-only, we use the default, general texts (e.g., "Represent the given image." for VLM2Vec) accompanied with the images for encoding. More details about these models are in Appendix B.

Table 2: Recall@1 and Recall@5 (in percentage points) for various text-to-image retrievers by category on our COCO-FACET benchmark (Animals, Scenes, Objects, Count of People, Materials, Times, Weathers, Gestures). Cells shaded in red in the original indicate low category-specific performance (Recall@1 < 10% or Recall@5 < 20%). All models struggle more on the last five attributes.

Recall@1:
| Retriever | Img. Size | Animals | Scenes | Objects | Count | Materials | Times | Weathers | Gestures |
| CLIP-ViT-L/14 | 336² | 91.5 | 55.2 | 54.0 | 3.5 | 3.5 | 4.5 | 4.2 | 6.8 |
| EVA01 ViT-g-14 | 224² | 93.7 | 56.4 | 58.1 | 2.3 | 2.8 | 4.6 | 2.9 | 7.0 |
| EVA02 ViT-bigE-14+ | 224² | 88.1 | 55.8 | 55.9 | 3.3 | 3.1 | 4.6 | 3.2 | 7.1 |
| SigLIP ViT-SO-14 | 384² | 92.4 | 57.0 | 63.1 | 4.4 | 3.2 | 4.7 | 3.8 | 7.5 |
| SigLIP2 ViT-SO-14 | 384² | 94.9 | 54.1 | 66.2 | 4.9 | 3.4 | 3.6 | 4.8 | 11.2 |
| BLIP2-COCO | 224² | 87.0 | 62.2 | 61.3 | 4.9 | 5.0 | 4.6 | 3.5 | 14.8 |
| MagicLens | 224² | 94.7 | 70.9 | 63.2 | 15.9 | 6.2 | 10.0 | 3.2 | 14.3 |
| E5-V | 336² | 92.7 | 70.4 | 71.2 | 31.4 | 10.5 | 7.5 | 5.2 | 20.1 |
| MM-Embed | 336² | 92.7 | 67.4 | 68.1 | 13.3 | 7.4 | 5.1 | 3.9 | 19.5 |
| MMRet-MLLM-S2 | 336² | 97.2 | 72.1 | 76.0 | 29.8 | 10.0 | 8.4 | 3.6 | 24.1 |
| LLaVE-2B | 336² | 96.3 | 70.9 | 73.1 | 19.4 | 8.8 | 3.0 | 4.5 | 19.2 |
| VLM2Vec-Phi-3.5-V | 336² | 95.5 | 69.8 | 74.4 | 14.4 | 6.3 | 5.5 | 4.3 | 9.8 |

Recall@5:
| CLIP-ViT-L/14 | 336² | 98.4 | 80.8 | 72.7 | 13.5 | 11.4 | 10.1 | 14.3 | 18.5 |
| EVA01 ViT-g-14 | 224² | 99.0 | 84.9 | 75.6 | 12.1 | 9.6 | 12.4 | 12.8 | 19.1 |
| EVA02 ViT-bigE-14+ | 224² | 98.7 | 83.7 | 75.5 | 15.9 | 10.8 | 12.6 | 14.6 | 19.9 |
| SigLIP ViT-SO-14 | 384² | 99.5 | 81.4 | 80.3 | 13.3 | 14.5 | 21.5 | 13.1 | 20.8 |
| SigLIP2 ViT-SO-14 | 384² | 99.3 | 79.1 | 82.2 | 18.7 | 12.3 | 12.4 | 13.5 | 25.2 |
| BLIP2-COCO | 224² | 97.8 | 85.5 | 75.9 | 19.3 | 15.3 | 15.7 | 14.6 | 33.1 |
| MagicLens | 224² | 99.3 | 89.5 | 81.3 | 35.7 | 15.7 | 25.1 | 16.3 | 32.1 |
| E5-V | 336² | 98.3 | 91.9 | 89.0 | 60.1 | 25.2 | 18.6 | 16.1 | 35.5 |
| MM-Embed | 336² | 98.7 | 87.8 | 84.9 | 29.3 | 20.3 | 15.0 | 15.3 | 45.3 |
| MMRet-MLLM-S2 | 336² | 99.9 | 92.4 | 91.6 | 45.2 | 28.3 | 19.7 | 17.6 | 49.7 |
| LLaVE-2B | 336² | 99.4 | 93.6 | 88.2 | 41.3 | 21.5 | 10.9 | 16.3 | 37.0 |
| VLM2Vec-Phi-3.5-V | 336² | 99.3 | 90.7 | 90.7 | 36.6 | 18.2 | 12.9 | 19.2 | 27.1 |

Current retrievers struggle on our benchmark. Compared with Recall@1 and Recall@5 on the original MSCOCO, the average performance on our benchmark degrades for all retrievers across different architectures and scales. The performance figures within the CLIP family are close, with the most recent model, SigLIP2, performing the best on COCO-FACET. We observe that MagicLens has the best overall performance in the first section despite its small scale, output dimension, and image resolution. Universal multimodal embedders perform better than unimodal embedders, with MMRet-MLLM-S2, at a relatively large scale, outperforming the other models. Still, the performance differences are relatively small among universal embedders, with a clear gap between COCO and COCO-FACET results.

Retrievers have imbalanced performance on different attributes. Again compared with COCO, models have lower performance on attributes apart from Animals, and significantly lower on the last five attributes in Table 2, indicating that these are largely neglected. Possible causes include reporting bias [22] ("people murder" is more likely to appear than "people breathe" in corpora; in our case, the Time, e.g., morning or night, might be too obvious to report) in both training data and previous evaluation. Meanwhile, this verifies the findings of prior work on visual grounding and reasoning [43, 5, 52] in the setting of T2I retrieval: CLIP-like models only achieve global text-image alignment, and thus their image embeddings focus on global semantics or subjects while leaving out attributes like object details and quantity.

Table 3: Recall@1 and Recall@5 for text-to-image retrieval (in percentage points) on our COCO-FACET benchmark. Promptable image embeddings yield substantial average improvements and outperform the baselines on seven out of eight attributes.

| Method | Animals | Scenes | Objects | Count | Materials | Times | Weathers | Gestures | Avg. |
Recall@1:
| CLIP-ViT-L/14-336px | 90.9 | 55.2 | 53.1 | 3.5 | 3.5 | 4.5 | 4.2 | 6.8 | 33.7 |
| VLM2Vec-Phi-3.5-V | 95.5 | 69.8 | 74.4 | 14.4 | 6.3 | 5.5 | 4.3 | 9.8 | 44.5 |
| w/ GPT prompt | 90.7 | 81.4 | 75.5 | 72.7 | 25.8 | 18.4 | 14.4 | 15.7 | 53.4 |
| Text-Based | 69.9 | 66.9 | 62.6 | 35.6 | 13.5 | 13.2 | 7.1 | 10.7 | 40.5 |
Recall@5:
| CLIP-ViT-L/14-336px | 97.9 | 80.8 | 72.2 | 13.5 | 11.4 | 10.1 | 14.3 | 18.5 | 47.0 |
| VLM2Vec-Phi-3.5-V | 99.3 | 90.7 | 90.7 | 36.6 | 18.2 | 12.9 | 19.2 | 27.1 | 58.9 |
| w/ GPT prompt | 98.7 | 95.9 | 92.0 | 92.1 | 48.8 | 82.4 | 36.5 | 39.3 | 75.5 |
| Text-Based | 90.7 | 86.6 | 81.7 | 60.6 | 37.2 | 39.2 | 24.4 | 26.1 | 60.6 |

Dominant-but-wrong attributes may be favored over correct-but-non-dominant ones. We look into the failure cases of CLIP-ViT-L/14 and VLM2Vec as representative retrievers. As shown in Figure 1, they tend to rank images whose main content carries a similar attribute (a motorcycle or a train) above the image with the correct attribute as a non-dominant element (a car in the background). More examples can be found in Appendix D. This implies that "simple" images consisting of fewer, more salient attributes may be preferred in retrieval over "complex" images, at the cost of precision.

4 Promptable Image Embeddings

When compressing images into embeddings of limited length, some visual information may be discarded, leading to low performance on relevant queries. Hence, natural approaches to address this issue are to (1) learn a denser visual representation during pretraining, or (2) use embeddings with larger dimensions. However, the recent SigLIP2 [53], trained with a global-local loss for improving fine-grained local semantics, only slightly outperforms other CLIP-like models, and using large 4096-dimensional embeddings (as in MMRet-MLLM-S2) still fails to resolve this issue, with Recall@1 lower than 10% and Recall@5 lower than 20% for some categories. Based on these findings, we hypothesize that general image embeddings might be inefficient for attribute-specific queries. We therefore focus on highlighting the important part of the embeddings. For this purpose, we propose to use promptable image embeddings for different attributes, i.e., conditioning image embeddings on textual prompts, which is enabled by recent MLLM-based universal embedders. Although these embedders are mainly motivated by tasks involving combinations of images and texts as queries or targets (e.g., composed image retrieval), previous research [19] finds that they can (1) deal with unseen prompts, since they are based on pre-trained MLLMs, and (2) accept task-specific prompts accompanied with images, such as modifications in the FashionIQ dataset [57]. In Section 4.1, we formally study promptable image embeddings and demonstrate that our designed pipeline yields a performance boost for attribute-focused queries. Moreover, we show that it surpasses text-based T2I retrieval (Section 4.2).

4.1 Method

We employ VLM2Vec-Phi-3.5-V [20] as the base retriever for deriving promptable image embeddings in this subsection. This model is built on Phi-3.5-V [1], which has strong capabilities in image understanding. VLM2Vec generates embeddings by taking the last-layer vector representation of the last token as the embedding and is fine-tuned with a contrastive loss. Hence, when given a piece of text and an image, it produces a single combined embedding. The authors previously tried using question+image as the input for embedding queries in VQA tasks, while the targets are the ground-truth, text-only answers. This motivates us to use questions as prompts for target images to ask about the required attributes in our case.
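The last-token pooling described above can be sketched as follows. This is our own illustration: the shapes and the hidden size are assumptions, and a real implementation would take the hidden states from the model's forward pass rather than random data.

```python
import numpy as np

def last_token_embedding(hidden_states):
    # hidden_states: (seq_len, d) last-layer states for the image+prompt
    # sequence. VLM2Vec-style pooling keeps the final token's vector and
    # L2-normalizes it, so cosine similarity reduces to a dot product.
    emb = hidden_states[-1]
    return emb / np.linalg.norm(emb)

rng = np.random.default_rng(0)
states = rng.normal(size=(128, 3072))  # e.g., 128 tokens; 3072 is an assumed hidden size
embedding = last_token_embedding(states)
print(embedding.shape)  # (3072,)
```

Because the prompt tokens precede the pooled position, the final token's representation can attend to both the image and the question, which is what makes the embedding "promptable."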
Ideally, the resulting embeddings would contain "answers" to such constructed visual questions, i.e., the corresponding attribute of the image.

[Figure 2: Average retrieval performance across various base retrievers on COCO-FACET, with and without GPT-generated prompts. Recall@1 rises from 44.5% to 53.4% (VLM2Vec), 46.0% to 54.9% (E5-V), 45.6% to 56.1% (LLaVE), and 48.8% to 57.6% (MMRet-MLLM-S2); Recall@5 rises from 58.9% to 75.5%, 61.8% to 73.0%, 59.5% to 74.3%, and 64.5% to 77.7%, respectively. The same set of prompts brings consistent performance gains on different multimodal base retrievers.]

To make the pipeline automatic and extendable to unknown attributes, we use GPT-4o [17] to generate questions with the following template:

Write a question to ask about the {Attribute Name} in an image, with possible answers such as {A}, {B}, and so on. Please answer in one sentence without mentioning any answer.

For example, the output for the input tuple (people gesture, standing, jumping) is "What gesture are the people making in this image?" Then we concatenate this question with the fixed template of the base retriever ("<|image_1|> Represent the given image with the following question: ") to construct the full prompt used for model input. We show that the promptable image embeddings improve the performance of VLM2Vec-Phi-3.5-V, especially on the five more challenging categories, by highlighting the corresponding attributes (see Table 3). In addition, we test human-written prompts and observe a similar performance gain; all prompts are attached in Appendix B.3.

Table 4: Recall@1 and Recall@5 of T2I retrieval for the converted Place365 and SUN397, comparing promptable image embeddings with original image embeddings. The improvement indicates that our pipeline generalizes effectively to these image pools.

| | Place365 | SUN397 |
| Recall@1: VLM2Vec-Phi-3.5-V | 30.0 | 57.0 |
| Recall@1: w/ GPT prompt | 33.9 | 61.9 |
| Recall@5: VLM2Vec-Phi-3.5-V | 58.2 | 85.2 |
| Recall@5: w/ GPT prompt | 66.7 | 88.9 |

Furthermore, the same set of prompts generalizes to other universal multimodal embedders used as base retrievers (E5-V, LLaVE, MMRet-MLLM-S2); see Figure 2. We also apply them to MM-Embed, but this model lacks the zero-shot ability of the others and requires fine-tuning, as mentioned in their paper. See the detailed numerical results by category in Appendix C.1. Additionally, the method works on different image pools. We converted two scene classification benchmarks, Place365 [68] and SUN397 [60], provided in MMEB with 1,000 test cases each [20], into text-to-image retrieval benchmarks in the same format, except that we use one positive candidate and 499 negative candidates for each test case. We notice that the same prompt used for Scene increases both Recall@1 and Recall@5, as shown in Table 4.

4.2 Comparison with Text-Based T2I Retrieval

As mentioned in Section 4.1, the promptable image embeddings are likely to contain the corresponding attributes of the image as "answers" to the prompt. A similar approach is to directly ask an MLLM the same question about the image and obtain a pure-text answer. Then we can use the text embedding of this answer as the target embedding, which has been explored in previous work [23]. The pure-text answer is also supposed to be a dense representation of the image containing the attributes required by the prompt. Is this text-based approach equivalent to our method? We conduct such a comparison using LLaVA-1.5 [34] as the MLLM and GRIT [40] as the text embedder. As shown in the last row of Table 3, this text-based approach underperforms our method, and it even loses to VLM2Vec-Phi-3.5-V without a prompt in Recall@1.
When checking its failure cases, we find that it suffers heavily from hallucination: for instance, when asked about the existence of animals in an image showing only feathers in a container, LLaVA-1.5 wrongly answers that "there are birds visible." Besides, it cannot resolve linguistic ambiguity in a pure-text answer (e.g., "chicken" can refer to a domestic animal or a type of meat). Some examples are attached in Appendix C.2. This indicates that the promptable image embedding provides more than an embedding of the pure-text answer to the visual question given in the prompt.

5 Acceleration

While effective, the pipeline has a high computational cost in real-world T2I retrieval. For the experiments in Section 4.1, we assume that the query type is known during the pre-processing stage for computing the promptable image embeddings. However, if the query type is only known at test time, we need to take the per-query computational cost into consideration. Assume that we have N images in the pool and M text queries. Since N is typically large, the ideal per-query cost should not grow linearly with N. We focus primarily on embedding cost, as the maximum cosine-similarity search step can be efficiently handled by the FAISS library [9]. Let v denote the cost of computing a single image embedding and t the cost of a single text embedding using CLIP-like models, and let F represent the cost of a single forward pass through the base model of our multimodal embedder (e.g., Phi-3.5-V with a CLIP vision encoder). For CLIP-like models, the total embedding cost is Nv + Mt, with a per-query embedding cost of t. If we stick to the original pipeline for promptable image embeddings, the total embedding cost will be Nv + M(NF + F), leading to a per-query cost of NF + F. To reduce the per-query cost, we explore two strategies on VLM2Vec-Phi-3.5-V in this section. (1) The first approach is straightforward: we predefine potentially useful prompts and pre-process the promptable image embeddings. (2) The second is to derive a linear approximation of the retriever at test time. Both strategies have a lower per-query embedding cost and outperform the baseline VLM2Vec-Phi-3.5-V on Recall@5 of COCO-FACET.

5.1 Pre-Processing Embeddings

Since many attributes of interest can be predicted beforehand with prior knowledge of the incoming tasks, we can predefine some prompts at the pre-processing stage using our pipeline. During inference, we only need to select the most suitable prompt and retrieve from the corresponding promptable image embeddings. We test this strategy on the COCO-FACET benchmark using the prompt set obtained in Section 4.1, with GPT-4o performing prompt selection at test time (template in Appendix B.4). The ground-truth prompt for each query can be selected with high accuracy on average at test time. The low selection accuracy for the Gesture category is due to the similar prompt in the Count of People category, but we find that such a similar prompt other than the ground truth can also lead to correct answers, indicating some degree of error tolerance (see Table 5).
The per-query embedding cost of this strategy is F (for embedding the query text), plus an additional cost for calling the GPT-4o API. The NF term in the total embedding cost is replaced by a higher memory cost and pre-processing time cost.

5.2 Linear Approximation at Test Time

When a query requires new attributes, can we process it at a lower cost? We can first use the previous automatic pipeline to get a prompt p. Then, we experiment with a test-time linear approximation of the universal embedder. Specifically, we denote the normalized original image embeddings without a prompt as a, and the normalized promptable image embeddings as b. Let U be the multimodal embedder and q be the normalized query embedding. We would like to find a matrix W with respect to p such that Wa ≈ U(a, p) = b for all a in the image pool. After deriving W, we can use Wa for retrieval, searching for the a that maximizes the dot product between Wa and q.

Table 5: Recall@1 and Recall@5 (in percentage points) of accelerated text-to-image retrieval with pre-processed promptable image embeddings on the COCO-FACET benchmark. The ground-truth prompt can be selected with high accuracy for most categories.

Method | Animals | Scenes | Objects | Count of People | Materials | Times | Weathers | Gestures | Avg.
Recall@1
VLM2Vec-Phi-3.5-V | 95.5 | 69.8 | 74.4 | 14.4 | 6.3 | 5.5 | 4.3 | 9.8 | 44.5
w/ selected GPT prompt | 90.9 | 81.4 | 75.5 | 72.7 | 25.8 | 18.4 | 14.4 | 10.5 | 52.8
w/ gt GPT prompt | 90.7 | 81.4 | 75.5 | 72.7 | 25.8 | 18.4 | 14.4 | 15.7 | 53.4
Recall@5
VLM2Vec-Phi-3.5-V | 99.3 | 90.7 | 90.7 | 36.6 | 18.2 | 12.9 | 19.2 | 27.1 | 58.9
w/ selected GPT prompt | 99.1 | 95.9 | 92.0 | 92.1 | 48.8 | 82.4 | 36.5 | 25.3 | 73.7
w/ gt GPT prompt | 98.7 | 95.9 | 92.0 | 92.1 | 48.8 | 82.4 | 36.5 | 39.3 | 75.5
Selection Acc. | 100 | 86.6 | 99.9 | 100 | 100 | 100 | 100 | 8.9 | 87.9

Table 6: Recall@1 and Recall@5 (in percentage points) of accelerated text-to-image retrieval with approximated promptable image embeddings on the COCO-FACET benchmark. Results are averaged over five independent runs.

Method | Animals | Scenes | Objects | Count of People | Materials | Times | Weathers | Gestures | Avg.
Recall@1
VLM2Vec-Phi-3.5-V | 95.5 | 69.8 | 74.4 | 14.4 | 6.3 | 5.5 | 4.3 | 9.8 | 44.5
w/ linear approx. | 72.1 | 67.2 | 57.1 | 47.5 | 24.3 | 35.7 | 9.0 | 14.2 | 42.5
w/ GPT prompt | 90.7 | 81.4 | 75.5 | 72.7 | 25.8 | 18.4 | 14.4 | 15.7 | 53.4
Recall@5
VLM2Vec-Phi-3.5-V | 99.3 | 90.7 | 90.7 | 36.6 | 18.2 | 12.9 | 19.2 | 27.1 | 58.9
w/ linear approx. | 84.6 | 91.9 | 83.2 | 73.7 | 43.9 | 71.5 | 28.4 | 38.1 | 67.0
w/ GPT prompt | 98.7 | 95.9 | 92.0 | 92.1 | 48.8 | 82.4 | 36.5 | 39.3 | 75.5

We find this matrix W based on a small number of (a, b) pairs. During pre-processing, we store a for all images in the pool. At test time, after obtaining p, we randomly sample K images from the pool along with their a and calculate U(a, p) = b, denoted by the matrices A and B. The best linear approximation is then given by W = BA⊤. W can be applied to the query q instead, since (Wa)⊤q = a⊤(W⊤q). The per-query embedding cost is thus KF + F, while the pre-processing cost remains unchanged. We test the method on VLM2Vec-Phi-3.5-V with K = 100 for each category. The results are shown in Table 6, with the error bars listed in Appendix C.3 due to the space limit. Although the linear approximation of an MLLM-based embedder has limited expressiveness and cannot capture nonlinear, complex cross-modal interactions, it still improves the baseline on the harder attributes and yields a better Recall@5.
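A minimal NumPy sketch of the test-time linear approximation. The promptable embedder U(·, p) is stood in here by a fixed random linear map followed by normalization (the real U is an MLLM forward pass costing F per call), so the numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, N, K = 32, 500, 100

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Normalized prompt-free embeddings a, stored for the whole pool
# during pre-processing.
A_pool = normalize(rng.standard_normal((N, DIM)))

# Placeholder for the promptable embedder U(., p): a fixed linear map
# followed by normalization (the real U is an MLLM forward pass).
U_map = rng.standard_normal((DIM, DIM))
def U(a):
    return normalize(a @ U_map.T)

# Test time: sample K images, embed them once with the prompt (cost K*F),
# and form W = B A^T, with the sampled a and b vectors as rows of A and B.
idx = rng.choice(N, size=K, replace=False)
A, B = A_pool[idx], U(A_pool[idx])
W = B.T @ A                                # DIM x DIM

# W a now approximates b = U(a, p) for the rest of the pool as well.
approx = normalize(A_pool @ W.T)
mean_cos = float(np.mean(np.sum(approx * U(A_pool), axis=1)))

# Retrieval folds W into the query, using (W a)^T q = a^T (W^T q):
q = normalize(rng.standard_normal(DIM))    # stand-in query embedding
lhs = (A_pool @ W.T) @ q                   # scores via W a
rhs = A_pool @ (W.T @ q)                   # scores via W^T q
assert np.allclose(lhs, rhs)
```

Because W is applied only on the query side at search time, the pre-computed a embeddings (and any inner-product index built over them) can be reused unchanged for every new prompt.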
It is worth noting that, theoretically, W should be orthogonal to ensure that ||Wa||_2 = 1, since ||b||_2 = 1; in practice, however, we find that the W directly derived from BA⊤, with normalization applied after W is applied to q, works well. Thus, strict orthogonality is not required. 6 Conclusion We introduced COCO-FACET, a benchmark to evaluate text-to-image retrieval performance on attribute-focused queries, revealing limitations in current CLIP-like and MLLM-based retrievers. General-purpose image embeddings often overlook fine-grained visual attributes critical for accurate retrieval. To address this, we propose to use promptable image embeddings on MLLM-based universal embedders, which improve focus on relevant attributes and enhance retrieval quality while being flexible and model-agnostic. We also explore efficient acceleration strategies that make the approach more practical for deployment. Together, our work offers a promising direction for building more precise and efficient T2I retrievers that can be integrated into systems for general problem-solving. References [1] M. Abdin, J. Aneja, H. Awadalla, A. Awadallah, A. A. Awan, N. Bach, A. Bahree, A. Bakhtiari, J. Bao, H. Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219,
2024.
[2] A. Asai, T. Schick, P. Lewis, X. Chen, G. Izacard, S. Riedel, H. Hajishirzi, and W.-t. Yih. Task-aware retrieval with instructions. arXiv preprint arXiv:2211.09260, 2022.
[3] H. Caesar, J. Uijlings, and V. Ferrari. Coco-stuff: Thing and stuff classes in context. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1209–1218, 2018.
[4] H. Chen, L. Wang, N. Yang, Y. Zhu, Z. Zhao, F. Wei, and Z. Dou. mmE5: Improving multimodal multilingual embeddings via high-quality synthetic data. arXiv preprint arXiv:2502.08468, 2025.
[5] H.-Y. Chen, Z. Lai, H. Zhang, X. Wang, M. Eichner, K. You, M. Cao, B. Zhang, Y. Yang, and Z. Gan. Contrastive localized language-image pre-training. arXiv preprint arXiv:2410.02746, 2024.
[6] W. Chen, O. Mees, A. Kumar, and S. Levine. Vision-language models provide promptable representations for reinforcement learning. arXiv preprint arXiv:2402.02651, 2024.
[7] M. Cherti, R. Beaumont, R. Wightman, M. Wortsman, G. Ilharco, C. Gordon, C. Schuhmann, L. Schmidt, and J. Jitsev. Reproducible scaling laws for contrastive language-image learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2818–2829, 2023.
[8] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. Moura, D. Parikh, and D. Batra. Visual dialog. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 326–335, 2017.
[9] M. Douze, A. Guzhva, C. Deng, J. Johnson, G. Szilvasy, P.-E. Mazaré, M. Lomeli, L. Hosseini, and H. Jégou. The faiss library. arXiv preprint arXiv:2401.08281, 2024.
[10] Y. Fang, W. Wang, B. Xie, Q. Sun, L. Wu, X. Wang, T. Huang, X. Wang, and Y. Cao. Eva: Exploring the limits of masked visual representation learning at scale. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 19358–19369, 2023.
[11] Y. Fang, Q. Sun, X. Wang, T. Huang, X. Wang, and Y. Cao.
Eva-02: A visual representation for neon genesis. Image and Vision Computing, 149:105171, 2024.
[12] A. Gabbay, N. Cohen, and Y. Hoshen. An image is worth more than a thousand words: Towards disentanglement in the wild. Advances in Neural Information Processing Systems, 34:9216–9228, 2021.
[13] G. Geigle, J. Pfeiffer, N. Reimers, I. Vulić, and I. Gurevych. Retrieve fast, rerank smart: Cooperative and joint approaches for improved cross-modal retrieval. Transactions of the Association for Computational Linguistics, 10:503–521, 2022.
[14] H. Hu, Y. Luan, Y. Chen, U. Khandelwal, M. Joshi, K. Lee, K. Toutanova, and M.-W. Chang. Open-domain visual entity recognition: Towards recognizing millions of wikipedia entities. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12065–12075, 2023.
[15] W. Hu, J.-C. Gu, Z.-Y. Dou, M. Fayyaz, P. Lu, K.-W. Chang, and N. Peng. Mrag-bench: Vision-centric evaluation for retrieval-augmented multimodal models. arXiv preprint arXiv:2410.08182, 2024.
[16] Y. Hu, W. Shi, X. Fu, D. Roth, M. Ostendorf, L. Zettlemoyer, N. A. Smith, and R. Krishna. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. arXiv preprint arXiv:2406.09403, 2024.
[17] A. Hurst, A.
Lerer, A. P. Goucher, A. Perelman, A. Ramesh, A. Clark, A. Ostrow, A. Welihinda, A. Hayes, A. Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[18] G. Ilharco, M. Wortsman, R. Wightman, C. Gordon, N. Carlini, R. Taori, A. Dave, V. Shankar, H. Namkoong, J. Miller, H. Hajishirzi, A. Farhadi, and L. Schmidt. OpenCLIP, July 2021. URL https://doi.org/10.5281/zenodo.5143773.
[19] T. Jiang, M. Song, Z. Zhang, H. Huang, W. Deng, F. Sun, Q. Zhang, D. Wang, and F. Zhuang. E5-v: Universal embeddings with multimodal large language models. arXiv preprint arXiv:2407.12580, 2024.
[20] Z. Jiang, R. Meng, X. Yang, S. Yavuz, Y. Zhou, and W. Chen. Vlm2vec: Training vision-language models for massive multimodal embedding tasks. arXiv preprint arXiv:2410.05160, 2024.
[21] J. Johnson, R. Krishna, M. Stark, L.-J. Li, D. Shamma, M. Bernstein, and L. Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[22] A. Kamath, J. Hessel, K. Chandu, J. D. Hwang, K.-W. Chang, and R. Krishna. Scale can't overcome pragmatics: The impact of reporting bias on vision-language reasoning.
[23] S. Karthik, K. Roth, M. Mancini, and Z. Akata. Vision-by-language for training-free compositional image retrieval. arXiv preprint arXiv:2310.09291, 2023.
[24] R. Krishna, Y. Zhu, O. Groth, J. Johnson, K. Hata, J. Kravitz, S. Chen, Y. Kalantidis, L.-J. Li, D. A. Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32–73, 2017.
[25] Z. Lan, L. Niu, F. Meng, J. Zhou, and J. Su. Llave: Large language and vision embedding models with hardness-weighted contrastive learning. arXiv preprint arXiv:2503.04812, 2025.
[26] S. Lee, S. Yu, J. Park, J. Yi, and S. Yoon.
Interactive text-to-image retrieval with large language models: A plug-and-play approach. arXiv preprint arXiv:2406.03411, 2024.
[27] D. Li, J. Li, H. Le, G. Wang, S. Savarese, and S. C. Hoi. LAVIS: A one-stop library for language-vision intelligence. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 31–41, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology.org/2023.acl-demo.3.
[28] J. Li, D. Li, S. Savarese, and S. Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. In International conference on machine learning, pages 19730–19742. PMLR, 2023.
[29] J. Li, J. Ma, X. Zhang, Y. Li, and J. Shi. Give: Guiding visual encoder to perceive overlooked information. arXiv preprint arXiv:2410.20109, 2024.
[30] L. Lian, Y. Ding, Y. Ge, S. Liu, H. Mao, B. Li, M. Pavone, M.-Y. Liu, T. Darrell, A. Yala, et al. Describe anything: Detailed localized image and video captioning. arXiv preprint arXiv:2504.16072, 2025.
[31] S.-C. Lin, C. Lee, M. Shoeybi, J. Lin, B. Catanzaro, and W. Ping. Mm-embed: Universal multimodal retrieval with multimodal llms. arXiv preprint arXiv:2411.02571, 2024.
[32] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P.
Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft coco: Common objects in context. In Computer vision–ECCV 2014: 13th European conference, Zurich, Switzerland, September 6-12, 2014, proceedings, part V 13, pages 740–755. Springer, 2014.
[33] W. Lin, J. Chen, J. Mei, A. Coca, and B. Byrne. Fine-grained late-interaction multi-modal retrieval for retrieval augmented visual question answering. Advances in Neural Information Processing Systems, 36:22820–22840, 2023.
[34] H. Liu, C. Li, Y. Li, and Y. J. Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296–26306, 2024.
[35] S. Liu, W. Feng, T.-J. Fu, W. Chen, and W. Y. Wang. Edis: Entity-driven image search over multimodal web content. arXiv preprint arXiv:2305.13631, 2023.
[36] P. Lu, S. Mishra, T. Xia, L. Qiu, K.-W. Chang, S.-C. Zhu, O. Tafjord, P. Clark, and A. Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022.
[37] P. Lu, H. Bansal, T. Xia, J. Liu, C. Li, H. Hajishirzi, H. Cheng, K.-W. Chang, M. Galley, and J. Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023.
[38] X. Ma, S.-C. Lin, M. Li, W. Chen, and J. Lin. Unifying multimodal retrieval via document screenshot embedding. arXiv preprint arXiv:2406.11251, 2024.
[39] A. Miech, J.-B. Alayrac, I. Laptev, J. Sivic, and A. Zisserman. Thinking fast and slow: Efficient text-to-visual retrieval with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9826–9836, 2021.
[40] N. Muennighoff, S. Hongjin, L. Wang, N. Yang, F. Wei, T. Yu, A. Singh, and D. Kiela. Generative representational instruction tuning. In ICLR 2024 Workshop: How Far Are We From AGI, 2024.
[41] D.-K. Nguyen, M. Assran, U. Jain, M. R.
Oswald, C. G. Snoek, and X. Chen. An image is worth more than 16x16 patches: Exploring transformers on individual pixels. arXiv preprint arXiv:2406.09415, 2024.
[42] OpenAI. Thinking with images. https://openai.com/index/thinking-with-images/, 2025. Accessed: 2025-05-09.
[43] R. Paiss, A. Ephrat, O. Tov, S. Zada, I. Mosseri, M. Irani, and T. Dekel. Teaching clip to count to ten. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3170–3180, 2023.
[44] G. Patterson and J. Hays. Coco attributes: Attributes for people, animals, and objects. European Conference on Computer Vision, 2016.
[45] Z. Peng, W. Wang, L. Dong, Y. Hao, S. Huang, S. Ma, and F. Wei. Kosmos-2: Grounding multimodal large language models to the world. arXiv preprint arXiv:2306.14824, 2023.
[46] K. Pham, C. Huynh, S.-N. Lim, and A. Shrivastava. Composing object relations and attributes for image-text matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14354–14363, 2024.
[47] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
[48] S.
Schuster, R. Krishna, A. Chang, L. Fei-Fei, and C. D. Manning. Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In A. Belz, L. Coheur, V. Ferrari, M.-F. Moens, K. Pastra, and I. Vulić, editors, Proceedings of the Fourth Workshop on Vision and Language, pages 70–80, Lisbon, Portugal, Sept. 2015. Association for Computational Linguistics. doi: 10.18653/v1/W15-2812. URL https://aclanthology.org/W15-2812/.
[49] S. Sharifymoghaddam, S. Upadhyay, W. Chen, and J. Lin. Unirag: Universal retrieval augmentation for multi-modal large language models. arXiv preprint arXiv:2405.10311, 2024.
[50] N. Sogi, T. Shibata, and M. Terao. Object-aware query perturbation for cross-modal image-text retrieval. In European Conference on Computer Vision, pages 447–464. Springer, 2024.
[51] H. Su, W. Shi, J. Kasai, Y. Wang, Y. Hu, M. Ostendorf, W.-t. Yih, N. A. Smith, L. Zettlemoyer, and T. Yu. One embedder, any task: Instruction-finetuned text embeddings. arXiv preprint arXiv:2212.09741, 2022.
[52] S. Tong, Z. Liu, Y. Zhai, Y. Ma, Y. LeCun, and S. Xie. Eyes wide shut? exploring the visual shortcomings of multimodal llms. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9568–9578, 2024.
[53] M. Tschannen, A. Gritsenko, X. Wang, M. F. Naeem, I. Alabdulmohsin, N. Parthasarathy, T. Evans, L. Beyer, Y. Xia, B. Mustafa, et al. Siglip 2: Multilingual vision-language encoders with improved semantic understanding, localization, and dense features. arXiv preprint arXiv:2502.14786, 2025.
[54] J. Urbanek, F. Bordes, P. Astolfi, M. Williamson, V. Sharma, and A. Romero-Soriano. A picture is worth more than 77 text tokens: Evaluating clip-style models on dense captions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26700–26709, 2024.
[55] H. Wang, M. Liao, Z. Xie, W. Liu, and X. Bai. Partial scene text retrieval.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
[56] C. Wei, Y. Chen, H. Chen, H. Hu, G. Zhang, J. Fu, A. Ritter, and W. Chen. Uniir: Training and benchmarking universal multimodal information retrievers. In European Conference on Computer Vision, pages 387–404. Springer, 2024.
[57] H. Wu, Y. Gao, X. Guo, Z. Al-Halah, S. Rennie, K. Grauman, and R. Feris. Fashion iq: A new dataset towards retrieving images by natural language feedback. In Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pages 11307–11317, 2021.
[58] P. Wu and S. Xie. V*: Guided visual search as a core mechanism in multimodal llms. arXiv preprint arXiv:2312.14135, 2023.
[59] C. Xiao, I. Chung, I. Kerboua, J. Stirling, X. Zhang, M. Kardos, R. Solomatin, N. A. Moubayed, K. Enevoldsen, and N. Muennighoff. Mieb: Massive image embedding benchmark. arXiv preprint arXiv:2504.10471, 2025.
[60] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pages 3485–3492. IEEE, 2010.
[61] R. Xiao, S. Kim, M.-I. Georgescu, Z. Akata, and S. Alaniz. Flair: Vlm with fine-grained language-informed image representations. arXiv preprint
arXiv:2412.03561, 2024.
[62] M. Yasunaga, A. Aghajanyan, W. Shi, R. James, J. Leskovec, P. Liang, M. Lewis, L. Zettlemoyer, and W.-t. Yih. Retrieval-augmented multimodal language modeling. arXiv preprint arXiv:2211.12561, 2022.
[63] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78, 2014.
[64] R. Zellers, Y. Bisk, A. Farhadi, and Y. Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6720–6731, 2019.
[65] X. Zhai, B. Mustafa, A. Kolesnikov, and L. Beyer. Sigmoid loss for language image pre-training. In Proceedings of the IEEE/CVF international conference on computer vision, pages 11975–11986, 2023.
[66] H. Zhang, P. Zhang, X. Hu, Y.-C. Chen, L. Li, X. Dai, L. Wang, L. Yuan, J.-N. Hwang, and J. Gao. Glipv2: Unifying localization and vision-language understanding. Advances in Neural Information Processing Systems, 35:36067–36080, 2022.
[67] K. Zhang, Y. Luan, H. Hu, K. Lee, S. Qiao, W. Chen, Y. Su, and M.-W. Chang. Magiclens: Self-supervised image retrieval with open-ended instructions. arXiv preprint arXiv:2403.19651, 2024.
[68] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[69] J. Zhou, Z. Liu, Z. Liu, S. Xiao, Y. Wang, B. Zhao, C. J. Zhang, D. Lian, and Y. Xiong. Megapairs: Massive data synthesis for universal multimodal retrieval. arXiv preprint arXiv:2412.14475, 2024.
[70] J. Zhou, Z. Liu, S. Xiao, B. Zhao, and Y. Xiong. Vista: Visualized text embedding for universal multi-modal retrieval. arXiv preprint arXiv:2406.04292, 2024.
[71] Y. Zhu, O. Groth, M. Bernstein, and L. Fei-Fei.
Visual7w: Grounded question answering in images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4995–5004, 2016.

A The COCO-FACET Benchmark

We provide more details on the COCO-FACET benchmark in this section. Building upon the annotations of MSCOCO 2017 captions [32], COCO-Stuff [3], Visual7W [71], and Visual Dialog [8] based on the COCO images, we generate 8 new subsets for evaluating T2I retrievers on attribute-focused retrieval tasks. Use of the COCO images must abide by the COCO and Flickr Terms of Use2. The MSCOCO 2017 and COCO-Stuff annotations belong to the COCO Consortium and are licensed under a Creative Commons Attribution 4.0 License. The Visual7W annotations are under the MIT License. The Visual Dialog annotations are licensed under a Creative Commons Attribution 4.0 International License. We extract images with different types of objects in them, including laptops, bicycles, and sandwiches, with 3749 test cases in total; 8 gestures of people, including standing and sitting, with 1176 test cases in total; and 6 other subsets covering the remaining attributes. The details of the dataset construction are introduced in Appendix A.1, and the full list of categories and related statistics can be seen
in Table 7. In short, we devise the most appropriate division of the images under each fixed criterion. The information needed for the classification may come from the whole picture or from a part of it. Each test case consists of a query text (e.g., “Find me an everyday image showing some object or surface made of stone”), an image corresponding to the text (e.g., a photo of a stone wall), and 99 negatives drawn from categories with no intersection with the true category (e.g., a photo of a wooden floor). For each test case, the chance-level performance of choosing a correct image is 1%.

A.1 Generating Datasets

We use the following steps as a norm when generating the datasets. Details for each benchmark are given in Appendix A.2.

1. We find a proper criterion that can cover sufficiently many images and categories.
2. We filter the captions or dialogues within the datasets to identify those containing information about specific categories.
3. We analyze some filtered samples and refine the filtering rules.
4. We remove the samples that belong to multiple mutually exclusive categories under the rule.
5. We manually validate the positive images to minimize wrong labels and ambiguity.
6. We design the queries and randomly select negative samples from mutually exclusive categories.

A.2 Details of Sets

See Table 7 for the category statistics.

A.2.1 Gestures of People

• Idea: The position and arrangement of different parts of the human body vary significantly depending on the gesture, such as standing, sitting, or lying. Therefore, a classification based on gestures should be feasible.
• Categories: “stand”, “sit”, “jump”, “lie”, “bend”, “squat”, “kneel”, “crawl” (8 categories, 1176 samples in total).
• Avoidance: We avoid using “squat” or “kneel” as negative samples for each other, as they can be hard to distinguish (e.g., a kneeling squat).
• Sampling: All samples were sourced from the val2017 split of the COCO dataset. We first filter captions using keywords such as “stand” or “sitting.” Then, we refine the keywords

2 http://cocodataset.org/#termsofuse and https://www.flickr.com/creativecommons/

Table 7: Statistics of our COCO-FACET benchmark.

Category | Num. Samples | Num. Categories | Details of each category
Animals | 763 | 10 | “bird”: 44, “cat”: 141, “dog”: 112, “horse”: 88, “sheep”: 48, “cow”: 56, “elephant”: 79, “bear”: 42, “zebra”: 73, “giraffe”: 80
Scenes | 172 | >90 | omitted
Objects | 3849 | 69 | omitted
Count of People | 570 | 12 | “0”: 131, “1”: 88, “2”: 106, “3”: 86, “4”: 42, “5”: 22, “6”: 23, “7”: 11, “8”: 8, “9”: 8, “10”: 7, “over 10”: 38
Materials | 1128 | 5 | “wood”: 231, “stone”: 57, “metal”: 741, “paper”: 68, “brick”: 31
Times | 760 | 7 | “daytime”: 588, “night”: 103, “afternoon”: 24, “dusk”: 3, “morning”: 15, “sunrise”: 1, “evening”: 26
Weathers | 694 | 12 | “sunny”: 179, “clear”: 77, “misty”: 14, “overcast”: 17, “cloudy”: 138, “rainy”: 50, “drizzly”: 1, “stormy”: 2, “snowy”: 193, “warm”: 8, “cold”: 14, “chilly”: 1
Gestures | 1176 | 8 | “stand”: 660, “sit”: 386, “lie”: 25, “jump”: 77, “bend”: 10, “squat”: 3, “kneel”:
14, “crawl”: 1

with more precise phrases to avoid mislabeling images based on captions like “A building sits between.” Finally, we manually check the images to ensure that they meet our criteria.

A.2.2 Materials of Objects or Surfaces

• Idea: Although different materials may have similar uses, they often exhibit different visual characteristics. Therefore, a classifier should be able to distinguish between them.
• Categories: “wood”, “metal”, “stone”, “brick”, “paper” (5 categories, 1128 samples in total).
• Avoidance: We avoid using “stone” or “brick” as negative samples for each other, because some images contain bricks made of stone. Since we categorize such images under “stone”, we aim to avoid interfering with the classification of “brick”.
• Sampling: We utilize the COCO-Stuff annotations on val2017, extracting subcategories that fall under our target categories. Then, we manually review the images and exclude samples that may contain multiple materials, have unclear material identification, or show only a small portion of the target material.

A.2.3 Count of People

• Idea: Object detection models are capable of accurately locating object boundaries. Therefore, it is reasonable to infer that they can accurately count the number of people in an image.
• Categories: “0”, “1”, “2”, “3”, “4”, “5”, “6”, “7”, “8”, “9”, “10”, “over 10” (12 categories, 570 samples in total).
• Avoidance: We require that the negative samples differ from the positive sample by at least 3. This accounts for potential ambiguity regarding whether certain individuals should be included or excluded, providing a margin for a potential count discrepancy of one person in both positive and negative samples. Specifically, “over 10” is treated as 11 when calculating the differences.
• Sampling: We primarily use the Visual7W VQA dataset as our resource.
We first filter questions that contain interrogative phrases like “how many people.” Then, we select questions with clear answers relevant to our task, either verifying them manually or deducing the final answer from similar questions (e.g., “how many people other than a person”) and visually confirming.

A.2.4 Weather Conditions

• Idea: Weather conditions are relatively easy to distinguish in images showing a large outdoor scene. In addition, there are many different criteria for evaluating weather, allowing a substantial number of categories.
• Categories: “sunny”, “clear”, “misty”, “overcast”, “cloudy”, “rainy”, “drizzly”, “stormy”, “snowy”, “warm”, “cold”, “chilly” (12 categories, 694 samples in total).
• Avoidance: For the first two categories (“sunny” and “clear”), we avoid using them as negative samples for each other, because they both describe a sky with no clouds. We also avoid using “warm” or “sunny” as negative samples, as these conditions can coexist with a clear or sunny sky. Categories three through eight (“misty” to “stormy”) are not used as negative samples for each other, as they all describe conditions involving clouds, rain, or something that obstructs sunlight, making them difficult to delineate clearly. Furthermore, we do not use “cold” or “chilly” as negative samples for these conditions, as it is difficult to determine the precise temperature under such circumstances. Categories three through five (“misty”, “overcast”, “cloudy”) are not used as negative samples with “warm” for similar reasons (We assume
that it is generally not warm when it is raining). Among the last four categories (“snowy”, “warm”, “cold”, “chilly”), “snowy”, “cold”, and “chilly” often co-occur and are therefore not used as positive and negative samples together.
• Sampling: We first extract questions containing weather conditions from the Visual7W and VisDial datasets. Then, we record, simplify, and verify the answers. In addition, we deduce weather conditions from the captions in val2017 and perform a double check.

A.2.5 Time of the Scenery

• Idea: The Visual7W dataset includes “When” as a category of questions. These questions often have a clear answer, so we can obtain the time of the scene. This is a proper benchmark, since time is a property of the whole image.
• Categories: “daytime”, “night”, “evening”, “afternoon”, “dusk”, “morning”, “sunrise” (7 categories, 760 samples in total).
• Avoidance: We avoid using samples from a similar category as negative samples. For instance, “morning” is not a good negative sample for “sunrise”.
• Sampling: We filter the answers to the time questions in Visual7W. The core word is used as our category.

A.2.6 Scenes of the Locations or Activities

• Idea: Different locations and activities have quite distinct scenes. Through the “where” questions of the Visual7W set, we can easily get a description of the scene (a noun), and repetitions hardly exist.
• Categories: “beach (scene)”, “beach shore”, “baseball game”, “Oahu”, “baseball field”, “baseball park”, “sports arena”, “outdoor eating area”, “bedroom”, “bathroom”, “station”, “train station”, “railroad tracks”, “backyard”, “backyard patio”, “zoo”, “tennis court”, “Broadway”, “harbor”, “street”, “city street”, “side of the road”, “mountain”, “river”, “safari”, “grassland”, “airport”, “air strip”, “parking lot (area)”, “skate park”, “open field”, “field”, “construction site”, “classroom”, “fountain”, “London”, “nature”, “farm”, “restaurant”, “dinner”, “dining room”, “kitchen”, “kitchen being remodeled in a home”, “living room”, “park”, “ski slope”, “(ski) lodge”, “sidewalk”, “parlor”, “boardwalk”, “waterhole”, “baby shower”, “press conference”, “apple computers”, “downtown Toronto”, “outside a city”, “near a river”, “by/near the ocean”, “ocean shore”, “inside a home”, “in a room”, “inside a refrigerator”, “near the food”, “performance”, “market”, “farmers market”, “a man on a phone in a room”, “tourist trap”, “coffee shop”, “on a desk”, “table”, “on a counter”, “sky”, “woods”, “birthday party”, “outdoor show”, “yard”, “soccer field”, “indoors”, “in front of clock tower”, “on a road in front of a large building”, “in a building”, “in front of a television”, “third street”, “in a car”, “airport runway”, “intersection”, “museum”, “concert photography session”, “inside a home very close to a marina and the sea”, “road”, “Tokyo”, “by the water”, “on sand dune”, “bakery”, “motorcycle race”, “house” (172 samples in total).
• Avoidance: We avoid using categories with similar meanings as both positive and negative samples. In fact, we manually check all the negative samples to avoid conflicts with the positive sample.
• Sampling: We use the “Where” questions from the Visual7W set. The answer is often used directly as the category, to expand the number of categories.
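The generation recipe of Appendix A.1, keyword-based filtering followed by negative sampling from mutually exclusive categories, can be sketched as follows. The label pool, category sizes, and exclusion pairs here are purely illustrative, not the real annotations:

```python
import random

# Illustrative label pool: image id -> category (not the real annotations).
labels = {i: cat for i, cat in enumerate(
    ["stand"] * 50 + ["sit"] * 40 + ["squat"] * 5 + ["kneel"] * 5)}

# Category pairs too ambiguous to contrast (the "Avoidance" rules),
# e.g. a kneeling squat.
exclusive = {("squat", "kneel"), ("kneel", "squat")}

def build_test_case(positive_id, n_negatives=99, seed=0):
    """One query: a positive image plus negatives drawn only from
    categories that do not conflict with the positive's category."""
    pos_cat = labels[positive_id]
    candidates = [i for i, c in labels.items()
                  if c != pos_cat and (pos_cat, c) not in exclusive]
    rng = random.Random(seed)
    negatives = rng.sample(candidates, min(n_negatives, len(candidates)))
    return positive_id, negatives

pos, negs = build_test_case(2, n_negatives=40)
```

In the real benchmark each test case carries 99 such negatives, giving the 1% chance-level performance stated above.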
A.2.7 Objects Contained

• Idea: The COCO dataset provides annotations for the objects in each image, and the positions of the objects are located with bounding boxes. So whether the
object is a crucial feature in the picture is clear. Note that we do not include animals or people in this classification, as they follow entirely different norms.
• Categories: “bicycle”, “bus”, “light”, “backpack”, etc. (3849 samples in total).
• Avoidance: The COCO set gives quite comprehensive annotations of the objects in each image, so we only need to avoid using images with the same kind of objects (possibly alongside other objects) as negative samples.
• Sampling: We directly use the object annotations of the COCO validation set.

A.2.8 Animals Contained

• Idea: Animals differ greatly from static objects; they are classified by their appearances and actions. As a result, we create an independent benchmark.
• Categories: “giraffe”, “zebra”, “bear”, “elephant”, “cow”, “sheep”, “horse”, “dog”, “cat”, “bird” (10 categories, 763 samples in total).
• Avoidance: We only need to avoid including the same species or closely related species in the negative samples.
• Sampling: We include the objects marked in the COCO validation set that are animals or similar to animals.

B Details of Retrievers

We list the details of the retrievers used in our evaluation in this section, including the baselines and the promptable image embeddings. The evaluation code is attached in the supplementary material for reproducibility purposes.

B.1 Information of Baselines

All evaluations of the CLIP family, MagicLens, MLLM-based universal multimodal retrievers, and variants of VLM2Vec can be done using one A6000 GPU with 48GB memory in less than 6 hours per category.

CLIP-family: The CLIP family comprises vision-language models trained via contrastive learning on large-scale image-text pairs. CLIP [47] introduced this paradigm, enabling zero-shot transfer to various vision tasks. We use the weights at https://huggingface.co/openai/clip-vit-large-patch14-336 under the MIT License. EVA-CLIP [10, 11] enhances CLIP by integrating improved training techniques for better efficiency and effectiveness.
We access their public model weights at OpenCLIP [18] with model names "EVA01-g-14" and "EVA02-E-14-plus" under MIT license. SigLIP [65] replaces the softmax loss with a sigmoid loss, allowing for scalable training without the need for large batch sizes. Building upon this, SigLIP2 [53] incorporates multilingual capabilities and improved semantic understanding. We use the weights at https://huggingface.co/google/siglip-so400m-patch14-384 for SigLIP and https://huggingface.co/google/siglip2-so400m-patch14-384 for SigLIP2 under Apache license 2.0. BLIP2 [28] fine-tuned on COCO leverages a frozen image encoder and a lightweight Q-Former to bridge vision and language modalities effectively. We use the "blip2_feature_extractor" provided by LAVIS [27] under BSD 3-Clause License. By default, we directly use the query text as text input for these models.

MagicLens: MagicLens [67] is a self-supervised image retrieval model trained on 36.7M triplets of (query image, instruction, target image). It supports open-ended instructions, enabling retrieval based on diverse semantic relations beyond visual similarity. The model employs a dual-encoder architecture with shared parameters and utilizes multi-head attention pooling to generate unified embeddings. We use the weights shared in the official GitHub repository at https://github.com/google-deepmind/magiclens under Apache-2.0 license. We only use their vision encoder and language encoder like CLIP, as we find that the model does not support
zero-shot instructions for embeddings.

E5-V: E5-V [19] adapts an MLLM to generate universal multimodal embeddings. Unlike traditional models trained on image-text pairs, E5-V leverages the MLLM's capabilities to represent multimodal information effectively, demonstrating significant potential in various retrieval tasks. We use the model weights released at https://huggingface.co/royokong/e5-v.

MM-Embed: MM-Embed [31] is a universal multimodal retrieval model that fine-tunes MLLMs as bi-encoder retrievers across diverse datasets and tasks. It supports flexible vision-language alignment and is adaptable to both retrieval and classification tasks without the need for instruction tuning. However, we find that the model does not process zero-shot instructions well. We use the weights at https://huggingface.co/nvidia/MM-Embed under Creative Commons Attribution Non Commercial 4.0.

MMRet: MMRet [69] is trained on MegaPairs, a massive synthetic dataset generated using vision-language models and open-domain images. It employs separate encoders for vision and language, followed by deep fusion layers for cross-modal alignment, achieving state-of-the-art performance in universal multimodal retrieval tasks. We employ the MMRet-MLLM-S2 released at https://huggingface.co/BAAI/BGE-VL-MLLM-S2 under the MIT license.

LLaVE: LLaVE [25] introduces hardness-weighted contrastive learning to train large language and vision embedding models. By dynamically adjusting the learning process based on the difficulty of negative pairs, LLaVE enhances representation learning, leading to improved performance across various multimodal tasks. We use the LLaVE-2B released at https://huggingface.co/zhibinlan/LLaVE-2B under Apache license 2.0.

VLM2Vec: VLM2Vec [20] transforms vision-language models into efficient multimodal embedders through contrastive training on the Massive Multimodal Embedding Benchmark (MMEB).
It supports instruction-guided representation generation, outperforming existing models on both in-distribution and out-of-distribution datasets. We use the VLM2Vec-Phi-3.5-V at https://huggingface.co/TIGER-Lab/VLM2Vec-Full under Apache license 2.0.

B.2 Retrieval with CLIP-Style Text
Our query text is designed to suit the retrieval tasks of universal multimodal embedders such as VLM2Vec. A question therefore arises when we evaluate CLIP, whose recommended evaluation text starts with "A photo of." We perform an ablation study in which our text is replaced with CLIP-style text for evaluation. The mechanism of substitution is shown in Table 8. The results are shown in Table 9, where no significant difference is observed.

Table 8: CLIP-style text used in our evaluation.
| Original Text | Revised Text | Example |
| "Find me an everyday image that ···" | "A photo that ···" | "Find me an everyday image that is taken during the evening." → "A photo that is taken during the evening." |
| "Find me an everyday image with ···" | "A photo with ···" | "Find me an everyday image with over 10 people." → "A photo with over 10 people." |
| "Find me an everyday image showing ···" | "A photo showing ···" | "Find me an everyday image showing some object or surface made of brick." → "A photo showing some object or surface made of brick." |

Table 9: Recall@1 and Recall@5 of CLIP-ViT-L/14's evaluation with original text or with CLIP-style text, in percentage points, on our COCO-FACET benchmark.
| | Animals | Scenes | Objects | People | Materials | Times | Weathers | Gestures | Avg. |
Recall@1
| CLIP-ViT-L/14 | 91.5 | 55.2 | 54.0 | 3.5 | 3.5 | 4.5 | 4.2 | 6.8 | 33.7 |
| w/ CLIP-style text | 91.5 | 53.5 | 54.0 | 8.4 | 5.1 | 4.5 | 2.6 | 4.8 | 33.8 |
Recall@5
| CLIP-ViT-L/14 | 98.4 | 80.8 | 72.7 | 13.5 | 11.4 | 10.1 | 14.3 | 18.5 | 47.0 |
| w/ CLIP-style text | 98.4 | 79.1 | 72.7 | 13.3 | 14.4 | 9.9 | 16.4 | 19.1 | 47.6 |

B.3 Promptable Image Embeddings
We list the obtained GPT-written prompts for the eight categories of our COCO-FACET benchmark in Table 10. We also test the human-written prompts listed in Table 11. The results are shown in Table 12, where we find that human-written prompts lead to a similar improvement.

B.4 Pre-Processing Embeddings
We use the following template for GPT-4o's prompt selection:

{Prompts} Given the instruction {text}, choose the most relevant prompt for verifying the results. Please answer in one letter.

The "Prompts" part lists all the prompts in Table 10 in the format of "A. Represent the given image with the following question: What type of location is depicted in this image?".

C More Results

C.1 Detailed Results of Various Base Retrievers with Prompt
We show that this strategy generalizes to different base retrievers; see Table 13.

C.2 Detailed Text-Based Retrieval Results
We find that text-based retrieval suffers from hallucination and linguistic ambiguity. Some failure cases from Animals regarding birds are shown in Figure 3 as examples. We find that elements like sky or feathers can trigger the model to claim a bird's existence. In addition, this approach cannot deal with polysemy, such as "chicken" in the fifth example. We also notice that the presence of bird patterns (in the third example) complicates this process, as the model cannot prioritize real birds over the bird pattern on the container.

Table 10: GPT-written prompts for COCO-FACET.
| Categories | Prompts |
| Animals | <|image_1|> Represent the given image with the following question: Which animals can be seen in this image? |
| Scenes | <|image_1|> Represent the given image with the following question: What type of location is depicted in this image? |
| Objects | <|image_1|> Represent the given image with the following question: Which objects are present in this image? |
| Count of People | <|image_1|> Represent the given image with the following question: How many people are present in this image? |
| Materials | <|image_1|> Represent the given image with the following question: What material are the objects in this image made of? |
| Times | <|image_1|> Represent the given image with the following question: What time of day is depicted in this image? |
| Weathers | <|image_1|> Represent the given image with the following question: What is the weather like in this image? |
| Gestures | <|image_1|> Represent the given image with the following question: What gesture are the people making in this image? |

Table 11: Human-written prompts for COCO-FACET.
| Categories | Prompts |
| Animals | <|image_1|> Represent the given image with the following question: What animals are in this image? |
| Scenes | <|image_1|> Represent the given image with the following question: What scene is in the image? |
| Objects | <|image_1|> Represent the given image with the following question: What objects are in the image? |
| Count of People | <|image_1|> Represent the given image with the following question: How many people are in the image? |
| Materials | <|image_1|> Represent the given image with the following question: What are the objects made of in the image? |
| Times | <|image_1|> Represent the given image with the following question: When is the image taken? |
| Weathers | <|image_1|> Represent the given image with the following question: What is the weather in the image? |
| Gestures | <|image_1|> Represent the given image with the following question: What is the person doing in the image? |

Table 12: Recall@1 and Recall@5 of text-to-image retrieval in percentage points on our COCO-FACET benchmark with no prompt, GPT-written prompts, and human-written prompts. The human-written prompts lead to a similar performance gain.
| | Animals | Scenes | Objects | People | Materials | Times | Weathers | Gestures | Avg. |
Recall@1
| VLM2Vec-Phi-3.5-V | 95.5 | 69.8 | 74.4 | 14.4 | 6.3 | 5.5 | 4.3 | 9.8 | 44.5 |
| w/ GPT prompt | 90.7 | 81.4 | 75.5 | 72.7 | 25.8 | 18.4 | 14.4 | 15.7 | 53.4 |
| w/ human prompt | 93.5 | 80.8 | 75.7 | 68.1 | 24.8 | 43.6 | 14.3 | 22.9 | 56.3 |
Recall@5
| VLM2Vec-Phi-3.5-V | 99.3 | 90.7 | 90.7 | 36.6 | 18.2 | 12.9 | 19.2 | 27.1 | 58.9 |
| w/ GPT prompt | 98.7 | 95.9 | 92.0 | 92.1 | 48.8 | 82.4 | 36.5 | 39.3 | 75.5 |
| w/ human prompt | 99.6 | 95.9 | 91.6 | 91.2 | 43.8 | 64.2 | 36.9 | 50.2 | 74.6 |

Table 13: Recall@1 and Recall@5 of text-to-image retrieval using various base retrievers with promptable image embeddings compared with original image embeddings.
| | Animals | Scenes | Objects | People | Materials | Times | Weathers | Gestures | Avg. |
Recall@1
| VLM2Vec-Phi-3.5-V | 95.5 | 69.8 | 74.4 | 14.4 | 6.3 | 5.5 | 4.3 | 9.8 | 44.5 |
| w/ GPT prompt | 90.7 | 81.4 | 75.5 | 72.7 | 25.8 | 18.4 | 14.4 | 15.7 | 53.4 |
| E5-V | 92.7 | 70.4 | 71.2 | 31.4 | 10.5 | 7.5 | 5.2 | 20.1 | 46.0 |
| w/ GPT prompt | 96.3 | 77.3 | 77.0 | 60.3 | 18.9 | 36.2 | 7.1 | 24.7 | 54.9 |
| MM-Embed | 92.7 | 67.4 | 68.1 | 13.3 | 7.4 | 5.1 | 3.9 | 19.5 | 42.8 |
| w/ GPT prompt | 64.4 | 68.6 | 68.0 | 15.9 | 11.8 | 8.0 | 3.8 | 20.8 | 41.5 |
| MMRet-MLLM-S2 | 97.2 | 72.1 | 76.0 | 29.8 | 10.0 | 8.4 | 3.6 | 24.1 | 48.8 |
| w/ GPT prompt | 91.7 | 78.5 | 82.2 | 78.6 | 27.2 | 21.1 | 8.7 | 23.3 | 57.6 |
| LLaVE-2B | 96.3 | 70.9 | 73.1 | 19.4 | 8.8 | 3.0 | 4.5 | 19.2 | 45.6 |
| w/ GPT prompt | 91.9 | 72.1 | 79.6 | 82.0 | 28.7 | 22.4 | 6.8 | 18.5 | 56.1 |
Recall@5
| VLM2Vec-Phi-3.5-V | 99.3 | 90.7 | 90.7 | 36.6 | 18.2 | 12.9 | 19.2 | 27.1 | 58.9 |
| w/ GPT prompt | 98.7 | 95.9 | 92.0 | 92.1 | 48.8 | 82.4 | 36.5 | 39.3 | 75.5 |
| E5-V | 98.3 | 91.9 | 89.0 | 60.1 | 25.2 | 18.6 | 16.1 | 35.5 | 61.8 |
| w/ GPT prompt | 99.2 | 95.4 | 92.7 | 79.2 | 48.2 | 57.0 | 30.1 | 45.2 | 73.1 |
| MM-Embed | 98.7 | 87.8 | 84.9 | 29.3 | 20.3 | 15.0 | 15.3 | 45.3 | 58.4 |
| w/ GPT prompt | 83.3 | 87.8 | 85.1 | 45.2 | 33.4 | 22.6 | 19.2 | 44.3 | 60.6 |
| MMRet-MLLM-S2 | 99.9 | 92.4 | 91.6 | 45.2 | 28.3 | 19.7 | 17.6 | 49.7 | 64.5 |
| w/ GPT prompt | 98.2 | 95.4 | 96.0 | 94.4 | 52.4 | 75.7 | 33.7 | 45.2 | 77.7 |
| LLaVE-2B | 99.4 | 93.6 | 88.2 | 41.3 | 21.5 | 10.9 | 16.3 | 37.0 | 59.5 |
| w/ GPT prompt | 98.7 | 90.1 | 94.5 | 94.4 | 52.5 | 53.2 | 30.8 | 40.6 | 74.3 |

C.3 Detailed Accelerated Retrieval Results
We list the mean and error bar over five independent runs of the linear approximation in Table 14. The variability mainly comes from the random selection of samples used for deriving the matrix W. The error bar is calculated as the standard error (sample standard deviation divided by the square root of the number of runs).

Figure 3: Failed top-1 retrieval results of the text-based retrieval. The query is "Find me an image that contains any bird." in all cases. For each retrieved image, the model's answer to "Which animals can be seen in this image? Please answer in detail." is:
1. "in the image, there are two birds visible. one bird is perched on a light fixture, while the other bird is flying in the sky. the presence of these birds adds a sense of liveliness to the scene, as they interact with the architecture and the surroundings."
2. "in this image, there are birds visible."
3. "in the image, there are two birds visible. one bird is perched on a branch, while the other bird is sitting on a vase. the birds are part of the decorative elements in the scene, adding a touch of nature to the setting."
4. "in the image, there are two birds visible. they are flying over the water, adding a sense of movement and life to the scene."
5. "in this image, there is a chicken visible on the plate."
6. "in this image, a dog can be seen."

D Test Cases in COCO-FACET
We collect some challenging test cases in COCO-FACET for a better understanding of the drawbacks of retrievers. Examples are shown in Figure 4. There are several challenges: (1) The required attributes are not the main subject of the images, especially in the test cases from Animals and Objects. (2) The semantic understanding of attributes, like "0 people" and "made of". (3) The visual grounding of attributes, like "sports arena" and "jumping"; current retrievers may not ground these attributes precisely. (4) Simple visual reasoning. In the Count of People category, the model is required to approximate the number of people in the image. Notably, to reduce the difficulty, we require that the negative samples differ from the positive sample by at least 3, but the model performance is still low.
In the example of "a sunny day," the ground truth does not feature a sunny sky but shows clear shadows on the ground. (5) Robustness to deceptive attributes. In the example of "during the daytime," there are lights on outside in the CLIP-retrieved image, but a human can judge that this image was taken at night.

Table 14: Recall@1 and Recall@5 of accelerated text-to-image retrieval with approximated promptable image embeddings, in percentage points, on our COCO-FACET benchmark, along with the standard error. The results of the approximation are averaged over five independent runs.
| | Animals | Scenes | Objects | People | Materials | Times | Weathers | Gestures | Avg. |
Recall@1
| VLM2Vec-Phi-3.5-V | 95.5 | 69.8 | 74.4 | 14.4 | 6.3 | 5.5 | 4.3 | 9.8 | 44.5 |
| w/ linear approx. | 72.1±2.2 | 67.2±0.5 | 57.1±0.4 | 47.5±4.0 | 24.3±1.1 | 35.7±2.7 | 9.0±0.7 | 14.2±2.1 | 42.5±0.5 |
| w/ GPT prompt | 90.7 | 81.4 | 75.5 | 72.7 | 25.8 | 18.4 | 14.4 | 15.7 | 53.4 |
Recall@5
| VLM2Vec-Phi-3.5-V | 99.3 | 90.7 | 90.7 | 36.6 | 18.2 | 12.9 | 19.2 | 27.1 | 58.9 |
| w/ linear approx. | 84.6±2.1 | 91.9±0.6 | 83.2±0.5 | 73.7±1.0 | 43.9±1.8 | 71.5±1.2 | 28.4±0.8 | 38.1±1.3 | 67.0±0.3 |
| w/ GPT prompt | 98.7 | 95.9 | 92.0 | 92.1 | 48.8 | 82.4 | 36.5 | 39.3 | 75.5 |

E Limitations
First, COCO-FACET is built on existing annotations of COCO images. Although we have conducted the manual checking detailed in the Appendix, there might still be inaccurate or ambiguous annotations. For example, some small animals or objects might not be annotated with bounding boxes in MSCOCO, which could affect the evaluation of the Animals and Objects attributes in COCO-FACET. Second, some existing universal multimodal embedders like mmE5 [4] are not evaluated on our benchmark due to limited computational resources. Third, the promptable image embeddings do not fully resolve the imbalance across attributes, as we observe that the Recall@1 and Recall@5 accuracies for Materials, Weathers, and Gestures are lower than for other attributes. Last, our pipeline in Section 4.1 relies on GPT-4o's API. While other large language models, especially open-source ones, could be good alternatives, we have not tested them in our scenario yet.

F Broader Impacts
Improving attribute-focused text-to-image retrieval can benefit applications that rely on fine-grained visual understanding, such as e-commerce. Our method enhances the precision of such retrieval tasks while maintaining efficiency, potentially enabling more responsive and accurate systems. At the same time, fine-grained retrieval poses risks, including potential misuse in surveillance or amplification of biases. In addition, since our approach builds on pretrained multimodal models like Phi-3.5-V, it may inherit the existing biases and vulnerability to adversarial attacks of such models. To support responsible use, we encourage transparency around deployment contexts and recommend auditing tools to monitor for unintended outcomes. We release our benchmark and code to facilitate further research on both the benefits and limitations of attribute-focused retrieval.

Figure 4: Challenging test cases in COCO-FACET. In each test case, the first image is the ground truth, the second is the top-1 image retrieved by CLIP, and the third is the top-1 image retrieved by VLM2Vec without promptable embeddings. VLM2Vec with GPT prompts can solve these test cases. The queries are:
• Scenes: "Find me an everyday image that shows the scene of sports arena."
• Count of People: "Find me an everyday image with 0 people in the picture."
• Materials: "Find me an everyday image showing some object or surface made of wood."
• Times: "Find me an everyday image that is taken during the daytime."
• Weathers: "Find me an everyday image that is taken in a sunny day."
• Gestures: "Find me an everyday image that contains someone jumping."
• Animals: "Find me an image that contains any sheep."
• Objects: "Find me an image that contains any bottle."
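The accelerated retrieval of Appendix C.3 approximates the promptable image embeddings with a linear map W derived from a random subset of samples. The sketch below illustrates the idea under the assumption that W is fit by ordinary least squares on paired (base, prompted) embeddings and then reused for the whole gallery; the fitting procedure, array shapes, and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def fit_linear_map(base, prompted):
    """Least-squares fit of W such that base @ W approximates prompted.

    base:     (n, d) base image embeddings of sampled images
    prompted: (n, d) promptable embeddings of the same images
    """
    W, *_ = np.linalg.lstsq(base, prompted, rcond=None)
    return W  # (d, d)

def recall_at_k(query, gallery, gt_index, k=5):
    """Whether the ground-truth image is among the top-k by cosine similarity."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q
    topk = np.argsort(-scores)[:k]
    return gt_index in topk

# Toy usage: derive W from a random subset, then approximate the full gallery.
rng = np.random.default_rng(0)
base_all = rng.normal(size=(100, 16))             # base embeddings of all images
true_map = rng.normal(size=(16, 16))
prompted_all = base_all @ true_map                # stand-in "prompted" embeddings
sample = rng.choice(100, size=32, replace=False)  # random subset -> run-to-run variance
W = fit_linear_map(base_all[sample], prompted_all[sample])
approx_all = base_all @ W                         # cheap approximation for every image
```

Because W is derived from a random sample, repeated runs yield slightly different maps, which is consistent with the run-to-run variance reported in Table 14.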
arXiv:2505.15879v1 [cs.CV] 21 May 2025

GRIT: Teaching MLLMs to Think with Images

Yue Fan^1, Xuehai He^1, Diji Yang^1, Kaizhi Zheng^1, Ching-Chen Kuo^2, Yuting Zheng^2, Sravana Jyothi Narayanaraju^2, Xinze Guan^2, Xin Eric Wang^1
^1 UC Santa Cruz  ^2 eBay
https://grounded-reasoning.github.io
{yfan71,xwang366}@ucsc.edu

Figure 1: Comparison of reasoning with pure natural language and grounded reasoning from GRIT, which mixes explicit bounding boxes for image regions with a chain of natural language thoughts. Our GRIT method enables MLLMs to perform grounded reasoning with only 20 training samples, realizing a clear and reliable process of thinking with images.

Abstract
Recent studies have demonstrated the efficacy of using Reinforcement Learning (RL) in building reasoning models that articulate chains of thoughts prior to producing final answers.
However, despite ongoing advances aimed at enabling reasoning for vision-language tasks, existing open-source visual reasoning models typically generate reasoning content in pure natural language, lacking explicit integration of visual information. This limits their ability to produce clearly articulated and visually grounded reasoning chains. To this end, we propose Grounded Reasoning with Images and Texts (GRIT), a novel method for training MLLMs to think with images. GRIT introduces a grounded reasoning paradigm, in which models generate reasoning chains that interleave natural language and explicit bounding box coordinates. These coordinates point to regions of the input image that the model consults during its reasoning process. Additionally, GRIT is equipped with a reinforcement learning approach, GRPO-GR, built upon the GRPO algorithm. GRPO-GR employs robust rewards focused on the final answer accuracy and the format of the grounded reasoning output, which eliminates the need for data with reasoning-chain annotations or explicit bounding box labels. As a result, GRIT achieves exceptional data efficiency, requiring as few as 20 image-question-answer triplets from existing datasets. Comprehensive evaluations demonstrate that GRIT effectively trains MLLMs to produce coherent and visually grounded reasoning chains, showing a successful unification of reasoning and grounding abilities.

Preprint.

https://arxiv.org/abs/2505.15879v1

1 Introduction
Reasoning models [1, 2, 3, 4, 5] are trained to articulate their problem-solving process through a "reasoning chain" which comprises a detailed, step-by-step deliberation and a final answer. Recent studies have shown that such trained reasoning models are superior in reasoning to models directly prompted in the zero-shot Chain-of-Thought (CoT) [6] manner, where off-the-shelf models are simply guided to "think aloud" at inference time, often reflecting little of their true internal processes [7]. Reasoning models first became particularly prominent in the language domain [2, 6, 5, 8, 9], with models like DeepSeek-R1 [2] showcasing capabilities for complex tasks, and were subsequently extended to the vision-language field [10, 11, 12]. Current open-source vision reasoning models yield reasoning chains consisting only of natural language when faced with multimodal inputs. This results in reasoning that is less clear and insufficiently grounded in the details of the visual input. To overcome this, a crucial question is how to empower models to think with images, mirroring how humans refer to visual input in their thoughts. However, realizing this capability presents significant technical challenges. Current MLLMs are designed to generate language tokens, lacking an inherent mechanism to generate images directly within a reasoning chain. Furthermore, processing and understanding reasoning sequences that interleave multiple visual elements poses a substantial hurdle, as many MLLMs struggle to maintain context across numerous images in one input. Beyond these technical challenges, data also presents a major hurdle. In most cases, there is no unique correct reasoning path for complex multimodal questions, and human-annotated data explicitly incorporating visual evidence in reasoning steps is severely scarce. To address these challenges and enable more grounded visual reasoning in MLLMs, we propose the Grounded Reasoning with Images and Text (GRIT) method.
GRIT introduces a novel grounded reasoning paradigm where the model generates natural language reasoning chains that freely mix in bounding box coordinates pinpointing relevant regions of the input image. These bounding boxes indicate the specific visual information that the model consults in its reasoning process. To simplify inference, the model does not receive additional pixel inputs after generating bounding box coordinates; instead, it comprehends and utilizes the visual information indicated by these coordinates based on its understanding of the original input image. By cropping the input image with the generated bounding boxes, the resulting reasoning chain can be visualized as interleaved text and highlighted regions from the input image, as illustrated in Figure 1. To train MLLMs to produce reasoning chains in the grounded reasoning paradigm, GRIT employs GRPO-GR, a reinforcement learning method built upon the GRPO algorithm. It is equipped with novel rewards focused not only on the format of reasoning but also on grounding, in addition to answer accuracy. Specifically, this format reward encourages reasoning outputs structured by a thinking token pair (e.g., <think> and </think>) and a rethink token pair (e.g., <rethink> and </rethink>); it also rewards the inclusion of syntactically valid bounding boxes within the generated sequence. As a result, the rewards in GRPO-GR do not constrain the specific textual content of the reasoning steps or the semantic accuracy of the grounded regions, thus eliminating the
need for data with reasoning chain annotations or explicit bounding box labels. As a result, we find that the GRIT method is extremely data efficient: it enables MLLMs to acquire the grounded reasoning ability with very few data samples sourced from existing VQA datasets, using only image-query-answer triplets. With the GRIT method, we train state-of-the-art MLLMs, Qwen 2.5-VL [3] and InternVL 3 [13], using only 20 image-question-answer triplets drawn from existing object-relation and counting VQA datasets, VSR [14] and TallyQA [15]. A significant outcome of GRIT is that the trained models preserve their broad versatility, effectively handling not only visual question answering but also grounding-heavy referring expression comprehension tasks. In our experiments with a variety of testing data collected from benchmarks for both VQA and referring expression comprehension, we reveal several key observations. Firstly, the trained models effectively unify the grounding and reasoning abilities, which were originally inherent but disconnected in the base MLLMs, within their grounded reasoning output. Secondly, through both qualitative and quantitative analysis, we reveal a high correlation between the image regions referenced and the accompanying text in the reasoning chains produced by GRIT-trained models. Furthermore, we demonstrate that generating bounding boxes helps the subsequent model reasoning attend more effectively to the input visual information. Finally, we observe that models trained with GRIT show improved performance as training data increases, but this also reveals challenges in boosting generalizability. Our contributions are as follows:
• We propose Grounded Reasoning with Images and Text (GRIT), a novel method that teaches MLLMs to think with images through a grounded reasoning paradigm where models generate reasoning chains interleaving natural language with explicit bounding box coordinates.
• We develop GRPO-GR, a reinforcement learning algorithm which employs novel rewards that enable the grounded reasoning ability of MLLMs efficiently, using only image-question-answer triplets without requiring dense reasoning chains or bounding box annotations.
• Through comprehensive evaluations, we demonstrate that MLLMs trained with GRIT, such as Qwen 2.5-VL and InternVL 3, successfully unify their grounding and reasoning abilities to produce accurate and coherent grounded reasoning.

2 Related Work

2.1 Reinforcement Learning for Vision-Language Reasoning
Recent studies have applied reinforcement learning (RL) with verifiable rewards to build visual reasoning models for visual question-answering tasks, extending approaches from language-only models, such as DeepSeek-R1 [2], to enhance Multimodal Large Language Models (MLLMs). For instance, R1-OneVision [12] and R1-V [16] focus on diagram reasoning and math problems, respectively, while Vision-R1 [11] emphasizes symbolic reasoning tasks. However, these methods often treat visual grounding and textual reasoning as separate or do not tightly integrate them into a single generative process. VLM-R1 [10] applies RL to referring expression comprehension, a grounding-heavy task, rewarding bounding box and answer accuracy. While effective for these tasks, VLM-R1 typically outputs only bounding boxes as final answers, with an implicit reasoning process, rather than an interpretable, interleaved trace of text and visual grounding. In contrast, our GRIT (Grounded Reasoning with Images and Text) framework uses RL to train MLLMs to freely mix grounding and reasoning within a single generative
trace. Models trained with GRIT produce interleaved chains of natural language and bounding box coordinates, enabling a dynamic interplay where visual evidence informs textual logic, and vice versa. While proprietary systems such as ChatGPT-o3/4 [17] have shown similar "thinking with images" capabilities, GRIT offers the first open-source approach to achieve this interleaved visual-textual reasoning via lightweight RL, without needing explicit annotations for intermediate reasoning or grounding steps.

2.2 Visual Chain-of-Thought Reasoning
The idea of Chain-of-Thought (CoT) reasoning for vision-language tasks predates the RL-focused methods [18, 19, 20, 21], where models are prompted to generate reasoning chains that include visual cues. Early approaches like Multimodal-CoT [19] used multi-stage prompting, while others like CCoT [21] leveraged external tools like scene graphs. These often rely on prompting or auxiliary modules rather than learning an end-to-end generative process for interleaved reasoning. Other works aimed to learn visually grounded CoT with minimal supervision. UV-CoT [22] used self-generated bounding boxes and an auxiliary MLLM for supervision, but still largely separated the grounding and reasoning phases. VisCoT [23] fine-tuned models on a dataset with detailed annotations for both textual rationales and corresponding bounding boxes. However, this requires high-quality, dense annotations linking each reasoning step to specific visual evidence. The GRIT method differs by enabling MLLMs to generate explicit, interpretable, and visually grounded reasoning steps from task-level reward signals alone, without requiring supervisory signals for bounding boxes or intermediate textual thoughts within the reasoning chain. Visual grounding (via bounding box coordinates) is embedded within the continuous reasoning chain.
This allows models trained with GRIT to achieve a form of grounded CoT where visual information is directly integrated into the thought process, enabling them to reason "with" images, not just "about" them.

3 GRIT: Grounded Reasoning with Images and Text

3.1 Grounded Reasoning Paradigm
The GRIT framework fosters a straightforward grounded reasoning paradigm in MLLMs, enabling a more transparent and verifiable reasoning process. Given an image I and a textual question q, GRIT enables the model to generate a two-part output (c, a): first, a reasoning chain c (starting with <think>), followed by a concise final answer a (after <answer>). The reasoning chain c freely mixes natural-language text T and optional bounding-box coordinates B. At any step p of the token generation of c, the model can choose to generate a bounding box c_p ∈ B, or it can opt to continue generating natural language text c_p ∈ T. The decision is based on the input and the entire existing reasoning chain c_{1:p−1}. When the model has finished generating one or multiple bounding box coordinates at step q, these coordinates are intended to directly inform and shape the subsequent reasoning steps c_{q+1}, c_{q+2}, and so on. Crucially, the generation of subsequent tokens does not receive additional pixel inputs based on the generated bounding boxes. Instead, the model relies on its internal understanding, informed by these newly generated coordinates, to continue the reasoning process. This requires the model to learn to interpret its own grounding actions, i.e., the bounding boxes, and integrate that understanding into its ongoing textual deliberation. Leveraging the inherent grounding and reasoning abilities in MLLMs, the
https://arxiv.org/abs/2505.15879v1
proposed grounded reasoning paradigm encourages the model to unify these existing faculties to form the new grounded reasoning ability. Compared to alternative approaches, such as generating pixel-level attention masks or adding the image region indicated by bounding boxes as additional input for multi-turn generation, the grounded reasoning paradigm is significantly more efficient.

3.2 Reinforcement Learning with GRPO-GR

The GRIT method trains MLLMs via a newly proposed reinforcement learning algorithm, GRPO-GR, for grounded reasoning ability. Built upon the Group-Relative Policy Optimization (GRPO) [24] algorithm, GRPO-GR optimizes a policy π_θ to generate reasoning sequences (c, a) based on rewards combining answer correctness with format adherence, as shown in Figure 2. A fixed prompt suffix is appended to the model's input during training and inference; please refer to Appendix D for details.

RL Formulation. The model acts as a policy π_θ that generates the output sequence (c, a) given the input (I, q). During training, for every image-question pair (I, q), we sample a group of N candidate completions {o_1, ..., o_N} from the current policy π_θ. For each completion o_i, a task reward r_i = R(q, o_i) is computed based on a combination of components (detailed below). These rewards are used to derive a group-normalised advantage:

$$A_i = \frac{r_i - \mathrm{mean}\{r_1, \dots, r_N\}}{\mathrm{std}\{r_1, \dots, r_N\} + \delta}, \quad (1)$$

where δ is a small constant (e.g., 10^{-8}) for numerical stability. The task reward r_i is a composite signal comprising three components: a grounded-reasoning-format reward (r_format), an optional grounded-target-counting reward (r_count), and a GPT-aided answer-accuracy reward (r_ans). These components are designed to encourage the desired grounded reasoning behavior and accurate final answers.

Grounded-reasoning-format reward (r_format). This reward encourages reasoning outputs structured by special token pairs and includes syntactically valid bounding boxes.
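As a concrete illustration of the group-normalised advantage in Eq. (1), the computation can be sketched in a few lines. This is an illustrative sketch, not the authors' released implementation; `group_normalised_advantage` is a hypothetical helper name.

```python
import numpy as np

def group_normalised_advantage(rewards, delta=1e-8):
    """Eq. (1): normalise each completion's reward by the mean and
    std of its sampled group; delta guards against zero std."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + delta)

# Example: a group of N = 4 sampled completions with composite rewards.
adv = group_normalised_advantage([1.5, 0.5, 1.0, 1.0])
```

By construction the advantages in each group sum to (approximately) zero, so completions are rewarded only relative to their sampled peers.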
It is composed of a special-token-format signal s_st and a bounding-box-format signal s_bf:

$$r_{\mathrm{format}} = s_{\mathrm{st}} + s_{\mathrm{bf}}, \quad (2)$$

where s_st rewards the correct usage and order of special reasoning-format tokens (<think>...</think> then <rethink>...</rethink>) within the reasoning chain c, which structure the reasoning, potentially across multiple steps. Each correctly placed token pair increments the reward by 0.5: s_st = 0.5 × I(correct think token pair) + 0.5 × I(correct rethink token pair). The bounding-box-format signal s_bf incentivizes the explicit generation of syntactically correct bounding boxes within c. These are detected via a regex matching quadruplets of integers separated by commas, typically expected before a rethink token. A reward of 0.5 is assigned if at least one such bounding box is present: s_bf = 0.5 × I(num_bboxes ≥ 1). This reward component encourages the required format and presence of visual grounding elements without constraining the textual content or semantic accuracy of the grounded regions themselves.

Figure 2: Model update via GRPO-GR. During GRPO-GR training, we sample a group of model completions and calculate the grounded-reasoning-format reward (r_format), the optional grounded-target-counting reward (r_count), and the GPT-aided answer-accuracy reward (r_ans). The rewards are used to calculate the group-normalized advantage and guide the policy optimization.

Grounded-target-counting reward (r_count). This optional reward component is used specifically for training examples drawn from visual-counting-related datasets (detailed in the experimental setup). It is set to 0.5 if the number of bounding boxes generated within the reasoning exactly matches the ground-truth count for the target object. This encourages the model to systematically generate the correct quantity of bounding boxes as part of its counting reasoning process.

GPT-aided answer-accuracy reward (r_ans). This reward combines signals for the correctness of the final answer, offering a more robust signal than rule-based checks alone by supplementing them with an external Vision-Language Model judge.
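The format reward described above can be sketched as follows. This is a sketch under assumptions: the exact regex and token handling in GRPO-GR are not specified beyond the description in the text, and `format_reward` is a hypothetical name.

```python
import re

def format_reward(chain: str) -> float:
    """Sketch of r_format = s_st + s_bf from Eq. (2).

    s_st: 0.5 for a well-formed <think>...</think> pair, plus 0.5 for a
    <rethink>...</rethink> pair that follows it.
    s_bf: 0.5 if at least one integer quadruplet (a candidate bounding
    box) appears anywhere in the chain.
    """
    s_st = 0.0
    think = re.search(r"<think>.*?</think>", chain, re.DOTALL)
    if think:
        s_st += 0.5
        # The rethink pair must appear after the think pair.
        if re.search(r"<rethink>.*?</rethink>", chain[think.end():], re.DOTALL):
            s_st += 0.5
    # Syntactically valid bounding box: four comma-separated integers.
    bboxes = re.findall(r"\d+\s*,\s*\d+\s*,\s*\d+\s*,\s*\d+", chain)
    s_bf = 0.5 if bboxes else 0.0
    return s_st + s_bf
```

Note that this signal checks only syntax and ordering; as the text emphasizes, it does not constrain the semantic accuracy of the grounded regions.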
It is computed as:

$$r_{\mathrm{ans}} = s_{\mathrm{GPT}} + 0.1\, s_{\mathrm{BLEU}},$$

where s_GPT is a binary correctness score (0 or 1) from a GPT-4o judge evaluating the (question, predicted answer, ground truth) triplet (q, â, a), and s_BLEU is the sentence-level BLEU-1 similarity between â and a. The GPT-4o prompt is fixed and detailed in Appendix D. We down-weight s_BLEU due to its sensitivity to length mismatch, ensuring the highest reward is reserved for answers closely matching the ground truth.

Optimization Objective. The policy parameters θ are updated by maximizing the GRPO objective J_GRPO(θ) using the calculated group-normalised advantages A_i:

$$J_{\mathrm{GRPO}}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \Big[ \min\big(s_i A_i,\ \mathrm{clip}(s_i, 1-\epsilon, 1+\epsilon)\, A_i\big) - \beta\, D_{\mathrm{KL}}\big(\pi_\theta(\cdot \mid q)\ \|\ \pi_{\mathrm{ref}}(\cdot \mid q)\big) \Big], \quad (3)$$

where π_ref is the reference policy (typically the initial pre-trained model), β controls the KL-divergence penalty, ε defines the trust region for the importance ratio s_i = π_θ(o_i | q) / π_{θ_old}(o_i | q), and θ_old are the policy parameters before the update. This objective balances maximizing expected reward within each group against maintaining proximity to the reference policy, promoting stable learning.

4 Experiments

We first evaluate the grounded reasoning performance of models trained using the GRIT method from both grounding and reasoning perspectives. Then, we further qualitatively and quantitatively analyze the interaction between the bounding boxes and the other reasoning content generated during grounded reasoning. Last but not least, we examine the impact of increasing the training data volume.

4.1 Setup

Testing data. We evaluate models trained with GRIT on curated testing sets derived by sampling from six public datasets: Visual Spatial Reasoning (VSR) [14] focusing on spatial relation verification, TallyQA [15] on object counting, GQA [25] on compositional object
spatial questions, MME [26] on diverse visual tasks including counting and position, MathVista-mini [27] on mathematical reasoning in visual contexts, and the position subset of OVDEval [28] on open-vocabulary object grounding. For assessing the quality of bounding boxes generated in grounded reasoning outputs, we leverage available question-related bounding-box annotations from VSR, TallyQA, and GQA, and manually refine the bounding-box coordinates for VSR and GQA data for our evaluation purposes. We provide more details, including data statistics, in Appendix A.

Table 1: Evaluation of the grounded reasoning accuracy. GRIT-trained models are compared with baselines across seven testing sets on GPT-as-judge answer accuracy score (ACC) and grounding IoU (GIoU). GRIT-trained models overall outperform baselines, demonstrating a successful unification of grounding and reasoning abilities that are originally inherent but separated in MLLMs.

| Model | Method | VSR ACC | VSR GIoU | TallyQA ACC | TallyQA GIoU | GQA ACC | GQA GIoU | MathVista ACC | MME ACC | OVDEval GIoU |
|---|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-VL 3B [8] | Direct query | 49.5 | 0.00 | 40.8 | 0.00 | 55.4 | 0.00 | 58.5 | 88.9 | 0.389 |
| Qwen2.5-VL 3B [8] | Chain-of-Thought | 37.5 | 0.122 | 33.2 | 0.113 | 39.5 | 0.269 | 33.0 | 41.3 | 0.388 |
| Qwen2.5-VL 3B [8] | One-shot ICL | 13.2 | 0.213 | 36.3 | 0.268 | 20.4 | 0.441 | 29.1 | 24.7 | 0.328 |
| Qwen2.5-VL 3B [8] | Few-shot fine-tuning | 59.7 | 0.216 | 44.5 | 0.284 | 64.6 | 0.475 | 45.0 | 68.3 | 0.391 |
| Qwen2.5-VL 3B [8] | GRIT | 72.9 | 0.325 | 47.8 | 0.447 | 62.8 | 0.485 | 59.8 | 89.3 | 0.398 |
| InternVL3 2B [13] | Direct query | 52.9 | 0.000 | 15.5 | 0.000 | 29.4 | 0.000 | 43.0 | 40.0 | 55.1 |
| InternVL3 2B [13] | Chain-of-Thought | 6.4 | 0.428 | 6.8 | 0.279 | 4.1 | 0.292 | 7.5 | 14.0 | 21.9 |
| InternVL3 2B [13] | One-shot ICL | 3.4 | 0.435 | 3.7 | 0.275 | 2.6 | 0.435 | 14.1 | 1.3 | 59.3 |
| InternVL3 2B [13] | Few-shot fine-tuning | 54.0 | 0.381 | 22.5 | 0.116 | 46.8 | 0.114 | 17.4 | 62.3 | 7.8 |
| InternVL3 2B [13] | GRIT | 64.9 | 0.495 | 44.2 | 0.324 | 63.2 | 0.457 | 48.2 | 82.0 | 56.0 |

Training Data. Demonstrating the data efficiency of our GRIT method, we train on a dataset of only 20 unique image-query-answer triplets. This small training set is drawn from the Visual Spatial Reasoning (VSR) [14] and TallyQA [15] datasets.
These data focus on tasks requiring both explicit visual grounding and multi-step reasoning, providing a suitable testbed to evaluate GRIT's ability to learn grounded reasoning formats with limited data. Please refer to Appendix B for more details.

Training Implementation. We train two pre-trained MLLMs, Qwen2.5-VL-3B and InternVL3-2B, directly using the GRIT method with reinforcement learning. We train the models for 200 steps with a total batch size of 128. During GRPO-GR training, we generate 4 candidate reasoning traces per input sample, with a learning rate of 2 × 10^{-6}. The optimizer is AdamW and a cosine scheduler is adopted. All training is conducted on 8 NVIDIA A100 (80 GB) GPUs with DeepSpeed ZeRO-2, and training each model takes approximately 12 hours.

4.2 Research question 1: How accurate is the grounded reasoning, from both grounding and reasoning perspectives?

Metrics. To comprehensively assess model performance across testing sets, we evaluate two key aspects of model output: (1) the correctness of the natural-language answer, evaluated by the GPT-as-judge answer accuracy score [29], a score between 0 (completely incorrect) and 1 (fully correct) assigned by GPT-4o to judge the correctness of the model-generated natural-language answer. GPT-4o is provided with the same prompt format as in the
GPT-aided answer-accuracy reward during training (Section 3.2); (2) the accuracy of the grounding coordinates, measured by grounding IoU, the average Intersection over Union (IoU) between the union of all bounding boxes generated by the model (i.e., within its reasoning chain, or as the direct answer for tasks like OVDEval) and the union of all corresponding ground-truth bounding boxes. Unlike metrics in traditional object-detection tasks, grounding IoU focuses on whether the collection of grounded regions in each grounded reasoning output, taken together, aligns with the annotated question-critical image regions.

Baselines. We include baselines built upon the same base MLLMs and the same data as the models trained with GRIT: (1) Direct Query feeds the original MLLMs only the task query alongside the input image in a zero-shot manner, without any specific formatting or reasoning prompts, to assess raw task-solving ability. (2) Chain-of-Thought (CoT) [6] feeds the original MLLMs queries with a fixed prompt suffix, which instructs the model to generate step-by-step reasoning chains before the answer. The prompt suffix for the CoT baseline is identical to the prompt used for models trained with GRIT, measuring the model's latent ability without any post-training. (3) One-shot In-Context Learning (ICL) [30] prefixes each test question with a single, fixed exemplar consisting of {question, grounded-visual-thinking trace, answer}, instructing the model to imitate the format and reasoning style. (4) Few-shot SFT fine-tunes the MLLM on the same demonstrations employed for GRPO but with standard supervised learning, isolating the effect of reinforcement learning.

Question: How many zebras are pictured here?
Model output: <think> There are six zebras in the picture. The coordinates for the zebras are as follows: 1. (200, 168, 248, 202) 2. (169, 159, 214, 186) 3. (76, 167, 108, 192) 4. (24, 173, 50, 197) 5. (51, 163, 70, 191) 6. (413, 159, 441, 189) 7. (463, 171, 483, 186) </think> <rethink> The coordinates provided for the zebras are accurate and cover all the zebras visible in the image. There are no overlapping or missing coordinates. </rethink> <answer> 7
Ground truth answer: 7
(i) Model provides an answer with regions and then reflects on it in the subsequent reasoning.

Question: Is the truck beneath the cat?
Model output: <think> The bounding box coordinates for the truck are approximately (0, 209, 488, 364). The cat is positioned on top of this truck. </think> <rethink> The cat is on top of the truck, so the truck is beneath the cat. </rethink> <answer> Yes
Ground truth answer: Yes
(ii) Model first grounds critical image regions in its reasoning and then analyzes them in the subsequent reasoning.

Question: Is there a knife in the image?
Model output: <think> There is no knife present in the image. The focus is on a pot of soup with carrots and other ingredients. The bounding box coordinates for the knife would be outside the image area. </think> <rethink> The image shows a pot of soup with carrots and other ingredients, but there is no knife present.
The bounding box coordinates for the knife would be outside the image area. </rethink> <answer> No
Ground truth answer: no
(iii) Model correctly handles queries about non-existent entities without any grounding action.

Figure 3: Inference examples of Qwen2.5-VL-GRIT.

Results. The results are summarized in Table 1. In contrast to baselines, models trained with GRIT overall achieve higher scores on the adopted testing data. Despite being trained on only 20 training samples, models trained with GRIT not only improve the GPT-as-judge answer accuracy scores on VSR and TallyQA (the two datasets seen during training) but also generalize effectively to other out-of-domain data, indicating strong reasoning ability. The results on the grounding IoU metric show that models trained with GRIT, although not directly optimized for this metric, outperform baselines in locating question-relevant image regions during their reasoning. Notably, on the OVDEval testing data, models trained with GRIT achieve more accurate detection results than zero-shot MLLMs, highlighting their emerging improvements in grounding capabilities. These results demonstrate a more successful unification of grounding and reasoning, where their integration demonstrably enhances the performance of both individual abilities. From the results, we also observe that baselines based on off-the-shelf MLLMs exhibit rigidity: they tend to generate either only bounding boxes or only the final answer. For the CoT and one-shot ICL baselines, although they are prompted or guided by ICL to produce a reasoning chain interleaved with bounding-box coordinates, their grounding and reasoning functions are forced to operate concurrently. As a result, they generally face severely deteriorated performance in either answer accuracy or grounding IoU, indicating that these capabilities remain largely separated and can interfere with each other, resulting in suboptimal performance.
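The grounding IoU metric used in Table 1 (defined under Metrics above as the IoU between the union of predicted boxes and the union of ground-truth boxes) can be sketched by rasterising box unions on a boolean canvas. This is a sketch assuming integer pixel coordinates and a known canvas size, not the authors' evaluation code.

```python
import numpy as np

def grounding_iou(pred_boxes, gt_boxes, canvas_hw):
    """IoU between the union of predicted boxes and the union of
    ground-truth boxes; each box is (x1, y1, x2, y2) in pixels."""
    h, w = canvas_hw
    pred = np.zeros((h, w), dtype=bool)
    gt = np.zeros((h, w), dtype=bool)
    for x1, y1, x2, y2 in pred_boxes:
        pred[y1:y2, x1:x2] = True   # paint the union of predicted boxes
    for x1, y1, x2, y2 in gt_boxes:
        gt[y1:y2, x1:x2] = True     # paint the union of ground-truth boxes
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, gt).sum() / union
```

Because the metric compares unions rather than matching boxes one-to-one, a model is not penalised for splitting one region into several boxes, which suits free-form grounded reasoning chains.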
This suggests a general disconnect between their inherent grounding and reasoning abilities. As for the Few-shot SFT baseline, while it demonstrates more balanced performance across grounding IoU and the GPT-as-judge answer accuracy score compared to the zero-shot baselines, it still achieves considerably lower scores than models trained with our GRIT method. This is likely because supervised fine-tuning primarily learns to mimic the surface form of outputting text interspersed with bounding boxes, rather than developing a deeply integrated reasoning process where grounding logically informs each step.

Figure 4: Correlation between image regions and "thoughts" in grounded reasoning evaluated by our Vision-Language Reasoning Cross-Modal Correlation metric. The result shows that models trained with GRIT outperform baselines.

Figure 5: Model's average attention for image tokens during the generation of rethink segments. The overall higher curve for the original rethink segments of the GRIT-trained model shows that the generated bounding boxes facilitate stronger attention to the image input in subsequent reasoning.

4.3 Research question 2: How do generated bounding boxes interact with natural-language content during grounded reasoning?

Qualitative Analysis of Grounded Reasoning Contents. We qualitatively showcase the grounded reasoning produced by models trained with GRIT through a variety of input-output examples. A major strength of training with the GRIT method is that its reward design imposes no rigid constraint on the content of the reasoning chain. Consequently, the model learns flexible reasoning strategies that adapt
seamlessly to diverse queries. Specifically, as shown in Figure 3i, GRIT-trained models sometimes directly provide an answer and then reflect on that answer in subsequent reasoning; in other instances, as in Figure 3ii, they first perform grounding actions to identify visual evidence and subsequently analyze the selected image regions in their reasoning. Crucially, our models dynamically determine whether grounding is necessary, significantly reducing false-positive grounding instances. For example, in Figure 3iii, queries referencing entities that do not exist in the input image do not prompt erroneous groundings, showcasing a robust multimodal reasoning capability.

Cross-modal Correlation of Image Regions and Thoughts. To systematically evaluate the coherence between image regions and the natural-language content interleaved in the grounded reasoning chains of models trained with GRIT, we introduce the Vision-Language Reasoning Cross-Modal Correlation metric. Given each model-generated reasoning chain c, we extract the associated bounding boxes {c_i | c_i ∈ B}. To establish a rigorous evaluation, we randomly sample an equal number of bounding boxes from the input image as negative candidates {h_0, ..., h_j} ⊂ B. We draw these two sets of boxes separately on the input image and then ask GPT-4o to identify which of the two resulting images contains the set of bounding boxes that corresponds most closely to the textual reasoning (with bounding-box coordinates masked). The prompt used is shown in Appendix D. Leveraging GPT-4o's strong Set-of-Mark (SoM) capabilities [31], this process allows us to quantitatively assess semantic coherence. We repeat this process three times for robustness and report the average correlation score with standard deviation.
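The correlation protocol just described can be sketched as follows. This is a sketch: `judge_fn` stands in for the GPT-4o Set-of-Mark comparison, and matching each distractor's width and height to the corresponding model box is an assumption for illustration; the text only specifies that an equal number of boxes are randomly sampled.

```python
import random

def sample_negative_boxes(boxes, img_w, img_h, rng=None):
    """One random distractor per model-generated box (x1, y1, x2, y2),
    matching width/height so only location differs (an assumption)."""
    rng = rng or random.Random(0)
    negatives = []
    for x1, y1, x2, y2 in boxes:
        bw, bh = x2 - x1, y2 - y1
        nx = rng.randint(0, max(0, img_w - bw))
        ny = rng.randint(0, max(0, img_h - bh))
        negatives.append((nx, ny, nx + bw, ny + bh))
    return negatives

def cross_modal_correlation(samples, judge_fn, n_trials=3):
    """Fraction of trials in which the judge prefers the model's boxes
    over the random distractors, averaged over n_trials repetitions.
    judge_fn(text, model_boxes, negative_boxes) -> 1 if the model's set
    is chosen; it stands in for the GPT-4o Set-of-Mark query."""
    wins, total = 0, 0
    for _ in range(n_trials):
        for text, boxes, (img_w, img_h) in samples:
            negatives = sample_negative_boxes(boxes, img_w, img_h)
            wins += judge_fn(text, boxes, negatives)
            total += 1
    return wins / total
```

The two-alternative design means a model whose boxes are unrelated to its text should score near 0.5 in expectation, which makes the reported gap between GRIT models, baselines, and human-written chains interpretable.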
We evaluate the output of models trained with GRIT against the Chain-of-Thought and Few-shot fine-tuning outputs using this cross-modal correlation metric on five testing sets (VSR, TallyQA, GQA, MME, and MathVista), excluding OVDEval as it primarily poses a grounding challenge. In addition to evaluating model-generated outputs, we also manually create and evaluate 20 human-written reasoning chains with interleaved bounding boxes using the same vision-language reasoning cross-modal correlation method to establish a human performance baseline. As shown in Figure 4, models trained with our GRIT framework outperform both baselines, showing image-region selection that is highly correlated with the textual reasoning, while still exhibiting a gap compared to the human-written reasoning chains, indicating room for future improvement.

Influence of Grounding on Subsequent Reasoning. Models trained with GRIT interleave reasoning with grounding actions when producing visually integrated thought processes. To further understand how the generated bounding boxes impact subsequent reasoning, we examine attention scores for input visual tokens during inference. Specifically, we split outputs of a Qwen2.5-VL model trained with GRIT at the <rethink> token into pre-rethink and rethink segments, where pre-rethink segments usually include bounding-box coordinates due to the optimization guided by the grounded-reasoning-format reward detailed in Section 3.2. We then create an alternative pre-rethink segment by removing all bounding boxes from the pre-rethink segment, simulating the situation where no grounding action is taken during the reasoning. Next, we feed the alternative pre-rethink segment back into the Qwen2.5-VL model trained with GRIT for continued token generation. We refer to the newly generated content based on the modified input