| text string | source string |
|---|---|
structured reasoning first. 2. Follow up with a final answer, which must be enclosed in the <answer> </answer> tag. For example, <answer> xxx </answer>. 3. The answer must be succinct and final. For math problems, return the answer using LaTeX in the \boxed format. 4. If the question requires multiple steps or facts, break down your reasoning accordingly. 5. Be precise and factual in your responses. 6. If you are unsure about something, acknowledge the uncertainty. Now, please answer the following question: Prompt D.2: Prompt for Agent. You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can. To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code. To solve the task, you must plan forward to proceed in a series of steps, in a cycle of Thought:, Code:, and Observation: sequences. At each step, in the Thought: sequence, you should first explain your reasoning towards solving the task and the tools that you want to use. Then, in the Code: sequence, you should write the code in simple Python. The code sequence must end with the <end_code> sequence. During each intermediate step, you can use print() to save whatever important information you will then need. These print outputs will then appear in the Observation: field, which will be available as input for the next step. In the end you have to return a final answer using the final_answer tool. For math problems, if not specified, always return LaTeX format as the final answer. ... (ellipsis) ... Here are the rules you should always follow to solve your task: 1. Always provide a Thought: sequence, and a Code: ```py sequence ending with a ```<end_code> sequence, else you will fail. 2. Use only variables that you have defined! 3. Always use the right arguments for the tools. 
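The calling convention in rule 3 can be illustrated with a runnable sketch; `wiki` here is a hypothetical stub standing in for one of the agent's real tools, which are not defined in this excerpt:

```python
def wiki(query: str) -> str:
    """Hypothetical stub for the agent's wiki tool; the real tool performs a lookup."""
    return f"stub result for: {query}"

# Pass tool arguments by keyword, and print() intermediate results so they
# reappear in the next step's Observation: field.
observation = wiki(query="What is the place where James Bond lives?")
print(observation)
```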
DO NOT pass the arguments as a dict, as in answer = wiki({'query': "What is the place where James Bond lives?"}), but use the arguments directly, as in answer = wiki(query="What is the place where James Bond lives?"). 4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather, output results with print() to use them in the next block. 5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters. 6. Don't name any new variable with the same name as a tool: for instance, don't name a variable final_answer. 7. Never create any notional variables in your code, as having these in your logs will derail you from the true variables. ... (ellipsis) ... Example D.1: Failure case 1 of Agent in Precalculus (level 2). Problem: Compute arcsin(−1/2). Express your answer in radians. Agent: Thought: | https://arxiv.org/abs/2505.17612v1
To compute arcsin(−1/2), we need to find the angle θ such that sin(θ) = −1/2 and θ lies within the range of the arcsine function, which is [−π/2, π/2]. Code: import math; theta = math.asin(-1/2); print(theta). Observation: -0.5235987755982989. Thought: The output from the code snippet is the value of θ in radians. However, the task requires the answer to be in LaTeX format and as an exact value, not a decimal approximation. The value of θ is 5π/6, which is the exact value of the arcsine of −1/2 within the range of the arcsine function. Code: final_answer("\boxed{\frac{5\pi}{6}}"). Observation: 5π/6 (wrong). CoT: To compute arcsin(−1/2), we need to find the angle θ such that sin(θ) = −1/2 and θ lies within the range of the arcsine function, which is [−π/2, π/2]. 1. Recall that sin(θ) = −1/2 corresponds to the sine of an angle on the unit circle where the y-coordinate is −1/2. 2. The sine function is negative in the third and fourth quadrants. However, since the range of the arcsine function is [−π/2, π/2], we are only interested in the fourth quadrant for this problem. 3. The reference angle for sin(θ) = 1/2 is π/6. Therefore, the angle in the fourth quadrant that has a sine value of −1/2 is −π/6. Thus, the angle θ such that sin(θ) = −1/2 and θ ∈ [−π/2, π/2] is −π/6. −π/6 (correct). Example D.2: Failure case 1 of Agent in Precalculus (level 4), part 1. Problem: Let f(x) = (arccos x)² + (arcsin x)². Find the range of f(x). All functions are in radians. Agent: Thought: To find the range of the function f(x) = (arccos x)² + (arcsin x)², we start by using the identity that relates the arccosine and arcsine functions: arccos x + arcsin x = π/2. Let θ = arccos x. Then arcsin x = π/2 − θ. Substituting these into the function, we get: f(x) = θ² + (π/2 − θ)². Expanding the expression, we have: f(x) = θ² + (π/2)² − πθ + θ² = 2θ² − πθ + (π/2)². This is a quadratic function in terms of θ. 
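As a side check on Example D.1 above (not part of the original trace): sympy recovers the exact value of arcsin(−1/2) directly, confirming that the in-range answer is −π/6 rather than 5π/6.

```python
import math
import sympy

# Float value, as returned by the agent's math.asin call
numeric = math.asin(-1 / 2)  # about -0.5236

# Exact symbolic value: sympy evaluates asin(-1/2) to -pi/6,
# which lies inside arcsin's range [-pi/2, pi/2]
exact = sympy.asin(sympy.Rational(-1, 2))
print(numeric, exact)

# 5*pi/6 lies outside [-pi/2, pi/2], so it cannot be arcsin(-1/2)
print(5 * math.pi / 6 > math.pi / 2)  # True
```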
To find the range of f(x), we need to determine the range of θ and then find the minimum value of the quadratic function over this range. The range of θ is [0, π/2]. We can find the minimum value of the quadratic function f(θ) = 2θ² − πθ + (π/2)² by completing the square or using the vertex formula for a quadratic function ax² + bx + c, which is x = −b/(2a). Let's find the minimum value of f(θ) using the vertex formula. Code: from sympy import symbols, diff, solve, pi; theta = symbols('theta'); f_theta = 2*theta**2 - pi*theta + (pi/2)**2; f_prime_theta = diff(f_theta, theta); critical_points = solve(f_prime_theta, theta); print(critical_points). Observation: [π/4]. Thought: The critical point is θ = π/4. We can now evaluate the function f(θ) at this point to find the minimum value | https://arxiv.org/abs/2505.17612v1
of the function. Let's evaluate f(θ) at θ = π/4. Code: theta_critical = pi/4; f_theta_critical = f_theta.subs(theta, theta_critical); print(f_theta_critical). Observation: π²/8. Thought: The minimum value of the function f(θ) at θ = π/4 is π²/8. Since the function is a quadratic function that opens upwards (the coefficient of θ² is positive), the minimum value is the only value in the range of f(x). Therefore, the range of f(x) is [π²/8, ∞). Let's use the final_answer tool to provide the final answer in LaTeX format. Code: final_answer("\boxed{\left[ \frac{\pi^2}{8}, \infty \right)}"). Observation: [π²/8, ∞) (wrong). Example D.3: Failure case 1 of Agent in Precalculus (level 4), part 2. Problem: Let f(x) = (arccos x)² + (arcsin x)². Find the range of f(x). All functions are in radians. CoT: To find the range of the function f(x) = (arccos x)² + (arcsin x)², we start by using the relationship between arccos x and arcsin x. Recall that for any x ∈ [−1, 1], arccos x + arcsin x = π/2. Let θ = arccos x. Then arcsin x = π/2 − θ. Substituting these into the function f(x), we get f(x) = θ² + (π/2 − θ)². Expanding the expression, we have f(x) = θ² + (π/2)² − πθ + θ² = 2θ² − πθ + π²/4. This is a quadratic function in terms of θ. To find the range of f(x), we need to determine the range of θ as x varies from −1 to 1. Since θ = arccos x, θ ranges from 0 to π. We need to find the minimum and maximum values of the quadratic function g(θ) = 2θ² − πθ + π²/4 over the interval [0, π]. The vertex of the parabola g(θ) = 2θ² − πθ + π²/4 occurs at θ = −b/(2a) = π/4. Evaluating g(θ) at the vertex and at the endpoints of the interval [0, π]: 1. At θ = 0: g(0) = 2(0)² − π(0) + π²/4 = π²/4. 2. At θ = π: g(π) = 2π² − π² + π²/4 = π² + π²/4 = 5π²/4. 3. At θ = π/4: g(π/4) = 2(π/4)² − π(π/4) + π²/4 = 2·π²/16 − π²/4 + π²/4 = π²/8. The minimum value of g(θ) is π²/8 and the maximum value is 5π²/4. Therefore, the range of f(x) is [π²/8, 5π²/4]. Thus, the range of f(x) is [π²/8, 5π²/4]. 
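The endpoint analysis in the CoT above can be double-checked mechanically; this sketch (not from the original trace) evaluates g(θ) = 2θ² − πθ + π²/4 at the vertex and at both ends of θ's true range [0, π]:

```python
from sympy import pi, simplify, symbols

theta = symbols("theta")
g = 2 * theta**2 - pi * theta + pi**2 / 4

# Vertex at theta = pi/4; endpoints of arccos's range at 0 and pi
values = [simplify(g.subs(theta, t)) for t in (0, pi / 4, pi)]
print(values)  # [pi**2/4, pi**2/8, 5*pi**2/4]
```

The minimum π²/8 and maximum 5π²/4 match the CoT; the failing agent's answer of [π²/8, ∞) ignores that θ is confined to [0, π].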
(correct). Example D.4: Comparison of first-thought prefix. Problem: Find the smallest positive real number C for which ‖[[2, 3], [0, −2]] v‖ ≤ C‖v‖ for all two-dimensional vectors v. Note that for a two-dimensional vector a, ‖a‖ is the magnitude of a. Small Agent distilled from trajectories without ftp: Thought: The problem is asking for the smallest positive real number C for which the inequality holds for all two-dimensional vectors v. This is a problem of finding the spectral radius of the matrix. The spectral radius of a matrix is the largest absolute value of its eigenvalues. I will first find the eigenvalues of the matrix, then find the largest absolute value of the eigenvalues, and that will be the value of C. Code: import numpy as np; A = np.array([[2, 3], [0, -2]]); eigenvalues = np.linalg.eigvals(A) | https://arxiv.org/abs/2505.17612v1
C = max(abs(eigenvalues)); print(C). Observation: 2.0. Thought: The code has executed and the value of C has been found. Now I will return the final answer using the final_answer tool. Code: final_answer("\boxed{2}"). Observation: 2 (wrong). Example D.5: Comparison of first-thought prefix (cont.). Problem: Find the smallest positive real number C for which ‖[[2, 3], [0, −2]] v‖ ≤ C‖v‖ for all two-dimensional vectors v. Note that for a two-dimensional vector a, ‖a‖ is the magnitude of a. Small Agent distilled from trajectories with ftp: Thought: To find the smallest positive real number C for which the inequality ‖[[2, 3], [0, −2]] v‖ ≤ C‖v‖ holds for all two-dimensional vectors v, we need to analyze the norm of the matrix-vector product. The norm of the matrix-vector product can be bounded by the maximum singular value of the matrix. The singular values of a matrix are the square roots of the eigenvalues of the matrix multiplied by its transpose. For a 2×2 matrix A = [[a, b], [c, d]], the singular values are the square roots of the eigenvalues of AᵀA. Let's compute the singular values of the matrix A = [[2, 3], [0, −2]]. Code: import numpy as np; A = np.array([[2, 3], [0, -2]]); singular_values = np.linalg.svd(A, compute_uv=False); print(singular_values). Observation: [4. 1.]. Thought: The singular values of the matrix A = [[2, 3], [0, −2]] are 4 and 1. The maximum singular value is 4. Therefore, the smallest positive real number C for which the inequality holds for all two-dimensional vectors v is the maximum singular value. Code: C = max(singular_values); final_answer(C). Observation: 4 (correct). Example D.6: Illustrative case of first-thought prefix in factual reasoning. Problem: Who founded the city where the founder of geometry lived? 
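The contrast between Examples D.4 and D.5 above, spectral radius versus operator 2-norm, can be reproduced in a few lines; this sketch is not part of either trace:

```python
import numpy as np

A = np.array([[2.0, 3.0], [0.0, -2.0]])

# What the failing agent computed: the spectral radius (largest |eigenvalue|)
rho = max(abs(np.linalg.eigvals(A)))          # 2.0

# The smallest valid C: the largest singular value (operator 2-norm)
C = np.linalg.svd(A, compute_uv=False).max()  # 4.0

# A is not normal, so ||Av||/||v|| can exceed the spectral radius
v = np.array([1.0, 1.0])
ratio = np.linalg.norm(A @ v) / np.linalg.norm(v)
print(rho, C, ratio)
```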
Small Agent distilled from trajectories with ftp: Thought: To answer this question, we need to identify the city where the founder of geometry, Euclid, lived, and then determine who founded that city. Code: euclid_birthplace = web_search(query="where did Euclid live"); print(f"Euclid lived in {euclid_birthplace}."). Observation: Euclid lived in (...) "Euclid, Ohio": Euclid is a city in Cuyahoga County, Ohio, United States. It is an inner-ring suburb of Cleveland. As of the 2010 census, the city had a total population of 48,920. In 2009, Euclid celebrated its bicentennial. The city was named after Euclid, the Greek mathematician. The city was settled in 1796 and became a township in 1809. Euclid then became a village in 1903 and a city in 1930, during the Great Depression. The city (then a village) was the primary appealing party in the landmark case "Village of Euclid v. Ambler Realty Co.". Euclid is located (...) Thought: From the search results, we can see that Euclid, Ohio was named after the Greek mathematician Euclid. Now, we | https://arxiv.org/abs/2505.17612v1
need to find out who founded the city of Euclid, Ohio. Code: founder_of_euclid = web_search(query="who founded the city of Euclid, Ohio"); print(f"The city of Euclid, Ohio was founded by {founder_of_euclid}."). Observation: The city of Euclid, Ohio was founded by (...) "South Euclid, Ohio": treaty with the Iroquois confederation in 1796 by the Connecticut Land Company. In 1797, Moses Cleaveland named the area east of the Cuyahoga River Euclid, after the Greek mathematician and "patron saint" of surveyors. Euclid Township was officially formed in 1809. In 1828, Euclid Township was divided into nine districts, with South Euclid becoming district two. (...) Thought: The search results indicate that the city of Euclid, Ohio was settled in 1796 by Moses Cleaveland, who was trained in mathematics and named the township after the Greek mathematician Euclid. Therefore, Moses Cleaveland founded the city where the founder of geometry, Euclid, lived. Code: final_answer("Moses Cleaveland"). Observation: Moses Cleaveland (wrong). Example D.7: Illustrative case of first-thought prefix in factual reasoning (cont.). Problem: Who founded the city where the founder of geometry lived? Small Agent distilled from trajectories without ftp: Thought: I need to find out who founded the city where the founder of geometry lived. I will first find out who the founder of geometry is, then find out where he lived, and finally find out who founded the city where he lived. I will use the tool 'web_search' to get this information. Code: founder_of_geometry = web_search(query="founder of geometry"); print(f"Founder of geometry: {founder_of_geometry}"). Observation: Founder of geometry: (...) Euclid: Euclid ("Eukleídēs", fl. 300 BC), sometimes given the name Euclid of Alexandria to distinguish him from Euclides of Megara, was a Greek mathematician, often referred to as the "founder of geometry" or the "father of geometry". 
He was active in Alexandria during the reign of Ptolemy I (323–283 BC). His "Elements" is (...) Thought: From the search results, I can see that the founder of geometry is Euclid. Now I need to find out where he lived and who founded the city where he lived. I will use the tool 'web_search' again to get this information. Code: euclid_lived = web_search(query="where did Euclid live"); print(f"Where did Euclid live: {euclid_lived}"). Observation: Where did Euclid live: (...) Euclid: A detailed biography of Euclid is given by Arabian authors, mentioning, for example, a birth town of Tyre. This biography is generally believed to be fictitious. If he came from Alexandria, he would have known the Serapeum of Alexandria and the Library of Alexandria, and may have worked there during his time. (...) Thought: From the search results, I can see that Euclid lived in Alexandria, Egypt. Now I need to find out who founded the city where Euclid lived. I will use the tool 'web_search' again to get this information. Code: founder_of_alexandria = web_search(query="who founded Alexandria"); print(f"Who founded | https://arxiv.org/abs/2505.17612v1
Alexandria: {founder_of_alexandria}"). Observation: Who founded Alexandria: (...) Alexandria: Alexandria is believed to have been founded by Alexander the Great in April 331 BC as "Alexandreia". Alexander's chief architect for the project was Dinocrates. Alexandria was intended to supersede Naucratis as a Hellenistic center in Egypt. (...) Thought: From the search results, I can see that Alexandria was founded by Alexander the Great. Therefore, the founder of the city where the founder of geometry lived is Alexander the Great. I will now provide the final answer using the 'final_answer' tool. Code: final_answer("Alexander the Great"). Observation: Alexander the Great (correct) | https://arxiv.org/abs/2505.17612v1
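The three-hop decomposition that succeeds in Example D.7 can be sketched end to end; `web_search` and its canned snippets below are illustrative stand-ins for the real search tool:

```python
def web_search(query: str) -> str:
    """Illustrative stub; the real tool returns live search-engine snippets."""
    canned = {
        "founder of geometry": "Euclid is often called the founder of geometry.",
        "where did Euclid live": "Euclid was active in Alexandria, Egypt.",
        "who founded Alexandria": "Alexandria was founded by Alexander the Great.",
    }
    return canned[query]

# One sub-question per step, printing each intermediate observation
for q in ("founder of geometry", "where did Euclid live", "who founded Alexandria"):
    print(web_search(query=q))
```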
arXiv:2505.17613v1 [cs.AI] 23 May 2025. MMMG: A Comprehensive and Reliable Evaluation Suite for Multitask Multimodal Generation. Jihan Yao∗1, Yushi Hu∗1, Yujie Yi1, Bin Han1, Shangbin Feng1, Guang Yang1, Bingbing Wen1, Ranjay Krishna1,2, Lucy Lu Wang1,2, Yulia Tsvetkov1, Noah A. Smith1,2, Banghua Zhu1. 1University of Washington, 2Allen Institute for AI. jihany2@cs.washington.edu, yushihu@uw.edu. Abstract: Automatically evaluating multimodal generation presents a significant challenge, as automated metrics often struggle to align reliably with human evaluation, especially for complex tasks that involve multiple modalities. To address this, we present MMMG, a comprehensive and human-aligned benchmark for multimodal generation across 4 modality combinations (image, audio, interleaved text and image, interleaved text and audio), with a focus on tasks that present significant challenges for generation models, while still enabling reliable automatic evaluation through a combination of models and programs. MMMG encompasses 49 tasks (including 29 newly developed ones), each with a carefully designed evaluation pipeline, and 937 instructions to systematically assess reasoning, controllability, and other key capabilities of multimodal generation models. Extensive validation demonstrates that MMMG is highly aligned with human evaluation, achieving an average agreement of 94.3%. Benchmarking results on 24 multimodal generation models reveal that even though the state-of-the-art model, GPT Image, achieves 78.3% accuracy for image generation, it falls short on multimodal reasoning and interleaved generation. Furthermore, results suggest considerable headroom for improvement in audio generation, highlighting an important direction for future research. Code and data are publicly available at https://github.com/yaojh18/MMMG. 
1 Introduction. As investments in multimodal generative AI grow, current models are rapidly advancing their capabilities in generating text [Achiam et al., 2023], images [Podell et al., 2024], audio [Evans et al., 2025], and their interleaved combinations [Chen et al., 2025c, Wang et al., 2024]. However, rigorous and reproducible evaluation of multimodal generation lags behind, raising a critical question: how can we accurately and effectively assess the capabilities of these models? Human evaluations [Chiang et al., 2024, Saharia et al., 2022, Liu et al., 2025], while considered the gold standard, are prohibitively expensive for comprehensive assessment at scale. Moreover, inherent subjectivity makes it difficult to systematically identify specific model weaknesses, limiting targeted improvements. As an alternative, existing automated evaluation approaches face two main limitations. First, it is hard to align automatic evaluation metrics well with human judges. Most multimodal generation benchmarks [Xia et al., 2025, Chen et al., 2024b, 2025a] rely on multimodal language models as judges (MLM-as-a-judge) [Hu et al., 2023, Chen et al., 2024a] without carefully validating their reliability, potentially causing misalignment with human judgment [Chen et al., 2024a, Pu et al., 2025]. Second, most benchmarks focus solely on single modalities [Ji et al., 2024, Ghosh et al., 2023, Xie et al., 2025b], failing to capture the rich interleaved multimodal content (vision, language, speech/audio) that characterizes real-world tasks such as cross-modal reasoning [Hu et al., 2024]. ∗Equal contribution. Preprint. [Figure 1, panel (a): Image & Program Evaluation] Task: Enclose the image by a simple, flat, solid pink border that occupies approximately 10% of the image's width on all sides. Program score: 0.827. Evaluation pseudo-code: # check border color | https://arxiv.org/abs/2505.17613v1
if ‖avg_border_color − pink‖₂ > threshold: return 0; # check that the border is solid: ssim_score = SSIM(border, uniform(avg_border_color)); # penalize a border that is too wide: penalty = ratio(inner_region == avg_border_color); return ssim_score − penalty. [Figure 1, panel (b): Image & Model Evaluation] Task: Create an image of a bike that has a square-shaped front wheel and a round-shaped rear wheel. GPT-4o evaluation prompt: "Is there exactly one bicycle with a square front wheel and a circular rear wheel in the given image? Explain step by step and end your answer with 'yes' or 'no'." Response: Object identification: [...] Wheel inspection: [...] Counting: [...] Extracted response: yes. [Figure 1, panel (c): Audio & Model + Program Evaluation] Task: Create an audio that begins with a loud car horn, followed by a long silence, and concludes with a distant siren. Generated audio: car horn, silence, siren. Evaluation pseudo-code (CLAPScore + program): # split the audio on silence: audio1, audio2 = split(audio); sim1 = CLAPScore(audio1, car_horn_reference_audio); sim2 = CLAPScore(audio2, siren_reference_audio); # check that both similarities exceed a threshold: return (sim1 > thres and sim2 > thres). [Figure 1, panel (d): Speech & Model + Program Evaluation] Task: Modify the given speech to replace all occurrences of "development" with "progress". Reference speech: "We need more development in our strategies. Development fosters creativity." Evaluation pseudo-code (Whisper + program): # Whisper transcript of the generated speech: test_transcript = "We need more progress in our strategies. 
Progress fosters creativity."; reference = transcript.replace("development", "progress"); # check for an exact match: return (reference == test_transcript). [Figure 1, panel (e): Interleaved Text + Image & Model + Program Evaluation] Task: Create an image that adds the required object from <image_1> to the empty area of the wall in <image_0>, while keeping all other elements in <image_0> unchanged. Evaluation pseudo-code (DreamSim + SSIM + program; score: 0.829): # a pre-annotated bounding box marks the "potential editing area"; # inside the bounding box: inside_crop = <generated_image>.crop(inside_bbox); sim_in = DreamSim(reference_image, inside_crop)  # compare with a manually Photoshopped result; # outside the bounding box: original = <image_0>.fill(inside_bbox, 0); background = <generated_image>.fill(inside_bbox, 0); sim_out = SSIM(original, background); # pass only if all conditions are met: return sim_in * sim_out. Figure 1: Examples of tasks and their evaluation metrics in MMMG. For each task, we develop an evaluation metric using programs, models, or their combinations. The tasks are either verifiable purely by programs or have big generation-evaluation gaps: generation is challenging for models, while automatic evaluations have high correlation with human judgments. We show evaluation pseudo-code to illustrate the evaluation process. To address these gaps, we introduce MMMG, a new benchmark containing tasks that meet two criteria: (1) tasks that are verifiable as defined in IF-Eval [Zhou et al., 2023], where outputs can be objectively verified by programs through straightforward checks (e.g., checking if a speech transcript begins with a
keyword by comparing the first word with the keyword), and (2) tasks with significant generation-evaluation gaps, where the generation step is challenging due to complex constraints, yet the evaluation step remains simple (e.g., generating an image of a snowman without a carrot nose can be challenging due to spurious correlation [Ye et al., 2024], but verifying the absence of the carrot nose can be achieved accurately by prompting a VLM). Example tasks can be found in Figure 1.
| Dataset | # Samples | # Tasks | Img | Aud | T+Img | T+Aud | Human | MLLM | Score | Code | Gen | Edit | Reason |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GenEval [Ghosh et al., 2023] | 553 | 6 | ✔ | ✘ | ✘ | ✘ | ✘ | ✘ | ✔ | ✔ | ✔ | ✘ | ✘ |
| DrawBench [Saharia et al., 2022] | 200 | 11 | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ |
| GenAI-Bench [Li et al., 2024] | 1,600 | 8 | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ |
| AudioTime [Xie et al., 2024] | 500 | 4 | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ? | ✔ | ✔ | ✘ | ✘ |
| MusicEval [Liu et al., 2025] | 384 | 1 | ✘ | ✔ | ✘ | ✘ | ✔ | ✘ | ✘ | ✘ | ✔ | ✘ | ✘ |
| CommonVoice [Ardila et al., 2020] | 58,250 | 1 | ✘ | ✔ | ✘ | ✘ | ✘ | ✘ | ✔ | ✘ | ✔ | ✘ | ✘ |
| MMIE MMG [Xia et al., 2025] | 16,487 | 7 | ✘ | ✘ | ✔ | ✘ | ✘ | ? | ✘ | ✘ | ✔ | ✘ | ✘ |
| CoMM [Chen et al., 2024b] | 227,000 | 4 | ✘ | ✘ | ✔ | ✘ | ✘ | ? | ✔ | ✘ | ✔ | ✘ | ✘ |
| ISG-Bench [Chen et al., 2025a] | 1,150 | 21 | ✘ | ✘ | ✔ | ✘ | ✘ | ? | ✔ | ✘ | ✔ | ✔ | ✘ |
| MixEval-X MMG [Ni et al., 2025] | 600 | 3 | ✔ | ✔ | ✘ | ✘ | ✔ | ? | ✘ | ✘ | ✔ | ✔ | ✘ |
| Eval-Anything [Ji et al., 2024] | 500 | 6 | ✔ | ✔ | ✔ | ✘ | ✔ | ? | ✘ | ✘ | ✔ | ✘ | ✘ |
| MMMG (ours) | 937 | 49 | ✔ | ✔ | ✔ | ✔ | ✘ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
Table 1: Comprehensiveness of MMMG, compared with other multimodal generation benchmarks. Img, Aud, T+Img, and T+Aud represent image, audio, interleaved image-text, and interleaved audio-text generation, respectively. "Score" stands for embedding-based / rule-based similarity score, "code" for programmatic verification, and "reason" for multi-step reasoning. "?" represents low human alignment or no human experiments. MMMG exceeds other benchmarks in the number of covered tasks and modalities while providing more reliable evaluation. 
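The "verifiable task" criterion above, e.g. checking whether a speech transcript begins with a required keyword, reduces to a few lines once a transcript exists (a sketch; producing the transcript, e.g. with Whisper, is outside its scope):

```python
def begins_with_keyword(transcript: str, keyword: str) -> bool:
    """Return True if the transcript's first word equals the keyword (case-insensitive)."""
    words = transcript.strip().lower().split()
    return bool(words) and words[0] == keyword.strip().lower()

print(begins_with_keyword("Progress fosters creativity", "progress"))  # True
print(begins_with_keyword("We need more progress", "progress"))        # False
```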
MMMG includes 49 tasks (29 are newly developed) and 937 instructions across 4 modality combinations—text, image, audio, and interleaved modalities—as depicted in Table 2. By cat- egorizing tasks based on assessed capabilities, MMMG enables fine-grained analysis of model performance and targeted identification of weaknesses. To validate the human alignment of MMMG , we conduct human evaluation across 37 tasks—674 instructions and 1886 evaluation questions—with each question assessed by three independent annotators and aggregated by majority vote. MMMG achieves an average human agreement of 94.3% with average inter-annotator agreement being 97.1%. Modality-specific agreements achieve 94.8% for image, 92.6% for audio, 95.6% for interleaved image-text, and 91.0% for interleaved audio-text, with relative improvements over prior best results by 14.2% for image, and 28.1% for interleaved image-text evaluation [Ghosh et al., 2023, Chen et al., 2025a]. We | https://arxiv.org/abs/2505.17613v1 |
benchmark 24 open and proprietary multimodal generation models using the optimal evaluation methods identified in human studies. Partial results are shown in Figure 2; the rest are in Appendix D.2. We find that modality-unified autoregressive models (ARMs) surpass diffusion models in image generation, with GPT Image [OpenAI, 2025] achieving the best accuracy of 78.3%. This indicates ARMs trained on extensive language-image datasets have stronger linguistic capabilities, enabling better instruction following and improved alignment with user intent. However, GPT Image still falls short in interleaved text-image reasoning tasks for math and code, achieving only 13.1% accuracy, 3D scene transformation at 34.1%, and interleaved image editing at 48.4%. Our qualitative error analysis reveals that another ARM, Gemini Image, tends to tangle multiple images in generation, hindering accurate image-sequence and image-text pair generation. Additionally, MMMG reveals greater headroom for improvement in audio generation tasks compared to image, with top-performing models achieving accuracies of 48.7% for sound and 41.9% for music generation. Overall, MMMG provides a reliable benchmark for multimodal model ranking and fine-grained capability analysis. 2 Related Work. Interleaved Multimodal Generation. Interleaved multimodal generation involves generating coherent content across multiple modalities simultaneously, such as visual storytelling [Huang et al., 2016, Wen et al., 2023], reference-based image editing [Chen et al., 2025b], and voice chatbots [Chu et al., 2024]. Effective models must understand multimodal inputs and produce aligned outputs across modalities. 
Current approaches include (1) LLM backbones with specialized decoders [Chen et al., 2025c, Xie et al., 2025a], which leverage dedicated components to render visual or audio outputs; (2) modality-unified autoregressive models [Chern et al., 2024, Hurst et al., 2024, Wang et al., 2024], processing text, visual, and acoustic tokens within a single sequence model, enabling native generation of interleaved content; and (3) agent-based methods [Chen et al., 2025a], using a "Plan-Execute-Refine" pipeline with modality-specific tools. Despite significant advances, evaluation frameworks for interleaved multimodal generation remain underdeveloped, particularly in accurately and automatically assessing cross-modal consistency and instruction-following capabilities. [Figure 2, panels (a) Image Generation, (b) Sound and Music Generation, (c) Interleaved Image-Text Generation, (d) Speech and Interleaved Speech-Text Generation. Caption: Benchmark results of multimodal generation models on MMMG covering four modality combinations. Please refer to Table 2 for more detailed category information. We aggregate some sub-tasks for interleaved image-text generation. GPT Image beats all other models on most image generation tasks, and strongly competes with other baselines in generating consistent image sequences and coherent interleaved image-text content.] Multimodal Generation Evaluation. 
Evaluating image, audio and their interleaved generation presents unique challenges that have been addressed through several approaches, each with notable limitations, including (1) using specialized visual or audio models [Ghosh et al., 2023, Xie et al., 2025b], which struggle to generalize beyond their training data [Ming et al., 2022]; (2) directly employing MLMs as evaluators [Xia et al., 2025, Chen et al., 2024b, 2025a], which often misalign with human judgments [Chen et al., 2024a]; and (3) for image evaluation particularly, leveraging visual question answering (VQA) to assess specific aspects of generated content [Hu et al., 2023, Lin et al., 2024], which declines significantly | https://arxiv.org/abs/2505.17613v1 |
in accuracy when facing complex evaluation scenarios that require nuanced reasoning [Chen et al., 2025a]. To address these limitations, previous research incorporates extensive human preference data to enhance MLM accuracy [Xiong et al., 2024, Yao et al., 2025]. Our work is an orthogonal approach that carefully designs evaluation instructions to leverage current MLM strengths while mitigating their limitations, enabling reliable multimodal evaluation without extra training or finetuning. Table 1 compares MMMG with existing benchmarks.

3 MMMG Benchmark Construction

Our goal is to build a multimodal generation benchmark that (1) covers a wide range of modalities and their combinations (image, audio, interleaved text and image, interleaved text and audio) with diverse tasks spanning different model capabilities, and (2) ensures, for each task, reliable automated evaluation that aligns well with human judgment. In this section, we first discuss our data and instruction construction in detail (§3.1), and then introduce the evaluation methods we built for each task (§3.2).

| Task | Subtask | Description | Input | Output | # Inst. | Evaluation |
|---|---|---|---|---|---|---|
| Object Generation | Inclusion | Include one or two unrelated objects in the scene. | T | I | 20 | VLM |
| | Exclusion | Exclude one related object from the scene. | T | I | 20 | VLM |
| | Count | Generate exactly N objects. | T | I | 20 | VLM |
| | Attribution | Generate an object with uncommon attributes. | T | I | 20 | VLM |
| | Reasoning | Generate the answer object to a multi-hop question. | T | I | 20 | VLM |
| Relation Control | Comparison | Generate two objects with uncommon relations. | T | I | 20 | VLM |
| | Universal | Generate objects with all identical/different attributes. | T | I | 20 | VLM |
| | Relative Spatial | Generate two objects with a given relative spatial relation. | T | I | 20 | VLM |
| | Absolute Spatial | Generate one/two objects in the absolute image quarter. | T | I | 20 | VLM |
| Image Format | Border Fill | Surround the image with a pure, solid colored border. | T | I | 15 | Program + SSIM |
| | Region Fill | Fill the given region with a pure, solid color. | T | I | 15 | Program + SSIM |
| Text Rendering | Single | Render English text on one object. | T | I | 20 | VLM |
| | Double | Render two English texts on two objects. | T | I | 20 | VLM |
| | Multi-Lingual | Render one Chinese/German text on one object. | T | I | 20 | VLM |
| Image Editing | Object Adding | Add a new object to the original image. | T, I | I | 20 | VLM + SSIM |
| | Object Removing | Remove an existing object in the original image. | T, I | I | 20 | VLM + SSIM |
| | Object Modifying | Replace an existing object in the original image. | T, I | I | 20 | VLM + SSIM |
| | Text Editing | Add/Remove/Replace text in the original image. | T, I | I | 25 | VLM + SSIM |
| | Interleaved Adding | Add an external image object to the original image. | T, I+ | I | 20 | DreamSim + SSIM |
| | Interleaved Modifying | Change the color of an object in the original image. | T, I+ | I | 20 | DreamSim + SSIM |
| Image Consistency | Semantic | Generate multiple images in semantic order. | T | I+ | 20 | VLM |
| | Composition | Compose individual objects in the given order. | T | I+ | 20 | VLM |
| | Decomposition | Decompose the object combination in the given order. | T, I | I+ | 20 | VLM |
| | Multi-View | Generate multiple views of the reference scene. | T, I | I+ | 20 | SSIM |
| | Multi-Angle | Generate multiple views of the reference object. | T, I | I+ | 20 | SSIM |
| Image-Text Coherence | Self Count | Count objects in the self-generated image. | T | T, I | 20 | VLM |
| | Self Color | Name object colors in the self-generated image. | T | T, I | 20 | VLM |
| | Self Size | Compare object sizes in the self-generated image. | T | T, I | 20 | VLM |
| | Self Relative Spatial | Decide the relative spatial relation in the generated image. | T | T, I | 20 | VLM |
| | Self Absolute Spatial | Decide the absolute spatial relation in the generated image. | T | T, I | 20 | VLM |
| | Self OCR | Recognize the text in the generated image. | T | T, I | 20 | VLM |
| Interleaved Reasoning | Math | Solve the IQ-test puzzles. | T, I | T, I | 20 | VLM |
| | Code | Read SVG code and render the SVG image. | T | T, I | 20 | VLM + DreamSim |
| Sound Generation | Begin-End | Begin/End the audio with the given sound effect. | T | A | 20 | CLAPScore |
| | Positional Inclusion | Include one sound effect at a relative audio position. | T | A | 20 | CLAPScore |
| | Silence | Generate two ordered sound effects separated by silence. | T | A | 20 | CLAPScore |
| | Reasoning | Generate the answer sound to a multi-hop question. | T | A | 18 | CLAPScore |
| Music Generation | Instrument Inclusion | Generate music with the given instrument. | T | A | 15 | CLAPScore |
| | Instrument Exclusion | Generate music without the given instrument. | T | A | 14 | CLAPScore |
| | Tempo | Generate music with the given tempo. | T | A | 15 | Program |
| | Intensity | Generate music with fade in/out at the beginning/end. | T | A | 10 | Program |
| Interleaved Speech Generation | Voice Attribution | Generate an English speech with required voice attributes. | T | A | 20 | Whisper + W2V + Program |
| | Voice Replication | Generate an English speech replicating the reference voice. | T, A | A | 20 | Whisper + WavLM |
| | Multi-Lingual | Generate a Chinese speech with required voice attributes. | T | A | 20 | Whisper + W2V + Program |
| | Transcript Generation | Generate a speech with textual constraints on the transcript. | T | A | 20 | Whisper + Program |
| | Transcript Editing | Edit a speech with textual constraints on the transcript. | T | A | 20 | Whisper + Program |
| | Conversation | Generate a conversation with the given speaker order. | T | A+ | 20 | Whisper + WavLM |
| Modality Order Control | Image-Text | Generate interleaved image-text content in the given order. | T, I | T, I+ | 20 | Program |
| | Audio-Text | Generate interleaved audio-text content in the given order. | T, A | T, A+ | 20 | Program |

Table 2: Detailed task definition and metadata for MMMG. T denotes the text modality, I a single image, I+ multiple images, A a single audio clip, and A+ multiple audio clips. We evaluate each task with the method that yields the highest human agreement. A green background in the original table marks newly introduced tasks.

3.1 Data Curation

To guarantee high-quality instructions and reliable evaluation, we design a systematic data curation pipeline consisting of three key stages.

Task Creation. We begin by creating an initial pool of 76 candidate task templates. These tasks span various modality combinations, and each task aims to evaluate a single multimodal generation capability. The complete list of 76 tasks can be found in Appendix B.2. For each task, we conduct a rigorous feasibility assessment to ensure there is at least one reliable evaluation method available: either programmatic verification or a literature-supported, highly human-aligned evaluation method. Based on this process, we narrow our task pool down to 55 tasks.

Instruction Synthesis and Validation. We employ a human-in-the-loop approach to synthesize high-quality instructions for each task. Inspired by Self-Instruct [Wang et al., 2023], we prompt GPT-4o [Hurst et al., 2024] with the task template and quality-control criteria to generate 10 candidate instructions per task. We then go through a two-stage selection process:

• Quality Filtering. Initially,
we remove instructions that are ambiguous (unclear or open to multiple interpretations), unrealistic (describing improbable or nonsensical scenarios), or redundant (closely resembling previously accepted examples). For instance, the unrealistic instruction "Generate an image of a forest without any trees" is discarded because it is semantically contradictory and unlikely to occur in actual user queries.

• Verifiability Assessment. For instructions passing the initial filtering stage, we sample generated outputs and verify that at least one evaluation method yields high alignment with human judgments. This step is crucial because even models generally capable of performing a given task may fail to accurately evaluate out-of-distribution variants within that domain. For example, GPT-4o can accurately count fewer than 7 objects but is prone to errors when counting more than 10.

We then generate another 10 candidate instructions and repeat the generation and validation process until we gather approximately 20 high-quality instructions per task. Statistically, 10–50% of generated instructions pass examination, depending on task difficulty.

Postprocessing. For final quality control, we perform a task-level postprocessing step to further refine our benchmark. This involves two procedures: (1) Task filtering: we recruit two independent annotators to judge whether each task is realistic, and eliminate six tasks that at least one annotator judges to be unrealistic. (2) Instruction paraphrasing: to ensure linguistic diversity and prevent models from memorizing specific instruction patterns, we paraphrase all remaining instructions. Each paraphrased instruction is examined manually to verify that it is semantically equivalent to the original. In the end, we collect a total of 937 instructions across 49 tasks spanning 4 modality combinations.
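The generate–filter–verify loop described above can be sketched in a few lines of Python. This is a minimal illustration rather than released MMMG code: `generate_candidates` stands in for the GPT-4o prompting step, and `passes_quality_filter` / `is_verifiable` stand in for the two manual selection stages.

```python
def curate_instructions(task_template, generate_candidates,
                        passes_quality_filter, is_verifiable,
                        target=20, batch_size=10, max_rounds=50):
    """Repeat batched generation plus two-stage selection until ~`target`
    instructions survive. All three callables are hypothetical stand-ins
    for the GPT-4o prompting and human-in-the-loop checks in the paper."""
    accepted = []
    rounds = 0
    while len(accepted) < target and rounds < max_rounds:
        rounds += 1
        for cand in generate_candidates(task_template, n=batch_size):
            # Stage 1: drop ambiguous, unrealistic, or redundant candidates.
            if not passes_quality_filter(cand, accepted):
                continue
            # Stage 2: keep only candidates with a human-aligned evaluator.
            if is_verifiable(cand):
                accepted.append(cand)
    return accepted[:target]
```

Depending on how strict the filters are, anywhere from one to many rounds may run, which matches the reported 10–50% pass rate per batch.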
This systematic approach ensures that MMMG provides a comprehensive, fine-grained, and reliable evaluation framework for assessing multimodal generation capabilities. The detailed definitions and metadata of each task in MMMG can be found in Table 2.

3.2 Evaluation Method

We report the evaluation method used for each task in Table 2. For more details about implementation, please refer to Appendix C.3.

VLM. We employ vision language models (VLMs) for most reference-free image evaluation tasks. We do not use object detection or OCR models because VLMs demonstrate superior performance in out-of-domain scenarios. A common practice to boost VLM-as-a-judge is visual question answering (VQA), where models generate verification questions and answer them based on the images to determine whether the images follow the given instructions. However, we find that automatically generated question-answer pairs like those in TIFA [Hu et al., 2023] often misalign with human judgment on challenging tasks. Therefore, we manually design visual questions for each instruction based on the following principles, as shown in Figure 1(b):

• Chain-of-thought prompting significantly improves VLM performance on boolean questions. Specifically, instructing models not to output yes/no at the beginning of their responses substantially reduces hallucination, echoing findings in Zhang et al. [2024].

• Multiple-choice format can boost VLM performance on object counting and spatial relationship reasoning. We hypothesize that multiple-choice questions effectively reduce the output space, thereby simplifying these tasks. For example, including an option like "E. More than 6" in object counting questions can prevent miscounting
errors in scenarios with numerous objects.

• Adding negative prompts helps alleviate visual hallucination. For instance, VLMs can easily overlook a constraint such as "one basketball with a cube shape," whereas "one basketball with a cube shape instead of a sphere" forces the VLM to reject a spherical basketball.

Image Similarity. For reference-based image evaluation tasks requiring perceptual similarity, we employ DreamSim [Fu et al., 2023]. When exact matching is necessary, we use SSIM [Wang et al., 2004]. For image editing tasks, we implement a dual approach: DreamSim/VQA evaluates the edited region, while SSIM assesses the unmodified areas outside it, ensuring that local editing instructions are precisely followed, as shown in Figure 1(e).

Audio Similarity. Research indicates that current audio language models (ALMs) cannot reliably analyze sound or music clips [Sakshi et al., 2025]. Therefore, we select ESC-50 [Piczak, 2015] and OpenMIC-2018 [Humphrey et al., 2018] as reference datasets for sound and music evaluation, and compute CLAP cosine similarity [Wu et al., 2023] with reference audio, as shown in Figure 1(c).

Audio Model. For specialized audio analysis, we employ several targeted models. WavLM [Chen et al., 2022] is used for speaker similarity verification with an empirically optimal threshold of 0.86. For speech transcription, we use Whisper [Radford et al., 2023], as shown in Figure 1(d). Gender classification in speech leverages a finetuned wav2vec checkpoint [Fiury, 2023]. For music tempo computation, we employ BeatThis [Foscarin et al., 2024] for beat tracking, and the resulting beat statistics are used to compute the tempo.

Program. For programmatic verification, we utilize PIL for image analysis, as shown in Figure 1(a), and Librosa [McFee et al., 2015] and Praat [Boersma and Van Heuven, 2001] for audio pitch, intensity, and speed analysis. For textual constraint verification, we follow the implementation of IF-Eval [Zhou et al., 2023].
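As a concrete illustration of such a programmatic check, a border-fill verdict can be computed directly from pixel values. The sketch below is a hypothetical, dependency-free version of the idea (the paper's actual verifier uses PIL plus SSIM): `check_border_fill`, the border width, and the target color are illustrative assumptions, with pixels given as a row-major grid of RGB tuples like those produced by PIL's `Image.getdata()`.

```python
def check_border_fill(pixels, border, color):
    """Return True iff every pixel within `border` px of the image edge
    equals `color`. `pixels` is a row-major 2D list of RGB tuples.
    A hypothetical sketch of a Program-style verifier, not MMMG's code."""
    height = len(pixels)
    width = len(pixels[0])
    for y in range(height):
        for x in range(width):
            on_border = (x < border or y < border or
                         x >= width - border or y >= height - border)
            if on_border and pixels[y][x] != color:
                return False  # a non-matching pixel breaks the solid border
    return True
```

A check like this is deterministic and cheap, which is exactly why the Border Fill and Region Fill subtasks can rely on Program + SSIM instead of a VLM judge.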
We use word accuracy (WAcc) to evaluate textual similarity for visual text rendering and text-to-speech tasks, which require exact matching.

Scoring. Each generation receives either a similarity score or a binary classification of whether all requirements in the instruction are correctly rendered, following the setup of previous work [Ghosh et al., 2023]. We convert binary classifications to numerical scores (0.0 for incorrect, 1.0 for correct) and average all generation scores within each task to obtain task-level scores, then macro-average all task scores to get the final accuracy score for the multimodal generation model.

4 Experiment Settings

Generation. We evaluate 24 multimodal generation models specified in Appendix C.1. Following the experimental setup in Ghosh et al. [2023], we sample 4 generations for every instruction in our benchmark. We employ a temperature of 0 and a retry count of 4 for MLMs, and 200 sampling steps for diffusion models. We keep other parameters, such as guidance scale, at their default values.

Evaluation. We compare several evaluation methods. For image generation, we include GPT-4o, Gemini 2.5, and Qwen2.5-VL [Bai et al., 2025] to perform VQA for evaluation. CLIPScore [Hessel et al., 2021] was found to be less aligned with human judgment in previous studies [Hu et al., 2023] and is thus not included. For sound and music evaluation, we include
CLAPScore_audio, CLAPScore_text, and Gemini 2.5 performing acoustic question answering (AQA). CLAPScore_audio computes the CLAP cosine similarity with reference audio, while CLAPScore_text computes the similarity with reference audio captions. Following the optimal configurations identified in empirical studies, we calculate the average CLAPScore_audio over the 10 most similar reference audio samples. The threshold is 0.68 for ESC-50 and 0.62 for OpenMIC-2018.

5 Results and Analysis

In this section, we first report our human alignment experiment results in §5.1, and then the benchmarking results evaluated by the most human-aligned metrics in §5.2. We also report the correlation between MMMG and a real-world human preference leaderboard in §5.3.

Figure 3: Two prevalent failure cases observed in interleaved image-text generation tasks for Gemini Image: (1) models fail to accurately interpret the order of images in interleaved inputs; and (2) models frequently blend multiple images together, possibly due to limitations in encoding multiple images with continuous latent image representations.

5.1 Alignment with Human Judges

We conduct human evaluations on 674 instructions evaluated by models. For each instruction, we randomly select two models from all models evaluated on this instruction and obtain one generation per model. Each generation is evaluated by two independent annotators, randomly selected from our pool of 20 graduate student annotators. To standardize the evaluation process and reduce subjective bias, we design specific multiple-choice questions for each instruction, exemplified in Appendix C.5, thereby constraining annotators' responses to a fixed set of choices and ensuring high inter-annotator agreement. In cases of disagreement, a third annotator determines the final annotation. In total, our human studies involve 1886 evaluation questions and collect 3812 annotations.
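The aggregation just described can be sketched as follows. All function names here are hypothetical; the sketch only illustrates the tie-breaking rule for annotators and the "best method per task, then macro-average" computation behind the reported agreement numbers.

```python
def gold_label(ann1, ann2, ann3=None):
    """Two annotators per item; a third annotator breaks disagreements."""
    if ann1 == ann2:
        return ann1
    if ann3 is None:
        raise ValueError("disagreement requires a third annotation")
    return ann3  # the third annotator determines the final label

def agreement(labels_a, labels_b):
    """Fraction of items on which two label sequences match."""
    pairs = list(zip(labels_a, labels_b))
    return sum(a == b for a, b in pairs) / len(pairs)

def best_agreement_per_task(human_by_task, methods_by_task):
    """For each task, keep the evaluation method with the highest human
    agreement, then macro-average those per-task maxima across tasks."""
    best = [max(agreement(human, preds)
                for preds in methods_by_task[task].values())
            for task, human in human_by_task.items()]
    return sum(best) / len(best)
```

Note that selecting the best evaluator per task before averaging is an optimistic aggregation: it reports what the benchmark achieves when each task is paired with its most human-aligned judge.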
For verifiable instructions, human alignment validation is unnecessary, as these tasks are designed for objective programmatic verification. Human-model and inter-human agreement measures can be found in Table 8. MMMG demonstrates high human alignment: the average best human-model agreement for image, audio, interleaved image-text, and interleaved audio-text is 0.948, 0.926, 0.956, and 0.910 respectively, calculated by selecting the method achieving the highest agreement per task and averaging across tasks. The average inter-annotator agreement remains as high as 0.971, with the worst case being 0.917. MMMG also significantly outperforms the alignment of the best previous benchmarks: agreement on image generation surpasses GenEval (0.830) by 14.2%, and Pearson correlation on interleaved image-text generation surpasses ISG-Bench (0.718) by 28.1%. Experiments show that while GPT-4o remains the most human-aligned image evaluation model with an average agreement of 0.941, Gemini 2.5 shows superior performance on spatial relationship and editing evaluation. Open-source models like Qwen2.5-VL still have a significant gap with proprietary models. For audio evaluation, even though CLAPScore_text yields a satisfactory agreement of 0.926, it relies heavily on the quality of reference audio, making out-of-domain audio evaluation challenging.

5.2 Benchmarking Results

We benchmark models with the most aligned evaluation methods for each task. Selected model performances are illustrated in Figure 2, with complete evaluation results provided in Appendix D.2.

Image Generation. ARMs outperform diffusion models on image generation tasks, with GPT Image and Gemini Image achieving accuracies of 0.783 and 0.641 respectively, ranking 1st
and 3rd. This indicates that ARMs with stronger linguistic capabilities can better follow instructions. However, models struggle notably when generating objects with uncommon attributes and producing pairs of objects with unusual relationships, showing average accuracies of only 0.389 and 0.422 respectively. This underscores the vulnerability of image generation models to out-of-domain instructions.

Interleaved Image-Text Generation. Interleaved image-text generation poses considerable challenges, with the best-performing combination (Gemini 2.5 + GPT Image) achieving limited accuracies of 0.131 on math and coding reasoning, 0.341 on 3D scene transformations, and 0.484 on interleaved image editing. Additionally, modality-unified autoregressive models such as Anole and Gemini Image struggle to understand interleaved input and tend to produce tangled outputs, highlighting their limitations compared to agent-based models, as shown in Figure 3.

Sound and Music Generation. Current audio generation models exhibit significant reasoning limitations, achieving low average accuracies across tested models: 0.193 for instrument exclusion and 0.175 for sound reasoning. Volume-related tasks also demonstrate poor performance, with silence generation and intensity control reaching average accuracies of merely 0.048 and 0.085, respectively. Only Make-An-Audio 2, leveraging large language models (LLMs) for instruction parsing, shows competence in sound reasoning, while MusicGen effectively manages tempo control. Audio generation models remain domain-constrained; only Stable Audio and AudioLDM 2 can handle both sound and music generation tasks.

Speech and Interleaved Speech-Text Generation. The sole inherently interleaved speech-text model, SpiRit-LM, fails entirely to follow speech generation instructions, showing zero accuracy on most tasks.
Agent-based models also exhibit difficulties on tasks that require simultaneous speech understanding and generation, with an average accuracy of 0.275 for speech editing.

5.3 Correlation with Real-World Leaderboard

| Model | Arena | GenEval | Draw | GenAI | MMMG |
|---|---|---|---|---|---|
| Imagen 3 | 1087 | 0.707 | 0.831 | 0.793 | 0.510 |
| Recraft V3 | 1009 | 0.732 | 0.826 | 0.817 | 0.489 |
| Luma Photon | 1021 | 0.738 | 0.766 | 0.804 | 0.646 |
| FLUX 1.1 Pro | 1000 | 0.588 | 0.725 | 0.736 | 0.494 |
| Ideogram 2 | 1019 | 0.615 | 0.757 | 0.782 | 0.557 |
| DALL-E 3 | 978 | 0.627 | 0.809 | 0.811 | 0.376 |
| SD 3.5 | 919 | 0.591 | 0.711 | 0.715 | 0.335 |
| Pearson | – | 0.592 | 0.633 | 0.554 | 0.673 |
| Spearman | – | 0.607 | 0.607 | 0.286 | 0.857 |

Table 3: Correlation of automated image generation benchmarks with Chatbot Arena. Arena, Draw, and GenAI denote Chatbot Arena, DrawBench, and GenAI-Bench. MMMG achieves the highest correlation with Chatbot Arena, indicating that even though our instructions are synthetic, the evaluation results remain highly human-aligned.

We compare the correlation of the MMMG score with the Chatbot Arena [Chiang et al., 2024] score on the text-to-image task. We take the Arena Score for 7 image generation models under the "User Prompts Only" category as a gold reference. We report the Pearson correlation and Spearman's rank correlation coefficient between gold arena scores and scores produced by evaluating on different benchmarks in Table 3. We compare with GenEval, DrawBench, and GenAI-Bench. We employ VQAScore [Lin et al., 2024] to replace human evaluation on DrawBench and GenAI-Bench; due to budgetary limitations, we randomly sample 400 out of 1600 instructions for GenAI-Bench. MMMG provides reliable model rankings with a Spearman correlation coefficient of 0.857, significantly outperforming baseline benchmarks. This indicates that even though synthetic
instructions may not fully align with real-world queries, MMMG achieves higher alignment with human preferences. Such results suggest that evaluator alignment (i.e., the reliability of the evaluation method) may outweigh instruction distribution alignment (i.e., the extent to which benchmark tasks reflect real-world task distributions) for accurate model assessment. Moreover, MMMG demonstrates superior differentiation capabilities among evaluated models. The performance gap of 0.318 between the highest- and lowest-ranked models is much larger than that of the next-best baseline (GenEval), which spans only 0.147. This larger range underscores MMMG's enhanced ability to distinguish among models, particularly for differentiating performance among top-tier models. Due to the lack of real-world human preference leaderboards like Chatbot Arena for other modalities, we leave human preference correlation studies for other modalities as future work.

6 Conclusion

In this work, we introduce MMMG, a comprehensive automated evaluation suite for multitask multimodal generation, addressing critical limitations of existing benchmarks. We collect 937 high-quality instructions spanning 49 diverse tasks involving text, image, audio, and interleaved content. Extensive human validation demonstrates that MMMG correlates better with human judgments than previous benchmarks. Benchmarking results highlight ongoing challenges in multimodal reasoning, interleaved generation, and audio generation. The fine-grained nature of MMMG enables detailed capability analysis, providing valuable insights for targeted multimodal improvements. Beyond serving as a leaderboard, we hope MMMG inspires scalable collection of verifiable validation signals for future multimodal generation training. Given the page limit, we refer readers to Appendix A for limitations and social impacts discussion.

References

Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, and James Zou.
Gradio: Hassle-free sharing and testing of ml models in the wild. arXiv preprint arXiv:1906.02569, 2019. Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023. Ideogram AI. Ideogram 2, 2024a. URL https://ideogram.ai/. Accessed April 27, 2025. Luma AI. Luma photon, 2024b. URL https://lumalabs.ai/photon. Accessed April 27, 2025. Recraft AI. Recraft v3 model, 2024c. URL https://www.recraft.ai/. Accessed April 27, 2025. Rosana Ardila, Megan Branson, Kelly Davis, Michael Kohler, Josh Meyer, Michael Henretty, Reuben Morais, Lindsay Saunders, Francis Tyers, and Gregor Weber. Common voice: A massively-multilingual speech corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4218–4222, 2020. Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923, 2025. Jason Baldridge, Jakob Bauer, Mukul Bhutani, Nicole Brichtova, Andrew Bunner, Lluis Castrejon, Kelvin Chan, Yichang Chen, Sander Dieleman, Yuqing Du, et al. Imagen 3. arXiv preprint arXiv:2408.07009, 2024. James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dall-e-3.pdf, 2(3):8, 2023. Paul Boersma and Vincent Van Heuven. Speak and unspeak with praat. Glot International, 5(9/10): 341–347, 2001. Huanqia Cai, Yijun Yang, and Winston Hu.
Mm-iq: Benchmarking human-like abstraction and reasoning in multimodal models. arXiv preprint arXiv:2502.00698, 2025. Dongping Chen, Ruoxi Chen, Shilin Zhang, Yaochen Wang, Yinuo Liu, Huichi Zhou, Qihui Zhang, Yao Wan, Pan Zhou, and Lichao Sun. Mllm-as-a-judge: Assessing multimodal llm-as-a-judge with vision-language benchmark. In Forty-first International Conference on Machine Learning, 2024a. Dongping Chen, Ruoxi Chen, Shu Pu, Zhaoyi Liu, Yanru Wu, Caixi Chen, Benlin Liu, Yue Huang, Yao Wan, Pan Zhou, and Ranjay Krishna. Interleaved scene graphs for interleaved text-and-image generation assessment. In The Thirteenth International Conference on Learning Representations, 2025a. Liang Chen, Shuai Bai, Wenhao Chai, Weichu Xie, Haozhe Zhao, Leon Vinci, Junyang Lin, and Baobao Chang. Multimodal representation alignment for image generation: Text-image interleaved control is easier than you think. arXiv preprint arXiv:2502.20172, 2025b. Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16(6): 1505–1518, 2022. Wei Chen, Lin Li, Yongqi Yang, Bin Wen, Fan Yang, Tingting Gao, Yu Wu, and Long Chen. Comm: A coherent interleaved image-text dataset for multimodal understanding and generation. arXiv preprint arXiv:2406.10462, 2024b. Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan. Janus-pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025c. Ethan Chern, Jiadi Su, Yan Ma, and Pengfei Liu. Anole: An open, autoregressive, native large multimodal models for interleaved image-text generation. arXiv preprint arXiv:2407.06135, 2024.
Wei-Lin Chiang, Lianmin Zheng, Ying Sheng, Anastasios Nikolas Angelopoulos, Tianle Li, Dacheng Li, Banghua Zhu, Hao Zhang, Michael Jordan, Joseph E Gonzalez, et al. Chatbot arena: An open platform for evaluating llms by human preference. In Forty-first International Conference on Machine Learning, 2024. Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, et al. Qwen2-audio technical report. arXiv preprint arXiv:2407.10759, 2024. Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre Défossez. Simple and controllable music generation. Advances in Neural Information Processing Systems, 36:47704–47720, 2023. Zach Evans, Julian D Parker, CJ Carr, Zack Zukowski, Josiah Taylor, and Jordi Pons. Stable audio open. In ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE, 2025. Alexandre Fiury. wav2vec2-large-xlsr-53-gender-recognition-librispeech. https://huggingface.co/alefiury/wav2vec2-large-xlsr-53-gender-recognition-librispeech, 2023. Francesco Foscarin, Jan Schlüter, and Gerhard Widmer. Beat this! accurate beat tracking without DBN postprocessing. In Proceedings of the 25th International Society for Music Information Retrieval Conference (ISMIR), San Francisco, CA, United States, November 2024. Stephanie Fu, Netanel Tamir, Shobhita Sundaram, Lucy Chai, Richard Zhang, Tali Dekel, and Phillip Isola. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. Advances in Neural Information Processing Systems, 36:50742–50768, 2023. Yuying Ge, Sijie Zhao, Ziyun Zeng, Yixiao Ge, Chen Li, Xintao Wang, and Ying Shan. Making LLaMA SEE and draw with SEED tokenizer. In The Twelfth International Conference on Learning
Representations, 2024. Dhruba Ghosh, Hannaneh Hajishirzi, and Ludwig Schmidt. Geneval: An object-focused framework for evaluating text-to-image alignment. Advances in Neural Information Processing Systems, 36: 52132–52152, 2023. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7514–7528, 2021. Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. Tifa: Accurate and interpretable text-to-image faithfulness evaluation with question answering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20406–20417, 2023. Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, and Ranjay Krishna. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. Jiawei Huang, Yi Ren, Rongjie Huang, Dongchao Yang, Zhenhui Ye, Chen Zhang, Jinglin Liu, Xiang Yin, Zejun Ma, and Zhou Zhao. Make-an-audio 2: Temporal-enhanced text-to-audio generation. arXiv preprint arXiv:2305.18474, 2023. Ting-Hao Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, et al. Visual storytelling. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Human language technologies, pages 1233–1239, 2016. Yi Huang, Jiancheng Huang, Yifan Liu, Mingfu Yan, Jiaxi Lv, Jianzhuang Liu, Wei Xiong, He Zhang, Liangliang Cao, and Shifeng Chen. Diffusion model-based image editing: A survey. IEEE Transactions on Pattern Analysis & Machine Intelligence, (01):1–27, 2025. Eric Humphrey, Simon Durand, and Brian McFee.
Openmic-2018: An open data-set for multiple instrument recognition. In ISMIR, pages 438–444, 2018. Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024. Jiaming Ji, Jiayi Zhou, Hantao Lou, Boyuan Chen, Donghai Hong, Xuyao Wang, Wenqi Chen, Kaile Wang, Rui Pan, Jiahao Li, et al. Align anything: Training all-modality models to follow instructions with language feedback. arXiv preprint arXiv:2412.15838, 2024. Justin Johnson, Bharath Hariharan, Laurens Van Der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2901–2910, 2017. Zhifeng Kong, Sang-gil Lee, Deepanway Ghosal, Navonil Majumder, Ambuj Mehrish, Rafael Valle, Soujanya Poria, and Bryan Catanzaro. Improving text-to-audio models with synthetic captions. In Proc. SynData4GenAI 2024, pages 1–5, 2024. Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Défossez, Jade Copet, Devi Parikh, Yaniv Taigman, and Yossi Adi. Audiogen: Textually guided audio generation. arXiv preprint arXiv:2209.15352, 2022. Black Forest Labs. Flux. https://github.com/black-forest-labs/flux, 2024. Yeonghyeon Lee, Inmo Yeon, Juhan Nam, and Joon Son Chung. Voiceldm: Text-to-speech with environmental context. In ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 12566–12571. IEEE, 2024. Baiqi Li, Zhiqiu Lin, Deepak Pathak, Jiayao Emily Li, Xide Xia,
Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Genai-bench: A holistic benchmark for compositional text-to-visual generation. In Synthetic Data for Computer Vision Workshop@ CVPR 2024 , 2024. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer vision– ECCV 2014: 13th European conference, zurich, Switzerland, September 6-12, 2014, proceedings, part v 13 , pages 740–755. Springer, 2014. Zhiqiu Lin, Deepak Pathak, Baiqi Li, Jiayao Li, Xide Xia, Graham Neubig, Pengchuan Zhang, and Deva Ramanan. Evaluating text-to-visual generation with image-to-text generation. In European Conference on Computer Vision , pages 366–384. Springer, 2024. Cheng Liu, Hui Wang, Jinghua Zhao, Shiwan Zhao, Hui Bu, Xin Xu, Jiaming Zhou, Haoqin Sun, and Yong Qin. Musiceval: A generative music corpus with expert ratings for automatic text-to-music evaluation. arXiv preprint arXiv:2501.10811 , 2025. Haohe Liu, Yi Yuan, Xubo Liu, Xinhao Mei, Qiuqiang Kong, Qiao Tian, Yuping Wang, Wenwu Wang, Yuxuan Wang, and Mark D Plumbley. Audioldm 2: Learning holistic audio generation with self-supervised pretraining. IEEE/ACM Transactions on Audio, Speech, and Language Processing , 2024. Navonil Majumder, Chia-Yu Hung, Deepanway Ghosal, Wei-Ning Hsu, Rada Mihalcea, and Soujanya Poria. Tango 2: Aligning diffusion-based text-to-audio generations through direct preference optimization. In Proceedings of the 32nd ACM International Conference on Multimedia , pages 564–572, 2024. Brian McFee, Colin Raffel, Dawen Liang, Daniel PW Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. librosa: Audio and music signal analysis in python. SciPy , 2015:18–24, 2015. Ambuj Mehrish, Navonil Majumder, Rishabh Bharadwaj, Rada Mihalcea, and Soujanya Poria. A review of deep learning techniques for speech processing. Information Fusion , 99:101869, 2023. 
Yifei Ming, Ziyang Cai, Jiuxiang Gu, Yiyou Sun, Wei Li, and Yixuan Li. Delving into out-of-distribution detection with vision-language representations. Advances in neural information processing systems, 35:35087–35102, 2022. Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R Costa-Jussa, Maha Elbayad, Sravya Popuri, Christophe Ropers, Paul-Ambroise Duquenne, Robin Algayres, Ruslan Mavlyutov, et al. Spirit-lm: Interleaved spoken and written language model. Transactions of the Association for Computational Linguistics, 13:30–52, 2025. Jinjie Ni, Yifan Song, Deepanway Ghosal, Bo Li, David Junhao Zhang, Xiang Yue, Fuzhao Xue, Yuntian Deng, Zian Zheng, Kaichen Zhang, Mahir Shah, Kabir Jain, Yang You, and Michael Shieh. Mixeval-x: Any-to-any evaluations from real-world data mixture. In The Thirteenth International Conference on Learning Representations, 2025. OpenAI. Introducing 4o image generation, 2025. URL https://openai.com/index/introducing-4o-image-generation/. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210. IEEE, 2015. Karol J Piczak. Esc: Dataset for environmental sound classification. In Proceedings of the 23rd ACM international conference on Multimedia, pages 1015–1018, 2015. Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. Sdxl: Improving latent diffusion models for high-resolution image synthesis. In The Twelfth International Conference on Learning Representations, 2024. Shu Pu, Yaochen Wang, Dongping Chen, Yuhang Chen, Guohao Wang, Qi Qin, Zhongyi Zhang, Zhiyuan
Zhang, Zetong Zhou, Shuang Gong, et al. Judge anything: Mllm as a judge across any modality. arXiv preprint arXiv:2503.17489, 2025.

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International conference on machine learning, pages 28492–28518. PMLR, 2023.

Juan A Rodriguez, Shubham Agarwal, Issam H Laradji, Pau Rodriguez, David Vazquez, Christopher Pal, and Marco Pedersoli. Starvector: Generating scalable vector graphics code from images. arXiv preprint arXiv:2312.11556, 2023.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. Advances in neural information processing systems, 35:36479–36494, 2022.

S Sakshi, Utkarsh Tyagi, Sonal Kumar, Ashish Seth, Ramaneswaran Selvakumar, Oriol Nieto, Ramani Duraiswami, Sreyan Ghosh, and Dinesh Manocha. MMAU: A massive multi-task audio understanding and reasoning benchmark. In The Thirteenth International Conference on Learning Representations, 2025.

Shelly Sheynin, Adam Polyak, Uriel Singer, Yuval Kirstain, Amit Zohar, Oron Ashual, Devi Parikh, and Yaniv Taigman. Emu edit: Precise image editing via recognition and generation tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8871–8879, 2024.

Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models.
arXiv preprint arXiv:2312.11805, 2023.

Xinlong Wang, Xiaosong Zhang, Zhengxiong Luo, Quan Sun, Yufeng Cui, Jinsheng Wang, Fan Zhang, Yueze Wang, Zhen Li, Qiying Yu, et al. Emu3: Next-token prediction is all you need. arXiv preprint arXiv:2409.18869, 2024.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, 2023.

Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.

Bingbing Wen, Zhengyuan Yang, Jianfeng Wang, Zhe Gan, Bill Howe, and Lijuan Wang. Infovisdial: An informative visual dialogue dataset by bridging large multimodal and language models. arXiv preprint arXiv:2312.13503, 2023.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019.

Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, and Shlomo Dubnov. Large-scale contrastive language-audio pretraining with feature fusion and keyword-to-caption augmentation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE, 2023.

Peng Xia, Siwei Han, Shi Qiu, Yiyang Zhou, Zhaoyang Wang, Wenhao Zheng, Zhaorun Chen, Chenhang Cui, Mingyu Ding, Linjie Li,
et al. Mmie: Massive multimodal interleaved comprehension benchmark for large vision-language models. In Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning, 2025.

Jinheng Xie, Weijia Mao, Zechen Bai, David Junhao Zhang, Weihao Wang, Kevin Qinghong Lin, Yuchao Gu, Zhijie Chen, Zhenheng Yang, and Mike Zheng Shou. Show-o: One single transformer to unify multimodal understanding and generation. In The Thirteenth International Conference on Learning Representations, 2025a.

Zeyu Xie, Xuenan Xu, Zhizheng Wu, and Mengyue Wu. Audiotime: A temporally-aligned audio-text benchmark dataset. arXiv preprint arXiv:2407.02857, 2024.

Zeyu Xie, Xuenan Xu, Zhizheng Wu, and Mengyue Wu. Audiotime: A temporally-aligned audio-text benchmark dataset. In ICASSP 2025-2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE, 2025b.

Tianyi Xiong, Xiyao Wang, Dong Guo, Qinghao Ye, Haoqi Fan, Quanquan Gu, Heng Huang, and Chunyuan Li. Llava-critic: Learning to evaluate multimodal models. arXiv preprint arXiv:2410.02712, 2024.

Jin Xu, Zhifang Guo, Jinzheng He, Hangrui Hu, Ting He, Shuai Bai, Keqin Chen, Jialin Wang, Yang Fan, Kai Dang, et al. Qwen2.5-omni technical report. arXiv preprint arXiv:2503.20215, 2025.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, 2018.

Jihan Yao, Wenxuan Ding, Shangbin Feng, Lucy Lu Wang, and Yulia Tsvetkov. Varying shades of wrong: Aligning LLMs with wrong answers only. In The Thirteenth International Conference on Learning Representations, 2025.

Wenqian Ye, Guangtao Zheng, Xu Cao, Yunsheng Ma, and Aidong Zhang. Spurious correlations in machine learning: A survey. arXiv preprint arXiv:2402.12715, 2024.
Ruibin Yuan, Hanfeng Lin, Shuyue Guo, Ge Zhang, Jiahao Pan, Yongyi Zang, Haohe Liu, Yiming Liang, Wenye Ma, Xingjian Du, et al. Yue: Scaling open foundation models for long-form music generation. arXiv preprint arXiv:2503.08638, 2025.

Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. How language model hallucinations can snowball. In Proceedings of the 41st International Conference on Machine Learning, pages 59670–59684, 2024.

Jeffrey Zhou, Tianjian Lu, Swaroop Mishra, Siddhartha Brahma, Sujoy Basu, Yi Luan, Denny Zhou, and Le Hou. Instruction-following evaluation for large language models. arXiv preprint arXiv:2311.07911, 2023.

Yixuan Zhou, Xiaoyu Qin, Zeyu Jin, Shuoyi Zhou, Shun Lei, Songtao Zhou, Zhiyong Wu, and Jia Jia. Voxinstruct: Expressive human instruction-to-speech generation with unified multilingual codec language modelling. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 554–563, 2024.

A Limitations and Social Impacts

While MMMG constitutes a significant advancement in automated multimodal generation evaluation, we acknowledge several limitations inherent to our methodology and scope.

Limited Task Coverage. MMMG does not exhaustively cover all potential tasks within multimodal generation, particularly in the domains of interleaved image-text generation and sound/music generation. This limitation primarily arises from current inadequacies in available evaluation methods or models, which fail to yield sufficiently human-aligned results on numerous widely-used tasks. Such gaps in coverage may introduce biases into our model rankings, potentially misaligning evaluation results with actual
user experiences. To mitigate this, we intend to dynamically expand and update our benchmark tasks in real time as more powerful and reliable evaluation models become available. We also list tasks that we consider commonly used but abandoned due to infeasible evaluation in Appendix B.2.

Dependence on Proprietary Models. Our evaluation heavily relies on proprietary models (e.g., GPT-4o, Gemini 2.5). The substantial performance gap between proprietary and open-source models makes this reliance necessary for achieving highly accurate and human-aligned evaluations across diverse tasks. Unfortunately, current open-source alternatives often lack sufficient accuracy on certain complex tasks, rendering them unsuitable as reliable evaluators. Consequently, this dependence limits broad reproducibility and access within the academic community, highlighting the urgent need for improved and accessible open-source evaluation models.

B Detailed Dataset Information

B.1 Data Source

•Object Reasoning. We sample from HotpotQA [Yang et al., 2018] through the official website. We take QA pairs whose answers are individual objects that can be directly transformed into image generation instructions, or whose answers are nations that can be transformed into national-flag or national-animal generation instructions.

•Image Editing. We sample images from EmuEdit [Sheynin et al., 2024] through the "facebook/emu_edit_test_set" checkpoint on Huggingface [Wolf et al., 2019] for the object adding, removing, modifying, and text editing tasks. We modify the instructions to make sure they are clear and unambiguous. We also sample object images from COCO [Lin et al., 2014] through the official website and use Photoshop to combine them with the scene images in EmuEdit to form golden reference images.
We sample scene images from CLEVR [Johnson et al., 2017] through the official website for the interleaved color modifying task, since modifying the color of pure-colored geometric shapes is much less ambiguous than doing so for regular objects. We also use Photoshop to generate the golden reference images.

•3D Transformation. We sample instructions and golden reference images from ISG-Bench [Chen et al., 2025a] through the official website. We polish the instructions to make sure they are clear and unambiguous.

•Math. We sample images from MM-IQ [Cai et al., 2025] through the "huanqia/MM-IQ" checkpoint on Huggingface. We manually edit the images to transform the multiple-choice questions into free-form generation questions. Two annotators check that each free-form question has only one possible answer, without alternatives.

•Code. We sample SVG codes from StarVector [Rodriguez et al., 2023] through the "starvector/text2svg-stack" checkpoint on Huggingface and transform the original image-to-text instructions into interleaved reasoning instructions. We sample SVG codes with lengths between 1000–1500 characters to control difficulty.

•Sound Generation. We make sure all the target sounds fall in the 50 categories of ESC-50 [Piczak, 2015], through the "ashraq/esc50" checkpoint on Huggingface, so that CLAPScore audio has reference audios to compare with.

•Instrument Generation. We make sure all the target instruments fall in the 20 categories of OpenMIC-2018 [Humphrey et al., 2018], through the official website, so that CLAPScore audio has reference audios to compare with.

•Speech Replication. We sample speaker voices from the LibriSpeech [Panayotov et al., 2015] ASR corpus through the official website and
use them as reference speeches for the voice replication tasks.

The remaining tasks are generated from GPT-4o with manually designed templates.

B.2 Excluded Tasks

We present the remaining 27 tasks we considered from our initial task set in Table 4. We exclude the "Format Color", "Format Symmetric", "Speech Transcribing", and "Speech Encoding" tasks since they are not commonly seen in real user queries, and the "Image-to-Sound" and "Sound-to-Image" tasks because no models today can support these modality combinations. The other tasks are excluded because we could not find any reliable evaluation methods for them.

Table Generation. "Create a 2x2 table image. In the first column, place the text 'apple' in the top cell and 'pear' in the bottom cell. In the second column, place an image of an apple in the top cell and an image of a pear in the bottom cell." (T → I)
Figure Generation. "Create a histogram to visualize the given data. <data>" (T → I)
Format Color. "Create a watermelon farm using only varying shades of red." (T → I)
Format Symmetric. "Generate an image of a futuristic cityscape. The image must be axisymmetric along the vertical center line." (T → I)
Art Style. "Create a painting of a dandelion sea in Impressionist style." (T → I)
Photography. "Create a zoomed out photo of a small bag of coffee beans from below." (T → I)
Scene Editing. "Make the weather in <image_0> sunny." (T, I → I)
Attribute Editing. "Make the woman in <image_0> cry." (T, I → I)
Sound Count. "Generate an audio of exactly three door knocks." (T → A)
Sound Order. "Generate an audio of a can being opened followed by a sipping sound." (T → A)
Sound Duration. "Generate audio of a car horn lasting for 3 seconds." (T → A)
Speech Emotion. "Generate an audio of a woman sorrowfully saying, 'What a life.'" (T → A)
Speech Accent. "Generate an audio of a man speaking in Indian accent, 'What a beautiful day!'" (T → A)
Speech Background. "Generate an audio of a man speaking in noisy train station distantly, 'I am really busy.'" (T → A)
Speech Stress. "Generate an audio of a man saying, 'Give me money now!' with stress on word 'now'." (T → A)
Music Genre. "Create a light 80-90s country music." (T → A)
Music Emotion. "Generate a vibrant, pulsating disco drum track." (T → A)
Music Lyrics. "Create a flute melody with the lyrics, <lyrics>." (T → A)
Singer Attribution. "Generate a jazz piece accompanied by lyrics '<lyrics>', featuring a tenor singer performing in Bel Canto style." (T → A)
Lyrics Editing. "Replace the lyrics in <audio_0> with <lyrics>, keeping the original melody unchanged." (T, A → A)
Transition Visualization. "Generate three images showing the transition process from <image_0> to <image_1>." (T, Is → Is)
Future Prediction. "Generate three images showing the future events after <image_0>." (T, I → Is)
Speech Translation. "Generate an English speech about sustainable development, and provide its Chinese transcript afterward." (T → T, A)
Speech Encoding. "Generate a speech about sustainable development, and provide the speech transcript encoded in Base64." (T → T, A)
Image-to-Sound. "Create a music predominately featuring the instrument shown in <image_0>." (T, I → A)
Sound-to-Image. "Draw an image showing the animal that is most likely to make the sound in <audio_0>." (T, A → I)

Table 4: Tasks that are not included in MMMG. Each entry lists the task, an example instruction, and its input → output modalities, where T denotes text, I image, Is multiple images, A audio, and As multiple audios. We hope to incorporate these tasks when reliable evaluation methods are available.

B.3 Dataset Statistics

We present important statistics of MMMG in Table 5.

Total number of modality combinations: 4
Total number of tasks: 49 (I : A : I-T : A-T = 14 : 12 : 20 : 3)
Total number of questions: 937 (I : A : I-T : A-T = 270 : 405 : 262 : 60)
Total number of images: 487
Total number of audios: 42
Average length of instructions: 242.5

Table 5: Statistics of MMMG. I, A, I-T, A-T stand for image, audio, interleaved image-text, and interleaved audio-text generation respectively.

B.4 Computation Statistics

The evaluation pipeline for MMMG requires at least a single NVIDIA A10 GPU for open-source models, and APIs from OpenAI and Gemini for proprietary models. In our experiments, we used a single NVIDIA A40 GPU. On average, the evaluation runtime for each task is approximately 4 minutes, incurring an API cost of about $1.1 for a sample size of 4. For the generation phase, runtime varies significantly depending on the model. The most time-consuming model tested is YUE, which runs on a single NVIDIA H100 GPU and takes around 3 hours on average to complete generation per task.

C Detailed Experiment Setup

C.1 Model Details

Generation. We employ 24 open and proprietary multimodal generation models from varying organizations. To encourage diversity, we only incorporate the latest model of each series. Even though our benchmark supports comprehensive and cross-modality evaluation, current multimodal generation models have very restricted output modalities.
Thus, we categorize these models by their supported output modalities into image, interleaved image-text, sound-music, and interleaved speech-text generation.

•Image Generation. We include GPT Image [OpenAI, 2025], through the "gpt-image-1" checkpoint on the OpenAI API; Imagen 3 [Baldridge et al., 2024], through the "imagen-3.0-generate-002" checkpoint on the Gemini API; Recraft V3 [AI, 2024c], through the "recraftv3" checkpoint on the Recraft API; Luma Photon [AI, 2024b], through the "luma/photon" checkpoint on the Replicate API; FLUX 1.1 Pro [Labs, 2024], through the "black-forest-labs/flux-1.1-pro" checkpoint on the Replicate API; Ideogram 2 [AI, 2024a], through the "ideogram-ai/ideogram-v2" checkpoint on the Replicate API; DALL-E 3 [Betker et al., 2023], through the "dall-e-3" checkpoint on the OpenAI API; and Stable Diffusion 3.5 [Rombach et al., 2022], through the "stabilityai/stable-diffusion-3.5-large" checkpoint on Huggingface.

•Interleaved Image-Text Generation. We include SEED-LLaMA [Ge et al., 2024], through the official implementation; Anole [Chern et al., 2024], through the official implementation on Github; and Gemini Image [Team et al., 2023], through the "imagen-3.0-generate-002" checkpoint on the Gemini API. We also implement three agent models, each composed of an MLM and an image generation model: Gemini 2.5 + Imagen 3, GPT-4o + GPT Image, and Gemini 2.5 + GPT Image. Gemini 2.5 is accessed through the "gemini-2.5-pro-preview-03-25" checkpoint on the Gemini API and GPT-4o through the "gpt-4o-2024-08-06" checkpoint on the OpenAI API.

•Sound and Music Generation. We include Stable Audio [Evans et al.,
2025], through the "stabilityai/stable-audio-open-1.0" checkpoint on Huggingface, and AudioLDM 2 [Liu et al., 2024], through the "cvssp/audioldm2-large" checkpoint on Huggingface, both capable of generating sound and music. We also include sound generation models: AudioGen [Kreuk et al., 2022], through the official implementation; Make-An-Audio 2 [Huang et al., 2023], through the official implementation; and Tango 2 [Majumder et al., 2024], through the "declare-lab/tango2-full" checkpoint on Huggingface. We also include music generation models: MusicGen [Copet et al., 2023], through the "facebook/musicgen-large" checkpoint on Huggingface; TangoMusic [Kong et al., 2024], through the "declare-lab/tango-music-af-ft-mc" checkpoint on Huggingface; and YUE [Yuan et al., 2025], through the official implementation.

•Interleaved Speech-Text Generation. We include Spirit LM [Nguyen et al., 2025], through the official implementation. We also implement two agent models, each composed of an MLM and a voice synthesizing model: Gemini 2.5 + VoxInstruct [Zhou et al., 2024] and Gemini 2.5 + VoiceLDM [Lee et al., 2024]. Both VoxInstruct and VoiceLDM are used through their official implementations.

Evaluation. We compare 3 VLMs: GPT-4o, through the "chatgpt-4o-latest" checkpoint on the OpenAI API; Gemini 2.5, through the "gemini-2.5-pro-preview-03-25" checkpoint on the Gemini API; and Qwen2.5-VL, through the "Qwen/Qwen2.5-VL-7B-Instruct" checkpoint on Huggingface. For audio models, we employ CLAP, through the "laion/clap-htsat-unfused" checkpoint on Huggingface; Whisper, through the "openai/whisper-large-v3" checkpoint on Huggingface and a finetuned Chinese speech-to-text checkpoint "BELLE-2/Belle-whisper-large-v3-zh" on Huggingface; WavLM, through the "microsoft/wavlm-base-sv" checkpoint on Huggingface; and Wav2Vec, through the "alefiury/wav2vec2-large-xlsr-53-gender-recognition-librispeech" checkpoint on Huggingface.
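At its core, the CLAPScore used by the audio evaluators reduces to a cosine similarity between a text embedding and an audio embedding. Below is a minimal, hypothetical sketch of that final step, assuming the embeddings have already been extracted (e.g., with the "laion/clap-htsat-unfused" checkpoint); the function name and toy inputs are our own illustration, not the benchmark's actual code:

```python
import numpy as np

def clap_score(text_emb: np.ndarray, audio_emb: np.ndarray) -> float:
    """Cosine similarity between a text embedding and an audio embedding.

    Both inputs are 1-D embedding vectors; the score lies in [-1, 1],
    with higher values indicating a closer text-audio match.
    """
    text_emb = text_emb / np.linalg.norm(text_emb)
    audio_emb = audio_emb / np.linalg.norm(audio_emb)
    return float(np.dot(text_emb, audio_emb))

# Toy example with 3-dimensional stand-in "embeddings".
print(clap_score(np.array([1.0, 0.0, 0.0]),
                 np.array([1.0, 1.0, 0.0])))  # ≈ 0.707
```

In the reference-based setup described in Appendix B.1, the same similarity can also be computed between a generated audio embedding and a reference audio embedding from ESC-50 or OpenMIC-2018.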
C.2 Generation Details

For non-agent models, we directly provide instructions to the model. For agent-based models, we prepend a system prompt to the instructions. This system prompt explicitly instructs the model to generate outputs following a structured, function-call-based approach. When the model needs visual or auditory outputs, it generates placeholders formatted as function calls within the text. Each placeholder specifies the generation instructions and any necessary references to prior outputs or to multimedia provided in the user's instructions. For each placeholder, we extract the function call, which is then fed into a specialized image or audio generation model. To correctly handle references to previously generated media, we employ topological sorting. This ensures media outputs are generated in an order that respects their dependencies, while circular dependencies are identified and reported as errors. The detailed system prompt for the interleaved image-text agent is in Table 6 and for the interleaved audio-text agent in Table 7.

C.3 Evaluation Details

Prompts for VLMs

•Object Count. "How many [object] are there in the given image? Choose from the options: A. Less than 3 or the image is blank B. 3 C. 4 D. 5 E. 6 F. More than 6. Respond only with the option letter (A, B, C, D, E or F). Do not provide any explanation, reasoning, or additional information." Multiple-choice questions can boost VLM performance on object count tasks. We employ this prompt for the object count and self count tasks.

•Absolute Spatial Relationship. "The [object] is located in which section of the image? Choose from the options: A. bottom left B. bottom right C. up left D. up right E. none of the above (positioned in
a more central way) Explain step by step and end your answer with Answer: [only an optional letter]." Multiple-choice questions can boost VLM performance on spatial reasoning tasks. We employ this prompt for the absolute spatial relationship and self absolute spatial relationship recognizing tasks.

•Left-Right Spatial Relationship. "Looking at the 2D composition of the image, what is the horizontal alignment relationship between the [object1] and the [object2]? Choose from the options: A. the [object1] is obviously to the left of the [object2]. B. the [object1] is obviously to the right of the [object2]. C. the [object1] is neither obviously to the right nor left of the [object2]. Explain step by step and end your answer with Answer: [only an optional letter]." VLMs tend to be confused by perspective relationships, so we ask them to focus on the 2D composition. We employ this prompt for the relative spatial relationship and self relative spatial relationship recognizing tasks.

•Up-Down Spatial Relationship. "Looking at the 2D composition of the image, what is the vertical alignment relationship between the [object1] and the [object2]? Choose from the options: A. the [object1] is obviously positioned higher than the [object2]. B. the [object1] is obviously positioned lower than the [object2]. C. the [object1] is neither obviously positioned higher nor lower than the [object2]. Explain step by step and end your answer with Answer: [only an optional letter]." We employ this prompt for the relative spatial relationship and self relative spatial relationship recognizing tasks.

•OCR English. "### Instruction: Recognize all the major texts (ignore small texts on the edge) ONLY on [object]. Only recognize texts in Latin alphabet characters (a-z, A-Z). Do not correct the text if it is misspelled, nonsense or wrong, output the most direct recognition result. Do not call any function.
### Output format: Output an executable Python list of all recognized texts from top to down, from left to right, e.g. ["Hello World", "Good morning"]. Output an empty list if there is no text on [object] or the image is blank." We employ this prompt for the single and double text rendering and self OCR tasks.

•OCR Chinese. "### Instruction: You are a conservative text recognition model. Your task is to recognize all the major Chinese characters in the given image. If the Chinese characters in the image are wrongly written or distorted, you should return an empty string. Do not call any function. ### Output format: Only a string of all recognized characters from top to down, from left to right. Do not add quotations." We employ this prompt for the multi-lingual text rendering task. Since VLMs tend to recognize Chinese characters incorrectly or identify fake characters, we employ two separate VLMs and use the intersection of their recognition results to improve accuracy.

You are a multimodal assistant capable of generating both text and images. When visual content would enhance your response or is specifically requested, you can generate or edit images through advanced diffusion models. To generate or edit an image:
1. Identify when visual content would be beneficial or requested.
2. Insert an image generation/editing placeholder using the following format: <image_start><image_prompt="Detailed image generation or editing prompt here."><image_ref=[reference identifiers]><image_end>
3. The post-processing system replaces this placeholder with an image created or edited based on your instructions.
4. Naturally incorporate references to the generated or edited image in your ongoing conversation.
When crafting image prompts, follow these guidelines:
For image prompts:
• Provide detailed, specific descriptions (15-30 words) for optimal results.
• Include artistic styles (photorealistic, cartoon, watercolor, etc.) or style transfers.
• Specify key objects and their attributes (colors, textures, etc.), or modifications.
• Detail composition elements (spatial relationships, perspective, lighting, etc.), or compositional changes.
• Ensure instructions are clear and concise.
For image references: Three reference types are available:
1. Image generation (no reference): <image_ref=[]>
2. Editing user-provided images: Format: <image_ref=[i]> where i is the index of the provided image (indices starting at 0). Example: <image_ref=[0]> references the first provided image. Multiple images example: <image_ref=[0,2]> references the first and third provided images.
3. Editing previously generated images: Format: <image_ref=[#N]>, where N is the sequential number of previously generated images (starting from 0). Example: <image_ref=[#3]> references the fourth generated image. Multiple images example: <image_ref=[#0,#2]> references the first and third generated images.
Important: Use only one reference type within each placeholder. Different reference types may be used across multiple placeholders.
Provide concise and direct responses following user instructions precisely. Always maintain the exact placeholder format for proper parsing, ensuring that both images and text appear in the required order. Do not omit any necessary text following image placeholders.

Table 6: System prompt for interleaved image-text agent.
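The two-VLM intersection step used for Chinese OCR can be sketched as follows. This is an illustrative sketch under our own assumptions; the function name and the matching policy (keep a character only as many times as both models report it, in the first model's reading order) are hypothetical, not the benchmark's actual code:

```python
from collections import Counter

def intersect_recognitions(result_a: str, result_b: str) -> str:
    """Keep only the characters both VLMs recognized.

    Characters are kept in the order produced by the first model; each
    character from the second model can be matched at most once, so a
    repeated character survives only as many times as both models agree.
    """
    remaining = Counter(result_b)   # multiset of model B's characters
    agreed = []
    for ch in result_a:
        if remaining[ch] > 0:       # both models saw this character
            agreed.append(ch)
            remaining[ch] -= 1
    return "".join(agreed)

# Example: the two models disagree only on the last character.
print(intersect_recognitions("天气很好", "天气很坏"))  # → 天气很
```

Because fake or distorted characters rarely produce the same wrong reading in two different VLMs, the intersection acts as a conservative filter: a character is credited only when both models independently confirm it.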
You are a multimodal assistant capable of generating both text and audio. When audio content would enhance your response or is specifically requested, you can generate audio through text-to-audio models. To generate audio:
1. Identify when audio content would be beneficial or requested.
2. Insert an audio generation placeholder using the format: <audio_start><audio_type="sound" OR "speech" OR "music"><audio_text="Text to be spoken here."><audio_style="Descriptive text here." OR audio reference ID><audio_end>
3. The post-processing system replaces this placeholder with generated audio based on your specifications.
4. Naturally incorporate references to the generated audio in your ongoing conversation.
When crafting audio prompts, follow these guidelines:
Audio Type:
• Must be exactly one of: "sound", "speech", or "music".
• "speech": For human speech.
• "sound": For environmental sounds or effects.
• "music": For musical compositions or instrumental pieces.
Audio Text:
• For "speech": Provide the exact transcript.
• For "sound" or "music": Leave as empty string ("").
• Keep speech concise (typically under 50 words).
Audio Style:
1. Descriptive Text:
• For "speech": Specify voice characteristics (gender, emotion, pace, pitch, accent).
• For "sound": Specify sound source, environment, qualities.
• For "music": Specify genre, mood, tempo, instruments.
2. Reference Audio:
• For consistency, particularly with speech:
– Previously generated audio: <audio_style=#N> (N is sequential number starting at 0).
– User-provided audio: <audio_style=N> (N is sequential number of provided audio starting at 0).
• Important: Only reference audio that itself does not reference previous audio to avoid circular references.
Provide concise, direct responses precisely following user instructions. In multi-speaker scenarios, maintain consistent and distinctive voice characteristics for each speaker. Always maintain the exact placeholder format for correct parsing.

Table 7: System prompt for interleaved audio-text agent.

•Text Pattern Verifying (Math). "Below are two descriptions of the same geometric pattern, one is ground-truth and the other is model-generated. Your task is to judge if the generated description is accurate. Analyze step by step and end your answer with "Yes" or "No". Here are some criteria: 1. The model-generated pattern must state the pattern clearly without ambiguity. For example, a 3*3 grid of circles with some circles filled is ambiguous. 2. Make sure the overall structure, the position and situation of each element are accurate. Specifically, the situation of each element can include: filled (black, grey, filled with black or any equivalent words), unfilled (white, hollow, empty or any equivalent words), missing (the position is empty or missing). If the situation is not specified in the ground-truth, the element can take any situation of the right shape. 3. If the ground-truth describes a coordinate system, the x-axis will increase from left to right while the y-axis will increase from top to down. For example, for a 3*3 grid, the (3,2) coordinate is the middle-right element." We employ this prompt for the math task.

•Image Verifying (Math). "Your task is to judge if the given image accurately follows the ground-truth pattern. Analyze step by step and end your answer with "Yes" or "No". Here are some criteria: 1. Make sure the overall structure, the position and situation of each element are accurate. Specifically, the situation of each element can include: filled (black, grey, filled with black or any equivalents), unfilled (white, hollow, empty or any equivalents), missing (the position is empty, missing or any equivalents). If the situation is not specified in the ground-truth, the element can take any situation of the right shape. 2.
If the ground-truth describes a coordinate system, the x-axis will increase from left to right while the y-axis will increase from top to down. For example, for a 3*3 grid, the (3,2) coordinate is the middle-right element. 3. If the given image contains multiple patterns (e.g. multiple grids) or a question mark, the given image doesn't follow the ground-truth pattern." We employ this prompt for the math task.

•Object Existing. "Is/Are there [detailed object description] in the given image? Explain step by step and end your answer with "Yes" or "No". Answer "No" if the image is blank." We manually design a detailed object description for each instruction, including the object number, object attributes, undesired negative attributes, etc. We employ this prompt for all image tasks not mentioned above. For spatial relation tasks, we first examine whether the object number is accurate using the object existing prompt and then check the spatial relationship with the corresponding prompts.

Program Verifying

•Solid Color Fill. The evaluation procedure starts by cropping the targeted region from the image and calculating its average RGB value. The average RGB value is compared with a standard reference color; if the relative deviation exceeds 15%, indicating a significant color discrepancy, the evaluation returns zero. Next,
structural consistency is assessed by computing the SSIM between the targeted region and an artificially generated solid region filled with the calculated average RGB color, confirming color uniformity. Finally, the procedure examines over-fill by evaluating the margin area surrounding the targeted region and computing the proportion of pixels matching the region's average RGB color. This ratio is subtracted from the SSIM score as a penalty.

•Image Editing. The evaluation for image editing begins by manually labeling a potential editing area within each image. We then crop the edited area from the generated image and compare it against the corresponding area in a reference image, or assess it via a VLM. Additionally, regions outside this area are compared with the corresponding regions of the original image using SSIM to detect unintended changes. The final score is the product of these two comparisons, reflecting both editing accuracy and preservation of the original content.

•Sound Generation. For begin-end tasks, we clip the first or last 4 seconds of audio directly. For positional inclusion tasks, we crop the corresponding fraction of the audio. For silence detection tasks, we use the librosa.effects.split function to segment audio based on silence intervals and then verify that each section contains the target sound through CLAPScore audio.

•Music Generation. For tempo evaluation, we use BeatThis to extract beat tracks and calculate Beats Per Minute (BPM). For intensity evaluation, we analyze the initial and final 4 seconds of the music, plotting the energy curve through librosa.feature.rms and computing its slope and goodness of fit. Only audio segments demonstrating clear upward or downward trends in energy pass the intensity evaluation.

•Speech Generation. For pitch evaluation, we calculate the average energy of each pitch through parselmouth.Sound.to_pitch and select the pitch with the highest average energy through parselmouth.Sound.to_intensity as the speech pitch.
For speed evaluation, transcribe English audio using WHISPER and compute words per minute (WPM); for Chinese audio, compute characters per minute (CPM). For textual constraints, normalize transcripts using WHISPER’s tokenizer (removing punctuation, case sensitivity, etc.) and evaluate with the tools of IFEval.

C.4 Annotation Interface

We design task-specific annotation interfaces with Gradio [Abid et al., 2019], each including reference images or audio, the model’s generated outputs, judgment instructions, and judgment criteria. We preprocess some generated outputs to assist annotators in their judgments. For example, we provide cropped images within the editing area for image editing tasks and clipped audio segments at the beginning or end for audio begin-end tasks. Judgments are typically collected through multiple-choice radio buttons to ensure high inter-annotator agreement. However, for OCR tasks specifically, annotators type the recognized text directly. An example of an annotation interface is in Figure 4.

Figure 4: Human annotation interface for the instrument inclusion task. Typically, an interface includes reference audios/images, the model’s generation, evaluation instruction, evaluation criteria, judgment radio boxes, and next/previous buttons.

C.5 Annotation Questions

Instrument Inclusion. “What is the dominant instrument played in the given audio? Reminder: 1. Failed generation should be considered as none of the above. 2. Choose multiple labels only when you are unsure or the given audio clearly has different types of instruments. ” We employ this question for instrument inclusion and exclusion tasks.
Sound Inclusion. “Is the given audio about [sound]? Reminder: 1. Choose yes when [sound] is the main sound existing in the audio. 2. [sound] should be a common real-world sound without distortion. ” We employ this question for all sound generation tasks.

Speaker Similarity. “Are the speeches coming from the same speaker? Reminder: 1. Small differences in speaker voice can be tolerated, but overall, there should be no major difference. ” We employ this question for voice replication and conversation tasks.

Speaker Gender. “What is the gender of the speaker in the given speech? Reminder: 1. Choose none of the above when the voice sounds like an electronic synthesizer or it is hard to categorize into binary genders. 2. Do not consider speech quality (clarity, fluency, etc.) when judging gender. ” We employ this question for voice attribution and multi-lingual speech tasks.

D Experiment Results (Cont.)

D.1 Correlation with Human Annotation

We report the agreement and Pearson correlation of MMMG with human annotation per task in Table 8. We exclude DreamSim and Whisper as they are widely recognized as established “silver” standards [Huang et al., 2025, Mehrish et al., 2023].

D.2 Full Benchmarking Results

Evaluation results of 24 multimodal generation models on 49 tasks are listed in Table 9, Table 10, Table 11 and Table 12, categorized by modality. We report the following additional findings:

•Although image generation models generally maintain consistent rankings across tasks, certain models exhibit notable weaknesses in specific areas. For instance, IDEOGRAM 2 performs particularly poorly when tasked with including unrelated objects in a scene, whereas IMAGEN 3 struggles significantly with text rendering. These observations underscore the effectiveness of MMMG in pinpointing specific model weaknesses.
•When comparing different interleaved image-text agent models, GEMINI 2.5 demonstrates superior planning capabilities over GPT-4O, resulting in a 38.2% performance improvement with the image generator GPT IMAGE. Additionally, although GPT IMAGE generally outperforms IMAGEN 3, this advantage partly arises from IMAGEN 3’s inability to accurately perform image editing tasks.

•Unified understanding-generation models such as JANUS [Chen et al., 2025c] are excluded from our evaluation due to their requirement for manual modality selection, limiting their capability for automated, interleaved generation tasks. We also notice that models like ANOLE and SEED-LLAMA, trained only on individual image generation and image understanding tasks, cannot follow instructions at all for interleaved image-text input. This highlights the importance of collecting more comprehensive image-text interleaved datasets for training.

•The natural speech-text interleaved model SPIRIT LM rarely scores above zero on evaluated tasks, suggesting it lacks adequate instruction tuning and consequently struggles to follow instructions effectively. In comparison, VOXINSTRUCT significantly outperforms VOICELDM, achieving an 82.1% improvement, and thus demonstrates superior functionality as a multi-purpose speech synthesizer. Models like GPT-4O-AUDIO and QWEN 2.5-OMNI [Xu et al., 2025] do not support customized speaker voices and thus cannot be evaluated. Models like YUE, which are designed for text-to-song generation, may face challenges when required to generate pure music.

D.3 Analysis

Interleaved System Prompt. To investigate whether autoregressive models’ capabilities in generating the desired number and order of modalities can be
improved, we conducted experiments with GEMINI IMAGE using the planning system prompt detailed in Table 13. The experimental results, summarized in Table 14, indicate that incorporating system prompts emphasizing modality count and order does not consistently lead to positive outcomes. Generally, adding a system prompt negatively impacts image generation quality, as the models shift their focus away from optimizing visual quality. Conversely, image editing tasks benefit from the addition of system prompts since, without such prompts, models frequently generate multiple images unnecessarily. Nonetheless, system prompts do not effectively support generating sequential images or integrated image-text pairs, because models continue to intermix multiple images during generation, as illustrated in Figure 3.

Task | GPT-4O (agree corr) | GEMINI 2.5 (agree corr) | QWEN 2.5-VL (agree corr) | IAA (agree corr)
Object Inclusion | 0.975 0.912 | 0.950 0.804 | 0.750 0.514 | 1.000 1.000
Object Exclusion | 1.000 1.000 | 0.950 0.905 | 0.900 0.799 | 1.000 1.000
Object Count | 0.975 0.943 | 0.925 0.827 | 0.550 0.051 | 0.975 0.943
Object Reasoning | 1.000 1.000 | 1.000 1.000 | 0.950 0.905 | 1.000 1.000
Object Attribution | 0.950 0.882 | 0.900 0.722 | 0.700 0.144 | 1.000 1.000
Comparison Relation | 0.925 0.850 | 0.875 0.741 | 0.625 0.349 | 0.950 0.896
Universal Relation | 0.975 0.951 | 0.900 0.818 | 0.750 0.504 | 0.975 0.951
Relative Spatial | 0.925 0.819 | 0.825 0.640 | 0.825 0.605 | 0.950 0.875
Absolute Spatial | 0.825 0.825 | 0.925 0.839 | 0.550 0.252 | 0.983 0.960
Text Rendering (TR) | 0.991 0.991 | 0.992 1.000 | 0.945 0.787 | 1.000 1.000
Double TR | 0.841 0.906 | 0.646 0.662 | 0.566 0.595 | 0.938 0.938
Multi-lingual TR | 0.889 0.989 | 0.889 0.968 | 0.773 0.965 | 1.000 1.000
Semantic | 0.958 0.910 | 0.946 0.890 | 0.855 0.684 | 0.982 0.961
Composition | 0.971 0.930 | 0.942 0.847 | 0.855 0.647 | 0.978 0.944
Decomposition | 0.971 0.941 | 0.971 0.941 | 0.877 0.751 | 0.978 0.956
Text Editing | 0.928 1.000 | 0.908 1.000 | 0.840 0.794 | 0.950 0.950
Object Adding | 0.975 0.912 | 0.925 0.728 | 0.875 0.498 | 1.000 1.000
Object Removing | 0.975 0.933 | 0.975 0.933 | 0.975 0.928 | 1.000 1.000
Object Modifying | 0.925 0.819 | 0.975 0.941 | 0.900 0.749 | 0.925 0.819
Self Count | 0.975 0.950 | 0.950 0.899 | 0.575 0.130 | 1.000 1.000
Self Color | 0.950 0.881 | 0.950 0.883 | 0.808 0.592 | 0.983 0.960
Self Size | 0.892 0.788 | 0.867 0.735 | 0.558 0.233 | 0.967 0.933
Self OCR | 0.906 0.909 | 0.806 0.790 | 0.917 0.942 | 1.000 1.000
Self Relative Spatial | 0.838 0.669 | 0.950 0.896 | 0.788 0.552 | 0.963 0.923
Self Absolute Spatial | 0.913 0.821 | 0.950 0.897 | 0.725 0.526 | 0.975 0.948
Math | 0.950 0.436 | 1.000 1.000 | 0.688 -0.074 | 0.988 0.703
Code | 1.000 1.000 | 1.000 1.000 | 1.000 1.000 | 1.000 1.000
Average | 0.941 0.881 | 0.922 0.863 | 0.782 0.571 | 0.980 0.950

Task | CLAPScore_audio (agree corr) | CLAPScore_text (agree corr) | GEMINI 2.5 (agree corr) | IAA (agree corr)
Sound Begin-End | 0.925 0.951 | 0.825 0.687 | 0.625 0.204 | 0.967 0.933
Sound Inclusion | 0.850 0.711 | 0.800 0.564 | 0.650 0.207 | 0.925 0.856
Sound Reasoning | 0.944 0.817 | 0.861 0.534 | 0.639 0.439 | 0.917 0.720
Sound Silence | 0.975 0.946 | 0.975 0.946 | 0.950 0.690 | 1.000 1.000
Instrument Inclusion | 0.967 0.894 | 0.900 0.598 | 0.867 0.447 | 1.000 1.000
Instrument Exclusion | 0.893 0.663 | 0.214 0.053 | 0.821 0.438 | 0.929 0.782
Average | 0.926 0.831 | 0.763 0.563 | 0.759 0.404 | 0.956 0.882

Task | WavLM (agree corr) | Wav2Vec (agree corr) | IAA (agree corr)
Voice Attribution | - - | 0.949 0.826 | 0.950 0.844
Voice Replication | 0.875 0.731 | - - | 0.925 0.843
Speech Multi-lingual | - - | 0.966 0.876 | 0.925 0.856
Conversation | 0.850 0.630 | - - | 0.925 0.819
Average | 0.863 0.681 | 0.957 0.851 | 0.931 0.841

Table 8: Agreement and Pearson correlation of MMMG evaluation with human annotations. “IAA” stands for inter-annotator agreement, “agree” stands for agreement, and “corr” stands for Pearson correlation. We report Word Accuracy for text rendering, text editing, and OCR tasks. Best results are in bold. MMMG achieves an average best human agreement of 0.943, with the average inter-annotator agreement being 0.971. GPT-4O is the most human-aligned image evaluation model, while CLAPScore_audio is the most human-aligned audio evaluation method.

Variance Control. We present the 95% confidence intervals along with the average scores for each task in Table 9, Table 10, Table 11 and Table 12. A sample size of 4 can substantially reduce variance, with the maximum relative confidence interval being 16.3% and the average relative confidence interval being 5.2%.
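The per-task interval computation can be sketched as follows. This is a minimal stdlib sketch assuming a two-sided t-interval over n = 4 per-run average scores; the paper does not state the exact interval formula it uses:

```python
import statistics

def ci95_halfwidth(scores):
    """Half-width of a two-sided 95% t-interval for the mean of the scores.

    Assumes n == 4 runs (t critical value 3.182 for 3 degrees of freedom);
    results are then reported as mean ± half-width, e.g. 0.510 ± 0.014.
    """
    assert len(scores) == 4
    t_crit = 3.182  # t_{0.975, df=3}
    sem = statistics.stdev(scores) / len(scores) ** 0.5  # standard error
    return t_crit * sem

# Hypothetical per-run average scores for one model on one task group.
runs = [0.50, 0.52, 0.51, 0.51]
print(f"{statistics.mean(runs):.3f} ± {ci95_halfwidth(runs):.3f}")  # → 0.510 ± 0.013
```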
Task | IMAGEN 3, RECRAFT V3, LUMA PHOTON, FLUX 1.1 PRO, IDEOGRAM 2, DALLE 3, SD 3.5, SEED-LLAMA, ANOLE, GEMINI IMAGE, GPT IMAGE (scores in this order)
Object Inclusion | 0.888 0.888 0.825 0.413 0.925 0.800 0.525 0.225 0.163 0.875 0.938
Object Exclusion | 0.375 0.438 0.625 0.625 0.675 0.313 0.000 0.400 0.275 0.313 0.825
Object Count | 0.338 0.463 0.438 0.538 0.438 0.150 0.300 0.250 0.038 0.450 0.825
Object Reasoning | 0.488 0.488 0.850 0.525 0.475 0.525 0.175 0.175 0.100 0.825 0.613
Object Attribution | 0.463 0.263 0.388 0.275 0.300 0.388 0.225 0.163 0.163 0.475 0.725
Comparison Relation | 0.588 0.288 0.488 0.375 0.475 0.388 0.150 0.013 0.050 0.450 0.600
Universal Relation | 0.425 0.538 0.638 0.463 0.500 0.375 0.350 0.125 0.113 0.450 0.813
Relative Spatial Relation | 0.838 0.625 0.875 0.663 0.738 0.550 0.575 0.025 0.113 0.750 0.988
Absolute Spatial Relation | 0.488 0.388 0.700 0.488 0.450 0.225 0.338 0.025 0.013 0.700 0.675
Region Fill | 0.484 0.236 0.628 0.442 0.375 0.207 0.320 0.210 0.235 0.683 0.762
Border Fill | 0.279 0.353 0.528 0.349 0.273 0.350 0.267 0.275 0.217 0.450 0.651
Single Text Rendering | 0.827 0.994 0.936 0.901 0.995 0.661 0.811 0.000 0.031 0.997 1.000
Double Text Rendering | 0.313 0.422 0.686 0.528 0.701 0.215 0.325 0.001 0.000 0.745 0.763
Multi-lingual Text Rendering | 0.351 0.471 0.440 0.326 0.483 0.120 0.330 0.000 0.003 0.817 0.784
Average | 0.510±0.014 0.489±0.031 0.646±0.010 0.494±0.009 0.557±0.026 0.376±0.016 0.335±0.015 0.135±0.006 0.108±0.018 0.641±0.020 0.783±0.012

Table 9: Benchmarking Results of 11 models on 14 image generation tasks. We report 95% confidence intervals on average scores.
Best results are in bold. GPT-4O significantly outperforms other image generation models.

Task | SEED-LLAMA, ANOLE, GEMINI IMAGE, GEMINI 2.5 + IMAGEN 3, GPT-4O + GPT IMAGE, GEMINI 2.5 + GPT IMAGE (scores in this order)
Semantic Consistency | 0.000 0.000 0.013 0.600 0.613 0.763
Multi-Angle Consistency | 0.000 0.000 0.352 0.480 0.230 0.461
Multi-View Consistency | 0.000 0.000 0.143 0.169 0.064 0.221
Compose Consistency | 0.000 0.000 0.000 0.313 0.800 0.738
Decompose Consistency | 0.000 0.000 0.013 0.325 0.600 0.875
Interleaved Object Adding | 0.154 0.052 0.545 0.217 0.394 0.394
Interleaved Color Modifying | 0.179 0.033 0.609 0.359 0.566 0.573
Text Editing | 0.051 0.022 0.283 0.211 0.285 0.394
Object Adding | 0.165 0.190 0.748 0.469 0.470 0.631
Object Removing | 0.350 0.175 0.605 0.236 0.415 0.540
Object Modifying | 0.109 0.121 0.487 0.449 0.453 0.627
Self Count | 0.000 0.038 0.213 0.438 0.100 0.850
Self Color | 0.000 0.000 0.000 0.413 0.663 0.700
Self Size | 0.000 0.000 0.263 0.338 0.338 0.600
Self OCR | 0.000 0.000 0.101 0.626 0.312 0.958
Self Relative Spatial | 0.000 0.000 0.250 0.475 0.538 0.725
Self Absolute Spatial | 0.000 0.000 0.100 0.600 0.475 0.775
Interleaved Math | 0.000 0.000 0.000 0.013 0.025 0.038
Interleaved Code | 0.000 0.000 0.136 0.146 0.071 0.224
Image-Text Order | 0.150 0.100 0.725 0.738 0.913 0.925
Average | 0.058±0.002 0.037±0.004 0.279±0.012 0.381±0.011 0.416±0.007 0.601±0.011

Table 10: Benchmarking Results of 6 models on 20 image-text interleaved generation tasks. We report 95% confidence intervals on average scores. Best results are in bold. The agent model GEMINI 2.5 + GPT IMAGE is the best combination for consistent image sequence and coherent image-text pair generation. GEMINI IMAGE, as a modality-unified autoregressive model, performs best at image editing tasks.

Task | STABLE AUDIO, AUDIOLDM 2, AUDIOGEN, MAKE-AN-AUDIO 2, TANGO 2, MUSICGEN, TANGO MUSIC, YUE (scores in this order)
Sound Begin-End | 0.525 0.450 0.475 0.631 0.525 - - -
Sound Inclusion | 0.700 0.413 0.450 0.575 0.513 - - -
Sound Reasoning | 0.014 0.014 0.042 0.611 0.194 - - -
Sound Silence | 0.063 0.019 0.019 0.131 0.006 - - -
Instrument Inclusion | 0.817 0.833 - - - 0.833 0.950 0.600
Instrument Exclusion | 0.268 0.161 - - - 0.161 0.054 0.321
Music Tempo | 0.200 0.017 - - - 0.633 0.100 0.067
Music Intensity | 0.275 0.025 - - - 0.050 0.075 0.000
Average | 0.358±0.031 0.241±0.027 0.246±0.020 0.487±0.026 0.310±0.034 0.419±0.029 0.295±0.011 0.247±0.010

Table 11: Benchmarking Results of 8 models on 8 sound and music generation tasks. We report 95% confidence intervals on average scores. Best results are in bold. MAKE-AN-AUDIO 2 is the best audio generation model and the only model that can perform the sound reasoning task; MUSICGEN is the best music generation model and the only model that has tempo control.
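The BPM computation behind the tempo evaluation (Appendix C) can be sketched as follows. This is a minimal stdlib sketch that assumes beat timestamps have already been extracted (e.g. with BEAT THIS); the median inter-beat interval is an assumed robustness choice, not necessarily the paper's exact formula:

```python
def bpm_from_beats(beat_times):
    """Estimate beats per minute from a list of beat timestamps (seconds).

    Uses the median inter-beat interval so that a few missed or spurious
    beats do not skew the estimate.
    """
    intervals = sorted(b - a for a, b in zip(beat_times, beat_times[1:]))
    median = intervals[len(intervals) // 2]
    return 60.0 / median

# Beats every 0.5 s correspond to 120 BPM.
print(bpm_from_beats([0.0, 0.5, 1.0, 1.5, 2.0]))  # → 120.0
```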
Task | GEMINI 2.5 + VOXINSTRUCT, GEMINI 2.5 + VOICELDM, SPIRIT LM (scores in this order)
Voice Attribution | 0.684 0.567 0.000
Voice Replication | 0.625 0.109 0.002
Speech Multi-lingual | 0.654 - -
Transcript Generation | 0.638 0.438 0.200
Transcript Editing | 0.200 0.375 0.000
Conversation Generation | 0.788 0.375 0.000
Audio-Text Order Control | 0.750 0.725 0.000
Average | 0.620±0.038 0.427±0.008 0.034±0.000

Table 12: Benchmarking Results of 3 models on 7 speech-text interleaved generation tasks. We report 95% confidence intervals on average scores. Best results are in bold. The natural speech-text interleaved model SPIRIT LM does not have instruction-following capability and gets zero for most tasks. VOXINSTRUCT is the best multi-functional speech synthesizer.

You are a multimodal assistant capable of generating interleaved text and images based on user instructions.
•Follow the required modality structure and number in the user’s instruction exactly, especially when multiple images are implied or requested.
•Generate separate images for each described part; do not combine multiple concepts into one image unless told to.
•Interleave images and text in the order described.
Your goal is to match the user’s intent with the exact number and sequence of images and text.

Table 13: System prompt used to make GEMINI IMAGE output the correct modality
order and number.

Task | GEMINI IMAGE w/ prompt, GEMINI IMAGE w/o prompt (scores in this order)
Semantic Consistency | 0.263 0.013
Multi-Angle Consistency | 0.135 0.352
Multi-View Consistency | 0.094 0.143
Compose Consistency | 0.013 0.000
Decompose Consistency | 0.000 0.013
Interleaved Object Adding | 0.399 0.545
Interleaved Color Modifying | 0.486 0.609
Text Editing | 0.423 0.283
Object Adding | 0.622 0.748
Object Removing | 0.485 0.605
Object Modifying | 0.468 0.487
Self Count | 0.275 0.213
Self Color | 0.113 0.000
Self Size | 0.188 0.263
Self OCR | 0.335 0.101
Self Relative Spatial | 0.138 0.250
Self Absolute Spatial | 0.175 0.100
Interleaved Math | 0.000 0.000
Interleaved Code | 0.110 0.136
Image-Text Order | 0.725 0.725
Average | 0.273 0.279

Task | GEMINI IMAGE w/ prompt, GEMINI IMAGE w/o prompt (scores in this order)
Object Inclusion | 0.888 0.875
Object Exclusion | 0.400 0.313
Object Count | 0.500 0.450
Object Reasoning | 0.813 0.825
Object Attribution | 0.475 0.475
Comparison Relation | 0.475 0.450
Universal Relation | 0.488 0.450
Relative Spatial Relation | 0.850 0.750
Absolute Spatial Relation | 0.738 0.700
Region Fill | 0.585 0.683
Border Fill | 0.459 0.450
Single Text Rendering | 0.945 0.997
Double Text Rendering | 0.800 0.745
Multi-lingual Text Rendering | 0.691 0.817
Average | 0.650 0.641

Table 14: Comparison of GEMINI IMAGE performance with and without the system prompt on interleaved image-text generation tasks (first sub-table) and image generation tasks (second sub-table). Best results are in bold. The system prompt does not always have a positive impact.

E Examples and Failure Analysis

We provide examples of each task from Figure 5 to Figure 53.

Object Inclusion
Instruction: Generate an image of a crowded beach. Please include a single snowman in the image.
Good Example: FLUX 1.1 PRO (Evaluation: ✔)
Bad Example: IMAGEN 3 (Evaluation: ✘) Analysis: It’s not a snowman but a “sandman”, affected by its context of a beach.
Figure 5: Examples for the task: Object Inclusion

Object Exclusion
Instruction: Generate an image of a birthday party. Do not include cakes in the image.
Good Example: IDEOGRAM 2 (Evaluation: ✔)
Bad Example: DALLE 3 (Evaluation: ✘) Analysis: It fails to exclude cakes in the context of a birthday party.
Figure 6: Examples for the task: Object Exclusion

Object Count
Instruction: Generate an image of a race track with 3 clearly visible race cars, evenly spaced and not overlapping. The race cars should be of regular size and common shapes.
Good Example: RECRAFT V3 (Evaluation: ✔)
Bad Example: SD 3.5 (Evaluation: ✘) Analysis: Neither the number of tracks nor the number of cars is correct.
Figure 7: Examples for the task: Object Count

Object Reasoning
Instruction: Generate an image of a race track with 3 clearly visible race cars, evenly spaced and not overlapping. The race cars should be of regular size and common shapes.
Good Example: IMAGEN 3 (Evaluation: ✔)
Bad Example: SD 3.5 (Evaluation: ✘) Analysis: Wrong object.
Figure 8: Examples for the task: Object Reasoning

Object Attribution
Instruction: Generate an image of a single red giraffe with green stripes.
Good Example: IMAGEN 3 (Evaluation: ✔)
Bad Example: DALLE 3 (Evaluation: ✘) Analysis: The giraffe has green legs instead of stripes.
Figure 9: Examples for the task: Object Attribution

Comparison Relation
Instruction: Generate an image of only one nail that is
longer than only one snake.
Good Example: IDEOGRAM 2 (Evaluation: ✔)
Bad Example: GEMINI IMAGE (Evaluation: ✘) Analysis: The snake is longer than the nail.
Figure 10: Examples for the task: Comparison Relation

Universal Relation
Instruction: Generate an image of a toy box where all toys are animals except one car.
Good Example: GPT IMAGE (Evaluation: ✔)
Bad Example: RECRAFT V3 (Evaluation: ✘) Analysis: It has two cars instead of one.
Bad Example: FLUX 1.1 PRO (Evaluation: ✘) Analysis: It has building blocks, which are not allowed by the instruction.
Figure 11: Examples for the task: Universal Relation

Absolute Spatial Relation
Instruction: Generate an image of a countryside porch, with a single rocking chair at the bottom-left quarter of the image and a single lantern at the upper-right quarter of the image.
Good Example: GEMINI IMAGE (Evaluation: ✔)
Bad Example: DALLE 3 (Evaluation: ✘) Analysis: The number of lanterns is two. Generating the wrong number of objects is a prevalent failure mode for all image generation tasks other than the object count task.
Bad Example: SD 3.5 (Evaluation: ✘) Analysis: The lantern is at the upper-left quarter and the chair at the bottom-right.
Figure 12: Examples for the task: Absolute Spatial Relation

Relative Spatial Relation
Instruction: Generate an image about a quiet library. Please include a single wooden chair and a single lamp. Make sure that the wooden chair is to the right of the lamp.
Good Example: DALLE 3 (Evaluation: ✔)
Bad Example: SD 3.5 (Evaluation: ✘) Analysis: The wooden chair is to the left of the lamp.
Figure 13: Examples for the task: Relative Spatial Relation

Region Fill
Instruction: Generate an image about a bustling city street at dusk. The left half should be pure green.
Good Example: GPT IMAGE (Evaluation: 0.892, similarity score)
Bad Example: SD 3.5 (Evaluation: 0.327, similarity score) Analysis: The whole image is green.
Figure 14: Examples for the task: Region Fill

Border Fill
Instruction: Generate a serene mountain landscape at sunset. The entire image should be surrounded by a simple, flat, solid green border of approximately 10% of the image width on all sides.
Good Example: GPT IMAGE (Evaluation: 0.859, similarity score)
Bad Example: GEMINI IMAGE (Evaluation: 0.0, similarity score) Analysis: The border is too wide.
Bad Example: IDEOGRAM 2 (Evaluation: 0.0, similarity score) Analysis: There is no border.
Figure 15: Examples for the task: Border Fill

Single Text Rendering
Instruction: Generate an image of a sign and the only text on it is “Love The World Forever”. The text should be written in a clear, standard, easily readable typeface without any artistic distortions.
Good Example: RECRAFT V3 (Evaluation: ✔)
Bad Example: DALLE 3 (Evaluation: ✘) Analysis: The text rendered (“love the world forevver”) is wrong.
Bad Example: IDEOGRAM 2 (Evaluation: ✘) Analysis: Artistic distortion makes it hard to recognize.
Figure 16: Examples for the task: Single Text Rendering

Double Text Rendering
Instruction: Generate an image of exactly one sticky note on the
left side of a white desk and exactly one coffee mug on the right. The sticky note should have “Stay bold, chase dreams” written on it while the coffee mug should have “Coffee fuels infinite morning possibilities” printed on it. The text should be written in a clear, standard, easily readable typeface without any artistic distortions. The objects should be clearly visible and avoid overlapping.
Good Example: GPT IMAGE (Evaluation: ✔)
Bad Example: IMAGEN 3 (Evaluation: ✘) Analysis: The text rendered on the coffee mug is wrong.
Bad Example: RECRAFT V3 (Evaluation: ✘) Analysis: The text is not rendered on the coffee mug.
Bad Example: RECRAFT V3 (Evaluation: ✘) Analysis: The coffee mug is to the left of the sticky note.
Figure 17: Examples for the task: Double Text Rendering

Multi-Lingual Text Rendering
Instruction: Generate an image of a mug and the only text on it is “美好”. The text should be written in a clear, standard, easily readable typeface without any artistic distortions.
Good Example: GPT IMAGE (Evaluation: ✔)
Bad Example: RECRAFT V3 (Evaluation: ✘) Analysis: Distorted Chinese-like characters. In fact, all other models fail to generate Chinese except GPT IMAGE and GEMINI IMAGE.
Figure 18: Examples for the task: Multi-Lingual Text Rendering

Semantic Consistency
Instruction: Create four images of traditional dice faces, each displaying a single face with an increasing number of pips, starting from two up to the highest number.
Good Example: GPT-4O + GPT IMAGE
Response: Here are the images of traditional dice faces, each displaying a single face with an increasing number of pips: 1. Two pips: 2. Three pips: 3. Four pips: 4. Five pips:
Evaluation: ✔
Bad Example: GEMINI IMAGE
Response: I will generate four images showing individual faces of a standard six-sided die. The first image will show the ’2’ face, the second will show the ’3’ face, the third will show the ’4’ face, and the final image will show the ’5’ face.
Each face will be depicted with the traditional arrangement of white pips on a black background.
Evaluation: ✘ Analysis: Generates only one image; pip numbers are inconsistent with the textual response; the last face is a number rather than pips.
Figure 19: Examples for the task: Semantic Consistency

Multi-Angle Consistency
Instruction: Using the provided image as the reference angle, create four additional images of the object from these specific angles: 30 degrees to the right, 60 degrees to the right, 30 degrees to the left, and 60 degrees to the left.
Unfortunately, all the models we evaluate perform poorly on this task. Here we will demonstrate our gold reference images.
Reference Answer: 1. 30 degrees to the right: 2. 60 degrees to the right: 3. 30 degrees to the left: 4. 60 degrees to the left:
Bad Example: GEMINI IMAGE
Response: Generating an image of the floral patterned seat cushion rotated 30 degrees to the right, showing more of its right side and slightly obscuring the left. Generating an
image of the floral patterned seat cushion rotated 60 degrees to the right, revealing much of its right side and significantly obscuring the left. Generating an image of the floral patterned seat cushion rotated 30 degrees to the left, showing more of its left side and slightly obscuring the right. Generating an image of the floral patterned seat cushion rotated 60 degrees to the left, revealing much of its left side and significantly obscuring the right.
Evaluation: [0.184, 0.401, 0.321, 0.320] (image similarity) Analysis: Fails to rotate the object correctly at the specific angles.
Figure 20: Examples for the task: Multi-Angle Consistency

Multi-View Consistency
Instruction: Using the provided image as a reference for the frontal view, create four additional images from the following angles, in order: 15 degrees to the right, 60 degrees to the left, 30 degrees to the right, and 45 degrees to the left.
Unfortunately, all the models we evaluate perform poorly on this task. Here we will demonstrate our gold reference images.
Reference Answer: 1. 15 degrees to the right: 2. 60 degrees to the left: 3. 30 degrees to the right: 4. 45 degrees to the left:
Bad Example: GEMINI 2.5 + GPT IMAGE
Response: Okay, here are the four images of the kitchen from the specified angles, using your provided image as the frontal view reference. First, here is the kitchen viewed from 15 degrees to the right: Next, the kitchen viewed from 60 degrees to the left: Then, the kitchen viewed from 30 degrees to the right: And finally, the kitchen viewed from 45 degrees to the left:
Evaluation: [0.302, 0.304, 0.299, 0.351] (image similarity) Analysis: Fails to rotate the scene correctly at the specific angles.
Figure 21: Examples for the task: Multi-View Consistency

Composition Consistency
Instruction: Create three images that sequentially display the addition of an orange, a banana, and a kiwi into an initially empty fruit bowl. Ensure that previously added fruits remain in the bowl.
Good Example: GPT-4O + GPT IMAGE
Response: Here are the images showing the sequential addition of fruits into the bowl: 1. Empty Fruit Bowl with an Orange Added: 2. Fruit Bowl with an Orange and a Banana Added: 3. Fruit Bowl with an Orange, a Banana, and a Kiwi Added: These images will illustrate the gradual addition of each fruit into the bowl.
Evaluation: ✔
Bad Example: GEMINI IMAGE
Response: Image 1: An empty, clear glass fruit bowl sits on a light wooden surface, ready to be filled. Image 2: A vibrant orange has been placed in the clear glass fruit bowl, resting slightly off-center. The light wooden surface is visible beneath and around the bowl. Image 3: A bright yellow banana has been added to the bowl, curving gently alongside the orange. Both fruits are now visible inside the clear glass bowl on the light wooden surface. Image 4: A fuzzy brown kiwi has been placed in the bowl with the orange and banana. The three fruits now sit together in the
clear glass bowl on the light wooden surface.
Evaluation: ✘ Analysis: Fails to follow the required order of adding objects and generates an incorrect number of images.
Figure 22: Examples for the task: Composition Consistency

Decomposition Consistency
Instruction: Create three images that sequentially show the result after removing the kiwi, then the banana, and finally the orange from the original image. Do not replace the removed objects.
Good Example: GEMINI 2.5 + GPT IMAGE
Response: Okay, here are the images showing the sequential removal of the fruits. First, here is the image with the kiwi removed: Next, here is the image with the banana also removed: Finally, here is the image with the orange also removed, leaving an empty bowl:
Evaluation: ✔
Bad Example: GEMINI IMAGE
Response: I will generate three images. The first image will show the wooden bowl with the orange and banana, after the kiwi has been removed. The second image will show the wooden bowl with only the orange, after the banana has been removed from the previous image. The third image will show the empty wooden bowl, after the orange has been removed from the previous image.
Evaluation: ✘ Analysis: Fails to follow the sequence of removing objects in the second image.
Figure 23: Examples for the task: Decomposition Consistency

Interleaved Image Editing - Object Adding
Instruction: Create an image displaying the result after placing the object <image_0> into <image_1>. <image_0> <image_1>
Good Example: GEMINI IMAGE (Evaluation: 0.770, image similarity)
Bad Example: GEMINI 2.5 + GPT IMAGE (Evaluation: 0.342, image similarity) Analysis: The position of the cat is wrong and unreasonable. This is a common failure pattern among all the models.
Figure 24: Examples for the task: Interleaved Image Editing - Object Adding

Interleaved Image Editing - Color Modifying
Instruction: Generate an image that shows the result after changing the color of the largest ball in <image_0> to <image_1>.
<image_0> <image_1>
Good Example: GPT-4O + GPT IMAGE (Evaluation: 0.722, image similarity)
Bad Example: GEMINI 2.5 + GPT IMAGE (Evaluation: 0.483, image similarity) Analysis: The colors of the background and some other objects are changed.
Bad Example: GEMINI IMAGE (Evaluation: 0.608, image similarity) Analysis: The colors and shapes of many objects are changed.
Figure 25: Examples for the task: Interleaved Image Editing - Color Modifying

Image Editing - Text Editing
Instruction: Create an image displaying the result after inserting the word "clean" onto the white linen closest to the viewer, while leaving the rest of the image untouched.
Good Example: GEMINI 2.5 + GPT IMAGE (Evaluation: ✔, 0.786 image similarity)
Bad Example: GEMINI IMAGE (Evaluation: ✘) Analysis: The text rendered is wrong.
Bad Example: GEMINI 2.5 + IMAGEN 3 (Evaluation: ✘) Analysis: The original image is changed. This is inevitable if using a text-to-image model incapable of image editing.
Figure 26: Examples for the task: Image Editing - Text Editing

Image Editing - Object Adding
Instruction: Create an image displaying the result after placing a blue sticker
on the meter, with all other elements in the image remaining unchanged.
Good Example: GEMINI IMAGE (Evaluation: ✔, 0.911 image similarity)
Bad Example: GPT-4O + GPT IMAGE (Evaluation: ✘) Analysis: No obviously visible blue sticker is added.
Figure 27: Examples for the task: Image Editing - Object Adding

Image Editing - Object Removing
Instruction: Create an image that displays the result after removing the man’s wig, while leaving all other elements of the image unchanged.
Good Example: GEMINI 2.5 + GPT IMAGE (Evaluation: ✔, 0.630 image similarity)
Bad Example: GEMINI IMAGE (Evaluation: ✘) Analysis: The wrong object (head) is removed.
Figure 28: Examples for the task: Image Editing - Object Removing

Image Editing - Object Modifying
Instruction: Generate an image displaying the result after replacing the nuts with blueberries, while keeping all other elements of the image unchanged.
Good Example: GEMINI 2.5 + GPT IMAGE (Evaluation: ✔, 0.853 image similarity)
Bad Example: GPT-4O + GPT IMAGE (Evaluation: ✘) Analysis: Changes elements (salmon to oranges) that should remain unchanged.
Figure 29: Examples for the task: Image Editing - Object Modifying

Self Count
Instruction: Create an image of a kitchen counter displaying multiple apples—specifically, more than two but fewer than seven—that are clearly visible, not stacked, and not overlapping. The apples should be of standard size and typical colors. After generating the image, return ONLY the number of apples N in the following XML format: <count>N</count>.
Good Example: GEMINI 2.5 + GPT IMAGE
Response: <count>5</count>
Evaluation: ✔
Bad Example: GEMINI 2.5 + IMAGEN 3
Response: <count>5</count>
Evaluation: ✘ Analysis: The number of apples is four.
Figure 30: Examples for the task: Self Count

Self Color Recognition
Instruction: Create an image of a kitchen that includes a chair, a microwave, and a vase, ensuring that each object appears exactly once.
Assign a unique color to each object, choosing from the following: blue, yellow, and red. Once the image is complete, return only a JSON object showing the color assigned to each item, formatted as: {“chair”: “[color]”, “microwave”: “[color]”, “vase”: “[color]”}. Good Example: GPT-4o + GPT Image. Response: {“chair”: “blue”, “microwave”: “yellow”, “vase”: “red”}. Evaluation: ✔. Bad Example: Gemini Image. Response: (empty). Evaluation: ✘. Analysis: No text response. Forgetting to output text is a common mistake for interleaved image-text generation. Figure 31: Examples for the task: Self Color Recognition. Self Size Recognition. Instruction: Create an image of a living room that includes a carpet, a TV, and a poster, each with a different area and appearing only once in the scene. After generating the image, compare the areas of these three objects and provide ONLY a ranked list from smallest to largest area, such as [“poster”, “TV”, “carpet”]. Good Example: GPT-4o + GPT Image. Response: [“poster”, “TV”, “carpet”]. Evaluation: ✔. Bad Example: Gemini 2.5 + Imagen 3. Response: [“poster”, “TV”, “carpet”]. Evaluation: ✘. Analysis: The poster is larger than the TV. Figure 32: Examples for the task: Self Size Recognition.
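The "Self" tasks above are auto-gradable only because each instruction pins the text response to a machine-checkable format. A minimal stdlib-Python sketch of how such responses might be parsed before being compared against the generated image (the helper names are illustrative, not from the benchmark; the curly-quote handling assumes outputs shaped like those shown in the figures):

```python
import json
import re

# Map curly "smart" quotes (as they appear in the figure responses) to ASCII quotes.
_QUOTE_FIX = str.maketrans({"\u201c": '"', "\u201d": '"', "\u2018": "'", "\u2019": "'"})

def parse_count(response: str) -> int:
    """Extract N from a '<count>N</count>' Self Count response."""
    match = re.search(r"<count>\s*(\d+)\s*</count>", response)
    if match is None:
        raise ValueError("no <count> tag in response")
    return int(match.group(1))

def parse_color_assignment(response: str) -> dict:
    """Parse the Self Color Recognition JSON object."""
    match = re.search(r"\{.*\}", response.translate(_QUOTE_FIX), re.DOTALL)
    if match is None:
        raise ValueError("no JSON object in response")
    return json.loads(match.group(0))

def parse_ranking(response: str) -> list:
    """Parse the Self Size Recognition smallest-to-largest list."""
    match = re.search(r"\[.*\]", response.translate(_QUOTE_FIX), re.DOTALL)
    if match is None:
        raise ValueError("no list in response")
    return json.loads(match.group(0))
```

For Self Count, the parsed value would then be checked both against the instruction's bounds (more than two, fewer than seven) and against the number of apples actually visible in the image.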
Self OCR. Instruction: Create an image of a whiteboard displaying a motivational quote. The quote must be between 5 and 10 words long, written in a clear, standard, and easily readable typeface without any artistic alterations. The quote should encourage people to follow their dreams. Once the image is generated, return ONLY the text from the whiteboard enclosed in XML format: <text>board text</text>. Good Example: Gemini 2.5 + GPT Image. Response: <text>chase your dreams, they know the way.</text>. Evaluation: ✔. Bad Example: Gemini Image. Response: <text>follow yousved theie to say, dreams their their cleay drevcrd experition.</text>. Evaluation: ✘. Analysis: The generated image has distorted text and does not align with the model's response. Figure 33: Examples for the task: Self OCR. Self Relative Spatial Relation Recognition. Instruction: Create an image showing a bathroom counter that includes a toothbrush, a bar of soap, and a tube of toothpaste. Each item should appear exactly once in the image. Once the image is complete, respond to these two questions: 1. The soap is to the ___ of the toothpaste tube: A. left B. right C. neither obviously left nor right. 2. The toothbrush is positioned ___ than the soap: A. higher B. lower C. neither obviously higher nor lower. Provide ONLY a list of letter answers (e.g., [“A”, “B”]). Good Example: Gemini 2.5 + GPT Image. Response: [“A”, “A”]. Evaluation: ✔. Bad Example: Gemini Image. Response: [“A”, “C”]. Evaluation: ✘. Analysis: The toothbrush is positioned lower than the soap. Figure 34: Examples for the task: Self Relative Spatial Relation Recognition. Self Absolute Spatial Relation Recognition. Instruction: Create an image of a kitchen countertop featuring a blender, a bowl, and a cutting board, each positioned in a different quadrant of the image. Ensure that each object appears only once. Once the image is complete, answer the following two questions: 1. In which quadrant is the blender located? A. bottom left B.
bottom right C. top left D. top right. 2. In which quadrant is the cutting board located? A. bottom left B. bottom right C. top left D. top right. Respond with ONLY a list of the corresponding letter choices (e.g., [“A”, “D”]). Good Example: Gemini 2.5 + Imagen 3. Response: [“C”, “B”]. Evaluation: ✔. Bad Example: Gemini 2.5 + GPT Image. Response: [“C”, “A”]. Evaluation: ✘. Analysis: The cutting board is at the bottom center of the image. Bad Example: GPT-4o + GPT Image. Response: [“C”, “B”]. Evaluation: ✘. Analysis: There are multiple bowls. The wrong number of objects is a common mistake for all tasks. Figure 35: Examples for the task: Self Absolute Spatial Relation Recognition. Interleaved Math Reasoning. Instruction: Carefully examine the sequence and determine the most suitable pattern to replace the question mark. Analyze the sequence step by step and clearly explain your reasoning. Present your final answer enclosed in double angle brackets as follows: «a detailed text description of the pattern ONLY». Then, create an image representing the same pattern to fill in the
question mark. Unfortunately, all the models we evaluate perform poorly on this task. Here we demonstrate our gold reference images. Reference Answer: Evaluation: ✔. Bad Example: Gemini 2.5 + GPT Image. Response: To determine the pattern that replaces the question mark, let's analyze the positions of the white circle and the black square in each row of the sequence. We'll number the squares from 1 (leftmost) to 6 (rightmost). Step 1: Analyze the position of the white circle. Row 1: The white circle is in the 3rd square. Row 2: The white circle is in the 4th square. (Movement: 3 → 4, which is +1) Row 3: The white circle is in the 2nd square. (Movement: 4 → 2, which is -2) . . . A white circle in the 1st square. A black square in the 6th square. The squares in between (2nd, 3rd, 4th, and 5th) will be empty. «A row of six squares with a white circle in the first square from the left and a black square in the sixth square from the left. The second, third, fourth, and fifth squares are empty.» Evaluation: ✘. Analysis: All the moving patterns are identified correctly, but the last step of calculating the position of the black square goes wrong. Figure 36: Examples for the task: Interleaved Math Reasoning. Interleaved Code Reasoning. Instruction: [SVG Code] What does this SVG code represent? Analyze the elements step by step, then create a rendered image showing how it would appear in a browser. (reference image is not given) reference image Good Example: Gemini 2.5 + GPT Image. Response: This SVG code describes a cartoon image, likely an emoji or a simple illustration. Let's break down its components: 1. <path fill="#AF7E57" d="M6 18.562..."/>: This is the main shape of the face, filled with a tan color (#AF7E57). It's a large, somewhat pear-shaped element forming the head. 2. <path fill="#DD2E44" d="M30.961 10c..."/>: This path, filled with red (#DD2E44), forms the main part of a hat sitting on top of the head. . . . 11.
<path fill="#DD2E44" d="M18 30.651c..."/>: A red shape at the bottom of the face, forming a smiling mouth. 12. <path fill="#D1D3D4" d="M10 25c0..."/> and <path fill="#D1D3D4" d="M26.003 25c0..."/>: Two light gray shapes at the very bottom, possibly representing parts of a beard, chin, or stylized feet if it were a full figure. Given the context, they are likely part of the lower face or beard detail. Collectively, these elements form an image of a Santa Claus face emoji. Here is a rendered image of how it would appear: Evaluation: ✔, 0.781 (similarity score). Bad Example: GPT-4o + GPT Image. Response: The SVG code represents a stylized cartoon face, possibly a bear or a similar animal, with various decorative elements. Let's break down the elements step by step: 1. Main Face Shape: The main face is created using a path with a fill color of #AF7E57, which is a brownish color. This path forms the central part of the face. 2. Ears:
Two ellipses with a fill color of #D1D3D4 are used to create the ears. These are positioned at the top left and right of the face. . . . 8. Mouth: A path with a fill color of #915A34 is used to create the mouth, adding detail to the expression. 9. Background Elements: The SVG includes additional paths and ellipses to add depth and detail to the overall design. Now, let's create a rendered image showing how this SVG would appear in a browser: Evaluation: ✘. Analysis: Wrong final image. Figure 37: Examples for the task: Interleaved Code Reasoning. Image-Text Order. Instruction: Use visual storytelling to depict the four seasons. For each season, begin by creating an image of a landscape that represents that time of year, followed by a written description in natural language highlighting its key features. Provide only the requested content—do not include explanations, titles, or extra commentary. Good Example: Gemini 2.5 + GPT Image. Response: This spring scene showcases fresh green grass, trees bursting with new leaves and pink blossoms, and a variety of colorful flowers under a bright, clear sky. Summer is represented by a bright, sunny day with deep blue skies, lush green foliage at its peak, and often, warm, inviting waters. Autumn brings a palette of warm colors as leaves turn red, orange, and yellow; the air becomes crisp, and a sense of calm pervades the landscape. Winter is characterized by a blanket of snow covering the ground and trees, often with frozen bodies of water and a quiet, still atmosphere. Evaluation: ✔. Bad Example: Gemini Image. Response: A vibrant green field stretches under a clear blue sky, bordered by a line of lush, dark green trees in the distance. The grass shows subtle stripes from recent mowing, indicating care and growth. A stark white landscape is dominated by a snow-covered path leading into the distance, flanked by wooden posts connected by dark railings.
The sky is a pale, overcast white, blending seamlessly with the snow-covered ground. Patches of snow dust a field of dry, golden-brown grass, contrasting with the remaining green leaves on the distant trees under a bright blue sky with wispy clouds. A large tree branch with vibrant orange leaves extends into the frame from the upper left. A winding path covered in fresh snow cuts through a forest where trees are heavily laden with snow, creating a monochrome scene in shades of white and dark gray. A wooden fence runs along the side of the path, also covered in snow. Evaluation: ✘. Analysis: Generates only one image. Figure 38: Examples for the task: Image-Text Order. Sound Begin-End. Instruction: Create an audio that begins with the sound of chirping birds and concludes with the sound of a mouse click. Good Example: Make-An-Audio 2. Audio Description: an audio with the beginning 3 seconds of chirping birds and the last 1 second of a mouse click. Evaluation: ✔. Bad Example: Stable Audio. Audio Description: an audio with chirping birds all the time, and
a mouse click only in the 2nd second, not at the end. Evaluation: ✘. Figure 39: Examples for the task: Sound Begin-End. Sound Positional Inclusion. Instruction: Create an audio of a city street, ensuring a police car siren is included in the first half. Good Example: AudioGen. Audio Description: an audio of a city street with the first 3 seconds including a police car siren. Evaluation: ✔. Bad Example: Stable Audio. Audio Description: an audio of a police car siren mixed with normal cars passing by on the street all the time. Evaluation: ✘. Figure 40: Examples for the task: Sound Positional Inclusion. Sound Reasoning. Instruction: Produce the sound of a black bird commonly linked to death or misfortune. Good Example: Make-An-Audio 2. Audio Description: an audio of a very typical crow cry. Evaluation: ✔. Bad Example: Stable Audio. Audio Description: a bright and melodious chirp of a certain kind of bird that is obviously not a crow. Evaluation: ✘. Figure 41: Examples for the task: Sound Reasoning. Sound Silence. Instruction: Create an audio that begins with a loud car horn, followed by a long silence, and concludes with a distant siren. Good Example: Make-An-Audio 2. Audio Description: an audio with a loud car horn in the first 3 seconds, then 4 seconds of silence, with the last 3 seconds of a distant siren. Evaluation: ✔. Bad Example: AudioLDM 2. Audio Description: an audio without any silent time, where the two sounds are mixed together. Evaluation: ✘. Figure 42: Examples for the task: Sound Silence. Music Instrument Inclusion. Instruction: Create a seamless saxophone improvisation. Good Example: Tango Music. Audio Description: a casual piece of saxophone improvisation. Evaluation: ✔. Bad Example: YuE. Audio Description: an audio starting with 3 seconds of laughter and then 5 seconds of improvised jazz music including piano, drums, and saxophone.
Evaluation: ✘. Figure 43: Examples for the task: Music Instrument Inclusion. Music Instrument Exclusion. Instruction: Create an audio of a city street, ensuring a police car siren is included in the first half. Good Example: MusicGen. Audio Description: an audio of smooth jazz music featuring bass but without drums. Evaluation: ✔. Bad Example: Stable Audio. Audio Description: an audio of jazz music with rhythms played by drums. Evaluation: ✘. Figure 44: Examples for the task: Music Instrument Exclusion. Music Intensity. Instruction: Compose a cinematic orchestral piece that gradually fades out at the end. Good Example: Stable Audio. Audio Description: an audio of an orchestral piece featuring a cinematic build with rich instrumentation that gradually fades out, creating a smooth ending. Evaluation: ✔. Bad Example: Tango Music. Audio Description: an audio quite the opposite, with a tranquil start that grows more intense. Evaluation: ✘. Figure 45: Examples for the task: Music Intensity. Music Tempo. Instruction: Create a laid-back lo-fi hip-hop beat at 100 BPM. Good Example: MusicGen. Audio Description: an audio of a hip-hop beat at approximately 102 BPM. Evaluation: ✔. Bad Example: AudioLDM 2. Audio Description: an audio of a hip-hop beat at approximately 64 BPM. Evaluation: ✘. Figure 46: Examples for the task: Music Tempo. Speaker Voice Attribution. Instruction: Generate an audio of a man speaking rapidly in a low-pitched voice, saying, "The detective carefully examined the crime scene, noting every detail that could lead him to the truth, knowing that even the smallest clue might be the key to solving the mystery." Good Example: Gemini 2.5 + VoxInstruct. Speech Transcript: (low-pitched male voice talking rapidly) The detective carefully examined the crime scene, noting every detail that could lead him to the truth, knowing that even the smallest clue might be the key to solving the mystery. Evaluation: ✔. Bad Example: Gemini 2.5 + VoiceLDM. Speech Transcript: (high-pitched male voice talking rapidly) The detective carefully examined the crime scene, noting every detail that could lead him to the truth, knowing that even the smallest clue might be the key to solving the mystery. Evaluation: ✘. Figure 47: Examples for the task: Speaker Voice Attribution. Multi-Lingual Speech. Instruction: Generate an audio of a man slowly speaking: "窗外的雨滴敲打着玻璃,滴滴答答的声音仿佛一首温柔的旋律,让她的思绪飘回了那个久远而温暖的夏天。" Good Example: Gemini 2.5 + VoxInstruct. Speech Transcript: (an audio of a man gently and slowly speaking Chinese) 窗外的雨滴敲打着玻璃,滴滴答答的声音仿佛一首温柔的旋律,让她的思绪飘回了那个久远而温暖的夏天。 Evaluation: ✔. Bad Example: Gemini 2.5 + VoxInstruct. Speech Transcript: (an audio of a man speaking Chinese hastily) 窗外的雨滴敲打着玻璃,仿佛一首温柔的旋律,让她的思绪飘回了那个久远而温暖的夏天。 Evaluation: ✘. Figure 48: Examples for the task: Multi-Lingual Speech. Speaker Voice Replication. Instruction: Create an audio of reading the sentence, "The aroma of fresh coffee and warm pastries filled the air as she stepped into the café, instantly feeling a sense of comfort and familiarity in the cozy atmosphere," using the same voice as the reference speaker.
(reference speech: a gentle male voice) Good Example: Gemini 2.5 + VoiceLDM. Speech Transcript: (a warm, middle-pitched, gentle male voice) The aroma of fresh coffee and warm pastries filled the air as she stepped into the café, instantly feeling a sense of comfort and familiarity in the cozy atmosphere. Evaluation: ✔. Bad Example: Gemini 2.5 + VoxInstruct. Speech Transcript: (a hoarse male voice) The aroma of fresh coffee and warm pastries filled the air as she stepped into the café. Evaluation: ✘. Figure 49: Examples for the task: Speaker Voice Replication. Transcript Generation. Instruction: Create a speech on the internet revolution, making sure the word "digital" is used at least twice. Good Example: Gemini 2.5 + VoxInstruct. Speech Transcript: As you just heard, the internet revolution has profoundly impacted our lives, marking a significant shift into a new digital era. This ongoing digital transformation continues to unfold, bringing new opportunities and challenges. Evaluation: ✔. Bad Example: Gemini 2.5 + VoiceLDM. Speech Transcript: Friends, colleagues, we stand at the precipice of an era by connectivity. The internet revolution has reshaped our world, ushering in a new digital age with transformation connecting billions and democratizing information, and its story is still being written. Evaluation: ✘. Analysis: The keyword "digital" appears only once. Figure 50: Examples for the task: Transcript Generation. Transcript Editing. Instruction: Revise the provided speech by replacing the third sentence with: "Join hands, break barriers, and transform the world." (Provided
speech transcript: "We must act now. Every action counts. We can make a difference together if we remain united. Progress comes from shared responsibility.") Good Example: Gemini 2.5 + VoiceLDM. Speech Transcript: We must act now. Every action counts. We can make a difference together if we remain united. Join hands, break barriers, and transform the world. Evaluation: ✔. Bad Example: Gemini 2.5 + VoxInstruct. Speech Transcript: We must act now. Every action counts. Join hands, break barriers, and transform the world. Evaluation: ✘. Analysis: The third sentence is missing. Figure 51: Examples for the task: Transcript Editing. Speech Conversation. Instruction: Create five consecutive speeches forming a dialogue between James and Victoria about architectural innovations for sustainable urban living, with each speech delivered by a single speaker. Use the following speaking order: James, Victoria, Victoria, James, James. Constraints: 1. Victoria's second speech must begin with the phrase "Contrary to your point." 2. James's final speech must contain exactly two uses of the word "design". Ensure each speaker maintains a consistent voice throughout the conversation. Good Example: Gemini 2.5 + VoiceLDM. Conversation Transcript: James: I've been fascinated by the potential of kinetic pavements in high-traffic urban areas to generate clean energy. Victoria: That's an interesting concept, James! I also think smart water management systems like rainwater harvesting and greywater recycling are vital. Victoria: Contrary to your point about focusing on generation, I believe water conservation innovations offer more immediate and widespread benefits for sustainability. James: Both are crucial, Victoria. Perhaps integrating biomimetic architecture that learns from nature's efficiency could address energy and water simultaneously. James: The overall urban design must be rethought.
We need a cohesive design approach that incorporates all these innovations seamlessly for truly sustainable cities. Evaluation: ✔. Bad Example: Gemini 2.5 + VoxInstruct. Conversation Transcript: James: I believe modular construction is key to sustainable urban development, significantly reducing waste and build times. Victoria: That's a valid point, James. Integrating green roofs and vertical gardens can also drastically improve air quality and biodiversity. Victoria: Contrary to your point about modular construction being the key, I think retrofitting existing buildings with smart technologies offers a more immediate impact on energy consumption. James: Retrofitting is important, Victoria, but new constructions set the standard. We need innovations in material science too, like self-healing concrete. James: Ultimately, a holistic approach to urban design is crucial. Evaluation: ✘. Analysis: James's final speech contains only one use of the word "design". Figure 52: Examples for the task: Speech Conversation. Audio-Text Order. Instruction: Develop a 2-step guide for delivering a persuasive presentation. For each step, begin with detailed written instructions in natural language, followed by a speech from a communication expert demonstrating the technique. For the second step, also include an additional speech at the end that illustrates how the technique is applied in a formal business context. Generate only what is requested—no explanations, titles, or extra commentary. Good Example: Gemini 2.5 + VoxInstruct. Response: Step 1: Thoroughly research your audience to understand their needs, motivations, existing knowledge, and potential
objections. Adapt your language, examples, and level of detail to resonate specifically with them. Frame your core message in a way that directly addresses their interests and demonstrates clear value from their perspective, ensuring your arguments are supported by credible evidence and logical reasoning. (speech) To truly persuade, you must first connect—understand who you're speaking to, what their concerns are, their aspirations. Tailor your message to their world, not yours. Make it relevant, make it resonate. Step 2: Structure your presentation for maximum impact by creating a clear, logical, and compelling narrative. Begin with a strong opening that grabs attention and establishes relevance. Develop your core arguments with supporting evidence, data, and illustrative examples, addressing potential counterarguments proactively. Conclude with a memorable summary and a clear call to action, guiding your audience on the next steps. (speech) Your delivery is as crucial as your content. Project confidence through your voice and body language. And when faced with questions or doubts, listen carefully, acknowledge their points, and respond thoughtfully. This builds trust. (speech) While I understand concerns about the initial investment, consider the long-term ROI. Our projections show a 20% return within two years, significantly boosting our profit margins and market competitiveness. This strategic move positions us for success. Evaluation: ✔. Bad Example: Gemini 2.5 + VoiceLDM. Response: Step 1: Thoroughly understand your audience and craft a compelling narrative. This involves researching their demographics, interests, pre-existing beliefs, and potential pain points. Use this understanding to build a story that resonates, typically following a structure that introduces a problem, presents your proposed solution, highlights the benefits, and concludes with a clear call to action.
Employ vivid language, relatable examples, and appropriate emotional appeals to make your message memorable and impactful. Step 2: Master your delivery and prepare to handle objections gracefully. Effective delivery encompasses strong body language—maintaining eye contact, adopting an open posture, and using purposeful gestures—as well as vocal variety, modulating your tone, pace, and volume to maintain engagement. (speech) To truly persuade, you must first connect—understand who you're speaking to, what their concerns are, their aspirations. Tailor your message to their world, not yours. Make it relevant, make it resonate. (speech) A persuasive presentation flows like a good story. Hook your audience early, build your case with compelling evidence, and then guide them to action. (speech) Good morning. Our analysis indicates a 15% market share increase is achievable by Q4 if we implement the proposed strategy. This directly addresses the growth targets set last quarter and positions us ahead of key competitors. We seek your approval to proceed. Evaluation: ✘. Analysis: Speeches and texts are not interleaved in the expected order. Figure 53: Examples for the task: Audio-Text Order
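Several of the speech tasks above bottom out in countable lexical constraints: Transcript Generation requires "digital" at least twice, and Speech Conversation requires exactly two uses of "design" in James's final speech. A minimal sketch of how such constraints might be checked mechanically (illustrative helper names, stdlib regex only):

```python
import re

def keyword_count(transcript: str, word: str) -> int:
    """Count whole-word, case-insensitive occurrences of a keyword."""
    return len(re.findall(rf"\b{re.escape(word)}\b", transcript, flags=re.IGNORECASE))

def at_least(transcript: str, word: str, n: int) -> bool:
    """'The word X must be used at least n times' constraint."""
    return keyword_count(transcript, word) >= n

def exactly(transcript: str, word: str, n: int) -> bool:
    """'The speech must contain exactly n uses of the word X' constraint."""
    return keyword_count(transcript, word) == n
```

Word-boundary matching matters here: a bare substring count would also accept "designed" or "redesign", which arguably should not satisfy the constraint.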
arXiv:2505.17615v1 [cs.LG] 23 May 2025. Large language model as user daily behavior data generator: balancing population diversity and individual personality. Haoxin Li*, Jingtao Ding*, Jiahui Gong, Yong Li (*These two authors contributed equally). Department of Electronic Engineering, Tsinghua University. dingjt15@tsinghua.org.cn; liyong07@tsinghua.edu.cn. Abstract: Predicting human daily behavior is challenging due to the complexity of routine patterns and short-term fluctuations. While data-driven models have improved behavior prediction by leveraging empirical data from various platforms and devices, the reliance on sensitive, large-scale user data raises privacy concerns and limits data availability. Synthetic data generation has emerged as a promising solution, though existing methods are often limited to specific applications. In this work, we introduce BehaviorGen, a framework that uses large language models (LLMs) to generate high-quality synthetic behavior data. By simulating user behavior based on profiles and real events, BehaviorGen supports data augmentation and replacement in behavior prediction models. We evaluate its performance in scenarios such as pretraining augmentation, fine-tuning replacement, and fine-tuning augmentation, achieving significant improvements in human mobility and smartphone usage predictions, with gains of up to 18.9%. Our results demonstrate the potential of BehaviorGen to enhance user behavior modeling through flexible and privacy-preserving synthetic data generation. 1 Introduction. Predicting human behavior is inherently challenging (Nadkarni, 2016). While most individuals follow routine patterns shaped by behavioral habits, they also exhibit short-term fluctuations driven by specific contexts. The increasing availability of empirical data capturing user behaviors across various web platforms and smart devices has significantly enhanced our ability to forecast future behavior through the use of data-driven models (Zhang and Dai, 2018; Zhang et al., 2019; Li et al., 2022). This progress represents a crucial step toward developing intelligent, adaptable assistants capable of supporting daily human activities (Chung and Lee, 2018; Tulshan and Dhage, 2019; Savcisens et al., 2023). However, user behavior modeling methods rely heavily on empirical data that records real-world human behavior, often containing highly sensitive personal information. Although techniques such as privacy-preserving data publishing and privacy-aware computing approaches, like federated learning, exist, it is becoming increasingly difficult to collect large-scale, high-quality user behavior data. This limitation hinders the development of downstream applications. Synthetic data generation has emerged as a promising solution to address this data-centric challenge. Recently, deep generative models have been applied to behavior data generation in areas such as recommender systems (Shi et al., 2019; Liu et al., 2022; Luo et al., 2022), human mobility (Yuan et al., 2025), and urban daily life (Yuan et al., 2023). However, these approaches are typically tailored to specific applications and lack generalization capabilities across diverse scenarios. Fortunately, large language models (LLMs) (Zhao et al., 2023; Brown, 2020), trained on massive corpora containing extensive knowledge and capable of generating high-quality textual data, have emerged as a potential solution to this problem (Long et al., 2024). Recent studies have begun exploring LLMs' ability to simulate human behaviors in daily life (Shao et al., 2024; Wang et al., 2024; Li et al., 2024), such as
commuting and entertainment, indicating that these models have already captured significant behavioral knowledge through pretraining. In this work, we explore the synthetic data generation capabilities of large language models (LLMs) for user behavior modeling. To meet practical requirements, modern user behavior prediction models have evolved into a two-stage paradigm (Gong et al., 2024). These models are first pretrained on population-level behavior data and then fine-tuned on individual-level data. Unlike textual data, both population-level diversity and individual-level specificity are crucial in determining the quality of synthetic behavior data. To address this, we introduce the BehaviorGen framework, which prompts LLMs to simulate a specific user's behavior based on a provided profile and a few real behavior events. This approach enables the flexible generation of high-quality synthetic user behavior data. We evaluate BehaviorGen's data generation capabilities across various usage scenarios, including: 1) Data augmentation for the pretraining stage: where behavior diversity is critical for pretraining a generalist behavior prediction model that serves as a robust initialization for all users. 2) Data replacement for the fine-tuning stage: where LLMs generate behaviors highly personalized to specific users, effectively replacing real data. 3) Data augmentation for the fine-tuning stage: where, given a limited number of real records for a specific user, the model generates high-quality personalized data to supplement the fine-tuning process. Surprisingly, we find that BehaviorGen enables LLMs to generate user behaviors that reflect both population diversity and individual personality. In the pretraining augmentation scenario, BehaviorGen achieves performance gains of up to 2.6% and 6.9% in two applications: human mobility behavior prediction and smartphone usage behavior prediction, respectively.
In the fine-tuning replacement scenario, synthetic data generated by BehaviorGen can replace real data, providing about 62.0% and 87.8% of the fine-tuning performance gains in these two applications. Finally, for the fine-tuning augmentation scenario, our results show that BehaviorGen can generate augmented data using only around 100 individual records, significantly boosting prediction performance by up to 18.9% and 5.3%, respectively. 2 Related Work. 2.1 Synthetic Data Generation with LLMs. Synthetic data generation has gained significant momentum with the advent of large language models (LLMs) (Guo and Chen, 2024). The data generated by LLMs closely approximates real-world data, making this approach a powerful solution to addressing the challenges of resource scarcity. Designing an informative prompt is key to effective data generation with LLMs. Yu et al. (2023) explore synthetic data generation using diversely attributed prompts, which have the potential to produce diverse and richly attributed synthetic data. Reynolds and McDonell (2021) propose MetaPrompt, a method where an expanded prompt is first generated by ChatGPT, then used to further prompt LLMs for data generation. Another promising approach for task-specific data generation is to aggregate a few-shot dataset and perform parameter-efficient adaptation on the LLM (Guo et al., 2022). Chen et al. (2023) train a set of soft prompt embeddings on few-shot, task-specific training data to condition the LLM for more effective text generation. He et
al. (2023) propose AnnoLLM, an LLM-powered annotation system. It first prompts LLMs to explain the reasoning behind a ground-truth label, then uses these explanations to create a few-shot chain-of-thought prompt for annotating unlabeled data. However, existing work has not adequately addressed the balance between population diversity and individual preference, a crucial consideration in user behavior generation. 2.2 Synthetic Data for User Behavior Modeling. Due to user privacy concerns and the difficulty of data collection, it is hard to obtain large amounts of data for model training in some user behavior domains. Synthetic data generation provides a promising way forward. Park et al. (2023) instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can guide the generation of agent behaviors using natural language. Zherdeva et al. (2021) use generated synthetic data to train the Mask R-CNN framework, which is used for digital human interaction with the 3D environment. Liu et al. (2022) present UPC-SDG, a User Privacy Controllable Synthetic Data Generation model, which generates synthetic interaction data for users based on their privacy preferences to improve the performance of recommendations. Chen et al. (2021) leverage a small set of uniform synthetic data to optimize the debiasing parameters by solving a bi-level optimization problem in recommendation. Provalov et al. (2021) propose a novel method for evaluating and comparing recommender systems using synthetic user and item data and parametric synthetic user-item response functions. However, current work focuses on specific domains of user behavior and lacks methods for generating user behavior across all scenarios and around the clock. 3 Preliminary. 3.1 Behavior Data Generation Problem. Now, we give a formal definition of our research problem: PROBLEM (User behavior generation).
A user behavior record can be represented as x_i = (d_i, t_i, l_i, b_i, p_i), where b_i denotes a specific behavior occurring at location l_i during time slot t_i on day d_i. Here, d_i, t_i, l_i, and b_i are the weekday, time slot, location, and behavior IDs, respectively. We denote the sets of weekdays, time slots, locations, and behaviors as D, T, L, and B, with sizes N_D, N_T, N_L, and N_B. Additionally, p_i represents the user profile, which consists of five key attributes. Age: the age group of the user. Education: the highest education level attained by the user. Gender: the gender identity of the user. Consumption: the user's estimated consumption level. Occupation: the profession or job category of the user.

A user behavior sequence can be represented as [x_1, x_2, x_3, ..., x_I], where I denotes the length of the input sequence. Our goal is to generate the user behavior sequence, which can be formulated as:

[\hat{x}_1, \hat{x}_2, \hat{x}_3, ..., \hat{x}_O] = G([x_1, x_2, x_3, ..., x_I])   (1)

where O represents the length of the generated sequence and G is the generation function. By incorporating user profiles P into the behavior generation process, our method ensures that the generated behavior sequences align with realistic user characteristics, leading to more accurate and personalized synthetic data.

3.2 Behavior Prediction Problem

To demonstrate the effectiveness of the generated sequence, we design the user behavior prediction
experiment. User behavior prediction aims to forecast future user behavior from its past I-event history, which can be formulated as:

\hat{b}_t = f(x_{t-I}, x_{t-I+1}, ..., x_{t-1})   (2)

4 BehaviorGen Framework

4.1 Data Generation Procedure

4.1.1 Data Generation Process

Role Setting: In this stage, the large language model (LLM) is assigned the role of "Generator." We choose the gpt-4o-2024-08-06 model as our generator. As shown in Figure 1, explicitly defining the role equips the LLM to better understand the task structure and objectives, leading to more coherent and contextually appropriate output.

Format Restrictions: To ensure that the generated data adheres to a consistent and interpretable structure, we impose strict formatting requirements: the output is specified as [weekday, timestamp, location, intent]. Additionally, we limit the value range of the generated fields, for example restricting the weekday to 0-6. Format restrictions ensure the validity of generated data and reduce subsequent data-processing steps.

Segmented Generation: Given the complexity of generating long sequences of behavioral data, we use a segmented approach in which the user's behavior is divided into weekly segments. This reduces the risk of context drift and helps maintain consistency throughout the generation process. Segmentation is particularly useful for maintaining diversity and accuracy across varying time periods, thereby improving the success rate and effectiveness of data synthesis. Thus, segmentation balances diversity and faithfulness while ensuring robustness across weeks. We give detailed experiments and an explanation of why we chose weekly segments in Appendix A.5.

4.2 Usage Scenarios

4.2.1 Pretraining Augmentation

In real-world scenarios, application service providers can only collect a limited amount of user data, which is insufficient to support the training of a population-level behavior prediction model.
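The format restrictions and weekly segmentation described above can be sketched in a few lines. The helper names and the field ranges below (hourly time slots, 18 intents as in the Smartphone dataset) are our own illustrative assumptions, not part of the released framework:

```python
# Sketch: validate generated records of the form [weekday, timestamp, loc, intent]
# and split a chronologically ordered log into weekly segments.

def is_valid_record(rec, num_locations=100, num_intents=18):
    """Check one generated record against the format restrictions."""
    if len(rec) != 4:
        return False
    weekday, timestamp, loc, intent = rec
    return (0 <= weekday <= 6            # weekday restricted to 0-6
            and 0 <= timestamp < 24      # hourly time slot (assumed granularity)
            and 0 <= loc < num_locations
            and 0 <= intent < num_intents)

def split_into_weeks(records):
    """Segment an ordered log into weekly chunks: a new week starts
    whenever the weekday value wraps around (decreases)."""
    weeks, current, prev_day = [], [], -1
    for rec in records:
        if current and rec[0] < prev_day:  # weekday decreased -> new week
            weeks.append(current)
            current = []
        current.append(rec)
        prev_day = rec[0]
    if current:
        weeks.append(current)
    return weeks

log = [[0, 9, 12, 3], [3, 20, 5, 7], [6, 22, 5, 1],   # week 1
       [1, 8, 12, 3], [4, 19, 9, 7]]                  # week 2
assert all(is_valid_record(r) for r in log)
print(len(split_into_weeks(log)))  # 2 weekly segments
```

In practice each weekly chunk would be generated by a separate LLM call, with invalid records rejected by a validator like the one above.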
Therefore, it is necessary to synthesize additional data to enhance population diversity. This need arises from the challenge of establishing a population-level model capable of capturing common behavioral patterns. To address it, we use a two-pronged approach involving both population-level data and pre-trained models. Specifically, we implement the PITuning (Gong et al., 2024) model architecture with both GPT-2 and Bert4Rec serving as the backbone models.

[Figure 1: The Framework of BehaviorGen. The figure shows the synthetic data generation pipeline (role setting, format restrictions, and week-by-week segmented generation from behavioral data and profiles) and the three usage scenarios for the behavior prediction task: (1) pretraining augmentation, (2) finetuning replacement, and (3) finetuning augmentation.]

Building upon this foundational training, we incorporate behavioral data generated by the LLM as a means of data augmentation, which can be formulated as:

L_p(x, \hat{x}; \theta) = L(x; \theta) + L(\hat{x}; \theta)   (3)

where L_p denotes the cross-entropy classification loss in the pretraining process, with x and \hat{x} denoting real and synthetic data, respectively. This
method enhances the model's predictive accuracy concerning user behavior, enabling a more robust capture of behavioral patterns, as confirmed in subsequent experiments. The augmented dataset allows the model to better generalize and predict user behavior across diverse scenarios, ultimately improving its effectiveness in real-world applications.

4.2.2 Finetuning Replacement

After pre-training, the fine-tuning phase is a pivotal step in enhancing the personalization and accuracy of recommendation systems. However, leveraging real user behavioral data in this phase poses significant privacy and security concerns. To mitigate these risks, we propose using behavioral data generated by the LLM as a replacement for real user data during fine-tuning, which can be formulated as:

L_f(\hat{x}; \theta) = L(\hat{x}; \theta)   (4)

where L_f denotes the cross-entropy classification loss in the finetuning process. This approach enables fine-tuning of the pre-trained model while preserving user privacy. Specifically, we generate behavioral data for the users included in the finetuning phase. The synthesized data is then partitioned into training, testing, and validation sets, facilitating the fine-tuning of the population-level model. This process not only enhances personalization at the individual level but also maintains the integrity of user data.

4.2.3 Finetuning Augmentation

Accurately predicting long-tail user behavior in recommendation systems is challenging because such data is infrequent and inherently difficult to collect. In response, we advocate a strategy that synthesizes behavioral data using a limited amount of real user behavior data as a base. By combining LLM-generated user behavior data with this small set of authentic user data, we aim to enrich the training dataset for fine-tuning.
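The three training objectives above differ only in which data the loss is computed over. A minimal numeric sketch, using a toy categorical cross-entropy in place of the backbone's loss (the function and variable names are illustrative, not from the paper's code):

```python
import math

def cross_entropy(probs, labels):
    """Mean categorical cross-entropy over a batch of predicted
    class distributions and ground-truth class indices."""
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

# Toy predictions over 3 behavior classes.
real_probs  = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
real_labels = [0, 1]
syn_probs   = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]]
syn_labels  = [0, 1]

# Pretraining augmentation, Eq. (3): L_p = L(x) + L(x_hat).
loss_pretrain = (cross_entropy(real_probs, real_labels)
                 + cross_entropy(syn_probs, syn_labels))

# Finetuning replacement, Eq. (4): L_f = L(x_hat), synthetic data only.
loss_finetune = cross_entropy(syn_probs, syn_labels)

# Finetuning augmentation: limited real data mixed with synthetic data.
loss_augment = cross_entropy(real_probs + syn_probs, real_labels + syn_labels)

print(round(loss_pretrain, 3), round(loss_finetune, 3), round(loss_augment, 3))
```

In the actual framework the same backbone loss is simply evaluated on different batches (real, synthetic, or mixed); only the data source changes between scenarios.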
This hybrid approach enhances the model's capacity to predict long-tail user behaviors, ensuring that even less common patterns are adequately represented. Consequently, it not only improves the model's overall predictive capabilities but also contributes to a more comprehensive understanding of user behavior across different demographics and usage contexts.

5 Experiment

5.1 Experiment Settings

5.1.1 Datasets

We evaluate the performance of our model on two large-scale real-world activity datasets.

• Tencent Dataset. The Tencent Dataset consists of anonymous user trajectory data collected from October to the end of December. It includes a total of 667 users and 189,954 behavioral data entries. At the population level, we select 466 users for training, while at the individual level, we use the remaining 201 users. In this dataset, we use location categories to represent user activities and intents.

• Smartphone Dataset. The Smartphone Dataset is sampled from mobile phone usage logs. When a user uses a mobile phone, various types of logs are generated, desensitized, and reported (with user consent). We selected 114 types of events that are commonly monitored in most mobile applications and classified them into 18 intents, covering news, study, work, entertainment, sports, etc. We sampled two datasets, between June 1st and August 22nd, 2023 (the first) and August 22nd and September 10th, 2023 (the second), which in total contain 4,500 and 5,000 anonymous
users.

5.1.2 Metrics

To assess model performance, we employ widely used metrics: precision (Pre), recall (Rec), and NDCG (N) (Ding et al., 2018, 2020). NDCG gauges ranking quality, while Pre and Rec evaluate the average prediction accuracy for each intent, indicating the model's predictive quality across intents. Refer to Appendix A.1 for metric calculations.

5.1.3 Baselines

We carefully select three representative methods to compare with our proposed algorithm: a generative method for sequence data (SeqGAN (Yu et al., 2017)), a diffusion-based sequence generation model (DiffuSeq (Gong et al., 2022)), and a synthetic data generation method (UPC_SDG (Liu et al., 2022)). We provide the details of the baselines in Appendix A.2.

5.1.4 Evaluation Backbones

We choose PITuning (Gong et al., 2024) and Bert4Rec (Sun et al., 2019) as the evaluation backbones.

• PITuning. PITuning is a population-to-individual tuning framework that enhances common-pattern extraction through dynamic event-to-intent transition modeling and addresses long-tailed preferences via adaptive unlearning strategies.

• Bert4Rec. Bert4Rec, a bidirectional encoder representation from Transformers, enhances the power of historical sequence representations by jointly conditioning on left and right context.

5.1.5 Tasks

• Pretraining Augmentation. In the pretraining phase, as shown in Figure 1, we leverage population-level data in combination with synthetic data generated by our framework using large language models (LLMs). This synthetic data is employed as a form of data augmentation.

• Finetuning Replacement. In the fine-tuning phase, as shown in Figure 1, we synthesize a set of personalized data from individual-level user data to protect user privacy. This synthetic data replaces actual user personal data. We fine-tune a pre-trained population-level model using this synthetic dataset.
• Finetuning Augmentation. During the finetuning phase, a common challenge arises with new users who have limited behavioral data, which may be insufficient for effective model adaptation. To address this issue, we propose a method for generating personalized behavioral data based on the limited real individual data (about 105 logs) available from users. This synthesized data serves to enhance the finetuning results of individual models.

5.2 Overall Performance Analysis

We report experiments on three usage scenarios for two prediction applications (Tables 1-3). Across all experiments, our framework demonstrates clear superiority over baseline methods, both in terms of performance metrics and its ability to produce high-quality synthetic data. Whether augmenting real data during pretraining or replacing sensitive user data during finetuning, models trained with our synthetic data consistently outperform those trained with data generated by SeqGAN, DiffuSeq, and UPC_SDG.

Method      | Tencent, Bert4Rec       | Tencent, PITuning       | Smartphone, Bert4Rec    | Smartphone, PITuning
            | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5
Pretrained  | 0.427 0.466 0.666 0.663 | 0.418 0.449 0.667 0.661 | 0.149 0.280 0.515 0.551 | 0.123 0.168 0.435 0.468
SeqGAN      | 0.417 0.452 0.682 0.676 | 0.401 0.429 0.638 0.630 | 0.150 0.259 0.524 0.554 | 0.134 0.162 0.435 0.463
DiffuSeq    | 0.436 0.471 0.684 0.685 | 0.281 0.366 0.624 0.620 | 0.167 0.283 0.524 0.557 | 0.136 0.174 0.433 0.467
UPC_SDG     | 0.424 0.457 0.676 0.670 | 0.384 0.417 0.630 0.632 | 0.188 0.295 0.528 0.558 | 0.130 0.169 0.438 0.472
Ours        | 0.447 0.480 0.702 0.694 | 0.426 0.450 0.655 0.659 | 0.213 0.315 0.543 0.570 | 0.201 0.186 0.454 0.479
Improvement | 2.5%  1.9%  2.6%  1.3%  | 1.9%  0.2%  -1.8% -0.3% | 13.3% 6.8%  2.8%  2.2%  | 4.8%  6.9%  3.7%  1.5%

Table 1: Overall prediction performance of Pretraining Augmentation compared with baselines on the Tencent and Smartphone datasets. "Pretrained" uses real data only; SeqGAN, DiffuSeq, UPC_SDG, and Ours use real data plus synthetic data. Improvement is calculated as (ours - best result among pretrained and baselines) / (best result among pretrained and baselines).

Method      | Tencent, Bert4Rec       | Tencent, PITuning       | Smartphone, Bert4Rec    | Smartphone, PITuning
            | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5
Pretrained  | 0.447 0.474 0.693 0.691 | 0.422 0.454 0.684 0.678 | 0.207 0.340 0.542 0.568 | 0.126 0.178 0.440 0.478
Finetuned   | 0.597 0.614 0.790 0.791 | 0.583 0.604 0.780 0.774 | 0.322 0.366 0.594 0.614 | 0.306 0.355 0.627 0.668
SeqGAN      | 0.185 0.221 0.381 0.375 | 0.194 0.228 0.394 0.392 | 0.288 0.309 0.542 0.577 | 0.227 0.296 0.576 0.616
DiffuSeq    | 0.152 0.223 0.409 0.409 | 0.161 0.234 0.417 0.425 | 0.233 0.340 0.550 0.589 | 0.228 0.301 0.591 0.628
UPC_SDG     | 0.172 0.148 0.229 0.223 | 0.170 0.159 0.236 0.234 | 0.280 0.315 0.543 0.569 | 0.260 0.317 0.562 0.585
Ours        | 0.540 0.539 0.746 0.734 | 0.516 0.529 0.733 0.724 | 0.308 0.334 0.568 0.593 | 0.270 0.333 0.602 0.643
Replacement | 62.0% 46.4% 54.4% 43.0% | 58.4% 50.0% 51.0% 47.9% | 87.8% -23.1% 50.0% 54.3% | 80.0% 87.6% 86.6% 86.8%

Table 2: Overall prediction performance of Finetuning Replacement compared with baselines on the Tencent and Smartphone datasets. "Pretrained" and "Finetuned" use real data; SeqGAN, DiffuSeq, UPC_SDG, and Ours fine-tune on synthetic data only. Replacement is calculated as (ours - pretrained) / (real-data finetuned - pretrained): it tells us what percent of the fine-tuning improvement we achieve when fine-tuning with only our synthetic data instead of the real data.

The fine balance between diversity and faithfulness achieved by our framework not
only leads to enhanced model performance but also offers a scalable solution for privacy-preserving data generation.

• Our method shows minimal discrepancies compared with fine-tuning on real data. Specifically, the proposed framework effectively generates personalized synthetic data, crucial for maintaining performance levels that closely resemble those achieved through fine-tuning on real data, all while ensuring user privacy. As evidenced in Table 2, models fine-tuned on synthetic data exhibit a performance gap of merely 5.7% and 1.4% in average precision, achieving scores of 0.540 on the Tencent dataset and 0.308 on the Smartphone dataset, respectively. Furthermore, the average replacement rate of 57.6% highlights the equilibrium our framework achieves between privacy preservation and model efficacy. Notably, our synthetic data generation method surpasses other techniques, including SeqGAN and UPC_SDG, by a considerable margin, affirming our framework's efficacy in accurately capturing individual-level user behavior patterns.

• Population-Level Analysis. In the pre-training phase, as shown in Figure 1, we performed data augmentation using population-level data combined with synthetic data. The emphasis during this phase was on extracting common features across the population. The introduction of synthetic data not only enriched the diversity of user behavior patterns but also maintained a high level of faithfulness
to real user data. This balance between diversity and fidelity enabled the model to generalize more effectively. As presented in Table 1, models pre-trained on a mix of real and synthetic data exhibited significant improvements in precision and recall, indicating that synthetic data introduces sufficient variability without compromising the coherence of user trajectories.

• Individual-Level Analysis. In the fine-tuning phase, we synthesized a personalized dataset derived from individual user data to replace real user data. This approach not only ensures privacy but also faithfully captures the individualized behavior patterns critical for intent prediction and behavior modeling. At the individual level, our synthetic data remains faithful to real user behaviors while introducing subtle variations that better capture users' distinct decision-making processes. As shown in Tables 2 and 3, models fine-tuned with synthetic data achieve strong results, and in the augmentation setting outperform models fine-tuned solely on limited real data, particularly on NDCG@3 and NDCG@5. The higher NDCG scores indicate that the synthetic data effectively mirrors individual users' preferences, improving the model's performance in recommendation tasks. Furthermore, the coherence of the generated user trajectories, a critical aspect of faithfulness, ensures that the synthetic data aligns with the unique patterns of real users, thereby enhancing both intent and event prediction accuracy.
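Since NDCG@3 and NDCG@5 figure prominently in these comparisons, a compact reference implementation may help make the metric concrete. This is standard NDCG with binary relevance, our own sketch rather than the authors' evaluation code:

```python
import math

def ndcg_at_k(ranked_items, relevant, k):
    """NDCG@k with binary relevance: gain 1 for each relevant item,
    discounted by log2(position + 1); normalized by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked_items[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# A model ranks candidate intents; intent 7 is the ground truth.
print(ndcg_at_k([7, 2, 5, 1, 9], relevant={7}, k=3))            # 1.0: hit at rank 1
print(round(ndcg_at_k([2, 7, 5, 1, 9], relevant={7}, k=3), 3))  # 0.631: hit at rank 2
```

With a single relevant intent per prediction, NDCG@k rewards placing the true intent near the top of the ranked list rather than merely inside the top k.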
Method      | Tencent, Bert4Rec       | Tencent, PITuning       | Smartphone, Bert4Rec    | Smartphone, PITuning
            | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5   | Pre   Rec   N@3   N@5
Finetuned   | 0.495 0.528 0.709 0.715 | 0.455 0.493 0.697 0.695 | 0.322 0.366 0.594 0.614 | 0.306 0.355 0.627 0.668
SeqGAN      | 0.261 0.297 0.494 0.496 | 0.251 0.287 0.488 0.493 | 0.331 0.377 0.600 0.621 | 0.315 0.343 0.624 0.675
DiffuSeq    | 0.219 0.298 0.494 0.501 | 0.207 0.282 0.504 0.503 | 0.333 0.376 0.596 0.621 | 0.316 0.356 0.628 0.674
UPC_SDG     | 0.309 0.277 0.407 0.419 | 0.277 0.278 0.435 0.439 | 0.339 0.378 0.600 0.621 | 0.308 0.354 0.635 0.672
Ours        | 0.545 0.547 0.708 0.709 | 0.541 0.541 0.728 0.722 | 0.345 0.398 0.612 0.635 | 0.328 0.364 0.643 0.682
Improvement | 10.1% 3.5%  -0.1% -0.8% | 18.9% 9.7%  4.4%  3.9%  | 1.8%  5.3%  2%    2.3%  | 3.8%  2.2%  1.3%  1.0%

Table 3: Overall prediction performance of Finetuning Augmentation compared with baselines on the Tencent and Smartphone datasets. "Finetuned" uses limited real data only; SeqGAN, DiffuSeq, UPC_SDG, and Ours use limited real data plus synthetic data. Improvement is calculated as (ours - best result among the finetuned model and baselines) / (best result among the finetuned model and baselines).

5.3 Ablation Study

In this study, we evaluated our proposed method through a series of ablation experiments designed to assess the impact of each component on the quality of the generated behavioral data. The results are summarized in Table 4 and include several key performance indicators for evaluating the efficacy of our approach.

Method      | KS_P  | BLEU  | BD    | JSD   | Pass@1
no_profile  | 0.231 | 0.444 | 0.068 | 0.053 | 100%
no_role     | 0.213 | 0.449 | 0.066 | 0.053 | 97%
our         | 0.327 | 0.512 | 0.050 | 0.029 | 100%
no_segment  | 0.489 | 0.492 | 0.035 | 0.041 | 22.5%
no_format   | nan   | nan   | nan   | nan   | 0%

Table 4: Results comparing variants of our data generation method on several evaluation metrics: KS_P, BLEU,
BD, JSD, and Pass@1. The highest values for KS_P and BLEU and the lowest values for BD and JSD are highlighted.

We use the following metrics: KS_P measures the discrepancy between the distributions of generated and real data, with higher values indicating better alignment. BLEU assesses the n-gram overlap between generated and reference text, where a higher score signifies greater textual similarity. BD quantifies the similarity between two probability distributions, with lower values indicating greater similarity. JSD evaluates the similarity between distributions, ranging from 0 to 1, where lower scores denote closer alignment. Finally, Pass@1 reflects the proportion of instances where the model successfully predicts user behavior.

Profile information: As shown in Table 4, profile information significantly improves model performance, with KS_P increasing from 0.231 to 0.327 and JSD decreasing from 0.053 to 0.029, indicating better distribution alignment and enhanced generation quality.

Role setting: The "no_role" variant shows moderate performance in KS_P and BLEU, indicating that including role information positively impacts the diversity and coherence of the generated output. Its relatively low BD and JSD values suggest that it still produces a fairly faithful representation of the target distribution, and its 97% Pass@1 indicates that correct outputs were produced in most cases.

Format restrictions: The "no_format" variant shows NaN values across all metrics, indicating that this setting was unable to produce outputs in the correct format, resulting in a complete loss of data usability. The 0% Pass@1 further emphasizes that the outputs were entirely unusable, underscoring the critical role of format restrictions in generating coherent and interpretable results. Neglecting format considerations severely hampers the model's ability to produce valid outputs.
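Of the distribution-level metrics in Table 4, JSD is the easiest to reproduce. The sketch below computes the base-2 Jensen-Shannon divergence between the empirical intent distributions of real and generated logs; the helper names and toy records are our own, and the intent is assumed to be the last field of each [weekday, timestamp, loc, intent] record:

```python
import math
from collections import Counter

def intent_distribution(records, num_intents):
    """Empirical distribution over intent IDs in a behavior log."""
    counts = Counter(rec[-1] for rec in records)   # intent is the last field
    total = sum(counts.values())
    return [counts.get(i, 0) / total for i in range(num_intents)]

def jsd(p, q):
    """Jensen-Shannon divergence, base 2: 0 = identical, 1 = disjoint."""
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

real      = [[0, 9, 12, 3], [1, 20, 5, 3], [2, 8, 5, 1], [3, 19, 9, 0]]
generated = [[0, 10, 12, 3], [1, 21, 5, 3], [2, 9, 5, 1], [3, 18, 9, 1]]
p = intent_distribution(real, num_intents=4)
q = intent_distribution(generated, num_intents=4)
print(round(jsd(p, q), 3))  # small value: the two logs nearly match
```

The KS statistic (KS_P) can be computed analogously from the cumulative versions of these empirical distributions.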
Segmented generation: The "no_segment" variant achieves the highest KS_P score and a competitive BLEU score, and its low BD and JSD values indicate outputs closely aligned with the intended data distribution. However, its low Pass@1 score (22.5%) implies that while the outputs are diverse and coherent, they often fail to match user expectations or specific intents, leading to a low success rate in producing correct outputs. We therefore adopt segmented generation combined with role setting and format restrictions, ensuring the generated data maintains both diversity and fidelity while consistently producing effective and usable outputs. The prompt used in our method can be found in Appendix A.3.

5.4 Case Study: Intent Distribution Analysis

In this case study, we analyze the intent distribution at both the population level and the individual user level to demonstrate the necessity and effectiveness of the fine-tuning phase in our model. Specifically, we examine how well the synthetic data captures individual users' intent distributions compared to the population-level distribution. We selected two users, User A and User B, for a comparative analysis of their intent distributions.

[Figure 2: Population-level and individual intent distributions for (a) User A and (b) User B.]
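The comparison underlying this case study can be sketched numerically: pool all users' logs into a population-level intent distribution, then measure how far each individual's distribution deviates from it. The toy logs and the choice of total variation distance as the deviation measure are our own illustrative assumptions:

```python
from collections import Counter

def distribution(intents, num_intents):
    """Empirical distribution over intent IDs."""
    counts = Counter(intents)
    total = len(intents)
    return [counts.get(i, 0) / total for i in range(num_intents)]

def total_variation(p, q):
    """Total variation distance: half the L1 distance between distributions."""
    return 0.5 * sum(abs(x - y) for x, y in zip(p, q))

# Toy intent logs for two users (hypothetical data).
user_a = [0, 0, 0, 1, 2]   # skewed toward intent 0
user_b = [2, 2, 1, 2, 2]   # skewed toward intent 2
population = user_a + user_b

pop_dist = distribution(population, num_intents=3)
for name, log in [("A", user_a), ("B", user_b)]:
    dev = total_variation(distribution(log, 3), pop_dist)
    print(f"user {name}: deviation from population = {dev:.2f}")
```

A large per-user deviation from the pooled distribution is exactly the situation in which a population-level model benefits from individual-level fine-tuning.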
Figure 2(a) and Figure 2(b) present the real intent distributions of these users alongside the intent distributions generated from synthetic data; the population-level intent distribution is shown in grey. For both User A and User B, the real intent distribution (shown in blue) deviates markedly from the population-level distribution. In contrast, the synthetic data (shown in orange) aligns closely with the real intent distribution, validating the hypothesis that synthetic data can faithfully represent individual user behaviors.

The discrepancies between the population-level intent distribution and the individual user intent distributions underscore the necessity of the fine-tuning phase. By utilizing synthetic data tailored to reflect individual users' intents, we can enhance the model's performance in personalized recommendation tasks. The findings from this analysis confirm that the fidelity of synthetic data is crucial: it ensures that the model not only generalizes well across the population but also effectively adapts to the unique preferences of individual users.

6 Conclusion

This preliminary study explores the potential of large language models (LLMs) for generating synthetic user behavior data. Experimental results across three synthetic data usage scenarios show promising performance in enhancing two downstream behavior prediction applications. These findings suggest that the generated synthetic behavior data effectively captures both population-level diversity and individual-level specificity, reflecting the complexity of human daily behavioral patterns.

7 Limitations

Ethical Considerations. The ethical implications of using real behavior data in this study are of utmost importance.
While the data we used is anonymized and preprocessed by our providers using privacy-preserving techniques, including differential privacy, to prevent any risk of personal identification, it is still necessary to address potential concerns around privacy. The use of differential privacy ensures that individual-level data cannot be reconstructed from aggregated information, further strengthening data security. We have signed non-disclosure agreements (NDAs) with our data providers and work under their supervision to ensure responsible data handling and analysis.

Bias. Since our work uses real user data to prompt LLMs in generating synthetic behavior data, there are two potential sources of bias. The first is the empirical data provided, which may not equally represent all user groups, potentially biasing how certain behaviors or demographics are modeled. The second stems from the LLMs themselves, which may exhibit biases reflecting imbalances or stereotypes present in their pretraining corpora. To address these concerns, we plan to implement several mitigation strategies, including applying fairness-aware techniques during both data preprocessing and model prompting to ensure diverse and equitable representation across user groups.

Future Directions. There are several areas where our work can be further enhanced. First, developing more data-efficient generation methods is crucial, as behavior prediction scenarios typically involve large volumes of training data. Reducing the dependency on massive datasets without compromising model performance would significantly improve scalability and practicality. Second, improving the underlying LLMs to better understand and model human
daily activities will be key to generating higher-quality synthetic data (Ding et al., 2024; Zhang et al., 2025).

References

Martin Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, CCS '16, pages 308-318, New York, NY, USA. Association for Computing Machinery.

Tom B. Brown. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165.

Derek Chen, Celine Lee, Yunan Lu, Domenic Rosati, and Zhou Yu. 2023. Mixture of soft prompts for controllable data generation. arXiv preprint arXiv:2303.01580.

Jiawei Chen, Hande Dong, Yang Qiu, Xiangnan He, Xin Xin, Liang Chen, Guli Lin, and Keping Yang. 2021. AutoDebias: Learning to debias for recommendation. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 21-30.

Hyunji Chung and Sangjin Lee. 2018. Intelligent virtual assistant knows your life. arXiv preprint arXiv:1803.00466.

YA DeMontjoye. 2013. Seqgan: Sequence generative adversarial nets with policy gradient. 31(1).

Jingtao Ding, Fuli Feng, Xiangnan He, Guanghui Yu, Yong Li, and Depeng Jin. 2018. An improved sampler for Bayesian personalized ranking by leveraging view data. In Companion Proceedings of The Web Conference 2018, pages 13-14.

Jingtao Ding, Yuhan Quan, Quanming Yao, Yong Li, and Depeng Jin. 2020. Simplify and robustify negative sampling for implicit collaborative filtering. Advances in Neural Information Processing Systems, 33:1094-1105.

Jingtao Ding, Yunke Zhang, Yu Shang, Yuheng Zhang, Zefang Zong, Jie Feng, Yuan Yuan, Hongyuan Su, Nian Li, Nicholas Sukiennik, et al. 2024. Understanding world or predicting future? A comprehensive survey of world models. arXiv preprint arXiv:2411.14499.
Jiahui Gong, Jingtao Ding, Fanjin Meng, Guilong Chen, Hong Chen, Shen Zhao, Haisheng Lu, and Yong Li. 2024. A population-to-individual tuning framework for adapting pretrained LM to on-device user intent prediction. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD '24, pages 896-907, New York, NY, USA. Association for Computing Machinery.

Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, and Lingpeng Kong. 2022. DiffuSeq: Sequence to sequence text generation with diffusion models. arXiv preprint arXiv:2210.08933.

Xu Guo and Yiqiang Chen. 2024. Generative AI for synthetic data generation: Methods, challenges and the future. arXiv preprint arXiv:2403.04190.

Xu Guo, Boyang Li, and Han Yu. 2022. Improving the sample efficiency of prompt tuning with domain adaptation. arXiv preprint arXiv:2210.02952.

Xingwei He, Zhenghao Lin, Yeyun Gong, Alex Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen, et al. 2023. AnnoLLM: Making large language models to be better crowdsourced annotators. arXiv preprint arXiv:2303.16854.

Tong Li, Tong Xia, Huandong Wang, Zhen Tu, Sasu Tarkoma, Zhu Han, and Pan Hui. 2022. Smartphone app usage analysis: Datasets, methods, and applications. IEEE Communications Surveys & Tutorials, 24(2):937-966.

Xuchuan Li, Fei Huang, Jianrong Lv, Zhixiong Xiao, Guolong Li, and Yang Yue. 2024. Be more real: Travel diary generation using LLM agents and individual profiles.
arXiv preprint arXiv:2407.18932.

Fan Liu, Zhiyong Cheng, Huilin Chen, Yinwei Wei, Liqiang Nie, and Mohan Kankanhalli. 2022. Privacy-preserving synthetic data generation for recommendation systems. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1379-1389.

Lin Long, Rui Wang, Ruixuan Xiao, Junbo Zhao, Xiao Ding, Gang Chen, and Haobo Wang. 2024. On LLMs-driven synthetic data generation, curation, and evaluation: A survey. arXiv preprint arXiv:2406.15126.

Xufang Luo, Zheng Liu, Shitao Xiao, Xing Xie, and Dongsheng Li. 2022. MindSim: User simulator for news recommenders. In WWW, pages 2067-2077.

Devaki Nadkarni. 2016. Superforecasting: The art & science of forecasting. Anvesha, 9(1):61.

Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1-22.

Vladimir Provalov, Elizaveta Stavinova, and Petr Chunaev. 2021. Synevarec: A framework for evaluating recommender systems on synthetic data classes. In 2021 International Conference on Data Mining Workshops (ICDMW), pages 55-64.

Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-7.

Germans Savcisens, Tina Eliassi-Rad, Lars Kai Hansen, Laust Hvas Mortensen, Lau Lilleholt, Anna Rogers, Ingo Zettler, and Sune Lehmann. 2023. Using sequences of life-events to predict human lives. Nature Computational Science, pages 1-14.

Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, and Yong Li. 2024. Beyond imitation: Generating human mobility from context-aware reasoning with large language models. arXiv preprint arXiv:2402.09836.
Jing-Cheng Shi, Yang Yu, Qing Da, Shi-Yong Chen, and An-Xiang Zeng. 2019. Virtual-Taobao: Virtualizing real-world online retail environment for reinforcement learning. In AAAI, volume 33, pages 4902-4909.

Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3-18.

Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 1441-1450.

Amrita S. Tulshan and Sudhir Namdeorao Dhage. 2019. Survey on virtual assistant: Google Assistant, Siri, Cortana, Alexa. In Advances in Signal Processing and Intelligent Recognition Systems: 4th International Symposium SIRS 2018, Bangalore, India, September 19-22, 2018, Revised Selected Papers 4, pages 190-201. Springer.

Jiawei Wang, Renhe Jiang, Chuang Yang, Zengqing Wu, Makoto Onizuka, Ryosuke Shibasaki, and Chuan Xiao. 2024. Large language models as urban residents: An LLM agent framework for personal mobility generation. arXiv preprint arXiv:2402.14744.

Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. SeqGAN: Sequence generative adversarial nets with policy gradient. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1).

Yue Yu, Yuchen Zhuang,
Jieyu Zhang, Yu Meng, Alexander Ratner, Ranjay Krishna, Jiaming Shen, and Chao Zhang. 2023. Large language model as attributed training data generator: A tale of diversity and bias. Advances in Neural Information Processing Systems.

Yuan Yuan, Jingtao Ding, Depeng Jin, and Yong Li. 2025. Learning the complexity of urban mobility with deep generative network. PNAS Nexus, 4(5):pgaf081.

Yuan Yuan, Huandong Wang, Jingtao Ding, Depeng Jin, and Yong Li. 2023. Learning to simulate daily activities via modeling dynamic human needs. pages 906–916.

Hongtao Zhang and Lingcheng Dai. 2018. Mobility prediction: A survey on state-of-the-art schemes and future applications. IEEE Access, 7:802–822.

Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys (CSUR), 52(1):1–38.

Yu Zhang, Shutong Qiao, Jiaqi Zhang, Tzu-Heng Lin, Chen Gao, and Yong Li. 2025. A survey of large language model empowered agents for recommendation and search: Towards next-generation information retrieval. arXiv preprint arXiv:2503.05659.

Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223.

Larisa Zherdeva, Denis Zherdev, and Artem Nikonorov. 2021. Prediction of human behavior with synthetic data. In 2021 International Conference on Information Technology and Nanotechnology (ITNT), pages 1–6.

A Appendix

A.1 Details of Metrics

We employ three widely used metrics: precision (Pre), recall (Rec), and NDCG (N). The calculation of each metric is as follows.
The formula for Pre:

Pre = \frac{1}{|C|} \sum_{c \in C} \frac{TP_c}{TP_c + FP_c}  (5)

The formula for Rec:

Rec = \frac{1}{|C|} \sum_{c \in C} \frac{TP_c}{TP_c + FN_c}  (6)

where |C| represents the total number of classes, True Positives (TP_c) denotes the number of samples correctly classified as class c, False Positives (FP_c) represents the number of samples incorrectly classified as class c, and False Negatives (FN_c) stands for the number of samples belonging to class c but incorrectly classified as other classes. Precision and Recall respectively refer to the per-class precision and recall, averaged over all classes c.

The formula for N@k:

N@k = \frac{\sum_{i=1}^{K} \frac{2^{rel_i} - 1}{\log_2(i+1)}}{\sum_{j=1}^{|REL_K|} \frac{2^{rel_j} - 1}{\log_2(j+1)}}  (7)

where rel_i means the graded relevance of the result at position i, and |REL_K| means the list of predictions in the result ranking list up to position K.

A.2 Details of Baselines

Here we introduce the details of each baseline.

• SeqGAN (Yu et al., 2017). SeqGAN is a sequence generative adversarial network that models sequence data generation as a reinforcement learning task, utilizing a GAN structure to capture the sequential dependencies in data generation.

• DiffuSeq (Gong et al., 2022). DiffuSeq is a diffusion-based sequence generation model that adapts the diffusion process for text and sequence data generation, offering state-of-the-art performance on various generative tasks by leveraging noise-perturbed transitions during generation.

• UPC_SDG (Liu et al., 2022). UPC_SDG is a user trajectory synthetic data generation model, which focuses on preserving the statistical characteristics of the original data. It generates plausible user trajectories by maintaining important spatiotemporal relationships and is particularly effective for data privacy scenarios.

A.3 Used Prompts

messages = [
    {
        "role": "system",
        "content": """
You are an assistant generating behavioral data based on given user behavior and profile data. I will provide you with a subset of real behavioral data in the format [weekday, timestamp, loc, intent].

Your task:
1. Generate behavioral data for one month (minimum 90 lines) in the exact format: "weekday, timestamp, loc, intent".
2. Make sure to mimic realistic patterns of the given person, such as daily routines, work hours, and leisure activities, while ensuring diversity in location (loc) and intent. Don't have repetitive generation.
3. Ensure the weekday values are within the range of 0-6, and timestamp values are within the range of 0-95.
4. Ensure that generated data has more than 100 lines and is in the correct format.
"""
    },
    {
        "role": "user",
        "content": f"Profile:\n{json.dumps(user_profile)}\nBehavior data:\n{behavior_part.to_string(index=False)}"
    }
]

Figure 3: Prompt for generating behavioral data.

A.4 Study of Segment

We did experiments on segments on a small scale before generating synthetic data for all users of the dataset. We randomly select a batch of 20 users, give the LLM each user's 1 piece, 1 day, 3 days, 7 days, 10 days, etc. of real data, and then fine-tune the pre-trained model with the generated synthetic data to see how the metrics change, as shown in Figure 4. It can be seen that when 7 days of data are provided to the LLM, the effect of the synthetic data is close to convergence. The line charts of the other metrics besides Rec also show this trend. Although more data could be provided, the weekly segment is considered the best choice for cost and benefit considerations.

Figure 4: Segment study.

A.5 Study of Privacy Analysis

To prove that the synthetic data generated by our framework does not leak individual privacy, we perform experiments from three aspects (Yuan et al., 2025).

• Uniqueness testing (DeMontjoye, 2013).
This measure evaluates whether the generated data is completely identical to the original data. It highlights the extent to which the model directly generates copies instead of brand-new data. To prove that a realistic generated mobility trajectory is not a simple copy of a real trajectory but a brand-new one, we perform uniqueness testing by comparing it with the real data. We randomly select generated trajectories and compare them with all the real trajectories from the training set. The two trajectories are aligned in the time dimension point by point, and we determine whether the locations at the corresponding time points are exactly the same. The overlapping ratio is defined as the ratio of the number of identical locations to the total trajectory length. Next, the real trajectory most similar to a generated one is
defined as the one with the highest overlapping ratio. We calculate the overlapping-ratio distribution of all the generated trajectories against their most similar real trajectories. The results can also be extended by considering more similar trajectories, e.g., the top-3 and top-5 most similar real trajectories.

Figure 5: Privacy evaluation in terms of uniqueness testing. (a) Smartphone; (b) Tencent.

As shown in Supplementary Figure 5, for the Smartphone dataset, more than 80% of the generated mobility trajectories cannot find any real trajectories that have more than a 30% overlapping ratio with them. For the Tencent dataset, more than 80% of the generated mobility trajectories overlap with real trajectories with an overlapping ratio of less than 50%. These results demonstrate that, while capturing mobility patterns, our framework indeed learns to generate brand-new and unique trajectories rather than simply copying.

• Membership inference attack (Shokri et al., 2017). If the generated data does not reveal the identities of users from the original data, it should not be possible to use the generated data to re-identify users in the training set. For this purpose, we use the framework of membership inference attack (Shokri et al., 2017). Stronger privacy protection leads to a lower attack success rate. Given a deep learning model and an individual record, the goal of the attack is to determine whether this record was included in the training set or not. We follow the attack settings as described in Shokri et al. (2017), where the attacker's access to the deep learning model allows them to obtain the model's output. To improve the attack performance, we estimate individual information leakage using powerful machine learning models trained to predict whether an individual is in the training set.
To control the impact of classification methods, we include four commonly used classification algorithms: Logistic Regression (LR), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Random Forest (RF). The positive samples are those individuals in the training data, while the negative samples are not. The input feature is the overlapping ratio of multiple runs. The evaluation metric is the success rate, defined as the percentage of successful trials in determining whether a sample is in the training set. Stronger privacy protection leads to a lower success rate. As shown in Supplementary Figure 6, on the Smartphone dataset the attack success rate is less than 0.55, and on the Tencent dataset it is less than 0.74. This result indicates that attackers can hardly infer whether individuals are in the training set based solely on the information of the generated urban mobility data. Thus, our framework demonstrates robustness against membership inference attacks.

• Differential privacy (Abadi et al., 2016). A model is differentially private if, for any pair of training datasets D and D′ that differ in the record of a single user, it holds that:

M(z; D) ≤ e^ε · M(z; D′) + δ,

which means one can hardly distinguish whether any individual is included in the original dataset or not by looking at the output. It is a rigorous mathematical definition of privacy. For the output z, M(z, D) denotes the probability
distribution of z with the data D as the input. Smaller values of ε and δ provide stronger privacy guarantees. In our experiment, we examine the privacy budget of our proposed model from the perspective of changes in the overlapping ratio. Specifically, the overlapping ratio of each individual, under the conditions that this individual is included in the training set or not, is modeled by two Gaussian distributions, which are then regarded as M(z, D) and M(z, D′) to calculate the privacy budget ε. For each user, we compute ε using TensorFlow Privacy (Abadi et al., 2016). The cumulative distribution of ε is illustrated in Supplementary Figure 7. We observe that, without any additional privacy-preserving mechanism, when the CDF is less than 0.9, our model achieves a maximum privacy budget of ε < 4, which is typically considered a reasonable operating point for generative models. For example, Apple adopts a privacy budget of ε = 4.0. The privacy budget can be further improved by incorporating DP-SGD or DP-GAN.

Figure 6: Privacy evaluation in terms of membership inference attack. (a) Smartphone; (b) Tencent.

Figure 7: Privacy evaluation in terms of differential privacy. (a) Smartphone; (b) Tencent.
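The overlapping-ratio computation underlying the uniqueness test above can be sketched as follows. This is a minimal illustration, not the authors' released code: trajectories are assumed to be equal-length sequences of discrete location IDs, and the function names are our own.

```python
# Sketch of the uniqueness test: for each generated trajectory, find the
# maximum overlapping ratio against all real training trajectories.
# Trajectories are assumed to be equal-length sequences of location IDs.

def overlapping_ratio(gen_traj, real_traj):
    """Fraction of time points whose locations match exactly."""
    matches = sum(g == r for g, r in zip(gen_traj, real_traj))
    return matches / len(gen_traj)

def max_overlap(gen_traj, real_trajs):
    """Overlapping ratio with the most similar real trajectory."""
    return max(overlapping_ratio(gen_traj, r) for r in real_trajs)

# Toy example with 4 time slots per trajectory.
real = [[1, 2, 3, 4], [5, 6, 7, 8]]
generated = [1, 2, 9, 9]  # shares 2 of 4 points with the first real trajectory
print(max_overlap(generated, real))  # 0.5
```

A low maximum overlap for most generated trajectories is what the uniqueness test reports as evidence against copying.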
arXiv:2505.17616v1 [cs.CL] 23 May 2025

Runaway is Ashamed, But Helpful: On the Early-Exit Behavior of Large Language Model-based Agents in Embodied Environments

Qingyu Lu♢, Liang Ding♡, Siyi Cao♢, Xuebo Liu❄, Kanjian Zhang♢♠*, Jinxia Zhang♢, Dacheng Tao✿
♢Southeast University ♡The University of Sydney ♠Southeast University Shenzhen Research Institute
❄Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
✿College of Computing and Data Science at Nanyang Technological University, Singapore 639798
luqingyu@seu.edu.cn, liangding.liam@gmail.com
https://github.com/Coldmist-Lu/AgentExit

Abstract

Agents powered by large language models (LLMs) have demonstrated strong planning and decision-making capabilities in complex embodied environments. However, such agents often suffer from inefficiencies in multi-turn interactions, frequently trapped in repetitive loops or issuing ineffective commands, leading to redundant computational overhead. Instead of relying solely on learning from trajectories, we take a first step toward exploring the early-exit behavior of LLM-based agents. We propose two complementary approaches: ❶ an intrinsic method that injects exit instructions during generation, and ❷ an extrinsic method that verifies task completion to determine when to halt an agent's trial. To evaluate early-exit mechanisms, we introduce two metrics: one measures the reduction of redundant steps as a positive effect, and the other evaluates progress degradation as a negative effect. Experiments with 4 different LLMs across 5 embodied environments show significant efficiency improvements, with only minor drops in agent performance. We also validate a practical strategy where a stronger agent assists after an early-exit agent, achieving better performance with the same total steps. We will release our code to support further research.
1 Introduction

Large Language Models (LLMs, Achiam et al., 2023) have shifted the paradigm from merely responding to user inputs to tackling more complex tasks within interactive environments such as household settings (Shridhar et al., 2021), virtual worlds (Park et al., 2023), and games (Hu et al., 2024). LLM-based agents serve as intelligent controllers, capable of perceiving environments, executing actions, and adapting through feedback (Wang et al., 2024; Luo et al., 2025). Previous studies show that structured workflows—such as reasoning before acting (Yao et al., 2023), predicting future states (Fu et al., 2025b), and learning from high-quality trajectories (Chen et al., 2024b; Song et al., 2024)—can improve performance within a single trial. When agents do fail, post-hoc approaches such as Reflexion (Shinn et al., 2023), AutoPlan (Ouyang and Li, 2023), and ExpeL (Zhao et al., 2024) enable them to learn from failures and replan more effective solutions in subsequent trials.

*Corresponding Author.

Figure 1: Early-exit behavior of different LLM-based agents in embodied environments (success rate vs. average steps for Llama3.1-8B/70B-Instruct and Mistral-7B/24B-Instruct, ReAct Agent vs. + Early Exit). While early termination slightly reduces the success rate, it significantly decreases the average number of interaction steps, indicating improved efficiency.

However, a key limitation of LLM-based agents remains underexplored: they often fail to recognize when a goal is too difficult or when they are stuck. Prior work shows that agents may repeat the same errors in unproductive loops without meaningful actions or self-correction (Fu et al., 2025a), leading to unnecessary computational overhead. This issue
becomes even more critical in real-world settings, where repeated mistakes by embodied agents can waste energy, cause wear-and-tear, or even damage physical objects in the environment. Therefore, incorporating built-in self-awareness mechanisms can help agents detect when progress has stalled, enabling early self-reflection and adjustment.

Figure 2: A comparative overview of our proposed Intrinsic and Extrinsic Early Exit with a ReAct agent. The intrinsic approach injects an exit instruction to guide the agent to self-terminate, while the extrinsic approach uses a verification module to determine whether to exit based on the current state.

To this end, we take the first step by investigating the early-exit behavior of LLM-based agents. As shown in Figure 2, we propose two complementary strategies: ❶ Intrinsic Early Exit, which injects exit instructions directly into the agent's prompts to encourage self-recognition of when to halt; and ❷ Extrinsic Early Exit, which introduces an external verification module that monitors the interaction status and outputs a binary (YES/NO) decision to control whether the agent should continue.
In addition to using success rate and progress rate (Chang et al., 2024) to evaluate agent performance, we propose two new metrics to assess the impact of the early-exit mechanism. Redundancy Steps quantifies the positive effect by measuring reductions in unnecessary interactions, while Progress Degradation captures the potential negative impact, indicating cases where exiting early may interrupt or reverse meaningful progress.

We conduct experiments on 5 datasets spanning over 400 environments and find that the early-exit mechanism significantly improves efficiency, with only a minor drop in task success and progress rates, as shown in Figure 1. We also propose a practical use of early-exit behavior: once the agent exits early, a stronger agent reflects on the state and continues exploration, achieving improved performance within the same total steps.

Our contributions are three-fold:

• We present the first investigation into early-exit behavior in LLM-based agents, proposing two strategies that enable agents to develop self-awareness and terminate execution without external intervention.

• We introduce two complementary metrics to evaluate the effectiveness of early exit. These metrics can serve as standardized tools for assessing agent behavior and guiding the selection of optimal exiting strategies.

• Our proposed methods
generalize across various LLM-based agents and task settings. We further demonstrate the practical value of our approach by introducing post-trial strategies that leverage stronger agents to enhance overall performance.

This study is an initial step toward exploring early-exit behavior in LLM-based agents. Our approach encourages agents to make efficient decisions, avoid unnecessary interactions, and achieve a trade-off between efficiency and task performance.

2 Approach

2.1 Task Formulation

Embodied Environments In embodied environments, an agent interacts with the world through actions and receives feedback from the environment. This interaction can be modeled as a special case of a Partially Observable Markov Decision Process (POMDP), defined by an instruction space U, state space S, action space A, observation space O, and a transition function T: S × A → S.

LLM-based Agents In this work, we focus on text-based environments, where the instruction, action, and observation spaces are all expressed in natural language. The agent is provided with an instruction u, which includes a description of the task and environment, as well as the goal to be achieved. At each time step t, the agent, guided by a policy π_θ (typically an LLM with parameters θ), must decide on the next action a_t based on the trajectory history e_t. This decision-making process is formalized as:

a_t ∼ π_θ(· | e_t, u),  (1)

where e_t = (a_1, o_1, ..., a_{t−1}, o_{t−1}) denotes the full trajectory up to time t, including previous actions and observations. In this way, the agent continually explores the environment, using feedback from observations to inform its next actions, until the task is completed or a predefined maximum number of steps is reached.

2.2 Dynamic Early Exit

We propose two simple but effective early-exit strategies, Intrinsic Early Exit and Extrinsic Early Exit, that enable the agent to terminate its interaction when appropriate.
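The interaction loop of Eq. (1), together with hooks for the two exit strategies just named, can be sketched as follows. This is an illustrative skeleton under our own naming, not the authors' released code: `llm_act`, `verify_exit`, and the environment interface are hypothetical placeholders.

```python
# Minimal sketch of the agent-environment loop with early-exit hooks.
# llm_act, verify_exit, and the env interface are hypothetical placeholders.

def run_episode(env, llm_act, verify_exit=None, max_steps=40, k=1):
    """Roll out a_t ~ pi_theta(. | e_t, u) until done, EXIT, or max_steps."""
    u = env.instruction          # task description + goal (+ exit instruction)
    history = []                 # e_t = (a_1, o_1, ..., a_{t-1}, o_{t-1})
    for t in range(max_steps):
        action = llm_act(history, u)         # sample next action from the LLM
        if action == "EXIT":                 # intrinsic early exit
            return history, "intrinsic_exit"
        obs, done = env.step(action)
        history.append((action, obs))
        if done:
            return history, "success"
        # extrinsic early exit: query the verifier every k steps
        if verify_exit and (t + 1) % k == 0 and verify_exit(history, u):
            return history, "extrinsic_exit"
    return history, "max_steps"
```

The intrinsic strategy only changes the prompt (so "EXIT" can appear as an action), while the extrinsic strategy adds the `verify_exit` call without touching the acting policy.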
Intrinsic Early Exit This strategy modifies the behavior of the LLM agent by appending a natural language prompt that allows it to terminate the interaction with the environment when deemed necessary. The exit instruction can be formulated as:

u_intrinsic = concat(u, u_exit).  (2)

In this way, the LLM may develop an intention to terminate based on the additional instruction u_exit, leading to different actions and trajectories. As shown in Figure 2, the agent is prompted with an instruction to exit once the task is complete. After examining the relevant objects, the agent generates an "EXIT" action to terminate the interaction.

Extrinsic Early Exit This strategy introduces a verification module v_θ, which shares the same LLM backbone. The verification module operates after each action and observation, evaluating whether the agent should continue the task. It outputs a binary decision: "YES" to exit or "NO" to continue execution. Specifically, it functions as follows:

u_extrinsic = concat(u, u_exit),  (3)

v_θ(· | e_t, u_extrinsic) ∈ {0, 1}.  (4)

The agent is verified periodically every k steps. In our experiments, we set k = 1.¹ As shown in Figure 2, the verification module detects that the agent is stuck and triggers an early exit, effectively avoiding further repetitive steps.

2.3 Evaluation

Typically, the performance of
agents in embodied environments is evaluated using Success Rate and Progress Rate. To intuitively demonstrate the behavior of the early-exit mechanism on LLM-based agents, we propose two complementary metrics that capture both its positive and negative effects. These metrics are defined as follows:

¹We set k = 1 to enable timely detection in our experiments. In practice, larger values (e.g., k = 2–5) can be used to reduce computational overhead.

Success Rate (SR) The environment is marked as successful if the agent completes the given task, typically when it reaches a predefined latent state that signifies task completion. A higher success rate indicates that the agent is more effective at solving environments under the same task.

Progress Rate (PR) Progress Rate, proposed by Chang et al. (2024), quantifies the extent to which an agent advances toward the task goal, making it particularly valuable for evaluating incremental improvements. In embodied environments, the task goal is decomposed into a sequence of subgoals G = [g_1, ..., g_K], where each subgoal contributes progressively to task completion. At each time step t, the progress is defined as:

r_t = max_{i, 0 ≤ i ≤ t} ( (1/K) Σ_{k=1}^{K} f(s_i, g_k) ),  (5)

where f(s_i, g_k) ∈ {0, 1} is a binary indicator function that evaluates whether the agent state s_i satisfies subgoal g_k, typically determined via regular-expression-based matching. PR offers a more fine-grained and informative evaluation of agent behavior than binary success metrics alone.

New Metric 1: Redundancy Steps (RS) The primary purpose of introducing the early-exit mechanism is to reduce redundant steps in the agent's interaction with the environment. As illustrated in Figure 3(a), after completing subgoal 3 out of 4, the agent continues exploring unnecessarily for 5 additional steps before ultimately failing. Early exit can mitigate this issue while maintaining the same level of progress.
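The Progress Rate of Eq. (5) can be computed as in the following sketch. Here `satisfies` is a simple membership check standing in for the regular-expression-based indicator f(s_i, g_k); the states and subgoals are illustrative, not from the benchmarks.

```python
# Sketch of Progress Rate (Eq. 5): the best fraction of subgoals satisfied
# by any state visited up to step t. `satisfies` stands in for the
# regex-based indicator f(s_i, g_k).

def progress_rate(states, subgoals, satisfies):
    K = len(subgoals)
    best = 0.0
    for s in states:  # states s_0 ... s_t
        frac = sum(satisfies(s, g) for g in subgoals) / K
        best = max(best, frac)
    return best

# Toy example: a state is the set of subgoals achieved so far.
subgoals = ["open drawer", "take bowl", "use lamp", "examine bowl"]
states = [set(), {"open drawer"}, {"open drawer", "take bowl"}]
print(progress_rate(states, subgoals, lambda s, g: g in s))  # 0.5
```

Taking the maximum over visited states makes the metric monotone in t, so later exploration can never lower the reported progress.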
Let n_total denote the total number of steps in the trajectory, and n_subgoal be the index of the last step that achieves a new subgoal. The Redundancy Steps is defined as:

RS = n_total − n_subgoal.  (6)

For trivial cases, RS = n_total if the agent fails to complete any subgoal (i.e., PR = 0). If the agent successfully completes the entire task, RS = 0, meaning that all steps are considered useful.

Figure 3: An overview of the proposed metrics. (a) Redundancy Steps measures the number of redundant steps. (b) Progress Degradation measures task progress loss via reduced subgoal completion.

New Metric 2: Progress Degradation (PD) The early-exit mechanism may also negatively impact agent performance by prematurely terminating trajectories that might have led to further progress. This can suppress the agent's potential, causing missed subgoals or converting potentially successful trials into failures. To quantify this loss, we define Progress Degradation as:

PD = max(PR_ref − PR_exit, 0),  (7)

where PR_ref denotes the progress rate without exit, while PR_exit is the progress rate when early exit is applied.² As shown in Figure 3(b), the agent exits 3 steps early, leaving an otherwise successful environment unfinished with only 75% progress, resulting in a
25% loss in progress. Progress Degradation ranges from 0 (no degradation) to PR_ref (complete loss of progress). A higher PD indicates greater performance loss. In the trivial case, PD = 0 implies no degradation, while PD = PR_ref indicates complete progress failure (e.g., all environments terminate at the first step).

²Progress degradation is only meaningful when compared against a reference baseline.

3 Experimental Setup

Datasets We evaluate our methods across 3 embodied environments and 2 gaming environments. For embodied environments, AlfWorld (Shridhar et al., 2021) includes 134 household tasks that require agents to explore their surroundings and complete instructions such as "Look at bowl under the desklamp." ScienceWorld (Wang et al., 2022) simulates a total of 90 scientific experiments in an interactive setting, such as "measure the melting point." BabyAI (Chevalier-Boisvert et al., 2019) is a 20x20 grid-based environment where agents must navigate and interact with objects to accomplish 112 defined goals. We also consider two gaming environments. Jericho (Hausknecht et al., 2020) comprises 20 text-based fictional worlds, which we adapt using the setup from Chang et al. (2024) to be completed within 15 subgoals. PDDL represents a suite of strategic planning tasks defined in the Planning Domain Definition Language (Vallati et al., 2015). Following Chang et al. (2024), we include four distinct games, namely, 60 unique environments for evaluation.

LLMs To ensure reproducibility, we evaluate four open-source large language models with varying parameter sizes. From the LLaMA 3.1 series³ (Grattafiori et al., 2024), developed by Meta, we use two instruction-tuned models: the 8B version (Llama3.1-8B-Instruct) and the 70B version (Llama3.1-70B-Instruct), with the latter quantized using 4-bit AWQ (Lin et al., 2024) for efficient inference.
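The RS and PD metrics defined in Section 2.3 (Eqs. 6–7) can be computed as in the following sketch. It assumes the per-step progress values r_t are already available as a list; the trajectory below mirrors the Figure 3(a) example and is illustrative only.

```python
# Sketch of Redundancy Steps (Eq. 6) and Progress Degradation (Eq. 7).
# `progress` holds the per-step progress value r_t for one trajectory.

def redundancy_steps(progress):
    """Steps after the last new subgoal was achieved."""
    n_total = len(progress)
    if not progress or max(progress) == 0:
        return n_total                 # no subgoal completed: all steps redundant
    if progress[-1] == 1.0:
        return 0                       # task solved: all steps count as useful
    # 1-based index of the last step that increased progress
    n_subgoal = max(i + 1 for i, r in enumerate(progress)
                    if r > (progress[i - 1] if i > 0 else 0))
    return n_total - n_subgoal

def progress_degradation(pr_ref, pr_exit):
    return max(pr_ref - pr_exit, 0.0)

# 3 of 4 subgoals done by step 3, then 5 redundant steps before failing.
traj = [0.25, 0.5, 0.75, 0.75, 0.75, 0.75, 0.75, 0.75]
print(redundancy_steps(traj))            # 5
print(progress_degradation(1.0, 0.75))   # 0.25
```

The two trivial cases in the code match the paper's conventions: RS equals the full trajectory length when PR = 0, and RS = 0 on success.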
In addition, we test two models from the Mistral family⁴ (Jiang et al., 2024): Mistral-7B-Instruct (v0.3) and Mistral-24B-Instruct (Mistral-Small-Instruct-2409).

Prompts We adopt ReAct-style (Yao et al., 2023) prompting to enable LLM-based agents to interact effectively with the environment. Following Song et al. (2024), we format the interaction prompt as a multi-turn conversation, including an in-context example for each task. For early-exit instructions, we explore prompt variants with varying strictness levels (see Appendix B), aligning the strategy with specific LLMs.

Hyperparameters For all experiments, we set the temperature to 0.1 and limit each turn's response to a maximum of 256 tokens.

Device and Platform All experiments are conducted on two NVIDIA A100 GPUs with 80GB of memory each. We deploy the models using vLLM (Kwon et al., 2023) for distributed inference and access them through OpenAI-compatible chat completion APIs (Achiam et al., 2023). Evaluation is performed using AgentBoard (Chang et al., 2024), measuring both success rate and progress rate.

³https://huggingface.co/meta-llama
⁴https://huggingface.co/mistralai

Table 1: Performance comparison of two early-exit approaches—Extrinsic (Ext.) and Intrinsic (Int.)—vs. the ReAct baseline across four LLMs in three embodied environments. Red indicates negative impact (e.g., performance drop or progress degradation), while Green shows positive effects (e.g., reduced redundancy). Metrics: SR (Success Rate), PR (Progress Rate), RS (Redundant Steps), PD (Progress Degradation), and Steps (Average Steps).

Int. Ext. | ALFWorld: SR PR RS PD Steps | BabyAI: SR PR RS PD Steps | ScienceWorld: SR PR RS PD Steps

Llama3.1-8B-Instruct
 -   -   | 23.1 45.2 13.8  -   33.4 | 41.1 54.6  8.2  -   27.1 |  8.9 37.3 16.5  -   38.5
 ✓   ✗   | 14.2 38.3  4.1 14.1 15.9 | 41.1 54.3  5.3 10.5 25.6 |  7.8 32.6 13.5 11.1 32.3
 ✗   ✓   | 20.9 38.3  2.5 16.3  9.6 | 16.1 25.4  1.6 30.5  6.6 |  7.8 29.5  4.6 13.9 10.5
 ✓   ✓   | 21.6 44.4  4.5  9.1 16.8 | 46.4 57.3  6.5  6.6 25.1 |  7.8 34.5 13.0  9.4 32.3

Llama3.1-70B-Instruct
 -   -   | 76.1 81.1  2.3  -   19.0 | 49.1 62.8  8.8  -   26.4 | 34.4 67.5 15.7  -   31.4
 ✓   ✗   | 61.2 67.4  1.4 17.4 13.8 | 36.6 53.1  7.4 18.0 18.2 | 18.9 59.8  8.3 12.3 21.2
 ✗   ✓   | 70.2 79.3  1.4  8.7 13.4 | 42.0 59.3  4.5 12.9 13.3 | 27.8 63.6  6.0  9.0 17.4
 ✓   ✓   | 80.6 84.0  1.5  5.8 17.0 | 40.2 57.8  6.8 13.1 19.9 | 27.8 64.6  8.7 10.6 22.8

Mistral-7B-Instruct
 -   -   | 20.9 40.4 13.9  -   34.8 | 17.0 21.7  5.6  -   34.0 |  2.2 16.6 15.3  -   39.2
 ✓   ✗   | 14.9 36.3  5.9 15.9 23.9 | 16.1 25.9  7.7  5.6 32.0 |  2.2 18.4 15.2  3.4 36.3
 ✗   ✓   | 11.2 32.5  8.9 16.7 24.9 | 10.7 18.2  5.0 12.0 27.9 |  1.1 15.4  7.8  3.4 17.2
 ✓   ✓   | 17.2 36.1 12.1 11.4 32.7 | 16.1 22.4  5.7  4.9 33.5 |  2.2 18.2 15.4  1.9 38.0

Mistral-24B-Instruct
 -   -   | 58.2 71.6  5.0  -   25.9 | 49.1 60.9  7.0  -   25.5 | 15.6 42.5 16.0  -   36.9
 ✓   ✗   | 31.3 51.7  4.0 26.0 17.4 | 40.2 51.5  8.2 19.4 27.0 | 11.1 40.7 11.3 18.9 31.7
 ✗   ✓   | 57.5 70.7  4.5 10.8 20.5 | 37.5 50.1  3.2 16.3 13.3 |  3.3 23.3  4.3 20.7  9.5
 ✓   ✓   | 57.5 74.3  5.7 10.5 25.7 | 35.7 53.9 10.5 19.6 28.4 | 12.2 39.5 12.0 18.0 35.2

4 Main Results

We experiment on 3 embodied environments and 2 gaming environments, and report results in Table 1 and Table 2, respectively. We can see that:

(i) Early-exit mechanisms significantly reduce redundant steps. Across all three embodied environments, baseline methods exhibit substantial redundancy ("RS") in their thought-action sequences. For example, Llama3.1-8B-Instruct averages 13.8 unnecessary steps out of 40 in ALFWorld. Almost all early-exit mechanisms reduce this redundancy, by approximately 50% to 70%, leading to a notable increase in overall efficiency. A similar trend is observed in the average steps ("Steps"), which decrease alongside the reduction in redundant steps, further highlighting the effectiveness of the early-exit mechanism in improving task efficiency.

(ii) Minor performance drop in success and progress rates. While early exit improves efficiency, it inevitably causes slight reductions in both success and progress rates. The observed progress degradation ("PD") further confirms this trade-off. However, for all four tested LLMs, certain early-exit strategies yield minimal performance loss. For
Almost all early exit mechanisms are able to reduce the re- dundancy, by approximately 50% to 70%, leading to a notable increase in overall efficiency. A similar trend is observed in the average steps (“Steps”), decreasing alongside the reduction in redundant steps, further highlighting the effectiveness of the early-exit mechanism in improving task efficiency. (ii) Minor performance drop in success and progress rates. While early exit improves effi- ciency, it inevitably causes slight reductions in both success and progress rates. The observed progress degradation (“PD”) further confirms this trade-off. However, for all four tested LLMs, certain early exit strategies yield minimal performance loss. Forexample, using the extrinsic ("Ext.") method on Llama3.1-70B-Instruct , the progress rate drops by under 2%, 3%, and 4% in ALFWorld, BabyAI, and ScienceWorld, respectively. This shows that ap- propriate early exits can greatly improve efficiency with negligible performance impact. (iii) LLMs show varying preferences for early exit strategies. LLMs respond differently to the same early exit approach. For example, the intrin- sic ("Int.") early exit performs better for Mistral- 7B-Instruct , | https://arxiv.org/abs/2505.17616v1 |
whereas it significantly degrades the performance of Mistral-24B-Instruct. Conversely, Mistral-24B-Instruct benefits more from the extrinsic method ("Ext."). This is possibly because the larger Mistral LLM is more sensitive to intrinsic cues, resulting in premature termination, whereas the extrinsic method provides more stable exit signals.

(iv) Combining intrinsic and extrinsic early exit maximizes performance retention. We explore a hybrid strategy that first applies extrinsic verification to detect a potential exit, then applies the intrinsic method to confirm termination. While this increases the number of steps and reduces efficiency, it achieves the best performance preservation ("Int. ✓ + Ext. ✓").

Table 2: Performance comparison of two early-exit settings across four LLMs in game environments. Red indicates negative impact, while Green shows positive effects (e.g., reduced redundancy). Metrics: SR (Success Rate), PR (Progress Rate), RS (Redundant Steps), PD (Progress Degradation), and Steps (Average Steps).

Int. Ext. | PDDL: SR PR RS PD Steps | Jericho: SR PR RS PD Steps

Llama3.1-8B-Instruct
 -   -   | 11.7 29.9 11.8  -   38.3 |  5   27.3 13.8  -   36.5
 ✓   ✗   |  6.7 30.5  5.5  6.1 31.4 |  5   26.8 10.4  9   37.7
 ✗   ✓   |  1.7  4.4  1.1 25.9  4   |  0    7.5  1.9 19.8  6.8
 ✓   ✓   |  8.3 31.4  6.7  6.1 32.3 | 10   31.8 13.1 10.5 33.3

Llama3.1-70B-Instruct
 -   -   | 45   62.2  6.5  -   31.1 | 35   55.9 11.9  -   32.3
 ✓   ✗   | 41.7 64.8  4.4  5.8 28.2 | 25   41.5  7.8 23.1 29.8
 ✗   ✓   | 43.3 63.5  2.5  4.9 23   | 20   38.5  7.5 19.8 21.7
 ✓   ✓   | 38.3 61.9  6.8  8.1 29.7 | 20   41.5 10.1 21.1 29.4

Mistral-7B-Instruct
 -   -   |  0    9.7 12.2  -   40   |  0   11.7 16.1  -   38.4
 ✓   ✗   |  1.7 12.1  8.9  5   30.3 |  0    6.9  9.9  4.8 30.2
 ✗   ✓   |  3.3 13.8  6.6  4.6 20.9 |  0    9    7.5  6   26.1
 ✓   ✓   |  3.3 12.9  9.6  4.2 35.6 |  0   12   12.2  3.5 36.6

Mistral-24B-Instruct
 -   -   | 13.3 27.4  7.5  -   37   | 15   43.8 19.2  -   37.3
 ✓   ✗   | 13.3 33.3  8.6  8.5 34.7 | 10   33.8 14.4 12.7 32.7
 ✗   ✓   | 10.0 24.3  3.8  7.9 16.2 |  5   29.5  9.3 18.2 20.9
 ✓   ✓   | 11.7 32.0  9.9  7.8 36.1 | 10   27.4 12.9 20.5 37.0
Notably, it even slightly improves performance on Llama3.1-70B-Instruct and Mistral-24B-Instruct, possibly due to more diverse behavior introduced by prompt modification. (v) Early-exit strategy generalizes to gaming environments. As shown in Table 2, applying early exit in gaming environments yields similar trends, but with smaller efficiency gains and minor performance changes compared to embodied tasks. Redundancy reduction is less pronounced (generally below 50%), and the drops in performance are marginal, except for Mistral-7B-Instruct, which occasionally shows improvement. This may be due to: 1) the longer trajectories in gaming environments, which lead to lower baseline success rates (e.g., below 20% for most LLMs except Llama3.1-70B-Instruct) and greater sensitivity to prompt variations; and 2) ambiguous subgoal definitions, which allow multiple valid strategies and reduce consistency in progress measurement.

5 Analysis

5.1 Interpretation of Efficiency Metrics

We illustrate how Redundant Steps (RS) and Progress Degradation (PD) complement each other in measuring early-exit behavior in Figure 4.
Figure 4: Redundant Steps and Progress Degradation measured in a failure case with 3 out of 4 subgoals completed. The metrics vary as the early-exit mechanism is triggered at different steps.

Perfect Early-Exit Scenario: The ideal early-exit scenario ("Perfect Early Exit") occurs when both RS and PD are zero, meaning no redundant steps and no progress loss. However, this ideal is rarely achievable across all environments in practice.

Too-Early Scenarios: If the early-exit mechanism triggers too early ("Too Early"), it may reduce redundant steps but significantly impair progress. This is evident in the result of the extrinsic early exit of Llama3.1-8B-Instruct on BabyAI, where early termination yields a low RS but a high PD of 30.5.

Figure 5: Comparison of the average token cost for one environment using different early-exit mechanisms.

Too-Late Scenarios: Conversely, if the early-exit mechanism triggers too late ("Too Late"), PD remains low but RS stays high. This is seen in Mistral-24B-Instruct, where using both intrinsic and extrinsic early-exit methods fails to reduce RS.

Takeaways: Neither too-early nor too-late exits are optimal in practice. Our results highlight the importance of selecting appropriate early-exit settings for each LLM to balance RS and PD effectively.

5.2 Inference Cost

To further validate the efficiency improvements achieved by the early-exit mechanism, in addition to reporting the average number of execution steps in the main results, we also examine the average token cost for each environment, which directly reflects computational resource usage. As shown in Figure 5, the early-exit approach consistently reduces the number of tokens compared to ReAct across all four tested LLMs.
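The trade-off illustrated in Figure 4 can be sketched numerically. The snippet below is an illustrative model (not the paper's exact formulas): it assumes RS counts steps taken after the last subgoal-completing step, and PD is the progress lost relative to the run's achievable progress.

```python
# Illustrative sketch of how Redundant Steps (RS) and Progress Degradation (PD)
# move as the exit step changes, under the assumptions stated above.
def rs_pd(exit_step, subgoal_steps, baseline_progress, n_subgoals):
    """subgoal_steps: the step at which each achievable subgoal is completed."""
    done = sum(1 for s in subgoal_steps if s <= exit_step)
    progress = done / n_subgoals
    pd = max(0.0, baseline_progress - progress)  # progress lost by exiting early
    last_useful = max((s for s in subgoal_steps if s <= exit_step), default=0)
    rs = exit_step - last_useful                 # steps spent after the last useful one
    return rs, pd

# Failure case from Figure 4: 3 of 4 subgoals reachable (progress caps at 0.75),
# completed at steps 3, 7, and 12.
subgoals = [3, 7, 12]
print(rs_pd(12, subgoals, 0.75, 4))  # "perfect" exit: no redundancy, no degradation
print(rs_pd(5, subgoals, 0.75, 4))   # too early: small RS, large PD
print(rs_pd(20, subgoals, 0.75, 4))  # too late: large RS, zero PD
```

Moving the exit step earlier shrinks RS but inflates PD, and vice versa, matching the "Too Early" and "Too Late" regimes in Figure 4.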
It is worth noting that, in our extrinsic early-exit approach, the verification module generates only a simple "YES" or "NO" response. As a result, it has a negligible impact on the overall token cost.

6 Practical Implications

6.1 Motivation

A key advantage of our proposed early-exit mechanism is that agents capable of recognizing failure can proactively terminate and seek assistance, leading to more efficient problem-solving. This aligns with realistic application scenarios, where humans may intervene directly or request help via a central server. In contrast, agents without early exit continue until the step limit, often wasting valuable interactions and failing to complete the task. To illustrate this, we simulate a practical scenario in embodied environments, where a weaker agent exits early from challenging environments and requests assistance from a stronger agent.

Figure 6: Performance comparison under different max step limits using strong-agent assistance with an early-exit weak agent, compared to baseline agents. Mistral-24B-Instruct ("Mistral-24B") is used as the weak agent, and Llama-3.1-70B-Instruct ("Llama-70B") as the strong agent.

6.2 Setting

Dataset: We use ALFWorld (Shridhar et al., 2021) as our test set, which is a typical embodied environment with 134 different tasks.

Models: We use Mistral-24B-Instruct
as the weak agent, which achieves a 58.2% success rate under a ReAct-style format and 57.5% when paired with an extrinsic early-exit mechanism (see Table 1), and Llama3.1-70B-Instruct as the strong agent.

Setup: In the baseline, the weaker agent executes up to 40 steps regardless of progress. With the extrinsic early-exit mechanism, it can terminate early and hand over control to a stronger agent, which replans and continues within the remaining steps.

6.3 Experiment Result

As shown in Figure 6, early exit followed by strong-agent assistance yields over a 10% improvement in success rate within the same 40-step budget, demonstrating the effectiveness of reallocating interaction steps to a more capable agent.

Case Study: Figure 7 visualizes the environments affected by early exit; environments solved by both the early-exit and baseline settings are omitted. Around 15 environments (e.g., #3, #12) were completed with strong-agent help. Of these, 7 environments (e.g., #12, #19) were not completed by the baseline within 40 steps but were solved with early exit and assistance. Only 2 cases (#29, #61) were prematurely exited but solved by the baseline.

Figure 7: Case study of failure environments under the early-exit approach in ALFWorld. Different colors indicate the contributions of various strategies. "x" marks the exit step for each environment, and ★ indicates completion by the stronger agent (Llama3.1-70B-Instruct).

Some tasks (e.g., #4, #36) remained unsolved by both agents but benefited from reduced wasted computation. These results clearly highlight the efficiency improvements brought by early exit, especially when supported by stronger agents.
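The handoff protocol in this experiment can be sketched as a single loop over a shared step budget. In the sketch below, `weak_step`, `strong_step`, `should_exit`, and the `env.history` attribute are hypothetical stand-ins for the two agents' action functions and the extrinsic YES/NO verifier; they are not the paper's actual interfaces.

```python
# Minimal sketch of the weak-to-strong handoff under a shared 40-step budget.
def run_with_handoff(env, weak_step, strong_step, should_exit, max_steps=40):
    steps_used = 0
    # Phase 1: the weak agent acts until success, an early-exit signal, or the limit.
    while steps_used < max_steps:
        _obs, done = weak_step(env)
        steps_used += 1
        if done:
            return True, steps_used
        if should_exit(env.history):  # extrinsic verifier inspects the trajectory
            break
    # Phase 2: the strong agent replans and spends whatever budget remains.
    while steps_used < max_steps:
        _obs, done = strong_step(env)
        steps_used += 1
        if done:
            return True, steps_used
    return False, steps_used
```

The baseline corresponds to a verifier that always answers NO, so the weak agent consumes the entire budget alone.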
7 Related Work

LLM-based Agents: LLM-based agents are central to many tasks and show strong practical potential. Some approaches, like ETO (Song et al., 2024) and AgentFLAN (Chen et al., 2024b), improve performance through expert-trajectory training, achieving better generalization. Others, such as ReAct (Yao et al., 2023), PreAct (Fu et al., 2025b), and StateFlow (Wu et al., 2024), focus on prompt design to enhance chain-of-thought (CoT, Wei et al., 2022) reasoning. While effective, these methods often neglect efficiency, especially in failure cases. Complementary post-hoc strategies—like self-reflection (Shinn et al., 2023), trajectory revision (Ouyang and Li, 2023), and experience extraction (Zhao et al., 2024)—help refine future behavior, but only after trials conclude. We propose early-exit approaches that improve efficiency and demonstrate the practical benefits of leveraging stronger agents and post-hoc strategies.

Dynamic Early Exit: Dynamic early exit is an adaptive inference strategy originally introduced in pre-trained language models to reduce computational cost and latency by skipping certain layers during inference (Zhou et al., 2020; Sun et al., 2022). Recent work extends this concept to LLMs to address the issue of excessively long and unpredictable generations. Yang et al. (2025) apply an early-exit mechanism to truncate outputs at appropriate reasoning steps, thereby mitigating the "overthinking" problem in LLMs (Chen et al., 2024a). Wang et al. (2025) eliminate redundant agents
for better token efficiency in agent-collaboration scenarios. In this work, we apply early exit to LLM-based agents in embodied environments, proposing an efficient and robust method adaptable to various agents, along with metrics to assess performance.

Agent Verification and Evaluation: Traditional benchmarks like AgentBench (Liu et al., 2024) assess overall agent performance using metrics such as reward or success rate. AgentBoard (Chang et al., 2024) improves transparency with human-annotated subgoals for process-level evaluation. A growing line of work explores using agents themselves as evaluators, extending ideas from text generation (Zheng et al., 2023; Lu et al., 2024) and code evaluation (Chen et al., 2024b). For example, Pan et al. (2024) explore using agents for self-evaluation and refinement. In this work, we leverage an agent verification module to verify the agent's process in the extrinsic early-exit approach, and introduce two efficiency metrics to complement existing agent evaluation strategies.

8 Conclusion

In this work, we propose a dynamic early-exit framework for LLM-based agents in complex embodied environments, incorporating intrinsic and extrinsic early-exit mechanisms. Both approaches improve efficiency in our experiments. To better evaluate the impact of early exits, we introduce two complementary metrics that capture both their positive and negative effects. Additionally, we design a practical experiment in which a stronger agent assists a weaker one in continuing task execution, leading to enhanced performance. We hope our approach serves as a first step toward improving the efficiency of LLM-based agents and that our proposed metrics can be readily adopted by future research for evaluating agent efficiency.

Limitations

The limitations of our work are as follows:

• Limited Datasets: We evaluate only five datasets from embodied and gaming environments.
Tasks like web navigation or app execution are excluded, as they often involve simpler, more direct goals, making early exit less impactful. We leave these for future work.

• No Training Integration: While our approaches and metrics are designed to be plug-and-play for all LLM-based agents, we restrict our experiments to models that were not trained with held-in data, due to uncertainties about the complexity of the datasets.

• LLM Scope: We test four open-source LLMs due to budget constraints and to avoid data contamination. Proprietary models like GPT (Achiam et al., 2023) are not included.

• Residual Redundancy: While our approach reduces redundant steps, it does not fully eliminate them, likely due to current LLMs' limited instruction-following ability. Further improvements are still necessary.

Ethics Statement

We take ethical considerations very seriously and strictly adhere to the Code of Ethics. All procedures performed in this study are in accordance with the ethical standards. This paper explores early-exit mechanisms for LLM-based agents in embodied environments. Our proposed approaches and metrics do not include statements that induce the model to generate harmful information. Additionally, the approach focuses solely on determining when to terminate agent execution, thereby reducing potential risks. Both the datasets and models used in this paper are publicly available and have been widely adopted by researchers. We ensure that the findings and conclusions of
this paper are reported accurately and objectively. No human participants were involved as evaluators or case studies in this work.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint.

Ma Chang, Junlei Zhang, Zhihao Zhu, Cheng Yang, Yujiu Yang, Yaohui Jin, Zhenzhong Lan, Lingpeng Kong, and Junxian He. 2024. AgentBoard: An analytical evaluation board of multi-turn LLM agents. NeurIPS.

Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, et al. 2024a. Do not think that much for 2+3=? On the overthinking of o1-like LLMs. arXiv preprint.

Zehui Chen, Kuikun Liu, Qiuchen Wang, Wenwei Zhang, Jiangning Liu, Dahua Lin, Kai Chen, and Feng Zhao. 2024b. Agent-FLAN: Designing data and methods of effective agent tuning for large language models. In ACL.

Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. 2019. BabyAI: A platform to study the sample efficiency of grounded language learning. In ICLR.

Dayuan Fu, Keqing He, Yejie Wang, Wentao Hong, Zhuoma GongQue, Weihao Zeng, Wei Wang, Jingang Wang, Xunliang Cai, and Weiran Xu. 2025a. AgentRefine: Enhancing agent generalization through refinement tuning. In ICLR.

Dayuan Fu, Jianzhao Huang, Siyuan Lu, Guanting Dong, Yejie Wang, Keqing He, and Weiran Xu. 2025b. PreAct: Prediction enhances agent's planning ability. In COLING.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. 2024. The Llama 3 herd of models. arXiv preprint.

Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, and Xingdi Yuan. 2020. Interactive fiction games: A colossal adventure. In AAAI.
Sihao Hu, Tiansheng Huang, Gaowen Liu, Ramana Rao Kompella, Fatih Ilhan, Selim Furkan Tekin, Yichang Xu, Zachary Yahn, and Ling Liu. 2024. A survey on large language model-based game agents. arXiv preprint.

Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. In SOSP.

Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Wei-Ming Chen, Wei-Chen Wang, Guangxuan Xiao, Xingyu Dang, Chuang Gan, and Song Han. 2024. AWQ: Activation-aware weight quantization for on-device LLM compression and acceleration. MLSys.

Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. 2024. AgentBench: Evaluating LLMs as agents. In ICLR.

Qingyu Lu, Baopu Qiu, Liang Ding, Kanjian Zhang, Tom Kocmi, and Dacheng Tao. 2024. Error analysis prompting enables human-like translation evaluation in large language models. In ACL.

Junyu Luo, Weizhi Zhang, Ye
Yuan, Yusheng Zhao, Junwei Yang, Yiyang Gu, Bohan Wu, Binqi Chen, Ziyue Qiao, Qingqing Long, et al. 2025. Large language model agent: A survey on methodology, applications and challenges. arXiv preprint.

Siqi Ouyang and Lei Li. 2023. AutoPlan: Automatic planning of interactive decision-making tasks with large language models. In EMNLP.

Jiayi Pan, Yichi Zhang, Nicholas Tomlin, Yifei Zhou, Sergey Levine, and Alane Suhr. 2024. Autonomous evaluation and refinement of digital agents. In COLM.

Joon Sung Park, Joseph O'Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In UIST.

Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. NeurIPS.

Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2021. ALFWorld: Aligning text and embodied environments for interactive learning. In ICLR.

Yifan Song, Da Yin, Xiang Yue, Jie Huang, Sujian Li, and Bill Yuchen Lin. 2024. Trial and error: Exploration-based trajectory optimization of LLM agents. In ACL.

Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, and Xipeng Qiu. 2022. A simple hash-based early exiting approach for language understanding and generation. In ACL.

Mauro Vallati, Lukas Chrpa, Marek Grześ, Thomas Leo McCluskey, Mark Roberts, Scott Sanner, et al. 2015. The 2014 International Planning Competition: Progress and trends. AI Magazine.

Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based autonomous agents. Frontiers of Computer Science.

Ruoyao Wang, Peter Jansen, Marc-Alexandre Côté, and Prithviraj Ammanabrolu. 2022. ScienceWorld: Is your agent smarter than a 5th grader? In EMNLP.
Zhexuan Wang, Yutong Wang, Xuebo Liu, Liang Ding, Miao Zhang, Jie Liu, and Min Zhang. 2025. AgentDropout: Dynamic agent elimination for token-efficient and high-performance LLM-based multi-agent collaboration. arXiv preprint.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS.

Yiran Wu, Tianwei Yue, Shaokun Zhang, Chi Wang, and Qingyun Wu. 2024. StateFlow: Enhancing LLM task-solving through state-driven workflows. In COLM.

Chenxu Yang, Qingyi Si, Yongjie Duan, Zheliang Zhu, Chenyu Zhu, Zheng Lin, Li Cao, and Weiping Wang. 2025. Dynamic early exit in reasoning models. arXiv preprint.

Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. In ICLR.

Andrew Zhao, Daniel Huang, Quentin Xu, Matthieu Lin, Yong-Jin Liu, and Gao Huang. 2024. ExpeL: LLM agents are experiential learners. In AAAI.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. NeurIPS.

Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020.
BERT loses patience: Fast and robust inference with early exit. NeurIPS.

A Recommended Early-Exit Approaches

Based on our experimental results and analysis, we provide a set of recommendations for selecting suitable early-exit approaches for specific LLMs. These guidelines are summarized in Table 3 and can serve as a reference for future research.

| LLM | Intrinsic | Extrinsic |
|---|---|---|
| Llama3.1-8B-Instruct | ✓ | ✓ |
| Llama3.1-70B-Instruct | ✓ | ✓ |
| Mistral-7B-Instruct | ✓ | ✗ |
| Mistral-24B-Instruct | ✗ | ✓ |

Table 3: Recommendations for selecting early-exit approaches for different LLMs.

B Prompt Variants

In our initial experiments, we observed that prompts behave differently across various LLMs. For instance, in the case of extrinsic early exit, Llama3.1-70B-Instruct is particularly sensitive—strict prompts can easily trigger an early exit. To address this, we designed two prompt variants for each experimental setting: "Modest Condition" and "Strict Condition." The Strict Condition uses a firmer tone and outlines more detailed exit criteria, while the Modest Condition is more lenient. We provide the full prompt contexts in Table 4, along with their corresponding compatible LLMs.

C Prompt Context

We follow Chang et al. (2024) in using the provided task instruction, task goal, and example for each dataset. Since Chang et al. (2024) adopt an act-only prompting style rather than a ReAct style, we follow Song et al. (2024) to design a ReAct-style prompt format. The original examples are extended from Act-Only to ReAct-style using gpt-4o-2024-08-06. Initial observations and interactions are provided by the environment, and the intrinsic and extrinsic early-exit instructions are shown in Table 4. For ALFWorld and ScienceWorld tasks, we observe that providing valid actions leads to a significant performance difference (approximately 10%–20% in success rate).
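The per-LLM recommendations in Table 3 amount to a small configuration lookup. A minimal sketch (the table's content expressed as data; helper name is ours):

```python
# Table 3 expressed as a configuration lookup: which early-exit mechanisms
# the paper recommends enabling for each tested LLM.
RECOMMENDED_EARLY_EXIT = {
    "Llama3.1-8B-Instruct":  {"intrinsic": True,  "extrinsic": True},
    "Llama3.1-70B-Instruct": {"intrinsic": True,  "extrinsic": True},
    "Mistral-7B-Instruct":   {"intrinsic": True,  "extrinsic": False},
    "Mistral-24B-Instruct":  {"intrinsic": False, "extrinsic": True},
}

def enabled_mechanisms(llm):
    """Return the recommended early-exit mechanisms for a given LLM."""
    return [name for name, on in RECOMMENDED_EARLY_EXIT[llm].items() if on]
```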
Therefore, we include valid actions in these two datasets to ensure fair comparison with prior work (Song et al., 2024; Fu et al., 2025a).

ReAct-Style Prompt for ALFWorld

SYSTEM: You are a helpful assistant.
USER: Your task is to interact with a virtual household simulator to accomplish a specific task. With each interaction, you will receive an observation. Your role is to ... {task_instruction}
ASSISTANT: OK.
USER: Here is the example: {example}
Now, it's your turn. You should perform thoughts and actions to accomplish the goal. Your response should use the following format:
Thought: <your thoughts>
Action: <your next action>
Your task is: {task_goal}
You are in the middle of a room. Looking quickly around you, ... {init_observation}
{interaction_history}
## Important ##: Your thought should be short, clear and concise. {intrinsic_early_exit_instruction}
The next action could be chosen from these valid actions: {valid_actions}

Extrinsic Early-Exit Verification

SYSTEM: You are a helpful assistant.
USER: You will be given a historical scenario in which you are placed in a specific environment with a designated objective to accomplish.
### Task Description: Your task is to interact with a virtual household simulator to accomplish a specific task. With each interaction, you will receive an observation. Your role is to ... {task_instruction}
### Your Objective: {task_goal}
Your Current History: {interaction_history}
Instructions: {extrinsic_early_exit_instruction}
Do not include any additional text or explanations in your response.

Intrinsic Early-Exit, Strict Condition (used with Llama3.1-70B-Instruct, Mistral-24B-Instruct):
Once the environment appears complete or no further progress is likely, include 'EXIT' in your action to end the task without delay.

Intrinsic Early-Exit, Modest Condition (used with Llama3.1-8B-Instruct, Mistral-7B-Instruct):
If you believe the environment is complete, your task is finished, and no further attempts are needed, please include 'EXIT' in your action.

Extrinsic Early-Exit, Strict Condition (used with Llama3.1-8B-Instruct, Mistral-7B-Instruct, Mistral-24B-Instruct):
Evaluate the current history of the agent and determine if it meets any of the following conditions:
1. The recent steps show repetitive actions or the agent appears to be stuck in a loop.
2. The agent repeatedly checks for valid actions but fails to make meaningful progress toward the objective.
3. The agent's recent thoughts suggest the task is complete and no further steps are necessary.
4. The task is no longer achievable due to high difficulty or significant deviation from the expected course.
If any of the above conditions are met, output "YES". Otherwise, output "NO" to indicate the agent should continue exploring.

Extrinsic Early-Exit, Modest Condition (used with Llama3.1-70B-Instruct):
Evaluate the agent's recent history and consider:
1. Whether the agent appears stuck or making little meaningful progress despite repeated attempts.
2. Whether the task seems complete or no longer feasible to pursue.
If you have good reason to believe further steps are unlikely to help, you may output "YES" to suggest stopping. Otherwise, output "NO" and continue exploring.

Table 4: Early-exit prompt contexts under different conditions, along with the corresponding LLMs used with each variant in our approach.
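The extrinsic verification call can be sketched as filling the template and mapping the verifier's bare "YES"/"NO" reply to an exit decision. The placeholder names below paraphrase the paper's template fields, and `call_llm` is a hypothetical stand-in for the verifier model's API.

```python
# Sketch of one extrinsic verification step: build the prompt, ask the
# verifier, and treat anything other than a bare YES as "keep going".
EXTRINSIC_TEMPLATE = (
    "### Task Description: {task_instruction}\n"
    "### Your Objective: {task_goal}\n"
    "Your Current History: {interaction_history}\n"
    "Instructions: {extrinsic_early_exit_instruction}\n"
    "Do not include any additional text or explanations in your response."
)

def should_exit(call_llm, task_instruction, task_goal, history, exit_instruction):
    prompt = EXTRINSIC_TEMPLATE.format(
        task_instruction=task_instruction,
        task_goal=task_goal,
        interaction_history=history,
        extrinsic_early_exit_instruction=exit_instruction,
    )
    reply = call_llm(prompt).strip().upper()
    return reply == "YES"  # only a bare YES triggers the early exit
```

Normalizing the reply with `strip().upper()` keeps the decision robust to stray whitespace or casing in the verifier's output.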
arXiv:2505.17625v1 [cs.CL] 23 May 2025

Enhancing Large Vision-Language Models with Layout Modality for Table Question Answering on Japanese Annual Securities Reports

Hayato Aida (Stockmark, Tokyo, Japan; hayato.aida@stockmark.co.jp), Kosuke Takahashi (Stockmark, Tokyo, Japan; kosuke.takahashi@stockmark.co.jp), Takahiro Omi (Stockmark, Tokyo, Japan; takahiro.omi@stockmark.co.jp)

Abstract—With recent advancements in Large Language Models (LLMs) and growing interest in retrieval-augmented generation (RAG), the ability to understand table structures has become increasingly important. This is especially critical in financial domains such as securities reports, where highly accurate question answering (QA) over tables is required. However, tables exist in various formats—including HTML, images, and plain text—making it difficult to preserve and extract structural information. Therefore, multimodal LLMs are essential for robust and general-purpose table understanding. Despite their promise, current Large Vision-Language Models (LVLMs), which are major representatives of multimodal LLMs, still face challenges in accurately understanding characters and their spatial relationships within documents. In this study, we propose a method to enhance LVLM-based table understanding by incorporating in-table textual content and layout features. Experimental results demonstrate that these auxiliary modalities significantly improve performance, enabling robust interpretation of complex document layouts without relying on explicitly structured input formats. The TableCellQA dataset—including rendered images, layout (bounding-box) annotations, and QA data—will be publicly released upon publication (license confirmation in progress).

Index Terms—Table QA, VQA, LVLM, Multimodal, Layout

I. INTRODUCTION

Tables are structured representations of data and are widely used in business documents such as reports, spreadsheets, and financial statements.
In particular, financial documents like annual securities reports often contain complex tables that are essential for quantitative decision-making. The NTCIR-18 U4 task [4] provides a valuable benchmark, offering tables extracted from Japanese securities reports in structured HTML format. However, real-world tables appear in a variety of formats—including HTML, Markdown, CSV, and scanned PDFs—many of which lack explicit structural annotations. This diversity poses significant challenges for automated table understanding, especially when accurate value extraction is required for downstream tasks such as financial question answering (QA). Recent advances in Large Vision-Language Models (LVLMs) have enabled joint processing of image and text inputs, offering a promising path toward format-agnostic table interpretation.

To better evaluate table understanding in such multimodal settings, we reformulate the original NTCIR-18 U4 Table QA dataset into a simplified version that focuses purely on direct value extraction from table cells. We refer to this task as TableCellQA. Unlike the original task—which includes deriving answers through arithmetic operations or transformations—TableCellQA requires models to identify and extract exact cell values based solely on structural and semantic alignment with the question. This setup allows us to isolate and assess the model's ability to comprehend table structure without external reasoning.

In this study, we propose a multimodal framework that enhances LVLMs by incorporating not only the image modality, but also text and layout information. We decompose each HTML table into three distinct modalities—image, text, and layout—and feed them into a modified LVLM to improve its structural understanding capabilities. Experimental results demonstrate that incorporating layout and text modalities leads to more than a 7% improvement in accuracy over the image-only baseline.
Ablation studies further highlight the critical role of these auxiliary modalities in correctly identifying relevant cells, providing new insights into effective multimodal strategies for table understanding in real-world business documents.

II. RELATED WORK

Visual document understanding encompasses the extraction and interpretation of information from document images to answer relevant queries. Within this area, Table QA specifically focuses on comprehending tabular information contained within documents. Various benchmarks have been established for assessing performance on downstream tasks in this field. For instance, DocVQA [9] involves extracting and understanding textual and visual content from diverse document images to respond accurately to posed questions. Similarly, datasets like CORD [11] and FUNSD [3] focus on specialized tasks such as receipt understanding and form information extraction from scanned documents, respectively. These datasets leverage multimodal information, including visual features, textual content, and spatial layouts, which are crucial elements for accurate table comprehension. Recent advancements in transformer-based architectures have further improved multimodal document understanding. LayoutLMv3 [2], for example, incorporates visual, textual, and spatial modalities to achieve state-of-the-art results across various document understanding tasks.

More recently, LVLMs have demonstrated their potential for document understanding tasks by leveraging the strong text-processing capabilities of high-performing LLMs. Models such as LLaVA [6] and LLaVA-OneVision [5] have shown promising capabilities in general multimodal tasks. Specifically, Qwen2-VL [13] has achieved state-of-the-art performance on the DocVQA benchmark. Meanwhile, Zheng et al. [16] proposed Table-LLaVA, a multimodal vision-language model specifically optimized for table understanding from images.
They reformulated table understanding as an instruction-following problem and constructed the MMTab dataset to support this paradigm. While Table-LLaVA primarily operates on table images without direct access to underlying structured representations like HTML or CSV, it demonstrates that combining visual information with task-specific instructions can substantially enhance table reasoning capabilities. While models like Table-LLaVA focus solely on visual inputs, some prior approaches have explored incorporating additional modalities such as text and layout information.

LayoutLM [15], on the other hand, is specifically tailored for document image understanding. It extends traditional encoder-based language models by incorporating not only textual content but also the spatial layout of documents. Based on the Transformer architecture, LayoutLM integrates token embeddings with 2D positional embeddings that represent the coordinates of text within a document. This approach allows the model to capture the structural information essential for understanding complex documents, such as forms or receipts. Subsequent versions, like LayoutLMv2 [14] and LayoutLMv3 [2], further enhance this capability by incorporating actual image embeddings alongside text and layout information, enabling a more comprehensive understanding of documents that include both textual and visual elements. Additionally, models that leverage only textual and layout information have also been explored. LayTextLLM [7] focuses on integrating textual content with spatial layout information. It achieves this integration by mapping each bounding box to a single embedding and interleaving it with the corresponding text. This approach efficiently addresses sequence-length issues and leverages the autoregressive traits of LLMs, enabling effective document understanding without relying on explicit visual inputs. With the advancement
of LLMs utilizing Transformer decoders with layout awareness, QA datasets incorporating layout information have also been proposed. LayoutLLM [8] introduces a QA dataset that leverages document images along with the text and layout information within them, while also presenting a baseline architecture. The construction of this dataset has the potential to enhance LLMs' ability to comprehensively understand text, layout, and images.

Motivated by these developments, we build upon existing LVLM frameworks by: (1) adopting an LVLM capable of efficiently processing high-resolution images, and (2) extending its architecture to incorporate comprehensive multimodal inputs—including image, text, and layout information—specifically optimized for the task of table understanding.

III. METHODS

A. Task Definition: TableCellQA

The goal of TableCellQA is to evaluate a model's ability to extract precise cell values from tables in response to natural-language questions. This task is derived from the Table QA subtask in NTCIR-18 U4, which originally required arithmetic reasoning and unit interpretation (e.g., converting from thousands to millions of yen). While valuable, such requirements introduce additional complexity that can obscure the model's true table understanding capabilities. To isolate and measure this core skill, we reformulate the original task into TableCellQA. In TableCellQA, the answer to each question is defined as the raw value of a single cell that aligns with the question's intent. We utilize the cell IDs provided in the original dataset, which specify the location of the answer cell, to replace answers that originally involved computation with the corresponding raw cell values. This setting allows us to focus purely on structural and semantic comprehension without external reasoning steps.
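The conversion from the original Table QA answers to TableCellQA answers can be sketched as a lookup through a cell-ID-to-value mapping. This is an illustrative sketch only; the field names and cell-ID format below are hypothetical, not the dataset's actual schema.

```python
# Illustrative TableCellQA conversion: replace a derived answer with the raw
# value of the answer cell, looked up via a cell-ID -> value dictionary
# extracted from the HTML table.
def to_table_cell_qa(qa_pair, cell_values):
    """qa_pair: {'question': str, 'answer_cell_id': str};
    cell_values: dict mapping cell IDs to raw cell strings."""
    return {
        "question": qa_pair["question"],
        "answer": cell_values[qa_pair["answer_cell_id"]],  # verbatim cell, no arithmetic
    }

cells = {"c-12": "1,234,567"}  # e.g. a value in thousands of yen, as printed
qa = {"question": "What was net sales in FY2023?", "answer_cell_id": "c-12"}
print(to_table_cell_qa(qa, cells))  # answer is the raw cell string
```

Because the answer is the printed cell string, evaluation reduces to exact-match extraction rather than numeric reasoning.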
Figure 1 shows an example from TableCellQA, where the model must identify the correct cell in the table and extract its value without performing any additional reasoning.

B. Data Preparation

Figure 3 illustrates the processing pipeline used to obtain the layout (L), text (T), and image (I) modalities from HTML tables. To generate the image and layout modalities, we first rendered the HTML tables as PDFs. From these PDFs, we extracted the layout modality as bounding-box coordinates representing the position of each text span, along with the corresponding OCR-extracted text and table images. A concrete example of the layout extraction is shown in Figure 2. To construct the QA data for TableCellQA, we mapped each question in the NTCIR-18 U4 Table QA dataset to its corresponding answer cell using the provided cell IDs and retrieved the raw cell value as the new answer. Specifically, we parsed the HTML tables to extract a dictionary mapping cell IDs to their corresponding values from the HTML attributes, and used this dictionary to convert the original QA pairs into the TableCellQA format. This conversion was applied consistently to both the training and test splits. By replacing computation-involved answers with direct cell values, we created instruction-style QA pairs that emphasize table structure comprehension. In addition, we prepared structured table text data by extracting clean HTML representations (removing non-structural tags and attributes) and converting them into Markdown and JSON formats to explore the effect of different text styles. After filtering
out cases where table rendering failed, we obtained a final dataset of 10,278 training examples and 1,303 test examples. This dataset serves as the foundation for evaluating our proposed TableCellQA task under various modality configurations.

C. Model Construction

LVLMs typically accept only images as input. For this task, however, we modify the architecture to incorporate the text and layout modalities in addition to images. Following previous studies such as LayTextLLM [7], layout embedding is achieved by converting bounding-box coordinates into the hidden dimension of the LLM using a two-layer MLP. Each bounding box is represented as a 4-dimensional vector $\mathbf{b} = (x_{\min}, y_{\min}, x_{\max}, y_{\max})$, where $(x_{\min}, y_{\min})$ and $(x_{\max}, y_{\max})$ denote the top-left and bottom-right coordinates of the bounding box, respectively. The layout embedding $\mathbf{e}_L \in \mathbb{R}^d$ is then obtained via a two-layer multilayer perceptron (MLP):

$\mathbf{e}_L = \mathrm{MLP}(\mathbf{b})$

where MLP denotes a feed-forward neural network with nonlinear activations, and $d$ is the embedding dimension. Each layout embedding is treated as a single token input to the LLM and paired with the corresponding text, as shown in Figure 4. This approach enables the model to process text while maintaining its spatial correspondence within the table. Each text span within the table is associated with a layout embedding $\mathbf{e}_{L,i}$ and a text embedding $\mathbf{e}_{T,i}$. These embeddings are concatenated to form a combined representation for each span:

$\mathbf{h}_i = [\mathbf{e}_{L,i}; \mathbf{e}_{T,i}]$

The combined sequence of all spans is denoted as:

$H_{L+T} = (\mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_N)$

where $N$ is the number of text spans in the table. Finally, the model input $X$ is constructed by concatenating the image embedding, the question embedding, and the sequence $H_{L+T}$:

$X = [H_{L+T}; \mathbf{e}_{\mathrm{image}}; \mathbf{e}_{\mathrm{question}}]$

This combined input $X$ is then fed into the LLM for processing. In the experiments, different combinations of image, layout, and text inputs are tested to analyze the contributions of each modality.
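As a concrete illustration of this construction, the following is a minimal pure-Python sketch of the layout-embedding MLP and the pairing of layout and text tokens. The hidden size, random weights, ReLU activation, and toy text embeddings are all illustrative assumptions, not the paper's actual configuration:

```python
import random

random.seed(0)
D = 8  # toy LLM hidden size (assumed)

def linear(x, w, b):
    """y = Wx + b for a list-of-lists weight matrix."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]

def rand_mat(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

# Two-layer MLP mapping a 4-d bounding box b to a d-dimensional layout token e_L.
W1, b1 = rand_mat(16, 4), [0.0] * 16
W2, b2 = rand_mat(D, 16), [0.0] * D

def layout_embed(bbox):
    h = [max(0.0, v) for v in linear(bbox, W1, b1)]  # ReLU is an assumption
    return linear(h, W2, b2)

# Pair one layout token with each span's text embedding, as in Figure 4.
spans = [((0.10, 0.05, 0.30, 0.10), [0.2] * D),   # (bbox, toy text embedding)
         ((0.35, 0.05, 0.55, 0.10), [0.4] * D)]
H_LT = []
for bbox, e_text in spans:
    H_LT.append(layout_embed(bbox))  # e_{L,i}
    H_LT.append(e_text)              # e_{T,i}
# X = [H_{L+T}; e_image; e_question] would then be fed to the LLM.
print(len(H_LT), len(H_LT[0]))  # 4 tokens, each of dimension D
```

In a real model the text embeddings come from the LLM's token embedding table and the MLP weights are learned jointly during fine-tuning.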
To ensure consistency in analysis, all evaluations are conducted under the same conditions, allowing for a direct comparison of the impact of each input type.

D. Evaluation Metrics

We evaluated model performance using two complementary metrics: Accuracy and Average Normalized Levenshtein Similarity (ANLS) [1]. Accuracy measures the proportion of predictions that exactly match the ground-truth cell values, providing a strict evaluation of model precision. In addition, we report ANLS, which quantifies the character-level similarity between predicted and reference values. This allows for a more tolerant evaluation that accounts for minor recognition errors, such as small typos or OCR noise, which are common when relying solely on the image modality in conventional vision-language models (VLMs).

Fig. 1. Example of TableCellQA.
Fig. 2. Layout feature example. Each text region is represented by a bounding box $(x_1, y_1)$–$(x_2, y_2)$ indicating its spatial position in the table image.

IV. EXPERIMENTS

In this section, we present the experimental conditions and results using the data and models described in the previous section.

A. Training Conditions

Table I summarizes the input modalities evaluated. L+T+I denotes models using layout, text, and image jointly; L+T uses layout and text only; T+I uses text and image; L+I uses layout and image; and I uses image only. In addition, models were trained and evaluated using structured table representations in HTML, Markdown, and JSON formats for comparison with traditional text-based table understanding approaches.

Fig. 3. Overall system architecture.
Fig. 4. The architecture of our LVLM. Each text token is paired with its corresponding layout feature (e.g., Layout 1 and Text 1), where the layout is represented by bounding box coordinates. Layout features are encoded via an MLP and combined with text and image features as input to the LLM.

TABLE I: DESCRIPTION OF EACH INPUT MODALITY
  Modality or Format   Description
  L+T+I                Layout, Text, and Image
  L+T                  Layout and Text
  T+I                  Text and Image
  L+I                  Layout and Image
  I                    Image only
  HTML                 Cleaned HTML text extracted from the document
  Markdown             Rendered Markdown converted from HTML
  JSON                 JSON representation generated from HTML

All fine-tuning experiments were conducted based on the llava-onevision-qwen2-7b-ov (LLaVA-ov-7b) model [5]. Training for TableCellQA was performed for 2 epochs with a batch size of 8, a learning rate of 1e-5, and a warmup ratio of 0.03.

B. Results

Table II presents the evaluation results across different input modality combinations. We also include comparisons with the zero-shot performance of strong pretrained multimodal models. The "FT" column indicates whether the model was fine-tuned on the TableCellQA dataset. To compare the zero-shot performance of high-capacity multimodal models, we additionally conducted inference using Qwen2.5-VL-72B [12], [13] and GPT-4o (gpt-4o-2024-08-06) [10].

TABLE II: COMPARISON OF MODALITIES (TABLECELLQA-TEST)
  Model           FT   Modality   Acc.     ANLS
  LLaVA-ov-7b     Yes  L+T+I      0.9478   0.9666
  LLaVA-ov-7b     Yes  L+T        0.9509   0.9645
  LLaVA-ov-7b     Yes  T+I        0.9394   0.9571
  LLaVA-ov-7b     Yes  L+I        0.8733   0.9129
  LLaVA-ov-7b     Yes  I          0.8764   0.9139
  Qwen2.5-VL-72B  No   I          0.6631   0.7610
  GPT-4o          No   I          0.5748   0.6708

TABLE III: COMPARISON OF STRUCTURED TABLE TEXT (TABLECELLQA-TEST)
  Model        FT   Format    Acc.     ANLS
  LLaVA-ov-7b  Yes  Markdown  0.9540   0.9657
  LLaVA-ov-7b  Yes  JSON      0.9563   0.9693
  LLaVA-ov-7b  Yes  HTML      0.9655   0.9762

1) Comparison of Text, Layout and Image Modalities: To investigate the effectiveness of incorporating layout and in-image text information in LVLMs, we compared various combinations of the I (image), T (text), and L (layout) modalities. The combination of text (T) and layout (L) achieved the highest Accuracy, while the combination of all three modalities (L+T+I) yielded the highest ANLS score. Performance dropped in all cases where either text or layout information was removed. In particular, the L+I setting, which excludes text, showed the most significant performance degradation. The T+I setting, which lacks layout information, also underperformed in both Accuracy and ANLS compared to the L+T setting, where only the image modality was excluded. These results indicate the critical importance of both textual content and its spatial layout for multimodal table understanding. Among the modalities, text contributes the most to performance, followed by layout information. Interestingly, while the combination of layout and text (L+T) achieved the highest Accuracy, the addition of the image modality (L+T+I) led to a higher ANLS score. This trend suggests that incorporating image information, despite introducing minor OCR errors, provides complementary evidence that improves character-level similarity metrics. Conversely, the highest Accuracy under L+T highlights
the advantage of relying solely on clean text embeddings aligned with layout information, without introducing redundant visual representations when the textual content is already explicitly provided. However, it should be noted that these differences are within a margin of less than 0.5% and may fall within the natural variance of the dataset.

To clarify the effect of layout information, we compared the inference results of the L+T+I and T+I settings and analyzed cases where the model failed when layout information was excluded. This phenomenon is illustrated in Figure 5, where the model incorrectly selects neighboring cells due to the absence of spatial layout information. Since models using only text input are not provided with positional relationships beyond sequence order, such errors are likely due to a lack of spatial context. These findings suggest that layout information plays a crucial role in recognizing the row and column structures of tables.

Performance deteriorated significantly under the image-only condition, particularly for large tables with smaller and denser textual elements. This degradation stems from the limitations of current LVLM OCR capabilities, which struggle to accurately recognize text in small-scale or visually crowded contexts. For instance, in large tables with dense text, we observed basic OCR errors, such as predicting "3,468" instead of the correct value "3,466," illustrating the susceptibility of image-only models to minor recognition mistakes (see Figure 6).

Fig. 5. An example of a prediction error caused by the absence of the layout modality (T+I).
Fig. 6. An example of a large table where OCR-related errors occur. While the L+T+I setting correctly predicts the answer "3,466", the image-only (I) setting produces an incorrect answer "3,468", highlighting the limitations of visual-only approaches.
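The ANLS metric from Section III-D can be illustrated directly on this OCR slip: exact-match Accuracy scores the prediction "3,468" as simply wrong, while the normalized Levenshtein similarity still grants partial credit. A minimal sketch (the 0.5 threshold follows the ANLS definition of Biten et al. [1]):

```python
def levenshtein(a, b):
    """Edit distance via the standard dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def anls(pred, gold, tau=0.5):
    """1 - normalized edit distance, zeroed when the distance exceeds tau."""
    if not pred and not gold:
        return 1.0
    nl = levenshtein(pred, gold) / max(len(pred), len(gold))
    return 1.0 - nl if nl < tau else 0.0

pred, gold = "3,468", "3,466"   # the OCR error from Figure 6
print(pred == gold)             # exact-match accuracy: False
print(anls(pred, gold))         # 0.8 -- ANLS still grants partial credit
```

One character wrong out of five gives a normalized distance of 0.2, hence an ANLS of 0.8, which explains why image-only models lose less under ANLS than under Accuracy.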
These findings highlight the need for not only enhanced OCR capabilities but also the incorporation of additional modalities, such as layout and text features, to reliably parse complex visual data. Overall, the results underscore that textual content remains the primary source of information for accurate table understanding, with layout information providing crucial spatial context. While image information currently plays a complementary role and may introduce minor noise, it also offers valuable support, particularly for tables with rich visual elements, suggesting promising potential for future extensions in multimodal table understanding.

2) Comparison of Structured Table Text: Table III presents the evaluation results for different formats of structured table text. Structured table text represented in HTML, JSON, and Markdown formats significantly outperformed all settings that utilized image or layout modalities, highlighting the effectiveness of clean and explicitly structured data for table understanding tasks. This suggests that properly structured tables allow LLMs to perform at their best. Among the three formats, HTML achieved the highest performance. Markdown tables do not support cell merging, and merged cells are instead duplicated to create a pseudo-structure during conversion. This leads to a loss of structural information compared to the original HTML, which likely contributed to the lowest performance among the three formats. JSON-formatted tables, on the other hand, can preserve merged-cell information and are structurally equivalent to HTML in terms of information content. However, representing tables
in JSON is less common in real-world applications, and the underlying models are likely pre-trained on a larger volume of HTML tables than JSON ones. These factors likely account for the performance differences observed among the structured table text formats. Although structured HTML tables demonstrated the highest performance in our experiments, such structured table text is typically not available in real-world scenarios, where business documents are commonly provided in PDF format. In practice, an additional module would be required to infer or reconstruct structured table text from unstructured sources, and the final performance would depend heavily on the accuracy of that module. Moreover, extending this approach to documents containing figures or photographs remains a challenge. In such cases, our proposed multimodal approach incorporating layout, text, and image modalities remains applicable. Thus, while structured table text is highly effective when available, multimodal approaches remain essential for handling unstructured, visually complex documents encountered in practical applications. In our dataset, the HTML tables used as the rendering source retain complete structural information and serve as an upper bound for table understanding performance. Notably, the L+T+I setting, which combines layout, text, and image, achieves results remarkably close to those of the structured formats. This indicates that the multimodal representation effectively captures the underlying table structure and can serve as a practical substitute when explicit structural annotations are unavailable.

3) Performance Comparison with State-of-the-Art Models: We compared performance on a visual question answering task using only table images, employing Qwen2.5-VL-72B and GPT-4o. The zero-shot performance of these state-of-the-art models was lower than that of all the methods proposed in this study.
In particular, there was a notable performance gap compared to models fine-tuned even with image-only inputs, indicating that task-specific adaptation and familiarity with dataset characteristics play a critical role in achieving high accuracy. Interestingly, the zero-shot performance of GPT-4o was lower than that of Qwen2.5-VL. Qwen2.5-VL is reported to have been trained on a large number of synthetic document images generated from HTML sources, which may contribute to its strong performance on table-like visual inputs. In contrast, GPT-4o is likely trained with a greater emphasis on general-purpose capabilities. These results highlight the importance of pre-training data distribution in tasks that require domain-specific visual understanding, such as table-based reasoning.

4) Comparison of Training Data: Since the proposed layout modality is not present in existing LVLMs, pre-training is conducted to help the model adapt to this format. The LayoutLLM-SFT dataset [8] is designed for document-based QA tasks and includes OCR text and bounding-box coordinates alongside images and QA data. To explore the impact of data augmentation for learning layout features, we conducted an additional experiment in which 50% of the LayoutLLM-SFT dataset was used for pre-training. This experiment was conducted only on models that utilize the L+T+I modalities.

TABLE IV: COMPARISON OF TRAINING DATA
  Model                    Acc.     ANLS
  L+T+I                    0.9478   0.9666
  L+T+I w/ LayoutLLM-SFT   0.9355   0.9544

Table IV shows the comparison results with and without pre-training on the LayoutLLM-SFT dataset. When pre-training was conducted using LayoutLLM-SFT, both
Accuracy and ANLS scores decreased. This finding suggests that the inherent capability of a general-purpose LVLM to comprehend structured visual information may already be sufficient, or even preferable, for TableCellQA tasks, compared to specialized pre-training that could unintentionally limit the model's generalization ability.

V. CONCLUSIONS

In this study, we developed models for table question answering that leverage multimodal information, including text, images, and layout. We found that incorporating textual and layout information within tables significantly enhances the table understanding capability of LVLMs. Among the three modalities (text, layout, and image), we observed that text contributes the most to performance, followed by layout information, and then image information. On the other hand, structured table text achieved the highest overall performance, reaffirming the importance of explicitly defined table structure. Our approach can be seen as an intermediate solution between end-to-end table understanding from raw images and methods that rely on fully structured HTML-based tables. It bypasses the need for full table structure reconstruction while mitigating the limitations of image-only understanding, thereby enabling more generalizable table comprehension.

We also found that task-specific pretraining did not necessarily improve performance, suggesting that preserving the generalization ability of LVLMs is crucial for flexible and robust table understanding across diverse document formats. In future work, we aim to extend our method beyond text-only tables to handle more complex documents that contain mixtures of text, images, and figures. Ultimately, we seek to develop multimodal LLMs that are well suited for real-world business scenarios.

REFERENCES

[1] Ali Furkan Biten, Ruben Tito, Andres Mafla, Lluis Gomez, Marçal Rusiñol, Ernest Valveny, C. V. Jawahar, and Dimosthenis Karatzas.
Scene text visual question answering, 2019.
[2] Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, and Furu Wei. LayoutLMv3: Pre-training for document AI with unified text and image masking. arXiv [cs.CL], April 2022.
[3] Guillaume Jaume, Hazim Kemal Ekenel, and Jean-Philippe Thiran. FUNSD: A dataset for form understanding in noisy scanned documents, 2019.
[4] Yasutomo Kimura, Eisaku Sato, Kazuma Kadowaki, and Hokuto Ototake. Overview of the NTCIR-18 U4 task. Proceedings of the 18th NTCIR Conference on Evaluation of Information Access Technologies, June 2025.
[5] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Peiyuan Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. LLaVA-OneVision: Easy visual task transfer. arXiv [cs.CV], August 2024.
[6] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv [cs.CV], April 2023.
[7] Jinghui Lu, Haiyang Yu, Yanjie Wang, Yongjie Ye, Jingqun Tang, Ziwei Yang, Binghong Wu, Qi Liu, Hao Feng, Han Wang, Hao Liu, and Can Huang. A bounding box is worth one token: Interleaving layout and text in a large language model for document understanding. arXiv [cs.CL], July 2024.
[8] Chuwei Luo, Yufan Shen, Zhaoqing Zhu, Qi Zheng, Zhi Yu, and Cong Yao. LayoutLLM: Layout instruction tuning with large language models for document understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 15630–15640, 2024.
[9] Minesh Mathew, Dimosthenis Karatzas, and C. V. Jawahar. DocVQA: A dataset for VQA on document images. arXiv [cs.CV], July 2020.
[10] OpenAI. GPT-4o system card. https://arxiv.org/abs/2410.21276, 2024. Accessed: 2025-04-28.
[11] Seunghyun Park, Seung Shin, Bado Lee, Junyeop Lee, Jaeheung Surh, Minjoon Seo, and Hwalsuk Lee. CORD: A consolidated receipt dataset for post-OCR parsing, 2019.
[12] Qwen Team. Qwen2.5-VL, January 2025.
[13] Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. Qwen2-VL: Enhancing vision-language model's perception of the world at any resolution. arXiv [cs.CV], September 2024.
[14] Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, and Lidong Zhou. LayoutLMv2: Multi-modal pre-training for visually-rich document understanding. arXiv [cs.CL], December 2020.
[15] Yiheng Xu, Minghao Li, Lei Cui, Shaohan Huang, Furu Wei, and Ming Zhou. LayoutLM: Pre-training of text and layout for document image understanding. arXiv [cs.CL], December 2019.
[16] Ruyu Zheng, Xinyu Zhang, Jiaze Zhang, Yuwei Liu, Qiang Zhang, Kang Zheng, and Wen Gao. Multimodal table understanding as instruction following. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
GIM: Improved Interpretability for Large Language Models

Joakim Edin 1,3, Róbert Csordás 2, Tuukka Ruotsalo 3,4, Zhengxuan Wu 2, Maria Maistro 3, Jing Huang 2, Lars Maaløe 1
1 Corti, 2 Stanford University, 3 Copenhagen University, 4 LUT University
je@corti.ai

Abstract

Ensuring faithful interpretability in large language models is imperative for trustworthy and reliable AI. A key obstacle is self-repair, a phenomenon where networks compensate for reduced signal in one component by amplifying others, masking the true importance of the ablated component. While prior work attributes self-repair to layer normalization and back-up components that compensate for ablated components, we identify a novel form occurring within the attention mechanism, where softmax redistribution conceals the influence of important attention scores. This leads traditional ablation and gradient-based methods to underestimate the significance of all components contributing to these attention scores. We introduce Gradient Interaction Modifications (GIM), a technique that accounts for self-repair during backpropagation. Extensive experiments across multiple large language models (Gemma 2B/9B, LLAMA 1B/3B/8B, Qwen 1.5B/3B) and diverse tasks demonstrate that GIM significantly improves faithfulness over existing circuit identification and feature attribution methods. Our work is a significant step toward better understanding the inner mechanisms of LLMs, which is crucial for improving them and ensuring their safety. Our code is available at https://github.com/JoakimEdin/gim.

Figure 1: Attention self-repair. When multiple values associated with large attention weights contain similar information, ablating one attention score has little effect on the output because the softmax activation function compensates by increasing the weight of other positions.
This results in perturbation-based and gradient-based explanation methods underestimating the importance of components that contribute to the attention scores through the keys and queries.

Preprint. Under review. arXiv:2505.17630v1 [cs.CL] 23 May 2025

1 Introduction

Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks, from text summarization to code synthesis [1]. Despite their impressive performance, our understanding of their inner mechanisms remains limited. This opacity hinders our ability to explain model outputs, catch hallucinations, systematically improve performance, and ensure reliable behavior in novel situations. To address this, most previous explainability methods aim to quantify how internal components (e.g., attention heads, neurons, token embeddings) affect model output [2]. Two of the most widely adopted approaches, perturbation-based and gradient-based methods, both rest on the assumption that modifying a truly salient component will lead to a commensurate change in the model's output. For instance, perturbation-based methods (e.g., SHAP, LIME, activation patching) investigate the effect of large perturbations [3, 4], while gradient-based methods analyze infinitesimally small perturbations [5]. However, this core assumption that perturbing important components will perturb the model output often fails due to what is known as the self-repair effect [6].

Self-repair is a phenomenon in language models where downstream model components compensate for perturbed components, resulting in deceptively small output changes [7, 8, 6]. As a consequence, explanation methods may underestimate the importance of certain model components [6]. In this paper, we identify a new form of self-repair within the attention mechanism of LLMs. When multiple value vectors associated with large attention weights contain similar information, perturbing any individual attention score has little effect on the model output.
If one position’s attention score | https://arxiv.org/abs/2505.17630v1 |
is reduced, the softmax activation function increases the attention weight of the other positions, maintaining similar output signals (see Figure 1). This attention self-repair systematically causes both perturbation-based methods and gradient methods to underestimate the importance of all components contributing to these attention scores.

To address these challenges, we propose gradient interaction modifications (GIM), a novel gradient-based attribution method specifically designed to account for self-repair effects in language models. GIM introduces targeted modifications to backpropagation to better handle interactions that cause self-repair. First, we develop temperature-adjusted softmax gradients to address the attention self-repair problem. Second, we implement layernorm freeze to deal with self-repair in layer normalization [9, 6]. Finally, we incorporate gradient normalization as proposed by Achtibat et al. [10], which we found essential for making our backpropagation modifications effective in practice. Our key contributions are:

1. We identify and formalize the attention self-repair problem in softmax operations, showing how it fundamentally undermines the faithfulness metrics of traditional interpretability methods.

2. We introduce GIM, a novel gradient-based attribution method that combines three complementary techniques to address feature interactions in transformer components. We demonstrate that GIM is more faithful than other gradient-based circuit identification and feature attribution methods across multiple large language models (Gemma 2B/9B, LLAMA 1B/3B/8B, and Qwen 1.5B/3B) and six datasets spanning four tasks (question-answering, fact-verification, sentiment classification, and hate speech detection).

3. We empirically isolate and demonstrate the importance of each of our three modifications, showing their significant positive effect on explanation faithfulness across multiple large language models and tasks.
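The Figure 1 scenario can be reproduced numerically. In this sketch (toy scores chosen so the attention weights are approximately [0.5, 0.5, 0] and the values are [1, 1, 0]), ablating either one of the two redundant attention scores leaves the output essentially unchanged; only ablating both changes it:

```python
import math

def softmax(scores):
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attn_output(scores, values):
    """Scalar attention output: weighted sum of (scalar) values."""
    return sum(w * v for w, v in zip(softmax(scores), values))

NEG = -1e9  # "ablating" a score = pushing it to a large negative value
scores, values = [5.0, 5.0, -50.0], [1.0, 1.0, 0.0]  # weights ~ [0.5, 0.5, 0]

out      = attn_output(scores, values)               # ~1.0
out_one  = attn_output([NEG, 5.0, -50.0], values)    # still ~1.0: weight re-routes to position 1
out_both = attn_output([NEG, NEG, -50.0], values)    # ~0.0: only joint ablation changes the output
print(round(out, 4), round(out_one, 4), round(out_both, 4))
```

Any attribution method that perturbs one score at a time would therefore assign near-zero importance to both of the redundant positions, even though jointly they determine the output.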
2 Related Work

2.1 Self-repair in Neural Networks

Self-repair occurs when ablating components has unexpectedly small effects on model outputs because other components compensate for the ablation. Several studies have identified attention heads that suppress the signal from earlier layers, and the suppression is reduced when those early layers are ablated [8, 7, 11]. McGrath et al. [8] identified what they called MLP erasure neurons, which have a similar suppressing effect on the signal from previous layers. Rushing and Nanda [6] investigated the role of layer normalization in self-repair. They demonstrated that normalization can compensate for ablated layers by rescaling the remaining signals. Our work identifies a new form of self-repair that occurs within attention mechanisms, where softmax redistribution conceals the influence of important attention scores. We also show that this self-repair mechanism can impact the gradient.

2.2 Gradient-based Attribution Methods

Attribution methods quantify model components' contribution to specific model behaviors. Feature attribution methods quantify the importance of input features (e.g., tokens), while circuit identification methods quantify the importance of internal model components (e.g., attention heads). Attribution methods are usually perturbation-based or gradient-based [12]. Perturbation-based approaches (e.g., SHAP [3] and activation patching [13]) attribute importance by measuring output changes when perturbing components. However, they require numerous forward passes, making them prohibitively slow for large language models. Gradient-based methods [5, 14] offer a faster alternative by leveraging backpropagation. However, they often produce unfaithful explanations due to neural network nonlinearities, where first-order approximations fail to capture true component importance [10, 15]. Methods such as Layer-wise Relevance Propagation (LRP)
[16], Integrated Gradients [5], and DeepLIFT [17] attempt to address these limitations by modifying gradient flows, but empirically struggle with language models [18, 19, 10]. Several studies propose specialized backpropagation rules for the transformer architecture. TransformerLRP [9] treats the layer normalization scaling variable and the attention matrix as constants, essentially not backpropagating through them. ATP* [20] proposes a backpropagation rule for the softmax operation that better estimates the impact of large attention score perturbations. However, ATP* ignores the normalization contribution in the softmax operation. Achtibat et al. [10] propose AttnLRP, a collection of backpropagation rules. Grad norm, their most important contribution to our work, divides the gradient by N when backpropagating through operations where N variables are multiplied together.¹ Using Taylor Decomposition and Shapley values, they provide theoretical justification for this rule. While TransformerLRP and AttnLRP show that the combination of their backpropagation rules improves attribution faithfulness, they do not isolate each rule's contribution to the improvements. Unlike previous work, our approach specifically targets self-repair in attention mechanisms and systematically evaluates both existing and novel backpropagation modifications, measuring their individual and combined impacts on attribution faithfulness.

3 Attention Self-Repair

Self-repair in the attention layer occurs when an attention score strongly influences the output, yet perturbing it has minimal effect. This effect renders both perturbation- and gradient-based attribution methods inaccurate. In this section, we show when and why attention self-repair occurs, and its consequences for gradient-based explanation methods. In transformer models, attention mechanisms control information flow between positions.
Position $i$ gathers information from all positions using the following computations:

$\mathbf{a} = Q_i \cdot K^T$ (attention scores for position $i$)
$\mathbf{s} = \mathrm{Softmax}(\mathbf{a})$ (attention weights for position $i$)
$\mathbf{o} = \mathbf{s} \cdot V$ (attention output for position $i$)   (1)

where $Q_i$ is the query for position $i$, $K$ are the keys, and $V$ are the values. The softmax function normalizes attention scores into a probability distribution:

$\mathrm{Softmax}(\mathbf{a})_j = \frac{e^{a_j/\tau}}{\sum_k e^{a_k/\tau}}$   (2)

where $\tau$ is the temperature parameter. For clarity, we distinguish between softmax input "attention scores" $\mathbf{a}$ and softmax output "attention weights" $\mathbf{s}$.

¹Achtibat et al. refer to grad norm as the uniform rule.

Attention mechanisms can be viewed as information routing systems, where each value vector $V_j$ contains information from position $j$, and the attention weight $s_j$ determines how much information to copy from that position. Self-repair occurs when multiple values with large attention weights contain similar information. When an attention score $a_j$ is ablated, the softmax redistributes weights primarily to the positions with the highest remaining scores. If these positions contain values $V_j$ that carry similar information, the output remains virtually unchanged. Figure 1 illustrates this with attention weights $[0.5, 0.5, 0]$ and value vectors $[1, 1, 0]$. Note that the values are identical at the positions with non-zero attention weights. Individually ablating either attention score (i.e., softmax input) leaves the output unchanged, as the softmax shifts weight to the other position with the same value. Only ablating both scores simultaneously affects the output, as the softmax shifts the attention weights to the position with a different value.

3.1 Attention self-repair results in zero gradient

Here we will prove that when the conditions for attention self-repair are met, the gradient of
the model's final output logit z with respect to the attention score a_j will be near-zero. We can define the gradient as:

∂z/∂a_j = s_j ( ∂z/∂s_j (1 − s_j) − Σ_{k≠j} ∂z/∂s_k s_k )    (3)

where ∂z/∂s_j is the gradient of the output logit with respect to the attention weight s_j, computed as follows:

∂z/∂s_j = ∂z/∂o · V_j    (4)

To demonstrate the effect of attention self-repair on Equation (3), we first define the set of positions with substantial attention weights I_ε = {k : s_k > ε}, where ε is close to zero. Recall that for self-repair to occur, the values V_j at the positions with substantial attention weights must contribute similarly to the output. We can measure a value's unweighted contribution to the output with the dot product between the value and its upstream gradient (GradientXInput). This is equivalent to ∂z/∂s_j in Equation (4). Therefore, the key condition for self-repair, namely that values at positions with substantial attention weights have similar contributions to the output, can be expressed mathematically as:

∂z/∂s_j ≈ ∂z/∂s_k ≈ c   ∀ j, k ∈ I_ε    (5)

where c is some constant. Next, we use the property that the softmax weights sum to 1:

Σ_{k∈I_ε} s_k ≈ 1  ⇔  1 − s_j ≈ Σ_{k∈I_ε, k≠j} s_k    (6)

Substituting these into Equation (3), we get:

∂z/∂a_j ≈ s_j ( c Σ_{k∈I_ε, k≠j} s_k − Σ_{k∈I_ε, k≠j} c s_k ) = s_j c ( Σ_{k∈I_ε, k≠j} s_k − Σ_{k∈I_ε, k≠j} s_k ) = 0    (7)

This result reveals that standard gradient-based explanation methods underestimate the importance of attention scores when attention self-repair conditions are met. Even when an attention score a_j significantly impacts the model's prediction, its gradient can be near-zero. This mathematical insight motivates our approach to modify gradient calculations to better reflect the true importance of components.

4 Methods

To address self-repair and other gradient distortion issues in transformer models, we introduce gradient interaction modification (GIM), a comprehensive approach combining three complementary modifications of standard back-propagation.
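The zero-gradient result can be checked numerically on the Figure 1 configuration (attention weights ≈ [0.5, 0.5, 0] with identical values at the two active positions). The sketch below uses scalar values, so that ∂z/∂s_k = v_k, and implements Equation (3) directly; it is our illustration, not the authors' code.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def z(a, v):
    """Scalar toy output: attention weights dotted with scalar values."""
    return softmax(a) @ v

def grad_z_wrt_scores(a, v):
    """Eq. (3): dz/da_j = s_j * (dz/ds_j * (1 - s_j) - sum_{k!=j} dz/ds_k * s_k),
    with dz/ds_k = v_k for this scalar toy model (Eq. 4)."""
    s = softmax(a)
    n = len(a)
    g = np.empty(n)
    for j in range(n):
        g[j] = s[j] * (v[j] * (1 - s[j]) - sum(v[k] * s[k] for k in range(n) if k != j))
    return g

def ablation_effect(a, v, idx):
    """Output change when the scores at idx are ablated (set very negative)."""
    a2 = a.copy()
    a2[list(idx)] = -1e9
    return z(a, v) - z(a2, v)

a = np.array([2.0, 2.0, -5.0])        # attention weights ~ [0.5, 0.5, 0]
v_same = np.array([1.0, 1.0, 0.0])    # self-repair: identical values at active positions
v_diff = np.array([1.0, -1.0, 0.0])   # no self-repair

print(grad_z_wrt_scores(a, v_same))   # near-zero for both large scores
print(grad_z_wrt_scores(a, v_diff))   # clearly non-zero
# OR-gate behaviour: individual ablations do almost nothing, the joint one does.
print(ablation_effect(a, v_same, [0]), ablation_effect(a, v_same, [0, 1]))
```

Under self-repair (v_same) the analytic gradients at the two large scores nearly vanish, while with dissimilar values (v_diff) they do not, and only the joint ablation moves the output appreciably.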
This section describes the three modifications: Temperature-adjusted softmax gradients (TSG), Layernorm freeze, and Grad norm.

Figure 2: The attention self-repair effect and how temperature-adjusted softmax gradients increase the gradient when there is self-repair, for LLaMA-3.2 1B on FEVER. (a) Difference between ablating the two largest attention scores separately (x-axis) and jointly (y-axis); the 95% quantile interactions are colored orange. (b) Comparison of the standard gradient and TSG for the same attention scores; the attention vectors with the 95% quantile interactions are colored orange. TSG results in a stronger signal for attention scores with a strong interaction.

4.1 Temperature-adjusted softmax gradients

TSG modifies backpropagation through attention to address the self-repair problem presented in Section 3.1. The goal of TSG is to preserve attribution magnitude when standard backpropagation would otherwise underestimate the importance of attention scores because of the self-repair effect. TSG addresses the issue with a simple approach:

1. During the forward pass, TSG uses the standard temperature (T = 1), preserving the model's normal behavior and predictions.
2. During the backward pass, TSG recomputes the softmax with a higher temperature (T > 1) before computing gradients.

Increasing the temperature results in a more uniform attention weight distribution (an
increase of entropy). In other words, large attention weights decrease while small weights increase. This enlarges our set of positions with significant weights, I_ε = {k : s_k > ε}. Recall that for self-repair to occur, the gradients ∂z/∂s_j must be approximately uniform across all positions in I_ε. By expanding I_ε to include more positions, we potentially disrupt the uniformity condition that leads to cancellation. When this uniformity condition is broken, the two terms in Equation (3) may no longer completely cancel each other out, allowing for non-zero gradients.

4.2 Layernorm freeze & gradient norm

Layernorm freeze addresses the self-repair effect in layer normalization by treating the normalization parameters as constants during backpropagation [6, 9]. Specifically, during the forward pass, we save the normalization factor σ, and during the backward pass, we divide the upstream gradient by σ instead of backpropagating through it. This naively prevents the self-repair effect by preventing layer normalization from changing due to perturbations.

Gradient norm normalizes the gradient after variable multiplications [10]. During backpropagation, it divides the gradient by the number of inputs involved in the multiplication. Achtibat et al. [10] proved the optimality of this approach using Taylor decomposition and Shapley values. Specifically, we divide the gradient by 2 for three key multiplicative interactions in transformer models: 1) attention-value, 2) query-key, and 3) MLP gate-projection.

5 Experimental setup

Our experiments were designed around three questions: 1) Does attention self-repair occur in large language models? 2) Does GIM produce more faithful explanations than existing circuit identification and feature attribution methods? 3) How much does each of GIM's three modifications (TSG, layernorm freeze, and grad norm) contribute to the increased faithfulness? This section describes how we ran the experiments.
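The backward-pass rules from Section 4 can be sketched for a single attention vector. We assume a scalar toy output z = s · v, so the upstream gradient ∂z/∂s equals v; this is our illustration of the two-temperature idea and of the grad norm rule, not the authors' implementation.

```python
import numpy as np

def softmax(a, T=1.0):
    e = np.exp(a / T - np.max(a / T))
    return e / e.sum()

def softmax_vjp(a, upstream, T=1.0):
    """Vector-Jacobian product of the softmax at temperature T:
    (J^T u)_j = (s_j / T) * (u_j - s . u)."""
    s = softmax(a, T)
    return s * (upstream - s @ upstream) / T

def tsg_score_gradient(a, upstream, T_backward=2.0):
    """TSG: the forward pass keeps T = 1 (model behaviour unchanged);
    the backward pass recomputes the softmax at T > 1."""
    return softmax_vjp(a, upstream, T=T_backward)

def grad_norm_product_vjp(x, y, upstream):
    """Grad norm: for a product of N = 2 variables (e.g. attention-value,
    query-key, MLP gate-projection), divide each input's gradient by 2."""
    return upstream * y / 2.0, upstream * x / 2.0

a = np.array([2.0, 2.0, -5.0])
v = np.array([1.0, 1.0, 0.0])        # self-repair configuration
standard = softmax_vjp(a, v, T=1.0)  # near-zero for the two active scores
tsg = tsg_score_gradient(a, v, T_backward=2.0)
print(standard, tsg)                 # TSG yields clearly larger magnitudes
```

On this self-repair example the standard softmax gradient at the two active scores nearly cancels, while the T = 2 backward pass recovers a noticeably larger signal.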
5.1 Models, datasets, and evaluation metrics

For our experiments, we use seven instruction-tuned large language models spanning three model families of various sizes: Gemma-2 (2B, 9B), LLaMA-3.2 (1B, 3B), LLaMA-3.1 (8B), and Qwen-2.5 (1.5B, 3B) [21, 22, 23]. We use six datasets spanning four tasks: BoolQ (question answering), FEVER and SciFact (fact verification), Twitter and Movie review (sentiment classification), and HateXplain (hate speech detection). These datasets were selected based on their diversity and prevalence in prior attribution studies [24, 25, 12]. We provide more details on the datasets in Section 9.2.

We evaluate the faithfulness of the attributions using Comprehensiveness and Sufficiency [24]. Comprehensiveness (↑) measures the average output change after cumulatively ablating the top-attributed features:

Comp(f, x, e) = (1 / (N f(x))) Σ_{i=1}^{N} [ f(x) − f(p(x, rank(e)_{1:i})) ]    (8)

where f is the model, x is the input, e contains the attribution scores, p is the perturbation function, and rank(·) orders features by attribution. Sufficiency (↓) measures the average output change after cumulatively ablating the lowest-attributed features:

Suff(f, x, e) = (1 / (N f(x))) Σ_{i=1}^{N} [ f(x) − f(p(x, rank(−e)_{1:i})) ]    (9)

5.2 Experiments

1. Attention self-repair in large language models: We investigate whether the attention self-repair problem occurs in large language models and evaluate the effectiveness of TSG in amplifying the signal when gradients would otherwise have underestimated the importance of attention scores. We first identify attention score vectors where multiple positions contribute significantly to the model output. We define position j as having a significant contribution when:

∂z/∂s_j · s_j > β
where β is a threshold parameter. For each identified attention vector, we measure the joint effect of ablating the two largest attention scores simultaneously versus the sum of their individual ablation effects. If the joint effect is larger than the sum of the individual effects, the attention scores exhibit OR-gate-like behaviour, which is equivalent to self-repair [26]. We quantify these effects by the change in the weighted gradient sum Σ_k ∂z/∂s_k · s_k. We use this instead of computing the full forward pass to avoid other components compensating for the ablations. Next, we compare TSG and standard gradient attribution scores for attention scores, focusing on examples with large self-repair effects. This comparison evaluates whether TSG correctly amplifies attribution for attention scores that standard gradients would underestimate due to self-repair.

2. Faithfulness of GIM: For feature attribution evaluation, we compare GIM with five gradient-based methods: GradientXInput (GxI) [5], Integrated Gradients (IG) [5], DeepLIFT [27], TransformerLRP [9], and AttnLRP [28]. We did not evaluate perturbation-based methods because they are too computationally expensive for large language models on long inputs [29]. We computed comprehensiveness and sufficiency by replacing tokens with the token representing a space. For circuit identification evaluation, we compare GIM with attribution patching (ATP) [29], ATP*² [15], and Integrated Gradients [28] by computing comprehensiveness and sufficiency at each layer separately, replacing the representations with the layer-average representations.

²We only implemented ATP*'s softmax gradient fix component.
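The comprehensiveness and sufficiency metrics from Section 5.1 (Equations (8) and (9)) can be sketched as below. The additive toy model and zero-ablation perturbation are our simplifications; the paper ablates by replacing tokens with the space token.

```python
import numpy as np

def comprehensiveness(f, x, e, perturb):
    """Eq. (8): average output drop after cumulatively ablating the
    top-attributed features, normalised by N * f(x)."""
    order = np.argsort(-e)             # rank(e): highest attribution first
    n, fx = len(x), f(x)
    drops = [fx - f(perturb(x, order[:i + 1])) for i in range(n)]
    return sum(drops) / (n * fx)

def sufficiency(f, x, e, perturb):
    """Eq. (9): identical, but ablating the lowest-attributed features first."""
    return comprehensiveness(f, x, -e, perturb)

# Hypothetical toy model: the output is the sum of the remaining feature values.
def f(x):
    return float(np.sum(x))

def perturb(x, idx):
    out = x.copy()
    out[idx] = 0.0                     # ablate by zeroing
    return out

x = np.array([3.0, 1.0, 2.0])
e = np.array([0.9, 0.1, 0.5])          # attributions that track the true values
print(round(comprehensiveness(f, x, e, perturb), 3))  # → 0.778
print(round(sufficiency(f, x, e, perturb), 3))        # → 0.556
```

Because the attributions e rank the features in the same order as their actual effect on f, this toy attribution scores high comprehensiveness and comparatively low sufficiency, the pattern a faithful method should show.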
Figure 3: Comparison of circuit identification methods (Attribution Patching, Integrated Gradients, ATP*, GIM) for LLaMA-3.2 1B on (a) BoolQ, (b) FEVER, and (c) Movie review (95% CI). The top row depicts comprehensiveness per layer (↑). The bottom row depicts sufficiency (↓).

All attribution scores follow the formula (x − x̃) · α, where x and x̃ are the original and counterfactual representations, and α represents the method-specific attributions. For the feature attribution evaluation, we used the token representing a space as the counterfactual, while for the circuit identification evaluation, we used the layer-average representations as counterfactuals. We used a temperature of 2 for TSG, chosen based on the results for Gemma-2 2B on the FEVER and HateXplain datasets.

3. Identifying the importance of each GIM modification: Our final experiment is an ablation study, in which we estimate the importance of the three modifications in GIM. We estimate their importance by cumulatively adding modifications to standard GradientXInput and measuring the change in feature attribution faithfulness.

6 Results

Self-repair is prevalent in attention mechanisms: Our analysis reveals that attention self-repair is common across large language models.
Figure 2a demonstrates this phenomenon by comparing the joint effect of ablating the two largest attention scores simultaneously | https://arxiv.org/abs/2505.17630v1 |
(y-axis) versus the sum of their individual ablation effects (x-axis). The consistent gap above the diagonal, especially prominent for the orange-highlighted instances (top 5% strongest interactions), provides clear empirical evidence of self-repair: individual attention scores show disproportionately small effects when ablated separately. These results validate our theoretical analysis from Section 3.1 in practical model settings. We provide results for other models and datasets in Section 9.9.

TSG attributes more importance to attention scores during attention self-repair: Figure 2b compares standard gradient attribution (x-axis) with our temperature-adjusted softmax gradients (y-axis) for the same attention scores. The orange dots represent attention vectors identified as having strong self-repair interactions in Figure 2a. TSG consistently attributes higher importance to these scores. This shows that TSG successfully addresses the gradient cancellation issue identified in our theoretical analysis by increasing attribution magnitude where standard gradients would otherwise assign low importance due to self-repair. We provide similar results for other models and datasets in Section 9.9.

GIM is more faithful than other gradient-based circuit identification methods: Figure 3 shows comprehensiveness and sufficiency metrics across layers for LLaMA-3.2 1B. GIM significantly outperforms other circuit identification methods in early layers, while Integrated Gradients demonstrates slight advantages in middle and late layers. Similar patterns appear across other models and datasets (Section 9.8). For all the methods, the scores worsen in the top layers of the LLM. This could either be because the methods become less faithful or because the upper and lower limits of comprehensiveness and sufficiency change across layers [18].
Since we are interested in the relative differences between the methods, we leave it to future work to understand what causes the drop in the scores in the top layers.

Table 1: Comparison of gradient-based feature attribution methods. For each dataset and method, the left block reports Comprehensiveness ↑ and the right block Sufficiency ↓, for Gemma-2 (2B, 9B), LLaMA (1B, 3B, 8B), and Qwen-2.5 (1.5B, 3B).

                  Comprehensiveness ↑                 | Sufficiency ↓
                  2B   9B   1B   3B   8B   1.5B 3B   | 2B    9B   1B   3B   8B   1.5B 3B
BoolQ
  GxI             0.09 0.00 0.18 0.45 0.39 0.27 0.57 | 0.60  0.43 0.71 0.39 0.67 0.54 0.43
  IG              0.51 0.20 0.20 0.52 0.40 0.00 0.48 | 0.11  0.12 0.43 0.34 0.58 0.63 0.54
  DeepLIFT        0.41 0.15 0.26 0.36 0.31 0.20 0.34 | 0.37  0.37 0.58 0.55 0.74 0.67 0.77
  TransformerLRP  0.32 0.09 0.57 0.55 0.17 0.26 0.27 | 0.37  0.07 0.30 0.30 0.68 0.49 0.64
  AttnLRP         -    -    0.66 0.70 0.61 0.67 0.65 | -     -    0.25 0.17 0.50 0.15 0.34
  GIM             0.59 0.43 0.69 0.72 0.60 0.68 0.61 | 0.03  0.04 0.22 0.10 0.49 0.09 0.23
FEVER
  GxI             0.03 0.06 0.39 0.53 0.46 0.43 0.47 | 0.47  0.40 0.56 0.63 0.71 0.59 0.73
  IG              0.47 0.32 0.28 0.51 0.50 0.03 0.54 | 0.02  0.20 0.51 0.59 0.62 0.67 0.61
  DeepLIFT        0.12 0.20 0.33 0.47 0.53 0.31 0.43 | 0.43  0.36 0.58 0.65 0.76 0.64 0.76
  TransformerLRP  0.17 0.22 0.54 0.68 0.21 0.30 0.49 | 0.44  0.21 0.36 0.46 0.75 0.63 0.69
  AttnLRP         -    -    0.60 0.75 0.65 0.61 0.68 | -     -    0.32 0.44 0.52 0.23 0.49
  GIM             0.42 0.39 0.62 0.75 0.69 0.51 0.68 | -0.01 0.09 0.26 0.39 0.51 0.31 0.41
HateXplain
  GxI             0.07 0.08 0.53 0.38 0.44 0.49 0.53 | 0.59  0.42 0.60 0.67 0.73 0.63 0.63
  IG              0.62 0.37 0.49 0.64 0.44 0.00 0.46 | 0.35  0.32 0.58 0.56 0.61 0.64 0.72
  DeepLIFT        0.36 0.29 0.55 0.36 0.41 0.29 0.42 | 0.52  0.45 0.59 0.70 0.73 0.74 0.73
  TransformerLRP  0.54 0.14 0.51 0.58 0.33 0.20 0.30 | 0.38  0.24 0.59 0.57 0.67 0.66 0.71
  AttnLRP         -    -    0.68 0.70 0.60 0.85 0.59 | -     -    0.30 0.47 0.42 0.33 0.71
  GIM             0.64 0.48 0.68 0.64 0.65 0.80 0.63 | 0.26  0.14 0.26 0.46 0.47 0.35 0.51
Movie
  GxI             0.15 0.04 0.34 0.51 0.34 0.42 0.52 | 0.61  0.52 0.61 0.51 0.71 0.68 0.76
  IG              0.60 0.38 0.28 0.69 0.41 0.00 0.58 | 0.17  0.13 0.49 0.36 0.52 0.71 0.56
  DeepLIFT        0.44 0.22 0.31 0.62 0.48 0.49 0.35 | 0.35  0.48 0.60 0.49 0.71 0.56 0.77
  TransformerLRP  0.48 0.26 0.61 0.70 0.22 0.23 0.49 | 0.27  0.13 0.35 0.40 0.83 0.74 0.67
  AttnLRP         -    -    0.69 0.76 0.72 0.78 0.77 | -     -    0.25 0.30 0.48 0.19 0.46
  GIM             0.60 0.44 0.71 0.78 0.71 0.83 0.76 | 0.11  0.07 0.19 0.30 0.48 0.17 0.31
SciFact
  GxI             0.09 0.05 0.27 0.42 0.40 0.34 0.57 | 0.55  0.39 0.57 0.59 0.72 0.56 0.63
  IG              0.58 0.32 0.27 0.46 0.50 0.01 0.65 | 0.18  0.20 0.46 0.56 0.57 0.69 0.54
  DeepLIFT        0.28 0.27 0.28 0.43 0.50 0.25 0.62 | 0.47  0.37 0.58 0.61 0.75 0.69 0.65
  TransformerLRP  0.30 0.22 0.56 0.56 0.15 0.39 0.51 | 0.41  0.22 0.33 0.43 0.76 0.50 0.67
  AttnLRP         -    -    0.64 0.71 0.65 0.67 0.74 | -     -    0.30 0.35 0.49 0.22 0.40
  GIM             0.57 0.40 0.67 0.74 0.67 0.62 0.74 | 0.04  0.08 0.22 0.31 0.49 0.26 0.38
Twitter
  GxI             0.40 0.27 0.41 0.48 0.45 0.48 0.73 | 0.51  0.37 0.64 0.75 0.77 0.73 0.64
  IG              0.53 0.34 0.41 0.70 0.55 0.40 0.67 | 0.36  0.33 0.62 0.59 0.66 0.71 0.66
  DeepLIFT        0.44 0.23 0.42 0.65 0.54 0.51 0.75 | 0.54  0.44 0.62 0.67 0.75 0.75 0.67
  TransformerLRP  0.43 0.26 0.58 0.62 0.34 0.41 0.50 | 0.43  0.38 0.49 0.66 0.76 0.73 0.72
  AttnLRP         -    -    0.64 0.73 0.68 0.75 0.66 | -     -    0.42 0.61 0.57 0.48 0.66
  GIM             0.52 0.38 0.68 0.72 0.69 0.76 0.72 | 0.34  0.22 0.41 0.61 0.56 0.55 0.58

GIM is often more faithful than other gradient-based feature attribution methods: Table 1 compares GIM against five other gradient-based attribution methods across multiple datasets and model architectures.
GIM drastically outperforms traditional methods such as GradientXInput, Integrated Gradients, and DeepLIFT on nearly all evaluations. While AttnLRP [10] outperforms GIM on several dataset-model pairs, GIM achieves the best comprehensiveness and sufficiency scores for the majority of combinations, establishing it as the most consistently faithful method overall.

All modifications contribute to GIM's faithfulness improvements: Figure 4 compares how adding different GIM modifications to GradientXInput improves feature attribution faithfulness. Each point represents the average improvement for a dataset compared to the GradientXInput baseline. Blue boxplots show feature attribution performance with layernorm freeze + gradient normalization, while orange boxplots additionally include TSG (full GIM). The results demonstrate that layernorm freeze + gradient normalization substantially improve both attribution metrics across all models. Adding TSG further enhances attribution quality for most models, with particularly strong improvements for Gemma-2 2B and the LLaMA models. However, TSG appears less
beneficial for Qwen-2.5 1.5B, where it occasionally reduces attribution faithfulness. Qwen-2.5 1.5B had a hundred-fold larger gradients than the other models, which perhaps is problematic for TSG (see Figure 11). Comprehensive feature attribution results for all modification combinations are available in Table 2.

7 Discussion

When to amplify signals during self-repair: Our temperature-adjusted softmax gradients (TSG) amplify attribution signals when attention self-repair occurs, but is this amplification justified? We argue that amplification is appropriate for components that were actively contributing to the output before any perturbation, even when self-repair dampens the observed output change.

Figure 4: Relative improvement in (a) comprehensiveness ↑ and (b) sufficiency ↓ from adding modifications to GradientXInput. We compare adding layernorm freeze and grad norm with additionally adding TSG. Each point represents the average statistics of a dataset. The y-axis shows the improvements over GradientXInput.

The "firing squad" analogy from Pearl [30] illustrates this well: when two soldiers simultaneously fire at a prisoner (where either shot alone would be fatal), both bear full responsibility despite the redundancy. Even though removing one shooter's bullet would not change the outcome (a form of self-repair), we should still attribute causal responsibility to both shooters. Crucially, we should not attribute importance to components that only begin to contribute after the perturbation occurs, regardless of whether this redistribution happens immediately (as in attention self-repair) or through backup mechanisms (as in the hydra effect [8]). Consider a historical counterfactual: "What killed Archduke Franz Ferdinand?" If we ablate the assassination in Sarajevo, and in this counterfactual he later dies of cancer, we would not say he died from both being shot and cancer.
When a component only contributes after ablation of another component, that compensating component should not receive attribution in the unperturbed state. Could GIM improve model training? Since GIM’s gradient modifications more accurately attribute importance to model components during interpretation, we speculate that they might also improve model training itself. By implementing these modifications during backpropagation, we might achieve more precise parameter updates that account for feature interactions and self-repair mechanisms. We suggest that future work empirically evaluate whether models trained with GIM-modified gradients demonstrate faster convergence or better performance compared to standard training approaches. Limitations: While our experiments demonstrate that TSG increases both backpropagation signals during self-repair and overall attribution faithfulness, we cannot conclusively establish a causal link between these observations. Temperature adjustment affects backpropagation in multiple ways beyond mitigating self-repair, and these additional effects may contribute to the improved faithfulness we observe. Furthermore, we did not find an explanation for why TSG improved faithfulness across all models except for Qwen-2.5 1.5B. We observed that its gradient was a hundred-fold larger than the other models’, but could not provide any definite evidence that this caused the decreased faithfulness. 8 Conclusion We identified attention self-repair, a phenomenon where softmax normalization redistributes weights during perturbation, causing interpretability methods to underestimate component importance. Our proposed method, Gradient Interaction Modifications (GIM), addresses this through temperature- adjusted softmax gradients (TSG), while also incorporating layer normalization freeze and gradient normalization to handle other issues in transformer interpretability. Our | https://arxiv.org/abs/2505.17630v1 |
evaluations across multiple models and datasets show that GIM consistently outperforms existing circuit identification and feature attribution methods in faithfulness. By accounting for self-repair during backpropagation, GIM better quantifies the importance of model components, advancing our understanding of neural networks by improving our ability to interpret them.

Acknowledgments and Disclosure of Funding

Thank you, Casper L. Christensen, for the enthusiastic discussions and for proposing and testing ideas for how to improve GIM. Thank you, Jakob Havtorn, Lasse Borgholt, and Søren Hauberg, for discussions and advice. This research was partially funded by the Innovation Fund Denmark via the Industrial Ph.D. Program (grant no. 2050-00040B) and the Academy of Finland (grant no. 322653).

References

[1] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs], July 2020.

[2] Dongfang Li, Baotian Hu, Qingcai Chen, and Shan He. Towards Faithful Explanations for Text Classification with Robustness Improvement and Explanation Guided Training. In Anaelia Ovalle, Kai-Wei Chang, Ninareh Mehrabi, Yada Pruksachatkun, Aram Galystan, Jwala Dhamala, Apurv Verma, Trista Cao, Anoop Kumar, and Rahul Gupta, editors, Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 1–14, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.trustnlp-1.1.

[3] Scott M Lundberg and Su-In Lee.
A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

[4] Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and Editing Factual Associations in GPT.

[5] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, pages 3319–3328. PMLR, July 2017.

[6] Cody Rushing and Neel Nanda. Explorations of Self-Repair in Language Models, May 2024.

[7] Kevin Wang, Alexandre Variengien, Arthur Conmy, Buck Shlegeris, and Jacob Steinhardt. Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 small, November 2022.

[8] Thomas McGrath, Matthew Rahtz, Janos Kramar, Vladimir Mikulik, and Shane Legg. The Hydra Effect: Emergent Self-repair in Language Model Computations, July 2023.

[9] Ameen Ali, Thomas Schnake, Oliver Eberle, Grégoire Montavon, Klaus-Robert Müller, and Lior Wolf. XAI for Transformers: Better Explanations through Conservative Propagation. In Proceedings of the 39th International Conference on Machine Learning, pages 435–451. PMLR, June 2022.

[10] Reduan Achtibat, Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Aakriti Jain, Thomas Wiegand, Sebastian Lapuschkin, and Wojciech Samek. AttnLRP: Attention-Aware Layer-Wise Relevance Propagation for Transformers. In Forty-First International Conference on Machine Learning, June 2024.

[11] Callum McDougall, Arthur Conmy, Cody Rushing, Thomas McGrath, and Neel Nanda. Copy Suppression: Comprehensively Understanding an Attention Head, October 2023.

[12] Qing Lyu, Marianna Apidianaki, and Chris Callison-Burch. Towards Faithful Model Explanation in NLP: A Survey. Computational Linguistics, 50(2):657–723, June 2024. doi: 10.1162/coli_a_00511.

[13]
Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and Editing Factual Associations in GPT. Advances in Neural Information Processing Systems, 35:17359–17372, December 2022.

[14] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps, April 2014.

[15] János Kramár, Tom Lieberum, Rohin Shah, and Neel Nanda. AtP*: An efficient and scalable method for localizing LLM behaviour to components, March 2024.

[16] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE, 10(7):e0130140, July 2015. ISSN 1932-6203. doi: 10.1371/journal.pone.0130140.

[17] Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences, April 2017.

[18] Joakim Edin, Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars Maaløe, and Maria Maistro. Normalized AOPC: Fixing Misleading Faithfulness Metrics for Feature Attribution Explainability, August 2024.

[19] Joakim Edin, Maria Maistro, Lars Maaløe, Lasse Borgholt, Jakob Drachmann Havtorn, and Tuukka Ruotsalo. An Unsupervised Approach to Achieve Supervised-Level Explainability in Healthcare Records. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 4869–4890, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.280.

[20] János Kramár, Tom Lieberum, Rohin Shah, and Neel Nanda. AtP*: An efficient and scalable method for localizing LLM behaviour to components, March 2024.
[21] Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. Gemma 2: Improving Open Language Models at a Practical Size, October 2024.

[22] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 Herd of Models, November 2024.

[23] Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. Qwen2.5 Technical Report, January 2025.

[24] Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. ERASER: A Benchmark to Evaluate Rationalized NLP Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4443–4458, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.408.

[25] Lucas E. Resck, Marcos M. Raimundo, and Jorge Poco. Exploring the Trade-off Between Model Performance and Explanation Plausibility of Text Classifiers Using Human Rationales, April 2024.

[26] Michael Tsang, Sirisha Rambhatla, and Yan Liu. How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions. In Advances in Neural Information Processing Systems, volume 33, pages 6147–6159. Curran Associates, Inc., 2020.

[27] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pages 3145–3153, Sydney, NSW, Australia, August 2017. JMLR.org.

[28] Michael Hanna, Sandro Pezzelle, and Yonatan Belinkov. Have Faith in Faithfulness: Going Beyond Circuit Overlap
When Finding Model Mechanisms, July 2024.

[29] Aaquib Syed, Can Rager, and Arthur Conmy. Attribution Patching Outperforms Automated Circuit Discovery, November 2023.

[30] Judea Pearl. Causality. Cambridge University Press, September 2009. ISBN 978-0-521-89560-6.

[31] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1300.

[32] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. page 30.

[33] James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: A Large-scale Dataset for Fact Extraction and VERification. In Marilyn Walker, Heng Ji, and Amanda Stent, editors, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1074.

[34] David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. Fact or Fiction: Verifying Scientific Claims. In Bonnie Webber, Trevor Cohn, Yulan He, and Yang Liu, editors, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7534–7550, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.609.
[35] Omar Zaidan, Jason Eisner, and Christine Piatko. Using "Annotator Rationales" to Improve Machine Learning for Text Categorization. In Candace Sidner, Tanja Schultz, Matthew Stone, and ChengXiang Zhai, editors, Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 260–267, Rochester, New York, April 2007. Association for Computational Linguistics.
[36] Maggie, Phil Culliton, and Wei Chen. Tweet sentiment extraction, 2020.
[37] Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17):14867–14875, May 2021. ISSN 2374-3468, 2159-5399. doi: 10.1609/aaai.v35i17.17745.

9 Appendix / supplemental material

9.1 Ablation study

Table 2 shows the performance of each modification combination. The modifications have a much stronger effect combined than individually, especially grad norm combined with LN freeze.

Table 2: Performance when we cumulatively add modifications to Gradient X Input

                                        Comprehensiveness ↑                  Sufficiency ↓
                                 Gemma  LLAMA  LLAMA  Qwen   Qwen    Gemma  LLAMA  LLAMA  Qwen   Qwen
                                  2B     1B     3B    1.5B    3B      2B     1B     3B    1.5B    3B
BoolQ
  Gradient X Input               0.09   0.18   0.45   0.27   0.57    0.60   0.71   0.39   0.54   0.43
  + LN freeze                    0.08   0.50   0.45   0.32   0.43    0.65   0.36   0.44   0.55   0.55
  + Grad norm                    0.44   0.38   0.48   0.40   0.63    0.37   0.56   0.43   0.52   0.43
  + TSG                          0.09   0.35   0.25   0.32   0.47    0.54   0.56   0.65   0.72   0.46
  + Grad norm + TSG              0.37   0.36   0.49   0.45   0.67    0.46   0.57   0.58   0.68   0.39
  + LN freeze + TSG              0.09   0.45   0.47   0.37   0.18    0.53   0.24   0.30   0.62   0.72
  + Grad norm + LN freeze        0.53   0.68   0.72   0.69   0.66    0.07   0.28   0.12   0.12   0.29
  + Grad norm + LN freeze + TSG  0.59   0.69   0.72   0.68   0.61    0.03   0.22   0.10   0.09   0.23
FEVER
  Gradient X Input               0.03   0.39   0.53   0.43   0.47    0.47   0.56   0.63   0.59   0.73
  + LN freeze                    0.18   0.49   0.52   0.37   0.58    0.38   0.48   0.69   0.49   0.61
  + Grad norm                    0.25   0.40   0.62   0.53   0.72    0.36   0.54   0.73   0.51   0.53
  + TSG                          0.03   0.42   0.41   0.31   0.47    0.45   0.54   0.70   0.70   0.71
  + Grad norm + TSG              0.30   0.34   0.61   0.54   0.70    0.32   0.55   0.72   0.71   0.49
  + LN freeze + TSG              0.10   0.49   0.38   0.33   0.43    0.38   0.38   0.67   0.65   0.72
  + Grad norm + LN freeze        0.39   0.60   0.76   0.62   0.66   -0.03   0.29   0.39   0.25   0.46
  + Grad norm + LN freeze + TSG  0.42   0.62   0.75   0.51   0.68   -0.01   0.26   0.39   0.31   0.41
HateXplain
  Gradient X Input               0.07   0.53   0.38   0.49   0.53    0.59   0.60   0.67   0.63   0.63
  + LN freeze                    0.19   0.42   0.31   0.19   0.18    0.58   0.66   0.77   0.68   0.78
  + Grad norm                    0.39   0.40   0.61   0.30   0.83    0.58   0.59   0.65   0.67   0.61
  + TSG                          0.03   0.39   0.47   0.41   0.42    0.67   0.65   0.62   0.75   0.68
  + Grad norm + TSG              0.46   0.61   0.64   0.54   0.81    0.51   0.45   0.58   0.73   0.45
  + LN freeze + TSG              0.22   0.57   0.28   0.17   0.15    0.56   0.47   0.70   0.76   0.80
  + Grad norm + LN freeze        0.63   0.67   0.64   0.85   0.50    0.31   0.37   0.52   0.28   0.91
  + Grad norm + LN freeze + TSG  0.64   0.68   0.64   0.80   0.63    0.26   0.26   0.46   0.35   0.51
Movie
  Gradient X Input               0.15   0.34   0.51   0.42   0.52    0.61   0.61   0.51   0.68   0.76
  + LN freeze                    0.28   0.57   0.68   0.29   0.26    0.56   0.28   0.40   0.75   0.75
  + Grad norm                    0.52   0.34   0.49   0.42   0.73    0.50   0.65   0.71   0.55   0.51
  + TSG                          0.14   0.57   0.27   0.56   0.50    0.65   0.45   0.74   0.68   0.80
  + Grad norm + TSG              0.59   0.63   0.49   0.61   0.88    0.49   0.45   0.67   0.75   0.43
  + LN freeze + TSG              0.27   0.62   0.62   0.37   0.35    0.56   0.25   0.35   1.10   0.76
  + Grad norm + LN freeze        0.55   0.68   0.78   0.82   0.73    0.16   0.26   0.31   0.13   0.45
  + Grad norm + LN freeze + TSG  0.60   0.71   0.78   0.83   0.76    0.11   0.19   0.30   0.17   0.31
SciFact
  Gradient X Input               0.09   0.27   0.42   0.34   0.57    0.55   0.57   0.59   0.56   0.63
  + LN freeze                    0.16   0.52   0.49   0.23   0.59    0.45   0.42   0.55   0.57   0.65
  + Grad norm                    0.41   0.36   0.48   0.43   0.76    0.43   0.53   0.69   0.55   0.52
  + TSG                          0.09   0.31   0.25   0.37   0.63    0.56   0.56   0.69   0.80   0.66
  + Grad norm + TSG              0.43   0.33   0.51   0.54   0.74    0.43   0.53   0.68   0.75   0.42
  + LN freeze + TSG              0.23   0.53   0.34   0.28   0.36    0.48   0.33   0.59   0.74   0.76
  + Grad norm + LN freeze        0.60   0.64   0.72   0.66   0.72    0.05   0.25   0.30   0.20   0.40
  + Grad norm + LN freeze + TSG  0.57   0.67   0.74   0.62   0.74    0.04   0.22   0.31   0.26   0.38
Twitter
  Gradient X Input               0.40   0.41   0.48   0.48   0.73    0.51   0.64   0.75   0.73   0.64
  + LN freeze                    0.45   0.50   0.46   0.39   0.75    0.48   0.56   0.75   0.72   0.69
  + Grad norm                    0.48   0.38   0.55   0.39   0.69    0.47   0.62   0.75   0.69   0.64
  + TSG                          0.35   0.50   0.47   0.59   0.71    0.59   0.61   0.76   0.78   0.66
  + Grad norm + TSG              0.47   0.48   0.55   0.63   0.80    0.48   0.51   0.68   0.75   0.61
  + LN freeze + TSG              0.37   0.47   0.52   0.42   0.51    0.56   0.63   0.75   0.87   0.73
  + Grad norm + LN freeze        0.52   0.64   0.73   0.78   0.65    0.32   0.43   0.60   0.47   0.66
  + Grad norm + LN freeze + TSG  0.52   0.68   0.72   0.76   0.72    0.34   0.41   0.61   0.55   0.58

9.2 Datasets

BoolQ comprises yes/no questions about Wikipedia articles and is used in the SuperGLUE and ERASER benchmarks [31, 32, 24].

FEVER and SciFact are fact-verification datasets [33, 34]. Fact verification is the task of classifying whether a claim is true, given a document containing the necessary information. The documents and claims in FEVER are generated from Wikipedia, while SciFact comprises scientific abstracts and claims about COVID-19.

Movie Reviews and Twitter Sentiment Extraction are sentiment classification datasets, where the goal is to classify the sentiment of movie reviews and Twitter posts [35, 36].

HateXplain comprises Twitter posts, where the task is to classify whether a post contains hate speech [37].

We downsample our datasets because our experiments are computationally expensive. We create smaller subsets by randomly sampling 300 examples where all the models predicted the correct answer. We present an overview of the datasets in Table 3.

Table 3: Dataset statistics

Name                 Number of examples   Avg. number of tokens   Task
HateXplain           300                  314                     Hate speech detection
Movie Review         199                  930                     Sentiment classification
Twitter Sentiment    300                  78                      Sentiment classification
BoolQ                199                  1184                    Question answering
FEVER                300                  336                     Fact verification
SciFact              209                  448                     Fact verification

9.3 Licenses

Here is a list of the assets used in our experiments and their licenses:

1. BoolQ: CC-BY 3.0
2. FEVER: CC-BY 3.0
3. SciFact: CC BY 4.0
4. HateXplain: MIT License
5. Movie review: CC-BY 3.0
6.
Twitter Sentiment Extraction: CC-BY 4.0
7. LLAMA models: LLAMA 3.2 Community License Agreement
8. Gemma models: Gemma License Agreement
9. Qwen models: Qwen License Agreement

9.4 Computational requirements

We ran all our experiments on an A100 80GB GPU. The feature attribution and self-repair experiments were fast, taking less than an hour each for the smaller models (<3B) and around 2–3 hours for the larger models. The experiments comparing circuit identification methods were slow, especially for integrated gradients, which required around half a day per dataset on the A100 GPU.

9.5 Classification performance

In Table 4, we evaluate the large language models' performance on the datasets. All models perform well across datasets except for LLAMA-3.2 1B, which always predicts positive sentiment on the Movie and Twitter datasets (high recall and low specificity). Since the main paper focuses on evaluating explanation faithfulness, strong task performance is not crucial for accurate results: we only want accurate explanations of the models' inner mechanisms, whether or not those mechanisms solve the tasks well.

9.6 Model prompts

We designed unique prompt templates for each model-dataset