are needed to enhance o3's instruction-following ability.

Table 3: Error analysis of chart understanding failures on ChartQAPro for o1, o3, and o4-mini.

Model      Perception Error    Instruction-Following Error
o1         48                  12
o3         47                  22
o4-mini    46                  8

Table 4: Performance of o3 using different levels of reasoning effort.

Reasoning Effort    Direct    CoT     PoT     Grounded CoT
Medium              60.6      60.0    59.5    61.6
High                60.4      61.0    60.8    61.8

F Detailed Evaluation Results

Comparing adaptation methods and object detection models. We evaluate all applicable adaptation methods for each model, except for standard fine-tuning, which is restricted to models that fit within the memory constraints of an NVIDIA Tesla V100 GPU. For the few-shot prompting and fine-tuning methods, we use k = 4, 10, and 30 randomly selected infographics. We average the results over 3 runs, excluding T-Rex2 and DINO-X due to their reliance on paid APIs. Tables 5 and 6 show the AP and AR along with their standard deviations for all models.

Table 5: AP of object detection models for the chart and HRO categories. The best result is in bold.
Chart Category (Foundation Models)
Model            Zero-shot   FS 4-shot    FS 10-shot   FS 30-shot   FT 4-shot     FT 10-shot    FT 30-shot    FT OrionBench
RegionCLIP       1.45        -            -            -            8.64±4.72     11.43±1.67    14.79±0.60    18.19±0.74
Detic            4.54        -            -            -            26.30±5.58    30.62±1.38    35.27±1.05    52.58±0.43
Grounding DINO   18.71       -            -            -            -             -             -             -
GLIP             18.42       -            -            -            -             -             -             -
MQ-GLIP          18.42       19.96±0.35   20.19±0.25   20.43±0.01   -             -             -             -
T-Rex2           -           13.72        -            -            -             -             -             -
DINO-X           21.75       -            -            -            -             -             -             -

Chart Category (Traditional Models)
Faster R-CNN     -           -            -            -            9.96±5.09     11.04±3.53    20.16±2.39    82.44±0.36
YOLOv3           -           -            -            -            11.06±1.96    15.57±0.54    19.47±5.49    49.89±0.43
RTMDet           -           -            -            -            26.22±3.87    41.01±7.21    52.31±4.64    77.46±0.44
Co-DETR          -           -            -            -            42.07±12.92   47.98±10.55   66.17±0.56    90.15±0.38

HRO Category (Foundation Models)
RegionCLIP       2.90        -            -            -            11.17±0.49    14.37±0.42    15.77±1.02    23.31±0.36
Detic            4.40        -            -            -            10.50±2.23    15.65±2.50    22.57±0.56    33.94±0.79
Grounding DINO   11.46       -            -            -            -             -             -             -
GLIP             11.89       -            -            -            -             -             -             -
MQ-GLIP          11.88       13.12±0.85   13.40±0.39   13.70±0.25   -             -             -             -
T-Rex2           -           13.14        -            -            -             -             -             -
DINO-X           13.78       -            -            -            -             -             -             -

HRO Category (Traditional Models)
Faster R-CNN     -           -            -            -            1.60±0.40     5.96±1.46     14.04±0.14    77.45±0.23
YOLOv3           -           -            -            -            5.61±0.49     9.58±2.98     15.05±0.77    39.27±2.33
RTMDet           -           -            -            -            21.43±1.08    29.81±1.12    32.37±1.73    62.26±0.25
Co-DETR          -           -            -            -            28.24±0.10    36.89±0.59    43.76±1.06    86.03±0.51

Table 6: AR of object detection models for the chart and HRO categories. The best result is in bold.

Chart Category (Foundation Models)
Model            Zero-shot   FS 4-shot    FS 10-shot   FS 30-shot   FT 4-shot     FT 10-shot    FT 30-shot    FT OrionBench
RegionCLIP       20.10       -            -            -            18.36±4.31    23.21±1.52    25.42±0.34    24.35±0.63
Detic            30.39       -            -            -            42.02±5.92    47.08±1.90    51.09±0.59    67.59±0.36
Grounding DINO   76.77       -            -            -            -             -             -             -
GLIP             57.44       -            -            -            -             -             -             -
MQ-GLIP          57.44       53.90±0.91   53.98±0.71   54.29±0.63   -             -             -             -
T-Rex2           -           21.36        -            -            -             -             -             -
DINO-X           38.17       -            -            -            -             -             -             -

Chart Category (Traditional Models)
Faster R-CNN     -           -            -            -            22.95±7.38    26.67±4.31    34.75±3.32    87.57±0.07
YOLOv3           -           -            -            -            26.01±1.54    30.68±1.54    36.56±3.68    61.81±0.47
RTMDet           -           -            -            -            56.76±2.27    63.70±5.13    70.22±1.04    83.80±0.42
Co-DETR          -           -            -            -            66.74±11.12   74.94±5.47    84.02±0.75    94.26±0.14

HRO Category (Foundation Models)
RegionCLIP       25.06       -            -            -            20.89±1.67    25.24±0.94    26.77±0.27    28.86±0.28
Detic            13.05       -            -            -            19.57±3.39    28.44±5.47    39.21±0.95    47.86±0.72
Grounding DINO   50.80       -            -            -            -             -             -             -
GLIP             35.57       -            -            -            -             -             -             -
MQ-GLIP          35.56       42.59±2.04   43.53±1.43   44.16±0.51   -             -             -             -
T-Rex2           -           23.74        -            -            -             -             -             -
DINO-X           29.85       -            -            -            -             -             -             -

HRO Category (Traditional Models)
Faster R-CNN     -           -            -            -            2.03±1.28     10.46±4.28    28.68±2.87    82.01±0.10
YOLOv3           -           -            -            -            14.09±1.38    21.70±1.69    29.26±0.49    48.87±2.43
RTMDet           -           -            -            -            50.39±0.31    53.51±1.90    54.83±0.65    72.75±0.19
Co-DETR          -           -            -            -            54.04±2.36    62.29±0.28    66.45±0.46    91.58±0.31

https://arxiv.org/abs/2505.17473v2

Ablating training set sizes and mixing proportions. To analyze the impact of training set size and the proportion of real and synthetic infographics on model performance, we conduct an ablation study. Specifically, we create subsets of the OrionBench training set by randomly sampling real and synthetic infographics in various proportions. We evaluate four subset sizes (n = 200, 1000, 5000, 25000) and six proportions of real infographics (q = 0, 0.2, 0.4, 0.6, 0.8, 1.0). Due to the high computational cost of training all models across different subset sizes and proportions, we focus on Faster R-CNN for its balance between training efficiency and strong performance. Fig. 5 shows the evaluation results. Each point represents the model's mean average precision (mAP) across charts and HROs on a subset, and the lines are fitted using the log-linear performance scaling relationship [21].
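A minimal sketch of such a fit, assuming the simple log-linear form mAP ≈ a + b·log n (the data points here are invented for illustration, and the exact scaling relationship of [21] is more elaborate):

```python
# Least-squares fit of a log-linear scaling curve: mAP ≈ a + b * log(n).
# The mAP values below are made-up placeholders, not numbers from the paper.
import math

def fit_log_linear(ns, scores):
    """Fit scores ≈ a + b * log(n) by ordinary least squares."""
    xs = [math.log(v) for v in ns]
    k = len(xs)
    mx = sum(xs) / k
    my = sum(scores) / k
    b = sum((x - mx) * (y - my) for x, y in zip(xs, scores)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

ns = [200, 1000, 5000, 25000]       # subset sizes used in the ablation
maps = [35.0, 50.0, 65.0, 80.0]     # illustrative mAP values only
a, b = fit_log_linear(ns, maps)
predicted = a + b * math.log(100000)  # extrapolate to a larger subset
```

The positive slope b corresponds to the consistent improvement with more samples that the ablation reports for mixed real/synthetic subsets.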
The results show that: 1) training exclusively on real or synthetic infographics results in rapid saturation at limited performance as the dataset size increases, and 2) combining real and synthetic infographics enhances performance, with consistent improvement as more samples are added. These findings highlight the importance of leveraging both real and synthetic infographics for robust detection across diverse infographics.

Figure 5: Ablation of training set sizes and mixing proportions.

G Ethical Considerations

To ensure the integrity of this work, we carefully consider several ethical aspects during the collection of real infographics from online platforms. First, we utilize GPT-4o mini to identify potentially harmful or offensive infographics, which are then manually verified and filtered out. Second, we focus on collecting infographics from publicly available online platforms instead of proprietary sources. We release the benchmark only for research purposes.

References

[1] A. Masry, M. S. Islam, M. Ahmed, A. Bajaj, F. Kabir, A. Kartha, M. T. R. Laskar, M. Rahman, S. Rahman, M. Shahmohammadi, M. Thakkar, M. R.
Parvez, E. Hoque, and S.
Joty, "ChartQAPro: A more diverse and challenging benchmark for chart question answering," 2025. [Online]. Available: https://arxiv.org/abs/2504.05506

[2] A. F. Biten, R. Tito, A. Mafla, L. Gomez, M. Rusinol, E. Valveny, C. Jawahar, and D. Karatzas, "Scene text visual question answering," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 4291–4301.

[3] Y. Zhong, J. Yang, P. Zhang, C. Li, N. Codella, L. H. Li, L. Zhou, X. Dai, L. Yuan, Y. Li et al., "RegionCLIP: Region-based language-image pretraining," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16793–16803.

[4] X. Zhou, R. Girdhar, A. Joulin, P. Krähenbühl, and I. Misra, "Detecting twenty-thousand classes using image-level supervision," in Proceedings of the European Conference on Computer Vision, 2022, pp. 350–368.

[5] S. Liu, Z. Zeng, T. Ren, F. Li, H. Zhang, J. Yang, Q. Jiang, C. Li, J. Yang, H. Su et al., "Grounding DINO: Marrying DINO with grounded pre-training for open-set object detection," in Proceedings of the European Conference on Computer Vision, 2024, pp. 38–55.

[6] L. H. Li, P. Zhang, H. Zhang, J. Yang, C. Li, Y. Zhong, L. Wang, L. Yuan, L. Zhang, J.-N. Hwang et al., "Grounded language-image pre-training," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 10965–10975.

[7] Y. Xu, M. Zhang, C. Fu, P. Chen, X. Yang, K. Li, and C. Xu, "Multi-modal queried object detection in the wild," in Proceedings of Advances in Neural Information Processing Systems, vol. 36, 2023, pp. 4452–4469.

[8] T. Ren, Y. Chen, Q. Jiang, Z. Zeng, Y. Xiong, W. Liu, Z. Ma, J. Shen, Y. Gao, X. Jiang, X. Chen, Z. Song, Y. Zhang, H. Huang, H. Gao, S. Liu, H. Zhang, F. Li, K. Yu, and L. Zhang, "DINO-X: A unified vision model for open-world object detection and understanding," 2024. [Online]. Available: https://arxiv.org/abs/2411.14347

[9] Q. Jiang, F. Li, Z. Zeng, T. Ren, S. Liu, and L.
Zhang, "T-Rex2: Towards generic object detection via text-visual prompt synergy," in Proceedings of the European Conference on Computer Vision, 2024, pp. 38–57.

[10] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Proceedings of Advances in Neural Information Processing Systems, vol. 28, 2015.

[11] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," 2018. [Online]. Available: https://arxiv.org/abs/1804.02767

[12] C. Lyu, W. Zhang, H. Huang, Y. Zhou, Y. Wang, Y. Liu, S. Zhang, and K. Chen, "RTMDet: An empirical study of designing real-time object detectors," 2022. [Online]. Available: https://arxiv.org/abs/2212.07784

[13] Z. Zong, G. Song, and Y. Liu, "DETRs with collaborative hybrid assignments training," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 6748–6758.

[14] B. Deka, Z. Huang, C. Franzen, J. Hibschman, D. Afergan, Y. Li, J. Nichols, and R. Kumar, "Rico: A mobile app dataset for building data-driven design applications," in Proceedings of the Annual ACM Symposium on User Interface Software and Technology, 2017, pp. 845–854.

[15]
R. Xia, S. Mao, X. Yan, H. Zhou, B. Zhang, H. Peng, J. Pi, D. Fu, W. Wu, H. Ye, S. Feng, B. Wang, C. Xu, C. He, P. Cai, M. Dou, B. Shi, S. Zhou, Y. Wang, B. Wang, J. Yan, F. Wu, and Y. Qiao, "DocGenome: An open large-scale scientific document benchmark for training and testing multi-modal large language models," 2024. [Online]. Available: https://arxiv.org/abs/2406.11633

[16] D. Manandhar, D. Ruta, and J. Collomosse, "Learning structural similarity of user interface layouts using graph networks," in Proceedings of the European Conference on Computer Vision, 2020, pp. 730–746.

[17] D. Manandhar, H. Jin, and J. Collomosse, "Magic layouts: Structural prior for component detection in user interface designs," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15809–15818.

[18] W. Wang, J. Dai, Z. Chen, Z. Huang, Z. Li, X. Zhu, X. Hu, T. Lu, L. Lu, H. Li et al., "InternImage: Exploring large-scale vision foundation models with deformable convolutions," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 14408–14419.

[19] H. Zhang, F. Li, S. Liu, L. Zhang, H. Su, J. Zhu, L. Ni, and H.-Y. Shum, "DINO: DETR with improved denoising anchor boxes for end-to-end object detection," in International Conference on Learning Representations, 2023.

[20] I. Loshchilov and F. Hutter, "Decoupled weight decay regularization," in International Conference on Learning Representations, 2019.

[21] F. Kang, H. A. Just, A. K. Sahu, and R. Jia, "Performance scaling via optimal transport: Enabling data selection from partially revealed sources," in Proceedings of Advances in Neural Information Processing Systems, vol. 36, 2023, pp. 61341–61363.
arXiv:2505.17481v1 [cs.CL] 23 May 2025

MARCO: Meta-Reflection with Cross-Referencing for Code Reasoning

Yusheng Zhao1, Xiao Luo2, Weizhi Zhang3, Zhiping Xiao4, Wei Ju1, Philip S. Yu3, Ming Zhang1
1Peking University, 2University of California, Los Angeles, 3University of Illinois Chicago, 4University of Washington
yusheng.zhao@stu.pku.edu.cn, xiaoluo@cs.ucla.edu, patxiao@uw.edu, {wzhan42,psyu}@uic.edu, {juwei,mzhang_cs}@pku.edu.cn

Abstract

The ability to reason is one of the most fundamental capabilities of large language models (LLMs), enabling a wide range of downstream tasks through sophisticated problem-solving. A critical aspect of this is code reasoning, which involves logical reasoning with formal languages (i.e., programming code). In this paper, we enhance this capability of LLMs by exploring the following question: how can an LLM agent become progressively smarter in code reasoning with each solution it proposes, thereby achieving substantial cumulative improvement? Most existing research takes a static perspective, focusing on isolated problem-solving using frozen LLMs. In contrast, we adopt a cognitive-evolving perspective and propose a novel framework named Meta-Reflection with Cross-Referencing (MARCO) that enables the LLM to evolve dynamically during inference through self-improvement. From the perspective of human cognitive development, we leverage both knowledge accumulation and lesson sharing. In particular, to accumulate knowledge during problem-solving, we propose meta-reflection, which reflects on the reasoning paths of the current problem to obtain knowledge and experience for future consideration. Moreover, to effectively utilize the lessons from other agents, we propose cross-referencing, which incorporates the solution and feedback from other agents into the current problem-solving process. We conduct experiments across various datasets in code reasoning, and the results demonstrate the effectiveness of MARCO.
Preprint. Under review.

1 Introduction

Go to bed a little smarter each day. —Warren Buffett

Large language models (LLMs) have achieved great success in understanding human instructions and generating text [1,62,45,23], enabling a plethora of downstream applications, including dialogue systems [17,11], automated summarization [59,19], and code completion [21,16]. While they excel at retrieving and summarizing textual information, their ability to perform rational reasoning remains an important yet unfulfilled capability when dealing with complex or logical tasks [60,49]. Code reasoning [78] is an important aspect of the reasoning ability of LLMs, involving logical reasoning with formal languages (i.e., programming code) instead of natural languages. It requires abilities in both logical inference (rationality in content generation) and background knowledge (retrieval of past experiences) [20,78], making it a challenging task for LLMs. Recently, some research has investigated LLMs' ability to solve complex or logical problems through reasoning. One line of research focuses on decomposing complex tasks into several manageable sub-tasks, using various data structures like chains [68,17,76], trees [74,9,54], and graphs
[69,38,5,51]. Another line of research focuses on refining the solution iteratively through feedback from various sources like reflection [42,4,41], human guidance [26,13], or environmental input [50], and adopts reinforcement learning for optimization [75,48,64].

Figure 1: We adopt a cognitive-evolving perspective and propose MARCO that enhances the code ability of an LLM through knowledge accumulation (b) and lesson sharing (c).

Despite their great success, existing research mainly adopts a static perspective, focusing on problems themselves through task decomposition or solution refinement. However, the ability of the LLM itself during inference is not fully explored. Since human coders learn very fast through reading, understanding, and writing code, we aim to explore the following question: How can an LLM agent become smarter at code reasoning with each response it generates, leading to substantial cumulative improvement? To answer this question, we adopt a cognitive-evolving perspective, focusing on improving the ability of the LLM itself during problem-solving. To achieve this, we leverage both inter-problem knowledge accumulation and intra-problem lesson sharing, akin to human cognitive development. As shown in Figure 1, inter-problem knowledge accumulation requires the model to learn from the current problem-solving procedure to obtain transferable knowledge for future problems.
To achieve this, we propose meta-reflection, which reflects on the reasoning process of the current problem and summarizes the experiences (e.g., mistakes, common patterns) into concise takeaways. In future problem-solving, the condensed experiences are used as external knowledge to guide the thinking process. Intra-problem lesson sharing enables the agent to learn from the lessons of other agents and use these experiences to guide the current problem-solving process. To achieve this, we propose cross-referencing, which takes the solutions from other agents as well as the feedback into account when reasoning and proposing new solutions. We integrate the two components into a unified framework named Meta-Reflection with Cross-Referencing (MARCO). Moreover, we perform extensive experiments across eight datasets and three sub-tasks in code reasoning (i.e., induction, deduction, and abduction), and the results validate the superiority of MARCO compared to baselines. The contributions of this paper can be summarized as follows. ① New Perspective: We propose a cognitive-evolving perspective that enhances the LLM's capabilities through both inter- and intra-problem aspects, rather than relying on a static approach. ② Novel Methodology: We propose MARCO, which consists of meta-reflection, reflecting on the current reasoning path for inter-problem knowledge accumulation, and cross-referencing, which incorporates the experiences of others into current problem-solving for intra-problem lesson sharing. ③ Extensive Experiments: We conduct extensive experiments across eight datasets and three sub-tasks in code reasoning, and the results demonstrate the effectiveness of the proposed MARCO.
2 Preliminary

2.1 Problem Setup

The execution of a program can be written as $\mathcal{I} \xrightarrow{\mathcal{F}} \mathcal{O}$, forming a triplet $\langle \mathcal{I}, \mathcal{F}, \mathcal{O} \rangle$, where $\mathcal{I}$ is the input, $\mathcal{F}$ is the function written in code, and $\mathcal{O}$ is the output of program execution. The goal of code reasoning is to infer one of the elements in the triplet (the target element) from the other two (source elements), forming three sub-tasks: induction, deduction, and abduction. Inductive code reasoning attempts to infer the function from the inputs and outputs, i.e., $\langle \mathcal{I}, \mathcal{O} \rangle \rightarrow \mathcal{F}$. Deductive code reasoning aims to deduce the output from the input and the function, i.e., $\langle \mathcal{I}, \mathcal{F} \rangle \rightarrow \mathcal{O}$. Abductive code reasoning aims to infer the input from the output and the function, i.e., $\langle \mathcal{F}, \mathcal{O} \rangle \rightarrow \mathcal{I}$.

Figure 2: Existing methods adopt a static perspective, and the LLM agents do not improve during the problem-solving process, making repeated mistakes (in this case, lack of consideration of differences in upper/lower cases in constructing the transformation).
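The triplet view can be made concrete with a small sketch (all names and the example rule below are illustrative, not taken from the benchmarks): each sub-task amounts to proposing the missing element of $\langle \mathcal{I}, \mathcal{F}, \mathcal{O} \rangle$ and verifying it by executing the function.

```python
# Illustrative sketch of the <I, F, O> triplet and the three sub-tasks.
# All helper names are hypothetical; the paper's benchmarks define their own formats.

def check_induction(candidate_fn, io_pairs):
    """Induction: infer F from <I, O>; verify the proposed function on all pairs."""
    return all(candidate_fn(i) == o for i, o in io_pairs)

def check_deduction(fn, inp, predicted_out):
    """Deduction: infer O from <I, F>; verify the predicted output by running F."""
    return fn(inp) == predicted_out

def check_abduction(fn, predicted_in, out):
    """Abduction: infer I from <F, O>; verify that F maps the predicted input to O."""
    return fn(predicted_in) == out

# Example rule: append ".0" unless the string already contains a dot.
rule = lambda s: s if "." in s else s + ".0"
pairs = [("123", "123.0"), ("987", "987.0"), ("12.0", "12.0")]
print(check_induction(rule, pairs))          # verify an induced rule
print(check_deduction(rule, "42", "42.0"))   # verify a deduced output
print(check_abduction(rule, "7", "7.0"))     # verify an abduced input
```

In each case the Python interpreter acts as the verifier, which is why feedback from program execution is available throughout problem-solving.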
For simplicity, we denote the source elements of the problem as $X$ and the target element as $Y$. In practice, we are given a set of problems $\{X^i\}_{i=1}^{N}$, where $N$ is the number of problems. The large language model agent $\mathcal{A}_j$ ($j \in \{1, 2, \cdots, M\}$) infers the solution $Y^i_{j,T}$ from $X^i$ through $T$ iterative reflections with feedback from the Python interpreter. For each problem, we denote the corresponding answers as $Y^i_{j,t}$ and the feedback as $B^i_{j,t}$, where $t \in \{1, 2, \cdots, T\}$. The reasoning path can be represented as $P^i_{j,t} = (X^i, Y^i_{j,1}, B^i_{j,1}, Y^i_{j,2}, B^i_{j,2}, \cdots, Y^i_{j,t}, B^i_{j,t})$. Under the static perspective, the model generates the next solution based on the current reasoning path of the current problem, i.e.,
$$Y^i_{j,t} \sim \mathcal{A}_j(Y \mid P^i_{j,t-1}, T_j), \quad (1)$$
where $\mathcal{A}_j$ is the $j$-th LLM agent and $T_j$ is the textual prompt template of agent $\mathcal{A}_j$. Under this paradigm, the solutions $Y^i_{j,T}$ are independent from each other in dimensions $i$ (inter-problem) and $j$ (intra-problem), and therefore the model's ability is not improved
when dealing with a sequence of problems $\{X^i\}_{i=1}^{N}$. In comparison, we propose to incorporate both inter-problem and intra-problem information into the problem-solving process, i.e.,
$$Y^i_{j,t} \sim \mathcal{A}_j\Big(Y \,\Big|\, P^i_{j,t-1}, \overbrace{\{\{P^{i'}_{j,T}\}_{i'=1}^{i-1}\}_{j=1}^{M}}^{\text{inter-problem}}, \overbrace{\{P^i_{j',t-1}\}_{j' \in \{1,\cdots,M\} \setminus \{j\}}}^{\text{intra-problem}}, T_j\Big). \quad (2)$$

2.2 Problems with Existing Methods: Repeated Mistakes

Existing methods [68,39,78] take a static perspective, focusing on solving single problems independently. However, the ability of LLMs is not enhanced when solving a sequence of problems, resulting in repeated mistakes in the problem-solving process. To illustrate this, we provide a concrete example of the problems with existing methods in Figure 2. In problem 20, the LLM agent fails to generate the correct transformation due to a lack of consideration of the letter cases. Concretely, strings with all lower-case letters are transformed to upper case, while strings containing both lower and upper cases are transformed to lower case. The generated transformation rule fails to consider the second case. A similar letter-case mistake reappears in problem 23, where the cases of the letters are treated differently (for strings starting with lower-case letters, the first characters are first raised to upper case and then repeated twice). However, as the problems are treated separately (as formulated in Eq. 1), the LLM agents cannot utilize previous experience for the current problem-solving process, making repeated mistakes. It is conceivable that if the agent is given guidance like "pay attention to different letter cases", it has a better chance of
solving the problem correctly. Therefore, in this paper, we break the isolation from both inter-problem and intra-problem perspectives, incorporating the accumulated knowledge and shared lessons into the current problem-solving process (Eq. 2 and its simplified form Eq. 7).

Figure 3: The overall framework of the proposed MARCO, which includes meta-reflection and cross-referencing. Meta-reflection summarizes previous problem-solving experiences into transferable knowledge accumulated for future usage. Cross-referencing enables the LLM agent to learn from the lessons of its peer agents so as to improve the current problem-solving process.

3 Methodology

3.1 Framework Overview

Incorporating both inter- and intra-problem information into the current problem-solving process is appealing, but a naive solution would require excessively long contexts, leading to extremely high computation costs. Therefore, the MARCO framework addresses this challenge by adopting a cognitive-evolving perspective, using meta-reflection
for inter-problem knowledge accumulation and cross-referencing for intra-problem lesson sharing, as shown in Figure 3. MARCO maintains a knowledge bank that stores summarized experiences of previous reasoning processes, with knowledge accumulating throughout the whole problem-solving process. When the LLM agents are given a specific problem, they first use the condensed knowledge from the knowledge bank to generate solutions. Subsequently, the code interpreter analyzes the solutions to provide feedback to the agents. For intra-problem lesson sharing, each agent is allowed to refer to the solutions and corresponding feedback of peer agents, thereby learning from the experiences of others.

3.2 Meta-Reflection for Inter-Problem Knowledge Accumulation

Previous efforts [32,8,77] that utilize the reflection ability of LLMs primarily apply it within the context of a single problem. From a cognitive-evolving perspective, the capabilities of LLM agents do not improve as they solve a sequence of problems, which can lead to repeated mistakes (e.g., applying numerical operations to verbal strings). To address this, we propose meta-reflection, which reflects on the reasoning paths to generate transferable knowledge for future reference. Specifically, MARCO maintains a knowledge bank $\mathcal{K}$ which is initially empty ($\mathcal{K}_0 = \emptyset$). When an agent finishes one problem (say, problem $X^i$ and agent $\mathcal{A}_j$), it is provided with binary feedback $B^i_j$ from the code interpreter about the preliminary checks of its solution, e.g., "All your answers are wrong for the given examples". Subsequently, the agent is asked to reflect on its reasoning process for problem $X^i$ to generate a key takeaway that may be helpful for future problem-solving:
$$S^i_j = \mathrm{SUMMARIZE}(\mathcal{A}_j, P^i_{j,T}, B^i_j), \quad (3)$$
where $S^i_j$ is the summarized experience of this problem-solving instance. Then, $S^i_j$ is added to the knowledge bank $\mathcal{K}_{i-1}$ to form $\mathcal{K}_i$, i.e.,
$$\mathcal{K}_i = \mathcal{K}_{i-1} \cup \{S^i_j\}_{j=1}^{M}. \quad (4)$$
A naive solution is to directly use this knowledge in the future.
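Eqs. (3)-(4) amount to appending one takeaway per agent to a shared store. A minimal sketch, with stubbed callables standing in for the LLM summarization calls (all names here are illustrative, not the paper's implementation):

```python
# Minimal sketch of meta-reflection's naive knowledge accumulation (Eqs. 3-4).
# The "agents" below are stubs standing in for LLM calls.

class KnowledgeBank:
    def __init__(self):
        self.entries = []          # K_0 = empty set

    def add(self, takeaways):
        self.entries.extend(takeaways)  # K_i = K_{i-1} ∪ {S^i_j}_{j=1..M}

# Stub agents: each maps (reasoning_path, feedback) to a one-line takeaway S^i_j.
agents = [
    lambda path, fb: f"Agent 1 lesson: {fb}",
    lambda path, fb: f"Agent 2 lesson: {fb}",
]

bank = KnowledgeBank()
path = ["rule v1", "feedback v1", "rule v2"]  # abbreviated stand-in for P^i_{j,T}
feedback = "Case 3 failed"                    # B^i_j from the code interpreter

# After problem i, every agent contributes one summarized experience.
bank.add(agent(path, feedback) for agent in agents)
print(len(bank.entries))  # 2
```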
However, the knowledge bank grows during problem-solving, so incorporating all accumulated information becomes computationally burdensome. Additionally, key takeaways generated from different problem-solving instances may overlap or be specific to a particular problem, rendering them non-transferable. Therefore, we employ a knowledge condenser that distills the knowledge bank into concise, transferable sentences:
$$\hat{\mathcal{K}}_i = \mathrm{CONDENSE}(\mathcal{A}_{\text{condense}}, \mathcal{K}_i), \quad (5)$$
where $\hat{\mathcal{K}}_i$ is the condensed knowledge and $\mathcal{A}_{\text{condense}}$ is the condenser agent. In practice, we condense the knowledge every $T_c$ problems to save computation, and we replace the entries of the previous $T_c$ problems with the condensed version to control the size of the knowledge bank. The condensed knowledge can then be used in future problem-solving, simplifying Eq. 2 as:
$$Y^{i+1}_{j,t} \sim \mathcal{A}_j(Y \mid P^{i+1}_{j,t-1}, \hat{\mathcal{K}}_i, \{P^{i+1}_{j',t-1}\}_{j' \in \{1,\cdots,M\} \setminus \{j\}}, T_j). \quad (6)$$

3.3 Cross-Referencing for Intra-Problem Lesson Sharing

Generating multiple solutions for each problem is a common strategy in recent research [43,63]. Nevertheless, the generation process is often isolated, and multiple agents can make similar mistakes in their reasoning paths. From the perspective of an LLM agent, when one of its peers makes a mistake, the agent should learn from that peer's experience and avoid making similar mistakes in future iterations. As multiple agents are working on the same problem, the lessons learned from peers can be valuable for adjusting solutions. Therefore, this paper proposes cross-referencing to enable intra-problem
lesson sharing, allowing the agents to learn from the faults of others. Specifically, when agent $\mathcal{A}_j$ is given a problem $X^i$, it enters "proposal-feedback" iterations, where it first proposes a solution and then receives feedback on it through the code interpreter. At iteration $t-1$, it receives feedback $B^i_{j,t-1}$ on the proposal $Y^i_{j,t-1}$. The proposal-feedback pair can be regarded as a "lesson" from which other agents may learn. To obtain a more compact version of this lesson for more efficient prompting, we extract the core part (e.g., Python code) from the solution $Y^i_{j,t-1}$ as $\hat{Y}^i_{j,t-1}$, and the corresponding lesson can be denoted as $\langle B^i_{j,t-1}, \hat{Y}^i_{j,t-1} \rangle$. When generating the $t$-th solution, the LLM agent incorporates the lessons shared by other agents, i.e., $\{\langle B^i_{j',t-1}, \hat{Y}^i_{j',t-1} \rangle\}_{j' \in \{1,\cdots,M\} \setminus \{j\}}$, and generates the solution with the simplified version of Eq. 6 as follows:
$$Y^i_{j,t} \sim \mathcal{A}_j(Y \mid P^i_{j,t-1}, \hat{\mathcal{K}}_i, \{\langle B^i_{j',t-1}, \hat{Y}^i_{j',t-1} \rangle\}_{j' \in \{1,\cdots,M\} \setminus \{j\}}, T_j). \quad (7)$$

3.4 Summary

The MARCO framework incorporates both meta-reflection and cross-referencing when dealing with a set of problems $\{X^i\}_{i=1}^{N}$. For each problem $X^i$, it first constructs an initial prompt using the problem and the condensed knowledge $\hat{\mathcal{K}}_{i-1}$. Subsequently, each agent is asked to generate solutions $Y^i_{j,t}$, and the code interpreter is used to provide feedback $B^i_{j,t}$. The agents then exchange their lessons $\langle B^i_{j,t}, \hat{Y}^i_{j,t} \rangle$ and adjust their solutions using information from their previous reasoning trajectories (i.e., chat history) and the lessons from peers. The iterative "proposal-feedback" process continues until a satisfactory solution is achieved (e.g., when the proposed function has passed all seen examples) or a certain computational budget is reached (e.g., $T$ iterations).

4 Experiments

4.1 Experimental Setup

Datasets. We conduct extensive experiments on eight datasets across three sub-tasks.
For the inductive reasoning sub-task, we use four datasets: ListFunction [ 57], MiniARC [ 34,53], RobustFill 5 ListFunction MiniARC RobustFill DeepCoderModelsAcc. Prob. Acc. Acc. Prob. Acc. Acc. Prob. Acc. Acc. Prob. Acc. GPT 4o-mini CoT 36.70 26.40 3.85 2.31 37.39 17.39 16.32 7.29 CoC 34.50 ↓2.20 26.40 ↑0.00 1.79↓2.06 0.77↓1.54 50.43 ↑13.04 21.74 ↑4.35 18.75 ↑2.43 8.33↑1.04 RHDA 42.55 ↑5.85 33.20 ↑6.80 5.38↑1.53 3.85↑1.54 53.91 ↑16.52 30.43 ↑13.04 25.69 ↑9.37 11.46 ↑4.17 MARCO (ours) 49.35 ↑12.65 39.60 ↑13.20 5.90↑2.05 4.62↑2.31 57.39 ↑20.00 34.78 ↑17.39 35.07 ↑18.75 19.79 ↑12.50 Qwen 2.5 72B CoT 39.50 30.00 5.64 3.85 36.52 21.74 18.06 7.29 CoC 14.40 ↓25.10 10.00 ↓20.00 1.28↓4.36 0.00↓3.85 20.87 ↓15.65 8.70↓13.04 18.40 ↑0.34 8.33↑1.04 RHDA 44.45 ↑4.95 35.60 ↑5.60 4.87↓0.77 3.85↑0.00 53.91 ↑17.39 30.43 ↑8.69 28.47 ↑10.41 15.63 ↑8.34 MARCO (ours) 54.90 ↑15.40 47.20 ↑17.20 6.15↑0.51 4.62↑0.77 63.48 ↑26.96 34.78 ↑13.04 30.90 ↑12.84 19.79 ↑12.50 LLaMA 3 70B CoT 35.20 25.20 4.62 3.08 35.65 17.39 14.93 9.38 CoC 34.85 ↓0.35 26.80 ↑1.60 0.00↓4.62 0.00↓3.08 31.30 ↓4.35 13.04 ↓4.35 19.44 ↑4.51 7.29↓2.09 RHDA 44.70 ↑9.50 38.40 ↑13.20 5.13↑0.51 3.85↑0.77 52.17 ↑16.52 26.09 ↑8.70 26.04 ↑11.11 16.67 ↑7.29 MARCO (ours) 52.60 ↑17.40 41.20 ↑16.00 8.46↑3.84 6.15↑3.07 60.00 ↑24.35 39.13 ↑21.74 32.29 ↑17.36 19.79 ↑10.41 Table 1: Accuracies and problem accuracies on the inductive reasoning datasets, i.e., ListFunction, MiniARC, RobustFill, and DeepCoder. We bold the best results and underline the second-best. [12], and DeepCoder [ 3].
https://arxiv.org/abs/2505.17481v1
Among these datasets, RobustFill and DeepCoder use specially designed domain-specific languages (DSLs, with details in Appendix ??). For the deductive reasoning sub-task, we use two datasets, CRUXEval-O and LiveCodeBench-O, which are built upon the CRUXEval [20] and LiveCodeBench [30] datasets by excluding the outputs. Similarly, for the abductive sub-task, we use CRUXEval-I and LiveCodeBench-I, built upon CRUXEval and LiveCodeBench respectively by excluding the inputs.

Experimental Details. We compare the proposed MARCO against three competitive reasoning methods: Chain-of-Code [39], Chain-of-Thought [68], and RHDA [78]. The comparison covers three representative LLM backbones, including GPT-4o-mini, LLaMA 3 70B [62], and Qwen 2.5 72B [73]. For the inductive sub-task, half of the input-output pairs are visible to the agent, while the other half is held out during problem solving. For this sub-task, we report both accuracy (the ratio of correctly predicted input-output pairs to the total number of pairs) and problem accuracy (a problem counts as solved if and only if all of its input-output pairs are correct).

4.2 Main Results

We compare the proposed MARCO against the baselines across three sub-tasks, i.e., inductive reasoning, deductive reasoning, and abductive reasoning. Results of inductive reasoning are shown in Table 1, while the results of deductive and abductive reasoning are shown in Table 2.

Inductive Reasoning. Inductive reasoning requires the model to infer the mapping from the inputs to the outputs given examples of input-output pairs. In the experiments, we report both accuracy (how well the proposed mapping fits the input-output pairs) and problem accuracy (whether the problem is solved by a mapping that fits all input-output pairs).
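The two metrics can be made concrete with a short sketch (our own illustration; the helper name `evaluate_induction` is not from the paper):

```python
def evaluate_induction(problems, predict):
    """Compute pair-level accuracy and problem-level accuracy.

    problems: one list of (input, expected_output) pairs per problem.
    predict: the hypothesized input-output mapping under evaluation.
    """
    correct_pairs = total_pairs = solved = 0
    for pairs in problems:
        ok = [predict(x) == y for x, y in pairs]
        correct_pairs += sum(ok)
        total_pairs += len(ok)
        solved += all(ok)  # a problem counts only if every pair is correct
    return correct_pairs / total_pairs, solved / len(problems)

# Toy hypothesis "double the input"; the second problem has one wrong pair.
probs = [[(1, 2), (2, 4)], [(3, 6), (4, 9)]]
acc, prob_acc = evaluate_induction(probs, lambda x: 2 * x)
# acc = 0.75, prob_acc = 0.5
```

Accuracy rewards partially correct hypotheses, while problem accuracy is the stricter all-or-nothing criterion.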
According to the results in Table 1, the proposed MARCO outperforms the baselines in terms of both accuracy and problem accuracy by a large margin across various LLM backbones, showing the overall effectiveness of MARCO on inductive reasoning. Additionally, we find that Chain-of-X methods (i.e., CoT and CoC) generally do not perform well on inductive reasoning. A possible explanation is that these methods focus on decomposing the problems, while the inductive code reasoning task requires a holistic understanding of the input-output mapping. By comparison, reflection-based methods (i.e., RHDA and MARCO) are able to iteratively refine the solution with feedback, which explains their better performance. The proposed MARCO accumulates knowledge through meta-reflection and learns from shared lessons through cross-referencing, leading to the best performance.

Deductive Reasoning. Deductive code reasoning requires the model to deduce the output from the input and the program (in the form of a Python function). We show the performance of MARCO in deductive code reasoning compared to the baselines in Table 2. As can be seen from the results, our method significantly improves accuracy on deductive code reasoning, e.g., a 28.3% relative improvement over CoT on LiveCodeBench with LLaMA 3 70B. Additionally, we find that for weaker models (e.g., LLaMA 3 70B), the proposed MARCO achieves even larger gains, unleashing the potential of these models.

Abductive Reasoning. Abductive code reasoning requires the model to infer
the inputs that lead to the corresponding outputs given the program (function). In many cases, this would be harder than deductive code reasoning, and we can see from the results in Table 2 that the abductive reasoning accuracies are generally lower than the deductive ones. Despite the difficulty, the proposed MARCO still achieves substantial improvement, e.g., 24.0% improvement relative to CoT in CRUXEval on GPT 4o-mini. Moreover, we find that the proposed MARCO achieves more improvement on abductive reasoning than on inductive reasoning. One possible explanation is that inferring the inputs from the outputs requires a deeper understanding of the functionality of the code, which calls for knowledge from past experiences and potential pitfalls. The proposed MARCO allows LLM agents to accumulate knowledge from past problem-solving processes through meta-reflection and to avoid potential pitfalls from shared lessons of peers through cross-referencing, leading to the best performance.

Table 2: Prediction accuracies on the deductive and abductive reasoning sub-tasks; parenthesized values give the change relative to CoT. We bold the best and underline the second-best.

Model (Deductive / Abductive) | CRUXEval | LiveCodeBench
GPT 4o-mini
  CoT          | 80.50 / 65.12 | 79.41 / 50.00
  CoC          | 81.00 (+0.50) / 64.88 (-0.24) | 80.39 (+0.98) / 50.00 (+0.00)
  RHDA         | 80.75 (+0.25) / 71.75 (+6.63) | 79.41 (+0.00) / 58.82 (+8.82)
  MARCO (ours) | 83.62 (+3.12) / 80.75 (+15.63) | 87.25 (+7.84) / 64.71 (+14.71)
Qwen 2.5 72B
  CoT          | 78.38 / 70.50 | 86.27 / 54.90
  CoC          | 70.13 (-8.25) / 70.88 (+0.38) | 77.45 (-8.82) / 54.90 (+0.00)
  RHDA         | 80.38 (+2.00) / 74.63 (+4.13) | 78.43 (-7.84) / 58.82 (+3.92)
  MARCO (ours) | 84.38 (+6.00) / 81.75 (+11.25) | 88.23 (+1.96) / 65.69 (+10.79)
LLaMA 3 70B
  CoT          | 69.88 / 60.63 | 65.69 / 46.08
  CoC          | 70.38 (+0.50) / 61.25 (+0.62) | 52.94 (-12.75) / 41.18 (-4.90)
  RHDA         | 78.25 (+8.37) / 71.75 (+11.12) | 66.67 (+0.98) / 55.88 (+9.80)
  MARCO (ours) | 83.38 (+13.50) / 79.25 (+18.62) | 84.31 (+18.62) / 65.69 (+19.61)
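As a toy illustration of the abductive setting (our own example, not part of MARCO or the benchmarks), the agent must find an input that makes a given program produce a target output; executing candidates plays the same role as MARCO's code-interpreter feedback:

```python
def program(x: int) -> str:
    # The agent sees only this code and a desired output.
    if x < 0:
        return "negative"
    if x % 2 == 0:
        return "even"
    return "odd"

def abduce_input(fn, target, candidates):
    """Return the first candidate whose execution yields the target output.

    A capable agent reasons about the if-conditions instead of enumerating,
    but the interpreter check is the same external feedback signal.
    """
    for x in candidates:
        if fn(x) == target:
            return x
    return None

assert program(abduce_input(program, "even", range(-5, 6))) == "even"
```

Because several inputs may map to the same output, any input that reproduces the target counts as correct, which is why interpreter feedback suffices to verify a proposal.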
4.3 Ablation Studies

In this part, we investigate how the proposed meta-reflection and cross-referencing mechanisms affect the overall performance on code reasoning. Specifically, we adopt ListFunction for the inductive reasoning sub-task and LiveCodeBench for the deductive and abductive reasoning sub-tasks. Three variants of MARCO are constructed: MARCO v1 removes meta-reflection from the original version. MARCO v2 preserves the meta-reflection but excludes the knowledge condenser in Eq. 5. MARCO v3 disables cross-referencing from the original version. The results of the variants in comparison with the original version are shown in Table 3. As can be seen from the results, meta-reflection, the condenser, and cross-referencing are all important for the overall performance of MARCO, as removing each of them leads to performance degradation. Moreover, we find that the condense operation in Eq. 5 is important for the success of meta-reflection. As reflecting on specific problem-solving may yield rules or experiences that only hold under certain conditions, the condense operation can identify the common knowledge and express it in a concise manner.

Table 3: Ablation studies on the ListFunction and LiveCodeBench datasets across three reasoning sub-tasks; parenthesized values give the change relative to the full MARCO.

Model    | ListFunction Acc. / Prob. Acc. | LiveCodeBench Deductive / Abductive
MARCO    | 49.35 / 39.60                  | 87.25 / 64.71
MARCO v1 | 46.60 (-2.75) / 36.80 (-2.80)  | 84.31 (-2.94) / 62.75 (-1.96)
MARCO v2 | 42.90 (-6.45) / 32.80 (-6.80)  | 86.27 (-0.98) / 62.75 (-1.96)
MARCO v3 | 41.30 (-8.05) / 32.80 (-6.80)  | 81.37 (-5.88) / 61.76 (-2.95)

Figure 4: Left and middle
: performance under different iterations and condensation periods in terms of accuracy and problem accuracy on the ListFunction dataset. Right: the comparison of absolute improvements of MARCO and the baseline in both the first half and the second half of the datasets.

Figure 5: We present examples of the summarized reasoning experiences using meta-reflection on various datasets across three code reasoning sub-tasks (i.e., inductive, deductive, and abductive), with one summarized lesson per dataset: RobustFill (string manipulation, inductive), DeepCoder (manipulation of multiple lists of integers, inductive), ListFunction (manipulation of a list of integers, inductive), MiniARC (manipulation of a matrix of integers, inductive), CRUXEval-O (deductive), CRUXEval-I (abductive), and LiveCodeBench-I (abductive). The results suggest that meta-reflection can provide useful knowledge for future problem-solving.

4.4 Hyperparameter Analysis

We then investigate the model's performance under different hyperparameters. Specifically, we focus on two hyperparameters: the number of iterations $T$ and the condensation period $T_c$. The results on the ListFunction dataset using Qwen 2.5 72B are shown in Figure 4 (left and middle). For the number of iterations, we find that performance (i.e., accuracy and problem accuracy) plateaus after two iterations. This suggests that simply increasing the number of iterations, i.e., performing more reflections and revisions on the same problem, is not sufficient to achieve better performance.
Balancing computation cost and performance, we set this hyperparameter to 2. As for the condensation period, we find that a moderate period of around 8 to 10 yields the best performance. With shorter periods, the condenser may not gather enough knowledge to summarize, producing guidance that is hard to generalize. With longer periods, the model falls short in updating the knowledge, and
weaker adaptability causes a mild decrease in performance.

4.5 Further Analysis

Effects of Meta-Reflection in the Problem-Solving Process. To better understand the effect of inter-problem knowledge accumulation in meta-reflection, we measure the absolute improvement (in accuracy) of MARCO compared to CoT in both the first half and the second half of the whole problem-solving process. The results using LLaMA 3 70B on various datasets are shown in Figure 4 (right). As can be seen from the results, the absolute improvement of MARCO over the baseline grows over the course of problem solving: the second half shows larger gains than the first half. These results suggest that, with the accumulated knowledge, the model performs better thanks to a deeper understanding of the problems and additional guidance from previous reasoning processes. In other words, meta-reflection enhances the LLM's code reasoning ability during the problem-solving process.

Meta-Reflection Results Analysis. We also provide examples of meta-reflection results in Figure 5, from which we make the following observations.

▷ Firstly, by reflecting on the reasoning process, the LLM can generate guidance for handling different cases (marked in blue). For example, in string manipulation (RobustFill, Example 1), the guidance can be "handle each case (single letter, word/phrase, and digit-containing strings) with precise string manipulations". Since separating letters and digits is common in string operations, this would be helpful for solving future problems.

▷ Secondly, meta-reflection allows the LLM to point out pitfalls that may recur in future problem-solving (marked in green). For example, when dealing with integers (DeepCoder, Example 2), it points out that "negative numbers should be re-evaluated to ensure the correct floor division".
This helps the model avoid mistakes by paying more attention to certain operations (e.g., floor division of negative numbers).

▷ Thirdly, meta-reflection helps find strategies that can be applied to future problem-solving (marked in orange). For example, in abductive code reasoning, a simple strategy is to utilize special conditions as a shortcut to the return value. The LLM finds this simple strategy with meta-reflection, i.e., "carefully analyze the function's if conditions and corresponding outputs to identify special inputs that directly yield the desired results" (CRUXEval-I).

5 Related Works

LLM Reasoning. LLMs have achieved promising performance in understanding and generating texts. Recently, their ability to handle complex logical tasks through reasoning has drawn increasing attention. Inspired by the human thinking process, some studies attempt to break down complicated problems into more manageable sub-problems, which are handled one by one [68, 70]. This line of research represents the reasoning process with various structures, including sequences [68, 72], trees [74, 7, 14], and graphs [69, 25, 6]. Another line of research focuses on searching for better solutions with LLMs according to feedback provided by reflection [56, 71], consistency regularization [63], a human user [37], or the physical environment [47, 2]. To optimize the solutions, these methods often adopt reinforcement learning algorithms such as Monte Carlo Tree Search [75, 18, 65], in combination with structures like trees [24] and graphs [5]. Compared to these works, which focus primarily on the problems
via task decomposition or solution searching, we take a cognitive-evolving perspective that uses meta-reflection for inter-problem knowledge accumulation and cross-referencing for intra-problem lesson sharing to enhance the code reasoning ability of the LLM itself.

LLM Reflection. The ability to perform reflection is an essential part of reasoning, involving evaluating the current solution and regenerating better answers [56, 36]. To perform reflection, feedback is required either from the LLMs themselves [56, 52] or from the outside world (e.g., a human user [37], or simulated and physical environments [2, 55]). While it may be attractive to reflect without external feedback, prior studies [27, 35, 78] suggest that, analogous to the human learning process, self-correction without external feedback is hard to achieve without training, and in some cases it may even lead to performance degradation [27]. In this work, we investigate the problem of code reasoning, which fortunately has compilers and Python interpreters as a source of external feedback [31, 29]. Moreover, unlike previous methods [58, 77, 78] that use feedback solely for reflection on the current problem, this paper proposes meta-reflection, which reflects on the whole reflection process to obtain transferable knowledge that can be used for future problem-solving and thus enhances the capabilities of the LLM from a cognitive-evolving perspective.

LLM for Code. Recent years have witnessed increasing performance of LLMs on code understanding and generation [67, 40, 46, 15, 10, 28, 33]. These works are often designed for code generation based on human instructions or requirements [15, 33, 79], code completion based on existing code snippets [22, 44, 29], or code debugging [80, 61]. While this line of research has shown promising results, it largely relies on the LLMs' ability to recall [66] without much reasoning.
By comparison, this work investigates the more challenging topic of code reasoning, which requires both logical reasoning and recall capability in code-related tasks, involving induction, deduction, and abduction [78].

6 Conclusion

This paper investigates the problem of code reasoning and proposes MARCO, which adopts a cognitive-evolving perspective, aiming to improve the code reasoning capabilities of LLMs during inference. Specifically, MARCO employs meta-reflection, which reflects on previous problem-solving processes to obtain transferable guidance for inter-problem knowledge accumulation. Moreover, it adopts cross-referencing, which utilizes the failed experiences from peers to enable intra-problem lesson sharing. Extensive experiments on several benchmark datasets across three code reasoning sub-tasks demonstrate the effectiveness of the proposed MARCO.

References

[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

[2] Tuo An, Yunjiao Zhou, Han Zou, and Jianfei Yang. IoT-LLM: Enhancing real-world IoT task reasoning with large language models. arXiv preprint arXiv:2410.02429, 2024.

[3] Matej Balog, Alexander L Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. DeepCoder: Learning to write programs. arXiv preprint arXiv:1611.01989, 2016.

[4] Erkan Başar, Xin Sun, Iris Hendrickx, Jan de Wit, Tibor Bosse, Gert-Jan De Bruijn, Jos A Bosch, and Emiel Krahmer. How well can large language models reflect? A human evaluation of LLM-generated reflections for motivational interviewing dialogues.
In Proceedings of the 31st International Conference on Computational Linguistics, pages 1964–1982, 2025.

[5] Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17682–17690, 2024.

[6] Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Guangyuan Piao, Nils Blach, Piotr Nyczyk, Marcin Copik, Grzegorz Kwaśniewski, Jürgen Müller, et al. Demystifying chains, trees, and graphs of thoughts. arXiv preprint arXiv:2401.14295, 2024.

[7] Zhenni Bi, Kai Han, Chuanjian Liu, Yehui Tang, and Yunhe Wang. Forest-of-thought: Scaling test-time compute for enhancing LLM reasoning. arXiv preprint arXiv:2412.09078, 2024.

[8] Xiaohe Bo, Zeyu Zhang, Quanyu Dai, Xueyang Feng, Lei Wang, Rui Li, Xu Chen, and Ji-Rong Wen. Reflective multi-agent collaboration based on large language models. Advances in Neural Information Processing Systems, 37:138595–138631, 2024.

[9] Shulin Cao, Jiajie Zhang, Jiaxin Shi, Xin Lv, Zijun Yao, Qi Tian, Juanzi Li, and Lei Hou. Probabilistic tree-of-thought reasoning for answering knowledge-intensive complex questions. arXiv preprint arXiv:2311.13982, 2023.

[10] Liguo Chen, Qi Guo, Hongrui Jia, Zhengran Zeng, Xin Wang, Yijiang Xu, Jian Wu, Yidong Wang, Qing Gao, Jindong Wang, et al. A survey on evaluating large language models in code generation tasks. arXiv preprint arXiv:2408.16498, 2024.

[11] Changwoo Chun, Daniel Rim, and Juhee Park. LLM ContextBridge: A hybrid approach for intent and dialogue understanding in IVSR. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 794–806, 2025.

[12] Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli.
RobustFill: Neural program learning under noisy I/O. In International Conference on Machine Learning, pages 990–998. PMLR, 2017.

[13] Kaustubh D Dhole, Ramraj Chandradevan, and Eugene Agichtein. An interactive query generation assistant using LLM-based prompt modification and user feedback. arXiv preprint arXiv:2311.11226, 2023.

[14] Yifu Ding, Wentao Jiang, Shunyu Liu, Yongcheng Jing, Jinyang Guo, Yingjie Wang, Jing Zhang, Zengmao Wang, Ziwei Liu, Bo Du, et al. Dynamic parallel tree search for efficient LLM reasoning. arXiv preprint arXiv:2502.16235, 2025.

[15] Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, and Yiling Lou. Evaluating large language models in class-level code generation. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, pages 1–13, 2024.

[16] Sarah Fakhoury, Aaditya Naik, Georgios Sakkas, Saikat Chakraborty, and Shuvendu K Lahiri. LLM-based test-driven interactive code generation: User study and empirical evaluation. IEEE Transactions on Software Engineering, 2024.

[17] Yujie Feng, Zexin Lu, Bo Liu, Liming Zhan, and Xiao-Ming Wu. Towards LLM-driven dialogue state tracking. arXiv preprint arXiv:2310.14970, 2023.

[18] Zitian Gao, Boye Niu, Xuzheng He, Haotian Xu, Hongzhang Liu, Aiwei Liu, Xuming Hu, and Lijie Wen. Interpretable contrastive Monte Carlo tree search reasoning. arXiv preprint arXiv:2410.01707, 2024.

[19] Akash Ghosh, Arkadeep Acharya, Raghav Jain, Sriparna Saha, Aman Chadha, and Setu Sinha. CLIPSyntel: CLIP and LLM synergy for multimodal question summarization in healthcare. In Proceedings of the AAAI Conference
on Artificial Intelligence, volume 38, pages 22031–22039, 2024.

[20] Alex Gu, Baptiste Rozière, Hugh Leather, Armando Solar-Lezama, Gabriel Synnaeve, and Sida I Wang. CRUXEval: A benchmark for code reasoning, understanding and execution. arXiv preprint arXiv:2401.03065, 2024.

[21] Qiuhan Gu. LLM-based code generation method for Golang compiler testing. In Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 2201–2203, 2023.

[22] Daya Guo, Canwen Xu, Nan Duan, Jian Yin, and Julian McAuley. LongCoder: A long-range pre-trained language model for code completion. In International Conference on Machine Learning, pages 12098–12107. PMLR, 2023.

[23] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[24] Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023.

[25] Chi-Yang Hsu, Kyle Cox, Jiawei Xu, Zhen Tan, Tianhua Zhai, Mengzhou Hu, Dexter Pratt, Tianlong Chen, Ziniu Hu, and Ying Ding. Thought graph: Generating thought process for biological reasoning. In Companion Proceedings of the ACM Web Conference 2024, pages 537–540, 2024.

[26] Zhiyuan Hu, Yue Feng, Anh Tuan Luu, Bryan Hooi, and Aldo Lipani. Unlocking the potential of user feedback: Leveraging large language model as user simulators to enhance dialogue system. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 3953–3957, 2023.

[27] Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. arXiv preprint arXiv:2310.01798, 2023.
[28] Tao Huang, Zhihong Sun, Zhi Jin, Ge Li, and Chen Lyu. Knowledge-aware code generation with large language models. In Proceedings of the 32nd IEEE/ACM International Conference on Program Comprehension, pages 52–63, 2024.

[29] Maliheh Izadi, Jonathan Katzy, Tim Van Dam, Marc Otten, Razvan Mihai Popescu, and Arie Van Deursen. Language models for code completion: A practical evaluation. In Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, pages 1–13, 2024.

[30] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. LiveCodeBench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.

[31] Prithwish Jana, Piyush Jha, Haoyang Ju, Gautham Kishore, Aryan Mahajan, and Vijay Ganesh. CoTran: An LLM-based code translator using reinforcement learning with feedback from compiler and symbolic execution. In ECAI 2024, pages 4011–4018. IOS Press, 2024.

[32] Ziwei Ji, Tiezheng Yu, Yan Xu, Nayeon Lee, Etsuko Ishii, and Pascale Fung. Towards mitigating hallucination in large language models via self-reflection. arXiv preprint arXiv:2310.06271, 2023.

[33] Xue Jiang, Yihong Dong, Lecheng Wang, Zheng Fang, Qiwei Shang, Ge Li, Zhi Jin, and Wenpin Jiao. Self-planning code generation with large language models. ACM Transactions on Software Engineering and Methodology, 33(7):1–30, 2024.

[34] Subin Kim,
Prin Phunyaphibarn, Donghyun Ahn, and Sundong Kim. Playgrounds for abstraction and reasoning. In NeurIPS 2022 Workshop on Neuro Causal and Symbolic AI (nCSI), 2022.

[35] Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, et al. Training language models to self-correct via reinforcement learning. arXiv preprint arXiv:2409.12917, 2024.

[36] Harsh Kumar, Ruiwei Xiao, Benjamin Lawson, Ilya Musabirov, Jiakai Shi, Xinyuan Wang, Huayin Luo, Joseph Jay Williams, Anna N Rafferty, John Stamper, et al. Supporting self-reflection at scale with large language models: Insights from randomized field experiments in classrooms. In Proceedings of the Eleventh ACM Conference on Learning@Scale, pages 86–97, 2024.

[37] Alireza Rashidi Laleh and Majid Nili Ahmadabadi. A survey on enhancing reinforcement learning in complex environments: Insights from human and LLM feedback. arXiv preprint arXiv:2411.13410, 2024.

[38] Bin Lei, Chunhua Liao, Caiwen Ding, et al. Boosting logical reasoning in large language models through a new framework: The graph of thought. arXiv preprint arXiv:2308.08614, 2023.

[39] Chengshu Li, Jacky Liang, Andy Zeng, Xinyun Chen, Karol Hausman, Dorsa Sadigh, Sergey Levine, Li Fei-Fei, Fei Xia, and Brian Ichter. Chain of code: Reasoning with a language model-augmented code emulator. arXiv preprint arXiv:2312.04474, 2023.

[40] Jia Li, Chongyang Tao, Jia Li, Ge Li, Zhi Jin, Huangzhao Zhang, Zheng Fang, and Fang Liu. Large language model-aware in-context learning for code generation. ACM Transactions on Software Engineering and Methodology, 2023.

[41] Kun Li, Tingzhang Zhao, Wei Zhou, and Songlin Hu. DORA: Dynamic optimization prompt for continuous reflection of LLM-based agent. In Proceedings of the 31st International Conference on Computational Linguistics, pages 7546–7557, 2025.

[42] Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Jiuxiang Gu, and Tianyi Zhou.
Selective reflection-tuning: Student-selected data recycling for LLM instruction-tuning. In Findings of the Association for Computational Linguistics ACL 2024, pages 16189–16211, 2024.

[43] Weichen Li and Weimin Pan. Enhancing chain-of-thought reasoning in large language models through text style diversity and prompt fusion. In Third International Conference on Electronic Information Engineering, Big Data, and Computer Technology (EIBDCT 2024), volume 13181, pages 226–232. SPIE, 2024.

[44] Zongjie Li, Chaozheng Wang, Zhibo Liu, Haoxuan Wang, Dong Chen, Shuai Wang, and Cuiyun Gao. CCTest: Testing and repairing code completion systems. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pages 1238–1250. IEEE, 2023.

[45] Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.

[46] Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. Advances in Neural Information Processing Systems, 36:21558–21572, 2023.

[47] Xiaoyu Luo, Daping Liu, Fan Dang, and Hanjiang Luo. Integration of LLMs and the physical world: Research and application. In Proceedings of the ACM Turing Award Celebration Conference-China 2024, pages 1–5, 2024.

[48] Silin Meng, Yiwei Wang, Cheng-Fu Yang, Nanyun Peng, and Kai-Wei Chang. LLM-A*:
Large language model enhanced incremental heuristic search on path planning. arXiv preprint arXiv:2407.02511, 2024.

[49] Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, and Yasuhiro Sogawa. Enhancing reasoning capabilities of LLMs via principled synthetic logic corpus. Advances in Neural Information Processing Systems, 37:73572–73604, 2024.

[50] Christopher E Mower, Yuhui Wan, Hongzhan Yu, Antoine Grosnit, Jonas Gonzalez-Billandon, Matthieu Zimmer, Jinlong Wang, Xinyu Zhang, Yao Zhao, Anbang Zhai, et al. ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning. arXiv preprint arXiv:2406.19741, 2024.

[51] Tushar Pandey, Ara Ghukasyan, Oktay Goktas, and Santosh Kumar Radha. Adaptive graph of thoughts: Test-time adaptive reasoning unifying chain, tree, and graph structures. arXiv preprint arXiv:2502.05078, 2025.

[52] Alexandre Piché, Aristides Milios, Dzmitry Bahdanau, and Chris Pal. LLMs can learn self-restraint through iterative self-reflection. arXiv preprint arXiv:2405.13022, 2024.

[53] Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, et al. Phenomenal yet puzzling: Testing inductive reasoning capabilities of language models with hypothesis refinement. arXiv preprint arXiv:2310.08559, 2023.

[54] Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, and Fabio Massimo Zanzotto. A tree-of-thoughts to broaden multi-step reasoning across languages. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1229–1241, 2024.

[55] Sudha Rao, Weijia Xu, Michael Xu, Jorge Leandro, Ken Lobb, Gabriel DesGarennes, Chris Brockett, and Bill Dolan. Collaborative quest completion with LLM-driven non-player characters in Minecraft. arXiv preprint arXiv:2407.03460, 2024.

[56] Matthew Renze and Erhan Guven. Self-reflection in LLM agents: Effects on problem-solving performance. arXiv preprint arXiv:2405.06682, 2024.
[57] Joshua Stewart Rule. The Child as Hacker: Building More Human-like Models of Learning. PhD thesis, Massachusetts Institute of Technology, 2020.

[58] Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in Neural Information Processing Systems, 36:8634–8652, 2023.

[59] Hwanjun Song, Taewon Yun, Yuho Lee, Jihwan Oh, Gihun Lee, Jason Cai, and Hang Su. Learning to summarize from LLM-generated feedback. arXiv preprint arXiv:2410.13116, 2024.

[60] Hongda Sun, Weikai Xu, Wei Liu, Jian Luan, Bin Wang, Shuo Shang, Ji-Rong Wen, and Rui Yan. DetermLR: Augmenting LLM-based logical reasoning from indeterminacy to determinacy. arXiv preprint arXiv:2310.18659, 2023.

[61] Runchu Tian, Yining Ye, Yujia Qin, Xin Cong, Yankai Lin, Yinxu Pan, Yesai Wu, Haotian Hui, Weichuan Liu, Zhiyuan Liu, et al. DebugBench: Evaluating debugging capability of large language models. arXiv preprint arXiv:2401.04621, 2024.

[62] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.

[63] Guangya Wan, Yuqi Wu, Jie Chen, and Sheng Li. Dynamic self-consistency: Leveraging reasoning paths for efficient LLM sampling. arXiv preprint arXiv:2408.17017, 2024.

[64] Boyuan Wang, Yun Qu, Yuhang Jiang, Jianzhun Shao, Chang Liu, Wenming Yang, and Xiangyang Ji. LLM-empowered state representation for reinforcement learning. arXiv preprint arXiv:2407.13237, 2024.

[65] Xiyao Wang, Linfeng Song, Ye Tian, Dian Yu, Baolin Peng, Haitao
https://arxiv.org/abs/2505.17481v1
Mi, Furong Huang, and Dong Yu. Towards self-improvement of llms via mcts: Leveraging stepwise knowledge with curriculum preference learning. arXiv preprint arXiv:2410.06508 , 2024. [66] Yifei Wang, Yuheng Chen, Wanting Wen, Yu Sheng, Linjing Li, and Daniel Dajun Zeng. Unveiling factual recall behaviors of large language models through knowledge neurons. arXiv preprint arXiv:2408.03247 , 2024. [67] Yue Wang, Hung Le, Akhilesh Deepak Gotmare, Nghi DQ Bui, Junnan Li, and Steven CH Hoi. Codet5+: Open code large language models for code understanding and generation. arXiv preprint arXiv:2305.07922 , 2023. [68] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837, 2022. [69] Yilin Wen, Zifeng Wang, and Jimeng Sun. Mindmap: Knowledge graph prompting sparks graph of thoughts in large language models. arXiv preprint arXiv:2308.09729 , 2023. [70] Yu Xia, Rui Wang, Xu Liu, Mingyan Li, Tong Yu, Xiang Chen, Julian McAuley, and Shuai Li. Beyond chain-of-thought: A survey of chain-of-x paradigms for llms. arXiv preprint arXiv:2404.15676 , 2024. [71] Xiaotong Xu, Jiayu Yin, Catherine Gu, Jenny Mar, Sydney Zhang, Jane L E, and Steven P Dow. Jamplate: exploring llm-enhanced templates for idea reflection. In Proceedings of the 29th International Conference on Intelligent User Interfaces , pages 907–921, 2024. [72] Yige Xu, Xu Guo, Zhiwei Zeng, and Chunyan Miao. Softcot: Soft chain-of-thought for efficient reasoning with llms. arXiv preprint arXiv:2502.12134 , 2025. [73] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 , 2024. [74] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 
Tree of thoughts: Deliberate problem solving with large language models. Ad- vances in neural information processing systems , 36:11809–11822, 2023. [75] Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. Rest-mcts*: Llm self-training via process reward guided tree search. Advances in Neural Information Processing Systems , 37:64735–64772, 2024. [76] Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, and Min Lin. Chain of preference optimization: Improving chain-of-thought reasoning in llms. Advances in Neural Information Processing Systems , 37:333–356, 2024. [77] Lili Zhao, Yang Wang, Qi Liu, Mengyun Wang, Wei Chen, Zhichao Sheng, and Shijin Wang. Evaluating large language models through role-guide and self-reflection: A comparative study. InThe Thirteenth International Conference on Learning Representations , 2025. [78] Yuze Zhao, Tianyun Ji, Wenjun Feng, Zhenya Huang, Qi Liu, Zhiding Liu, Yixiao Ma, Kai Zhang, and Enhong Chen. Unveiling the magic of code reasoning through hypothesis decompo- sition and amendment. arXiv preprint arXiv:2502.13170 , 2025. [79] Li Zhong and Zilong Wang. Can llm replace stack overflow? a study on robustness and reliability of large language model code generation. In Proceedings of the AAAI Conference on Artificial Intelligence , volume 38, pages 21841–21849, 2024. [80] Li Zhong, Zilong Wang, and Jingbo Shang. Debug like a human: A large language model debugger via verifying runtime execution step-by-step. arXiv preprint
https://arxiv.org/abs/2505.17481v1
arXiv:2505.17482v1 [cs.AI] 23 May 2025

From Reasoning to Generalization: Knowledge-Augmented LLMs for ARC Benchmark

Chao Lei, Nir Lipovetzky, Krista A. Ehinger, Yanchuan Chang
School of Computing and Information Systems, The University of Melbourne, Australia
clei1@student.unimelb.edu.au, {kris.ehinger, nir.lipovetzky, yanchuan.chang}@unimelb.edu.au

Abstract

Recent reasoning-oriented LLMs have demonstrated strong performance on challenging tasks such as mathematics and science examinations. However, core cognitive faculties of human intelligence, such as abstract reasoning and generalization, remain underexplored. To address this, we evaluate recent reasoning-oriented LLMs on the Abstraction and Reasoning Corpus (ARC) benchmark, which explicitly demands both faculties. We formulate ARC as a program synthesis task and propose nine candidate solvers. Experimental results show that repeated-sampling planning-aided code generation (RSPC) achieves the highest test accuracy and demonstrates consistent generalization across most LLMs. To further improve performance, we introduce an ARC solver, Knowledge Augmentation for Abstract Reasoning (KAAR), which encodes core knowledge priors within an ontology that classifies priors into three hierarchical levels based on their dependencies. KAAR progressively expands LLM reasoning capacity by gradually augmenting priors at each level, and invokes RSPC to generate candidate solutions after each augmentation stage. This stage-wise reasoning reduces interference from irrelevant priors and improves LLM performance. Empirical results show that KAAR maintains strong generalization and consistently outperforms non-augmented RSPC across all evaluated LLMs, achieving around 5% absolute gains and up to 64.52% relative improvement. Despite these achievements, ARC remains a challenging benchmark for reasoning-oriented LLMs, highlighting future avenues of progress in LLMs.
1 Introduction

Learning from extensive training data has achieved remarkable success in major AI fields such as computer vision, natural language processing, and autonomous driving [1–3]. However, achieving human-like intelligence goes beyond learning purely from large-scale data; it requires rapid reasoning and generalization from prior knowledge to novel tasks and situations [4]. Chollet [5] introduced the Abstraction and Reasoning Corpus (ARC) to assess the generalization and abstract reasoning capabilities of AI systems. In each ARC task, the solver is required to infer generalized rules or procedures from a small set of training instances, typically fewer than five input-output image pairs, and apply them to generate output images for the input images provided in test instances (Figure 1 (a)). Each image in ARC is a pixel grid represented as a 2D matrix, where each value denotes a pixel color (Figure 1 (b)). ARC evaluates broad generalization, encompassing reasoning over individual input-output pairs and inferring generalized solutions via high-level abstraction, akin to inductive reasoning [6].

ARC is grounded in core knowledge priors, which serve as foundational cognitive faculties of human intelligence, enabling equitable comparisons between AI systems and human cognitive abilities [7]. These priors include: (1) objectness – aggregating elements into coherent, persistent objects; (2) geometry and topology – recognizing and manipulating shapes, symmetries, spatial transformations, and structural patterns (e.g., containment, repetition, projection); (3) numbers and counting – counting, sorting, comparing quantities, performing basic arithmetic, and identifying numerical patterns; and (4) goal-directedness – inferring purposeful transformations between initial and final states without explicit temporal cues. Incorporating these priors allows ARC solvers to replicate human cognitive processes, produce behavior aligned with human expectations, address human-relevant problems, and demonstrate human-like intelligence through generalization and abstract reasoning [5]. These features highlight ARC as a crucial benchmark for assessing progress toward general intelligence.

Figure 1: An ARC problem example (25ff71a9) with image visualizations (a), including three input-output pairs in the training instances, and one input image in the test instance, along with their corresponding 2D matrix representations (b). The ground-truth test output is enclosed in a red box.

Preprint. Under review.

Chollet [5] suggested approaching ARC tasks as instances of program synthesis, which studies automatically generating a program that satisfies a high-level specification [8]. Following this proposal, recent studies [9, 10] have successfully solved partial ARC tasks by searching for program solutions encoded within object-centric domain-specific languages (DSLs). Reasoning-oriented LLMs integrate chain-of-thought (CoT) reasoning [11], often trained via reinforcement learning, further advancing program synthesis performance. Common approaches using LLMs for code generation include repeated sampling, where multiple candidate programs are generated [12], followed by best-program selection strategies [13–16], and code refinement, where initial LLM-generated code is iteratively improved using error feedback from execution results [17, 18] or LLM-generated explanations [17, 19, 18].
We note that ARC presents greater challenges than existing program synthesis benchmarks such as HumanEval [12], MBPP [20], and LiveCode [21], due to its stronger emphasis on generalization and abstract reasoning grounded in core knowledge priors, which remain underexplored. This gap motivates our evaluation of recent reasoning-oriented LLMs on the ARC benchmark, and our proposed knowledge augmentation approach to improve their performance.

We systematically assess how reasoning-oriented LLMs approach ARC tasks within the program synthesis framework. For each ARC problem, we begin by providing 2D matrices as input. We adopt three established program generation strategies: direct generation, repeated sampling, and refinement. Each strategy is evaluated under two solution representations: a text-based solution plan and Python code. When generating code solutions, we further examine two modalities: standalone and planning-aided, where a plan is generated to guide subsequent code development, following recent advances [18, 22, 23]. In total, nine ARC solvers are considered. We evaluate several reasoning-oriented LLMs, including proprietary models, GPT-o3-mini [24, 25] and Gemini-2.0-Flash-Thinking (Gemini-2.0) [26], and open-source models, DeepSeek-R1-Distill-Llama-70B (DeepSeek-R1-70B) [27] and QwQ-32B [28]. Accuracy on test instances is reported as the primary metric. When evaluated on the ARC public evaluation set (400 problems), repeated-sampling planning-aided code generation (RSPC) demonstrates consistent generalization and achieves the highest test accuracy across most LLMs: 30.75% with GPT-o3-mini, 16.75% with Gemini-2.0, 14.25% with QwQ-32B, and 7.75% with DeepSeek-R1-70B.

We treat the most competitive ARC solver, RSPC, as the solver backbone. Motivated by the success of manually defined priors in ARC solvers [9, 10], we propose Knowledge Augmentation for Abstract Reasoning (KAAR) for solving ARC tasks using reasoning-oriented LLMs. KAAR formalizes manually defined priors through a lightweight ontology that organizes priors into hierarchical levels based on their dependencies. It progressively augments LLMs with priors at each level via structured prompting. Specifically, core knowledge priors are introduced in stages: beginning with objectness, followed by geometry, topology, numbers, and counting, and concluding with goal-directedness. After each stage, KAAR applies the ARC solver backbone (RSPC) to generate the solution. This progressive augmentation enables LLMs to gradually expand their reasoning capabilities and facilitates stage-wise reasoning, aligning with human cognitive development [29]. Empirical results show that KAAR improves accuracy on test instances across all evaluated LLMs, achieving the largest absolute gain of 6.75% with QwQ-32B and the highest relative improvement of 64.52% with DeepSeek-R1-70B over non-augmented RSPC.

Figure 2: An illustration of the three ARC solution generation approaches, (1) direct generation, (2) repeated sampling, and (3) refinement, with the GPT-o3-mini input and response fragments (a–c) for solving task 25ff71a9 (Figure 1). Block (a) shows the problem description; block (b) a solution plan ("for each cell in row i of the output (where i > 0), set its value equal to the value from row (i − 1) in the same column of the input; for the top row of the output (row 0), fill every cell with 0, the background color"); and block (c) the corresponding Python code.
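The solution plan in block (b) of Figure 2 maps directly onto the code in block (c). A minimal sketch of such a program for task 25ff71a9 (shift every row down by one), checked against the training pair shown in the figure; `solve` is an illustrative name, not the prompt's exact output:

```python
def solve(grid):
    """Shift every row down by one; fill the top row with 0 (the background color)."""
    width = len(grid[0])
    return [[0] * width] + [row[:] for row in grid[:-1]]

# Training pair from Figure 2 (a): the candidate must reproduce the output.
assert solve([[1, 1, 1], [0, 0, 0], [0, 0, 0]]) == [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
# Applying it to the test input of task 25ff71a9.
print(solve([[2, 0, 0], [2, 0, 0], [0, 0, 0]]))  # [[0, 0, 0], [2, 0, 0], [2, 0, 0]]
```

The printed result matches the ground-truth test output highlighted in Figure 1.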
For each approach, when the solution s is code, s := c, a plan p is either generated from the problem description Q to guide code generation (planning-aided) or omitted (standalone). Otherwise, when s := p, the plan p serves as the final solution instead.

We outline our contributions as follows:

• We evaluate the abstract reasoning and generalization capabilities of reasoning-oriented LLMs on ARC using nine solvers that differ in generation strategies, modalities, and solution representations.
• We introduce KAAR, a knowledge augmentation approach for solving ARC problems using LLMs. KAAR progressively augments LLMs with core knowledge priors structured via an ontology and applies the best ARC solver after augmenting same-level priors, further improving performance.
• We conduct a comprehensive performance analysis of the proposed ARC solvers, highlighting failure cases and remaining challenges on the ARC benchmark.

2 Problem Formulation

We formulate each ARC task as a tuple P = ⟨Ir, It⟩, where Ir and It are sets of training and test instances. Each instance consists of an input-output image pair (ii, io), represented as 2D matrices. The goal is to leverage the LLM M to generate a solution s based on the training instances Ir and the test input images {ii | (ii, io) ∈ It}, such that s maps each test input ii to its output io, i.e., s(ii) = io for (ii, io) ∈ It. We note that the test input images are visible during the generation of solution s, whereas test output images become accessible only after s is produced, to validate the correctness of s. We encode the solution s in different forms: as a solution plan p, or as Python code c, optionally guided by p. We denote each ARC problem description, comprising Ir and {ii | (ii, io) ∈ It}, as Q.

3 ARC Solver Backbone

LLMs have shown promise in solving tasks that rely on ARC-relevant priors [30–33]. We initially assume that reasoning-oriented LLMs implicitly encode sufficient core knowledge priors to solve ARC tasks. We cast each ARC task as a program synthesis problem, which involves generating a solution s from a problem description Q without explicitly prompting for priors. We consider established LLM-based code generation approaches [17–19, 23] as candidate ARC solution generation strategies, illustrated at the top of Figure 2. These include: (1) direct generation, where the LLM produces the solution s in a single attempt and then validates it on the test instances It; (2) repeated sampling, where the LLM samples solutions until one passes the training instances Ir, and then evaluates it on It; and (3) refinement, where the LLM iteratively refines an initial solution s based on failures on Ir until it succeeds, followed by evaluation on It. In addition, we extend the solution representation beyond code to include text-based solution plans. Given the problem description Q as input (Figure 2, block (a)), all strategies prompt the LLM to generate a solution s, represented either as a natural language plan p (block (b)), s := p, or as Python code c (block (c)), s := c. For s := p, the solution is derived directly from Q. For s := c, we explore two modalities: the LLM either generates c directly from Q (standalone), or first generates a plan p for Q, which is then concatenated with Q to guide subsequent code development (planning-aided), a strategy widely adopted in recent work [18, 22, 23].
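The generate-and-validate loop shared by these strategies can be sketched as follows, under the stated formulation (instances are (input, output) matrix pairs); `sample_solution` is a hypothetical stand-in for one LLM call that returns a candidate program:

```python
def passes(solution, instances):
    """Check s(ii) == io for every (input, output) pair in a set of instances."""
    return all(solution(ii) == io for ii, io in instances)

def repeated_sampling(sample_solution, train, test, max_iters=12):
    """Repeated sampling sketch: draw candidate programs until one passes the
    training instances Ir, then validate that candidate on the test instances It.
    If no candidate passes Ir within the budget, the last one is evaluated on It,
    mirroring the evaluation protocol described later in the paper."""
    candidate = None
    for _ in range(max_iters):
        candidate = sample_solution()  # one LLM call producing a callable program
        if passes(candidate, train):
            break  # the first candidate solving Ir is kept
    return passes(candidate, test)
```

Refinement differs only in that each new `sample_solution` call would also receive the previously failed candidate and its failing training instances as feedback.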
Repeated sampling and refinement iteratively produce new solutions based on the correctness of s on the training instances Ir, and validate s on the test instances It once it passes Ir or the iteration limit is reached. When s := p, its correctness is evaluated by prompting the LLM to generate each output image io given its corresponding input ii and the solution plan p, where (ii, io) ∈ Ir or (ii, io) ∈ It. Alternatively, when s := c, its correctness is assessed by executing c on Ir or It. In repeated sampling, the LLM iteratively generates a new plan p and code c from the problem description Q without additional feedback. In contrast, refinement revises p and c by prompting the LLM with the previously incorrect p and c, concatenated with the failed training instances. In total, nine ARC solvers are employed to evaluate the performance of reasoning-oriented LLMs on the ARC benchmark.

4 Knowledge Augmentation

Xu et al. [34] improved LLM performance on the ARC benchmark by prompting object-based representations for each task derived from graph-based object abstractions. Building on this insight, we propose KAAR, a knowledge augmentation approach for solving ARC tasks using reasoning-oriented LLMs. KAAR leverages Generalized Planning for Abstract Reasoning (GPAR) [10], a state-of-the-art object-centric ARC solver, to generate the core knowledge priors. GPAR encodes priors as abstraction-defined nodes enriched with attributes and inter-node relations, which are extracted using standard image processing algorithms. To align with the four knowledge dimensions in ARC, KAAR maps GPAR-derived priors into their categories. In detail, KAAR adopts fundamental abstraction methods from GPAR to enable objectness. Objects are typically defined as components based on adjacency rules and color consistency (e.g., 4-connected or 8-connected components), while also including the entire image as a component. KAAR further introduces additional abstractions: (1) middle-vertical, which vertically splits the image into two equal parts and treats each as a distinct component; (2) middle-horizontal, which applies the same principle along the horizontal axis; (3) multi-lines, which segments the image using full-length rows or columns of uniform color and treats each resulting part as a distinct component; and (4) no abstraction, which considers only raw 2D matrices. Under no abstraction, KAAR degrades to the ARC solver backbone without incorporating any priors. KAAR inherits GPAR's geometric and topological priors, including component attributes (size, color, shape) and relations (spatial, congruent, inclusive). It further extends the attribute set with symmetry, bounding box, nearest boundary, and hole count, and augments the relation set with touching. For numeric and counting priors, KAAR follows GPAR, incorporating the largest/smallest component sizes and the most/least frequent component colors, while extending them with statistical analysis of hole counts and symmetry, as well as the most/least frequent sizes and shapes.

Figure 3: An example of goal-directedness priors augmentation in KAAR, with input and response fragments from GPT-o3-mini: (a) action(s) selection, (b) component(s) selection, and (c) the color change rule, e.g., mapping the minimum-size component (from color 0) to 7 and the maximum-size component (from color 0) to 8.

GPAR approaches goal-directedness priors by searching for a sequence of program instructions [35] defined in a DSL. Each instruction supports conditionals, branching, looping, and action statements. KAAR incorporates the condition and action concepts from GPAR, and enables goal-directedness priors by augmenting LLM knowledge in two steps: 1) it prompts the LLM to identify the most relevant actions for solving the given ARC problem from ten predefined action categories (Figure 3, block (a)), partially derived from GPAR and extended based on the training set, such as color change, movement, and extension; 2) for each selected action, KAAR prompts the LLM with the associated schema to resolve implementation details. For example, for a color change action, KAAR first prompts the LLM to identify the target components (Figure 3, block (b)), and then specify the source and target colors for modification based on the target components (Figure 3, block (c)). We note that KAAR also prompts the LLM to incorporate condition-aware reasoning when determining action implementation details, using knowledge derived from geometry, topology, numbers, and
counting priors. This enables fine-grained control, for example, applying color changes only to black components conditioned on the maximum or minimum size: from black (value 0) to blue (value 8) if largest, or to orange (value 7) if smallest. Figure 3 shows fragments of the goal-directedness priors augmentation. See Appendix A.2 for the full set of priors in KAAR.

Figure 4: Augmentation process in KAAR (block (b)) and the corresponding knowledge augmentation fragments (blocks (c–e)) for ARC problem 62ab2642 (block (a)): (c) objectness, (d) geometry and topology, and (e) numbers and counting.

KAAR encodes the full set of core knowledge priors assumed in ARC into an ontology, where priors are organized into three hierarchical levels based on their dependencies. KAAR prompts LLMs with priors at each level to enable incremental augmentation. This reduces context interference and supports stage-wise reasoning aligned with human cognitive development [29].
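The objectness abstraction that opens each augmentation round can be sketched as a standard flood fill over the 2D matrix. This is an illustrative reimplementation, not GPAR's actual image processing pipeline, and `components_4connected` is a hypothetical helper name:

```python
def components_4connected(grid):
    """Objectness abstraction sketch: group pixels into maximal 4-connected
    regions of uniform color, returning each component's color and cell set."""
    h, w = len(grid), len(grid[0])
    seen, comps = set(), []
    for sr in range(h):
        for sc in range(w):
            if (sr, sc) in seen:
                continue
            color, cells, stack = grid[sr][sc], [], [(sr, sc)]
            seen.add((sr, sc))
            while stack:  # iterative flood fill over the four neighbors
                r, c = stack.pop()
                cells.append((r, c))
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen \
                            and grid[nr][nc] == color:
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            comps.append({"color": color, "cells": sorted(cells)})
    return comps
```

An 8-connected variant would simply add the four diagonal offsets to the neighbor list; the attribute priors (size, shape, bounding box, and so on) are then computed per component from its cell set.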
Figure 4, block (b), illustrates the augmentation process in KAAR alongside the augmented prior fragments used to solve the problem shown in block (a). KAAR begins augmentation with objectness priors, encoding images into components with detailed coordinates based on a specific abstraction method (block (c)). KAAR then prompts geometry and topology priors (block (d)), followed by numbers and counting priors (block (e)). These priors are ordered by dependency while residing at the same ontological level, as they all build upon objectness. Finally, KAAR augments goal-directedness priors, as shown in Figure 3, where target components are derived from objectness analysis and conditions are inferred from geometric, topological, and numerical analyses. After augmenting each level of priors, KAAR invokes the ARC solver backbone to generate solutions. If any solution passes the training instances Ir, it is validated on the test instances It; otherwise, augmentation proceeds to the next level of priors.

While the ontology provides a hierarchical representation of priors, it may also introduce hallucinations, such as duplicate abstractions, irrelevant component attributes or relations, and inapplicable actions. To address this, KAAR integrates restrictions from GPAR to filter out inapplicable priors. KAAR adopts GPAR's duplicate-checking strategy, retaining only abstractions that yield distinct components by size, color, or shape in at least one training instance. In KAAR, each abstraction is associated with a set of applicable priors. For instance, when the entire image is treated as a component, relation priors are excluded and actions such as movement and color change are omitted, whereas symmetry and size attributes are retained and actions such as flipping and rotation are considered. In contrast, 4-connected and 8-connected abstractions include all component attributes and relations, and the full set of ten action priors. See Appendix A.3 for detailed restrictions.

5 Experiments

In ARC, each task is unique and solvable using only core knowledge priors [5]. We begin by comparing nine candidate solvers on the full ARC public evaluation set of 400 tasks. This offers broader insights than previous studies limited to subsets of the 400 training tasks [10, 9, 36], given the greater difficulty of the evaluation set [37]. We experiment with recent reasoning-oriented LLMs, including proprietary models, GPT-o3-mini and Gemini 2.0 Flash-Thinking (Gemini-2.0), and open-source models, DeepSeek-R1-Distill-Llama-70B (DeepSeek-R1-70B) and QwQ-32B. We compute accuracy on test instances It as the primary evaluation metric.
It measures the proportion of problems where the first solution successfully solves It after passing the training instances Ir; otherwise, if none pass Ir within 12 iterations, the last solution is evaluated on It. This applies to both repeated sampling and refinement. We also report accuracy on Ir and Ir&It, measuring the percentage of problems whose solutions solve Ir, and both Ir and It, respectively. See Appendix A.4 for parameter settings.

Table 1: Performance of nine ARC solvers measured by accuracy on Ir, It, and Ir&It using four reasoning-oriented LLMs. For each LLM, the highest accuracy on Ir and Ir&It is in bold; the highest accuracy on It is in red. Accuracy is reported as a percentage. P denotes the solution plan; C and PC refer to standalone and planning-aided code generation, respectively.

                           Direct Generation       Repeated Sampling       Refinement
                           P      C      PC        P      C      PC        P      C      PC
GPT-o3-mini      Ir        -      -      -         35.50  52.50  35.50     31.00  47.25  32.00
                 It        20.50  24.50  22.25     23.75  32.50  30.75     24.75  29.25  25.75
                 Ir&It     -      -      -         22.00  31.75  29.25     21.75  28.50  25.00
Gemini-2.0       Ir        -      -      -         36.50  39.50  21.50     15.50  25.50  15.50
                 It        7.00   6.75   6.25      10.00  14.75  16.75     8.75   12.00  11.75
                 Ir&It     -      -      -         9.50   14.25  16.50     8.00   10.50  10.75
QwQ-32B          Ir        -      -      -         19.25  13.50  15.25     16.75  15.00  14.25
                 It        9.50   7.25   5.75      11.25  13.50  14.25     11.00  14.25  14.00
                 Ir&It     -      -      -         9.25   12.75  13.00     8.75   13.00  11.75
DeepSeek-R1-70B  Ir        -      -      -         8.75   6.75   7.75      6.25   5.75   7.75
                 It        4.25   4.75   4.50      4.25   7.25   7.75      4.75   5.75   7.75
                 Ir&It     -      -      -         3.50   6.50   7.25      4.25   5.25   7.00

Table 2: Comparison of RSPC (repeated-sampling planning-aided code generation) and its knowledge-augmented variant, KAAR, in terms of accuracy (Acc) on Ir, It, and Ir&It. ∆ and γ denote the absolute and relative improvements over RSPC, respectively. All values are reported as percentages. The best results for Ir and Ir&It are in bold; the highest for It is in red.

                           Ir                      It                      Ir&It
                           Acc    ∆      γ         Acc    ∆      γ         Acc    ∆      γ
GPT-o3-mini      RSPC      35.50  -      -         30.75  -      -         29.25  -      -
                 KAAR      40.00  4.50   12.68     35.00  4.25   13.82     33.00  3.75   12.82
Gemini-2.0       RSPC      21.50  -      -         16.75  -      -         16.50  -      -
                 KAAR      25.75  4.25   19.77     21.75  5.00   29.85     20.50  4.00   24.24
QwQ-32B          RSPC      15.25  -      -         14.25  -      -         13.00  -      -
                 KAAR      22.25  7.00   45.90     21.00  6.75   47.37     19.25  6.25   48.08
DeepSeek-R1-70B  RSPC      7.75   -      -         7.75   -      -         7.25   -      -
                 KAAR      12.25  4.50   58.06     12.75  5.00   64.52     11.50  4.25   58.62

Table 1 reports the performance of nine ARC solvers across four reasoning-oriented LLMs. For direct generation methods, accuracy on Ir and Ir&It is omitted, as solutions are evaluated directly on It. GPT-o3-mini consistently outperforms all other LLMs, achieving the highest accuracy on Ir (52.50%), It (32.50%), and Ir&It (31.75%) under repeated sampling with standalone code generation (C), highlighting its strong abstract reasoning and generalization capabilities. Notably, QwQ-32B, the smallest model, outperforms DeepSeek-R1-70B across all solvers and surpasses Gemini-2.0 under refinement. Among the nine ARC solvers, repeated sampling-based methods generally outperform those based on direct generation or refinement. This diverges from previous findings, where refinement dominated conventional code generation tasks that lack abstract reasoning and generalization demands [10, 17, 19]. Within repeated sampling, planning-aided code generation (PC) yields the highest accuracy on It across most LLMs. It also demonstrates the strongest generalization with GPT-o3-mini and Gemini-2.0, as evidenced by the smallest accuracy gap between Ir and Ir&It, compared to solution plan (P) and standalone code generation (C). A similar trend is observed for QwQ-32B and DeepSeek-R1-70B, where both C and PC generalize effectively across repeated sampling and refinement. Overall, repeated sampling with planning-aided code generation, denoted as RSPC, shows the best performance and thus serves as the ARC solver backbone. We further compare the performance of RSPC with its knowledge-augmented variant, KAAR.
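Under the stated protocol, the three reported metrics reduce to simple proportions over per-task outcomes; a minimal sketch, assuming each result is a pair of booleans recording whether the evaluated solution passed Ir and It:

```python
def accuracy_metrics(results):
    """Compute accuracy on Ir, It, and Ir&It (as percentages) from per-task
    outcomes, where each result is a (passes_Ir, passes_It) pair."""
    n = len(results)
    acc_ir = 100 * sum(ir for ir, _ in results) / n            # accuracy on Ir
    acc_it = 100 * sum(it for _, it in results) / n            # accuracy on It
    acc_both = 100 * sum(ir and it for ir, it in results) / n  # accuracy on Ir&It
    return acc_ir, acc_it, acc_both
```

For example, four tasks with outcomes (True, True), (True, False), (False, True), (False, False) yield 50% on Ir, 50% on It, and 25% on Ir&It.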
For each task, KAAR begins with simpler abstractions, i.e., no abstraction and whole image, and progresses to the more complicated 4-connected and 8-connected abstractions, consistent with GPAR. KAAR reports the accuracy on test instances It based on the first abstraction whose solution solves all training instances Ir; otherwise, it records the final solution from each abstraction and selects the one that passes the most of Ir to evaluate on It. KAAR allows the solver backbone (RSPC) up to 4 iterations per invocation, totaling 12 iterations, consistent with the non-augmented setting. See Appendix A.5 for KAAR execution details. As shown in Table 2, KAAR consistently outperforms non-augmented RSPC across all LLMs, yielding around 5% absolute gains on Ir, It, and Ir&It. This highlights the effectiveness and model-agnostic nature of the augmented priors. KAAR achieves the highest accuracy using GPT-o3-mini, with 40% on Ir, 35% on It, and 33% on Ir&It. KAAR shows the greatest absolute improvements (∆) using QwQ-32B and the largest relative gains (γ) using DeepSeek-R1-70B across all evaluated metrics. Moreover, KAAR maintains generalization comparable to RSPC across all LLMs, indicating that the augmented priors are sufficiently abstract and expressive to serve as basis functions for reasoning, in line with ARC assumptions.

Figure 5: Asymmetric relative coverage matrices for RSPC (a) and KAAR (b), showing the proportion of problems whose test instances are solved by the row model that are also solved by the column model, across four LLMs.

We compare relative problem coverage across the evaluated LLMs under RSPC and KAAR based on successful solutions on test instances. As shown in Figure 5, each cell (i, j) represents the proportion of problems solved by the row LLM that are also solved by the column LLM. This is computed as |Ai ∩ Aj| / |Ai|, where Ai and Aj are the sets of problems solved by the row and column LLMs, respectively. Values near 1 indicate that the column LLM covers most problems solved by the row LLM. Under RSPC (Figure 5 (a)), GPT-o3-mini exhibits broad coverage, with column values consistently above 0.85. Gemini-2.0 and QwQ-32B also show substantial alignment, with mutual coverage exceeding 0.6. In contrast, DeepSeek-R1-70B shows lower alignment, with column values below 0.45 due to fewer solved problems. Figure 5 (b) illustrates that KAAR generally improves or maintains inter-model overlap compared to RSPC. Notably, KAAR raises the minimum coverage between GPT-o3-mini and DeepSeek-R1-70B from 0.22 under RSPC to 0.34 under KAAR. These results highlight the effectiveness of KAAR in improving cross-model generalization, with all evaluated LLMs solving additional shared problems. In particular, it enables smaller models such as QwQ-32B and DeepSeek-R1-70B to better align with stronger LLMs on the ARC benchmark.

Figure 6: Accuracy on test instances It for RSPC and KAAR across the movement, extension, recolor, and others categories using four LLMs.
Each stacked bar shows RSPC accuracy (darker segment) and the additional improvement from KAAR (lighter segment). Following prior work [9, 10], we categorize the 400 problems in the ARC public evaluation set into four classes based on their primary transformations: (1) movement (55 problems), (2) extension (129 problems), (3) recolor (115 problems), and (4) others (101 problems). The others category comprises infrequent tasks such as noise removal, selection, counting, resizing, and problems with implicit patterns that hinder systematic classification into the aforementioned categories. See Appendix A.7 for examples of each category. Figure 6 illustrates the accuracy on test instances I_t for RSPC and KAAR across the four categories with the evaluated LLMs. Each stacked bar represents RSPC accuracy and the additional improvement achieved by KAAR. KAAR consistently outperforms RSPC, with the largest accuracy gain in movement (14.5% with QwQ-32B). In contrast, KAAR shows limited improvement in extension, since several problems involve pixel-level extension, which reduces the reliance on component-level recognition. Moreover, extension requires accurate spatial inference across multiple components and poses greater difficulty than movement, which mainly requires direction identification. Although KAAR augments spatial priors, LLMs still struggle to accurately infer positional relations among multiple components, consistent with prior findings [38–40]. Overlaps from component extensions further complicate reasoning, as LLMs often fail to recognize truncated components as unified wholes, contrary to human perceptual intuition.
Figure 7: Accuracy on test instances I_t for RSPC and KAAR across average image size intervals, evaluated using GPT-o3-mini and QwQ-32B. See Figure 12 in the Appendix for the results with the other LLMs.

A notable feature of ARC is the variation in image size both within and across problems. We categorize tasks by the average image size per problem, computed over both training and test image pairs. We report the accuracy on I_t for RSPC and KAAR across average image size intervals using GPT-o3-mini and QwQ-32B, the strongest proprietary and open-source models in Tables 1 and 2. As shown in Figure 7, both LLMs experience performance degradation as image size increases. When the average image size exceeds 400 (20×20), GPT-o3-mini solves only three problems, while QwQ-32B solves none. In ARC, isolating relevant pixels in larger images, represented as 2D matrices, requires effective attention mechanisms in LLMs, which remains an open challenge noted in recent work [41, 34]. KAAR consistently outperforms RSPC on problems with average image sizes below 400, benefiting from object-centric representations. By abstracting each image into components, KAAR reduces interference from irrelevant pixels, directs attention to salient components, and facilitates component-level transformation analysis. However, larger images often produce both oversized and numerous components after abstraction, which continue to challenge LLMs during reasoning. Oversized components hinder transformation execution, and numerous components complicate the identification of target components.
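The size-based grouping described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' evaluation script; `problems` is a hypothetical stand-in for the per-problem image pairs, with grids as plain 2D lists.

```python
# Bucket ARC problems by average image size (width x height), as in Figure 7.
# `problems` maps a problem id to its list of (input, output) grids -- a
# hypothetical stand-in for the real dataset loader.

def average_image_size(pairs):
    """Mean width*height over all input and output grids of one problem."""
    sizes = []
    for grid_in, grid_out in pairs:
        for g in (grid_in, grid_out):
            sizes.append(len(g) * len(g[0]))
    return sum(sizes) / len(sizes)

# Interval edges follow the paper: (0,25], (25,100], ..., (625,900].
EDGES = [0, 25, 100, 225, 400, 625, 900]

def bucket(avg):
    for lo, hi in zip(EDGES, EDGES[1:]):
        if lo < avg <= hi:
            return (lo, hi)
    return None  # larger than the last edge

problems = {
    "a": [([[0] * 3] * 3, [[0] * 3] * 3)],      # average size 9  -> (0, 25]
    "b": [([[0] * 12] * 12, [[0] * 12] * 12)],  # average size 144 -> (100, 225]
}
buckets = {pid: bucket(average_image_size(pairs)) for pid, pairs in problems.items()}
print(buckets)  # {'a': (0, 25), 'b': (100, 225)}
```

Per-bucket accuracy then follows by grouping solved/unsolved flags on these keys.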
Figure 8: Variance in accuracy on I_r&I_t with increasing iterations for RSPC and KAAR using GPT-o3-mini and QwQ-32B. See Figure 13 in the Appendix for the results with the other LLMs.

Figure 8 presents the variance in accuracy on I_r&I_t for RSPC and KAAR as the iteration count increases, using GPT-o3-mini and QwQ-32B. For each task under KAAR, we include only iterations from the abstraction that solves both I_r and I_t. For KAAR, performance improvements across each 4-iteration block are driven by the solver backbone invocation after augmenting an additional level of priors: iterations 1–4 introduce objectness; 5–8 incorporate geometry, topology, numbers, and counting; 9–12 further involve goal-directedness. RSPC shows rapid improvement in the first 4 iterations and plateaus around iteration 8. At each iteration, the accuracy gap between KAAR and RSPC reflects the contribution of the accumulated priors via augmentation. KAAR consistently outperforms RSPC, with the performance gap progressively increasing after new priors are augmented and peaking after the integration of goal-directedness. We note that objectness priors alone yield marginal gains with GPT-o3-mini. However, the inclusion of object attributes and relational priors (iterations 5–8) leads to improvements in KAAR over RSPC, and this advantage is further amplified after the augmentation of goal-directedness priors (iterations 9–12). These results highlight the benefits of KAAR: representing core knowledge priors through a hierarchical, dependency-aware ontology enables KAAR to incrementally augment LLMs, perform stage-wise reasoning, and improve solution accuracy. Compared to augmenting all priors at once without stage-wise reasoning, KAAR consistently yields superior accuracy, as detailed in Appendix A.6.

6 Discussion

ARC and KAAR. ARC serves as a visual abstract reasoning benchmark, requiring models to infer transformations from few examples for each unique task, rather than fitting to a closed rule space as in RAVEN [42] and PGM [43]. ARC assumes tasks are solvable using core knowledge priors. However, the problems are intentionally left undefined to preclude encoding complete solution rules [5]. This pushes models beyond closed-form rule fitting and toward truly domain-general capabilities. While some of the knowledge in KAAR is tailored to ARC, its central contribution lies in representing knowledge through a hierarchical, dependency-aware ontology that enables progressive augmentation. This allows LLMs to gradually expand their reasoning scope and perform stage-wise inference, improving performance on ARC without relying on an exhaustive rule set. Moreover, the ontology of KAAR is transferable to other domains requiring hierarchical reasoning, such as robotic task planning [44], image captioning [45], and visual question answering [46], where similar knowledge priors and dependencies from ARC are applicable. In KAAR, knowledge augmentation increases token consumption, but the additional tokens remain relatively constant since all priors, except goal-directedness, are generated via image processing algorithms from GPAR. On GPT-o3-mini, augmentation tokens constitute around 60% of solver backbone token usage, while on QwQ-32B, this overhead decreases to about 20%, as the solver backbone consumes more tokens. See Appendix A.8 for a detailed discussion. Incorrect abstraction selection in KAAR also leads to wasted tokens.
However, accurate abstraction inference often requires validation through viable solutions, bringing the challenge back to solution generation.

Figure 9: Fragment of ARC problem e7dd8335.

Solution Analysis. RSPC achieves over 30% accuracy across the evaluated metrics using GPT-o3-mini, even without knowledge augmentation. To assess its alignment with core knowledge priors, we manually reviewed RSPC-generated solution plans and code that successfully solve I_t with GPT-o3-mini. RSPC tends to solve problems without object-centric reasoning. For instance, in Figure 1, it shifts each row downward by one and pads the top with zeros, rather than reasoning over objectness to move each 4-connected component down by one step. Even when applying objectness, RSPC typically defaults to the 4-connected abstraction, failing on the problem in Figure 9, where the test input clearly requires the 8-connected abstraction. We note that object recognition in ARC involves grouping pixels into task-specific components based on clustering rules, differing from feature extraction approaches [47] in conventional computer vision tasks. Recent work seeks to bridge this gap by incorporating 2D positional encodings and object indices into Vision Transformers [41]. However, its reliance on data-driven learning weakens generalization, undermining ARC's core objective. In contrast, KAAR enables objectness through explicitly defined abstractions, implemented via standard image processing algorithms, thus ensuring both accuracy and generalization.

Generalization. For all evaluated ARC solvers, accuracy on I_r consistently exceeds that on I_r&I_t, revealing a generalization gap. Planning-aided code generation methods, such as RSPC and KAAR, exhibit smaller gaps than other solvers, though the issue persists. One reason is that solutions include low-level logic specific to the training pairs and thus fail to generalize. See Appendix A.9 for examples. Another reason is the use of incorrect abstractions. For example, reliance solely on the 4-connected abstraction leads RSPC to solve only I_r in Figure 9. KAAR similarly fails to generalize in this case: it selects the 4-connected abstraction, the first one that solves I_r, to report accuracy on I_t, instead of the correct 8-connected abstraction, as the former is considered simpler. Table 1 also reveals that LLMs differ in their generalization across ARC solvers. While a detailed analysis of these variations is beyond the scope of this study, investigating the underlying causes could offer insights into LLM inference and alignment with intended behaviors, presenting a promising direction for future work.

7 Conclusion

We explored the generalization and abstract reasoning capabilities of recent reasoning-oriented LLMs on the ARC benchmark using nine candidate solvers. Experimental results show that repeated-sampling planning-aided code generation (RSPC) achieves the highest test accuracy and demonstrates consistent generalization across most evaluated LLMs. To further improve performance, we propose KAAR, which progressively augments LLMs with core knowledge priors organized into hierarchical levels based on their dependencies, and applies RSPC after augmenting each level of priors to enable stage-wise reasoning. KAAR improves LLM performance on the ARC benchmark while maintaining strong generalization compared to non-augmented RSPC. However, ARC remains challenging even for the most capable reasoning-oriented LLMs, given its emphasis on abstract reasoning and generalization, highlighting current limitations and motivating future research.

References

[1] Abdullah Ayub Khan, Asif Ali Laghari, and Shafique Ahmed Awan. Machine learning in computer vision: A review.
EAI Endorsed Transactions on Scalable Information Systems, 8(32), 2021.

[2] Daniel W Otter, Julian R Medina, and Jugal K Kalita. A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(2):604–624, 2020.

[3] Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu. A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 37(3):362–386, 2020.

[4] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40:e253, 2017.

[5] François Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547, 2019.

[6] Charles S Peirce. Questions concerning certain faculties claimed for man. The Journal of Speculative Philosophy, 2(2):103–114, 1868.

[7] Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental Science, 10(1):89–96, 2007.

[8] Sumit Gulwani, Oleksandr Polozov, Rishabh Singh, et al. Program synthesis. Foundations and Trends® in Programming Languages, 4:1–119, 2017.

[9] Yudong Xu, Elias B Khalil, and Scott Sanner. Graphs, constraints, and search for the abstraction and reasoning corpus. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI, pages 4115–4122, 2023.

[10] Chao Lei, Nir Lipovetzky, and Krista A Ehinger. Generalized planning for the abstraction and reasoning corpus. In Proceedings of the 38th AAAI Conference on Artificial Intelligence, AAAI, pages 20168–20175, 2024.

[11] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. In Proceedings of the 36th Advances in Neural Information Processing Systems, NeurIPS, pages 24824–24837, 2022.

[12] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

[13] Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with AlphaCode. Science, 378:1092–1097, 2022.

[14] Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. CodeT: Code generation with generated tests. In Proceedings of the 11th International Conference on Learning Representations, ICLR, pages 1–19, 2023.

[15] Tianyi Zhang, Tao Yu, Tatsunori Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, and Sida Wang. Coder reviewer reranking for code generation. In Proceedings of the 40th International Conference on Machine Learning, ICML, pages 41832–41846, 2023.

[16] Ansong Ni, Srini Iyer, Dragomir Radev, Veselin Stoyanov, Wen-tau Yih, Sida Wang, and Xi Victoria Lin. LEVER: Learning to verify language-to-code generation with execution. In Proceedings of the 40th International Conference on Machine Learning, ICML, pages 26106–26128, 2023.

[17] Li Zhong, Zilong Wang, and Jingbo Shang. Debug like a human: A large language model debugger via verifying runtime execution step by step. In Findings of the Association for Computational Linguistics: ACL 2024, pages 851–870, 2024.

[18] Chao Lei, Yanchuan Chang, Nir Lipovetzky, and Krista A Ehinger. Planning-driven programming: A large language model programming workflow. arXiv preprint arXiv:2411.14503, 2024.
[19] Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. Teaching large language models to self-debug. In Proceedings of the 12th International Conference on Learning Representations, ICLR, 2024.

[20] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

[21] Naman Jain, King Han, Alex Gu, Wen-Ding Li, Fanjia Yan, Tianjun Zhang, Sida Wang, Armando Solar-Lezama, Koushik Sen, and Ion Stoica. LiveCodeBench: Holistic and contamination free evaluation of large language models for code. In Proceedings of the 13th International Conference on Learning Representations, ICLR, 2025.

[22] Xue Jiang, Yihong Dong, Lecheng Wang, Fang Zheng, Qiwei Shang, Ge Li, Zhi Jin, and Wenpin Jiao. Self-planning code generation with large language models. ACM Transactions on Software Engineering and Methodology, 33(7):1–28, 2023.

[23] Md. Ashraful Islam, Mohammed Eunus Ali, and Md Rizwan Parvez. MapCoder: Multi-agent code generation for competitive problem solving. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, ACL, pages 4912–4944, 2024.

[24] Tianyang Zhong, Zhengliang Liu, Yi Pan, Yutong Zhang, Yifan Zhou, Shizhe Liang, Zihao Wu, Yanjun Lyu, Peng Shu, Xiaowei Yu, et al. Evaluation of OpenAI o1: Opportunities and challenges of AGI. arXiv preprint arXiv:2409.18486, 2024.

[25] OpenAI. OpenAI o3-mini. OpenAI,
2025. URL https://openai.com/index/openai-o3-mini/. Accessed: 2025-03-22.

[26] Google DeepMind. Gemini 2.0 Flash Thinking. Google DeepMind, 2024. URL https://deepmind.google/technologies/gemini/flash-thinking/. Accessed: 2025-03-22.

[27] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[28] Alibaba Cloud. Alibaba Cloud unveils QwQ-32B: A compact reasoning model with cutting-edge performance. Alibaba Cloud, 2025. URL https://www.alibabacloud.com/blog/alibaba-cloud-unveils-qwq-32b-a-compact-reasoning-model-with-cutting-edge-performance_602039. Accessed: 2025-03-22.

[29] Zana H Babakr, Pakstan Mohamedamin, and Karwan Kakamad. Piaget's cognitive developmental theory: Critical review. Education Quarterly Reviews, 2(3):517–524, 2019.

[30] Hourui Deng, Hongjie Zhang, Jie Ou, and Chaosheng Feng. Can LLM be a good path planner based on prompt engineering? Mitigating the hallucination for path planning. arXiv preprint arXiv:2408.13184, 2024.

[31] Silin Meng, Yiwei Wang, Cheng-Fu Yang, Nanyun Peng, and Kai-Wei Chang. LLM-A*: Large language model enhanced incremental heuristic search on path planning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 1087–1102, 2024.

[32] Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin. Large language models for mathematical reasoning: Progresses and challenges. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop, EACL, pages 225–237, 2024.

[33] Yuhang Zang, Wei Li, Jun Han, Kaiyang Zhou, and Chen Change Loy. Contextual object detection with multimodal large language models. International Journal of Computer Vision, 133(2):825–843, 2025.
[34] Yudong Xu, Wenhao Li, Pashootan Vaezipoor, Scott Sanner, and Elias B Khalil. LLMs and the abstraction and reasoning corpus: Successes, failures, and the importance of object-based representations. arXiv preprint arXiv:2305.18354, 2023.

[35] Chao Lei, Nir Lipovetzky, and Krista A Ehinger. Novelty and lifted helpful actions in generalized planning. In Proceedings of the International Symposium on Combinatorial Search, SoCS, pages 148–152, 2023.

[36] Ruocheng Wang, Eric Zelikman, Gabriel Poesia, Yewen Pu, Nick Haber, and Noah Goodman. Hypothesis search: Inductive reasoning with language models. In Proceedings of the 12th International Conference on Learning Representations, ICLR, 2024.

[37] Solim LeGris, Wai Keen Vong, Brenden M Lake, and Todd M Gureckis. H-ARC: A robust estimate of human performance on the abstraction and reasoning corpus benchmark. arXiv preprint arXiv:2409.01374, 2024.

[38] Yutaro Yamada, Yihan Bao, Andrew Kyle Lampinen, Jungo Kasai, and Ilker Yildirim. Evaluating spatial understanding of large language models. Transactions on Machine Learning Research, 2024.

[39] Anthony G Cohn and Jose Hernandez-Orallo. Dialectical language model evaluation: An initial appraisal of the commonsense spatial reasoning abilities of LLMs. arXiv preprint arXiv:2304.11164, 2023.

[40] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity. In Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, IJCNLP-AACL, pages
675–718, 2023.

[41] Wenhao Li, Yudong Xu, Scott Sanner, and Elias Boutros Khalil. Tackling the abstraction and reasoning corpus with vision transformers: The importance of 2D representation, positions, and objects. arXiv preprint arXiv:2410.06405, 2024.

[42] John Raven. The Raven's progressive matrices: Change and stability over culture and time. Cognitive Psychology, 41(1):1–48, 2000.

[43] David Barrett, Felix Hill, Adam Santoro, Ari Morcos, and Timothy Lillicrap. Measuring abstract reasoning in neural networks. In Proceedings of the 37th International Conference on Machine Learning, ICML, pages 511–520, 2018.

[44] Yongcheng Cui, Ying Zhang, Cui-Hua Zhang, and Simon X Yang. Task cognition and planning for service robots. Intelligence & Robotics, (1):119–142, 2025.

[45] Matteo Stefanini, Marcella Cornia, Lorenzo Baraldi, Silvia Cascianelli, Giuseppe Fiameni, and Rita Cucchiara. From show to tell: A survey on deep learning-based image captioning. IEEE Transactions on Pattern Analysis and Machine Intelligence, (1):539–559, 2022.

[46] Ngoc Dung Huynh, Mohamed Reda Bouadjenek, Sunil Aryal, Imran Razzak, and Hakim Hacid. Visual question answering: From early developments to recent advances – a survey. arXiv preprint arXiv:2501.03939, 2025.

[47] Zhong-Qiu Zhao, Peng Zheng, Shou-tao Xu, and Xindong Wu. Object detection with deep learning: A review. IEEE Transactions on Neural Networks and Learning Systems, 30(11):3212–3232, 2019.

[48] Grégoire Mialon, Roberto Dessi, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Roziere, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. Augmented language models: A survey. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.

[49] Yuqi Zhu, Shuofei Qiao, Yixin Ou, Shumin Deng, Shiwei Lyu, Yue Shen, Lei Liang, Jinjie Gu, Huajun Chen, and Ningyu Zhang. KnowAgent: Knowledge-augmented planning for LLM-based agents.
In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3709–3732, 2025.

[50] Tu Vu, Mohit Iyyer, Xuezhi Wang, Noah Constant, Jerry Wei, Jason Wei, Chris Tar, Yun-Hsuan Sung, Denny Zhou, Quoc Le, and Thang Luong. FreshLLMs: Refreshing large language models with search engine augmentation. In Findings of the Association for Computational Linguistics: ACL 2024, pages 13697–13720, 2024.

[51] Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Shafiq Joty, Soujanya Poria, and Lidong Bing. Chain-of-knowledge: Grounding large language models via dynamic knowledge adapting over heterogeneous sources. In Proceedings of the 12th International Conference on Learning Representations, ICLR, 2024.

[52] Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, ACL, pages 10014–10037, 2023.

[53] Shuofei Qiao, Honghao Gui, Chengfei Lv, Qianghuai Jia, Huajun Chen, and Ningyu Zhang. Making language models better tool learners with execution feedback. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL, pages 3550–3568, 2024.

[54] J S Wind. 1st place solution + code and official documentation. https://www.kaggle.com/competitions/abstraction-and-reasoning-challenge/discussion/154597, 2020. Accessed: 2025-03-22.

[55] Giacomo Camposampiero, Loic Houmard, Benjamin Estermann, Joël Mathys, and Roger Wattenhofer. Abstract visual reasoning enabled by language. arXiv preprint arXiv:2306.04091, 2023.

[56] Tan John Chong Min. An approach
to solving the abstraction and reasoning corpus (ARC) challenge. arXiv preprint arXiv:2306.03553, 2023.

[57] John Chong Min Tan and Mehul Motani. LLMs as a system of multiple expert agents: An approach to solve the abstraction and reasoning corpus (ARC) challenge. In Proceedings of the 2024 IEEE Conference on Artificial Intelligence, CAI, pages 782–787, 2024.

[58] Kiril Bikov, Mikel Bober-Irizar, and Soumya Banerjee. Reflection system for the abstraction and reasoning corpus. In Proceedings of the 2nd AI4Research Workshop: Towards a Knowledge-grounded Scientific Research Lifecycle, 2024.

[59] Daniel Franzen, Jan Disselhoff, and David Hartmann. The LLM architect: Solving ARC-AGI is a matter of perspective. https://github.com/da-fr/arc-prize-2024/blob/main/the_architects.pdf, 2024. Accessed: 2025-03-22.

[60] Michael Hodel. Addressing the abstraction and reasoning corpus via procedural example generation. arXiv preprint arXiv:2404.07353, 2024.

[61] Arseny Moskvichev, Victor Vikram Odouard, and Melanie Mitchell. The ConceptARC benchmark: Evaluating understanding and generalization in the ARC domain. arXiv preprint arXiv:2305.07141, 2023.

[62] Wen-Ding Li, Keya Hu, Carter Larsen, Yuqing Wu, Simon Alford, Caleb Woo, Spencer M. Dunn, Hao Tang, Wei-Long Zheng, Yewen Pu, and Kevin Ellis. Combining induction and transduction for abstract reasoning. In Proceedings of the 13th International Conference on Learning Representations, ICLR, 2025.

[63] Shraddha Barke, Emmanuel Anaya Gonzalez, Saketh Ram Kasibatla, Taylor Berg-Kirkpatrick, and Nadia Polikarpova. HySynth: Context-free LLM approximation for guiding program synthesis. In Proceedings of the 38th Advances in Neural Information Processing Systems, NeurIPS, pages 15612–15645, 2024.

A Appendix

A.1 Related Work

Knowledge-Augmented LLMs. Augmenting LLMs with external knowledge can improve reasoning capabilities and mitigate hallucination in text generation [48].
Previous studies achieve this by incorporating domain-specific knowledge designed by human experts [49], retrieved via search engines [50], or extracted from Wikipedia documents [51]. Trivedi et al. [52] demonstrated that interleaving knowledge augmentation within reasoning steps further reduces model hallucination, resulting in more accurate multi-step reasoning. Additionally, augmenting LLMs with execution feedback improves performance on both question answering [53] and program synthesis tasks [10, 17, 19].

Search in DSL. An abstract, expressive, and compositional representation of core knowledge priors is essential for solving ARC tasks [5]. Previous studies have manually encoded these priors into domain-specific languages (DSLs) with lifted relational representations [9, 10, 54]. Various program synthesis methods have been proposed to search for valid solution programs within these DSLs, including DAG-based search [54], graph-based constraint-guided search [9], and generalized planning [10]. Hand-crafted DSLs encode core knowledge priors with high precision and interpretability, enabling structured program synthesis. However, comprehensive DSLs induce large search spaces, limiting synthesis efficiency.

LLMs for ARC. Recent studies have explored using LLMs as ARC solvers to directly generate test output matrices and have prompted LLMs with different problem descriptions to improve output accuracy. Camposampiero et al. [55] employed LLMs to generate output grids from textual task descriptions derived from a vision module designed to capture human-like visual priors. Min [56] prompted LLMs with the raw 2D matrices of each task, along with transformation and abstraction examples. Xu et al. [34] demonstrated that object representations derived
from predefined abstractions can improve LLM performance on ARC tasks. Recent advances in code generation by LLMs [18, 17, 14] highlight their potential to replace search-based program synthesis, addressing its efficiency limitations. Tan and Motani [57] evaluated LLM performance on the ARC benchmark by generating Python program solutions. Additionally, Wang et al. [36] approached ARC as an inductive reasoning problem and introduced hypothesis search, where program solutions are generated by selecting LLM-generated hypotheses encoded as functions.

Training-Based Methods. To further improve LLM performance, Bikov et al. [58] fine-tuned LLMs on augmented ARC tasks using standard techniques such as rotation, flipping, and permutation. Beyond these methods, Franzen et al. [59] fine-tuned LLMs on large-scale synthetic ARC tasks [60] and ARC-related datasets such as Concept-ARC [61] and ARC-Heavy [62], achieving a state-of-the-art 56% accuracy on the private evaluation set of 200 tasks. Instead of fine-tuning LLMs, Barke et al. [63] trained a probabilistic context-free grammar (PCFG) on LLM-generated plausible solutions to learn weighted functions. This enables the synthesizer to efficiently generate final program solutions. However, this approach requires a dedicated synthesizer for each DSL, limiting its generalization.

When leveraging LLMs as ARC solvers, existing studies tend to emphasize accuracy on partial training set problems and overlook the core principle of ARC, where solutions should be constructed using core knowledge priors [5]. LLMs still lack these priors, such as objectness, as evidenced by RSPC-generated solutions. Although fine-tuning approaches have achieved state-of-the-art performance, their failure to incorporate core knowledge priors remains a fundamental limitation. KAAR addresses this gap by progressively augmenting LLMs with structured core knowledge priors introduced by GPAR, along with exclusive implementations of goal-directedness priors.
It interleaves augmentation within the reasoning process by applying an advanced LLM-based program synthesis solver tailored to the ARC benchmark after augmenting priors at each level. KAAR achieves strong performance, 32.5% test accuracy on the full evaluation set of 400 problems using GPT-o3-mini, demonstrates substantial generalization, and produces solutions aligned with core knowledge priors.

A.2 Core Knowledge Priors in KAAR

KAAR incorporates abstractions to enable objectness priors; component attributes, relations, and statistical analysis of component attributes to encode geometry, topology, numbers, and counting priors; and predefined actions to support goal-directedness priors. Table 5 presents all abstractions used in KAAR, organized by their prioritization. KAAR incorporates fundamental abstractions, such as 4-connected and 8-connected components, from GPAR, and extends them with additional abstractions unique to KAAR, highlighted in red. Table 6 introduces the geometry, topology, numbers, and counting priors, and the ten predefined transformations used in KAAR. For each action, KAAR augments the LLM with its corresponding schema to resolve implementation details. The actions and their schemas are detailed in Table 7. Most actions can be specified within three steps, keeping them tractable for LLMs.

Figure 10: ARC problem 0520fde7.

A.3 Restrictions in KAAR

For certain abstractions, some priors are either inapplicable or exclusive. The specific priors assigned to these abstractions are detailed in Table 8. For the whole image abstraction, few priors apply, as only a single component is present. In contrast, the 4/8-connected-multi-color-non-background abstractions retain most priors.
The highlighted priors that capture per-component color diversity are used exclusively for the 4/8-connected-multi-color-non-background abstractions, while priors tailored to a single-color component, such as components with same color, components with most frequent color, and components with least frequent color, are excluded. For the middle-vertical and middle-horizontal abstractions, where the image is evenly divided into two components, flipping and movement actions are enabled to facilitate reasoning over overlapping components. For instance, in the problem shown in Figure 10, the solution involves splitting the image along a middle-vertical grid line and moving one component to overlap the other. In the resulting component, a pixel is colored red if the overlapping pixels in both components are blue; otherwise, it is colored black.

A.4 Parameter Settings

KAAR operates on all LLMs through API access with the full conversational history. For the proprietary models, GPT-o3-mini and Gemini-2.0 Flash-Thinking (Gemini-2.0), we use default parameter settings. For the open-source models, DeepSeek-R1-Distill-Llama-70B (DeepSeek-R1-70B) and QwQ-32B, we set the temperature to 0.6, top-p to 0.95, and top-k to 40 to reduce repetitive outputs and filter rare tokens while preserving generation diversity. We conduct experiments on a virtual machine with 4 NVIDIA A100 80GB GPUs.

A.5 KAAR

Algorithm 1 presents the pseudocode of KAAR. For each abstraction, KAAR incrementally augments the LLM with core knowledge priors, structured into three dependency-aware levels: beginning with objectness (Line 5), followed by geometry and topology (Lines 10 and 12) and numbers and counting (Line 14), and concluding with goal-directedness priors (Line 18). We note that KAAR encodes geometry and topology priors through component attributes (Line 9) and relations (Line 11). The full set of priors is detailed in Tables 5, 6, and 7.
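The 4-connected and 8-connected abstractions referenced throughout can be realized with plain flood-fill component labeling. The following is a minimal illustrative sketch, not the GPAR implementation used by KAAR; it groups same-color, non-background pixels only.

```python
# Minimal sketch of 4-/8-connected component abstractions: group same-color,
# non-background pixels into components via iterative flood fill.

def components(grid, connectivity=4, background=0):
    if connectivity == 4:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connected adds the four diagonal neighbors
        steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    rows, cols = len(grid), len(grid[0])
    seen, comps = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == background or (r, c) in seen:
                continue
            color, stack, comp = grid[r][c], [(r, c)], []
            seen.add((r, c))
            while stack:
                cr, cc = stack.pop()
                comp.append((cr, cc))
                for dr, dc in steps:
                    nr, nc = cr + dr, cc + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and (nr, nc) not in seen and grid[nr][nc] == color):
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            comps.append(comp)
    return comps

# Two diagonal pixels: two components under 4-connectivity, one under 8-connectivity.
g = [[1, 0],
     [0, 1]]
print(len(components(g, 4)), len(components(g, 8)))  # 2 1
```

This diagonal case is exactly the distinction that matters for problems such as the one in Figure 9, where only the 8-connected abstraction groups the pixels correctly.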
After augmenting each level of priors, KAAR invokes the solver backbone (RSPC) at Lines 6, 15, and 19 to generate code solutions guided by text-based plans, allowing up to 4 iterations (Lines 25–37). In each iteration, the solver backbone first validates the generated code on the training instances Ir; if successful, it then evaluates the solution on the test instances It. The solver backbone returns solve if the generated solution successfully solves It after passing Ir; pass if only Ir is solved; or continues to the next iteration if the solution fails on Ir. If the solver backbone fails to solve Ir within the allotted 4 iterations at Lines 6 and 15, KAAR augments the next level of priors. KAAR proceeds to the next abstraction when the solver backbone fails to solve Ir at Line 19, after the 4-iteration limit. KAAR terminates abstraction iteration upon receiving either pass or solve from the solver backbone and reports accuracy on Ir, It, and Ir&It accordingly. If no abstraction fully solves Ir, KAAR records the final code solution for each abstraction (Line 22), selects the one that passes the most training instances (Line 23), and evaluates it on It to determine additional accuracy gains (Line 24).

Algorithm 1: KAAR
Input: LLM M; ARC problem P = (Ir, It); description Q = (Ir, {ii | (ii, io) ∈ It}); abstraction list A; max iterations t = 4
 1  Function KnowledgeAugmentation(M, Q, P, A, t):
 2      solutionList ← [];
 3      foreach abstraction abs in A do
 4          objectnessPriors ← GenerateObjectnessPriors(Q, abs);
 5          AugmentKnowledge(M, objectnessPriors);
 6          result, code, passedCount ← SolverBackbone(M, P, Q, t);
 7          if result ≠ failure then
 8              return result;
 9          attributePriors ← GenerateAttributePriors(Q, abs);
10          AugmentKnowledge(M, attributePriors);
11          relationPriors ← GenerateRelationPriors(Q, abs);
12          AugmentKnowledge(M, relationPriors);
13          numberPriors ← GenerateNumbersCountingPriors(Q, abs);
14          AugmentKnowledge(M, numberPriors);
15          result, code, passedCount ← SolverBackbone(M, P, Q, t);
16          if result ≠ failure then
17              return result;
18          AugmentGoalPriors(M, Q, abs);
19          result, code, passedCount ← SolverBackbone(M, P, Q, t);
20          if result ≠ failure then
21              return result;
22          solutionList.append((code, passedCount));
23      bestCode ← SelectMostPassed(solutionList);
24      return EvaluateOnTest(bestCode, It);
25  Function SolverBackbone(M, P, Q, t):
26      i ← 0;
27      while i < t do
28          plan ← M.generatePlan(Q);
29          code ← M.generateCode(Q, plan);
30          passedCount ← EvaluateOnTrain(code, Ir);
31          if passedCount == |Ir| then
32              if EvaluateOnTest(code, It) then
33                  return solve, code, passedCount;
34              else
35                  return pass, code, passedCount;
36          i ← i + 1;
37      return failure, code, passedCount;

KAAR generates priors offline using image processing algorithms introduced in GPAR at Lines 4, 9, 11, and 13. In contrast, KAAR enables goal-directedness priors at Line 18 by prompting the LLM to select the most suitable actions and identify their implementation details, as described in Table 7. KAAR iterates over abstractions from simpler to more complex, following the order specified in Table 5. We note that the highest-priority abstraction is no abstraction, where KAAR degrades to the solver backbone (RSPC) as no priors are applied.

A.6 Ablation Study

Table 3 reports the accuracy decrease resulting from removing incremental knowledge augmentation and stage-wise reasoning in KAAR, denoted as KAAR∗.
Unlike KAAR, which invokes the solver backbone (RSPC) after augmenting each level of priors to enable stage-wise reasoning, KAAR∗ uses RSPC to solve the problem within 12 iterations after augmenting all priors at once. We evaluate KAAR∗ using the same reasoning-oriented LLMs as in Tables 1 and 2, excluding GPT-o3-mini due to its computational cost. KAAR∗ shows decreased accuracy on all metrics, Ir, It, and Ir&It, for all evaluated LLMs. These results underscore the effectiveness of progressive augmentation and stage-wise reasoning. Presenting all knowledge priors simultaneously introduces superfluous information, which may obscure viable solutions and impair the LLM's reasoning accuracy. We note that we construct the ontology of core knowledge priors based on their dependencies, thereby establishing a fixed augmentation order.

                          KAAR    KAAR∗   ∆
Gemini-2.0      Ir        25.75   23.00   -2.75
                It        21.75   19.00   -2.75
                Ir&It     20.50   18.00   -2.50
QwQ-32B         Ir        22.25   18.50   -3.75
                It        21.00   17.75   -3.25
                Ir&It     19.25   16.25   -3.00
DeepSeek-R1-70B Ir        12.25    9.00   -3.25
                It        12.75    9.00   -3.75
                Ir&It     11.50    8.50   -3.00

Table 3: Accuracy on Ir, It, and Ir&It for KAAR and KAAR∗ across three LLMs. KAAR∗ invokes the solver backbone (RSPC) only after all knowledge priors are augmented. ∆ denotes the performance drop relative to KAAR. All values are reported as percentages.

Figure 11: Example ARC tasks for the movement, extension, recolor, and others categories (tasks f3e62deb, b15fca0b, 6ea4a07e, and 3b4c2228, respectively).

A.7 Example Tasks by Category in the ARC Evaluation Set

ARC comprises 1000 unique tasks, with 400 allocated to the training set and 600 to the evaluation set. The evaluation set is further divided into a public subset (400 tasks) and a private subset (200 tasks). Figure 11 illustrates example ARC tasks for the movement, extension, recolor,
and others categories in the public evaluation set. In the movement example, components are shifted to the image boundary in directions determined by their colors. The extension example is more complex, requiring LLMs to find the shortest path between two red pixels while avoiding obstacles, which presents challenges for current reasoning-oriented models. Additionally, reliance on pixel-level recognition weakens the effectiveness of KAAR, which is designed to facilitate component identification. The recolor example involves changing non-black components to black and updating black components based on original non-black colors. The others example requires generating a blue diagonal line whose length depends on the number of 4-connected components in the input image that are green and have a size greater than one. The combination of numerical reasoning and structural pattern generation makes this task difficult to classify within the other three categories.

                    Knowledge Augmentation   Solver Backbone (RSPC)
GPT-o3-mini         66K                      106K
Gemini-2.0          58K                      110K
QwQ-32B             79K                      427K
DeepSeek-R1-70B     66K                      252K

Table 4: Average token cost for knowledge augmentation and solver backbone (RSPC) in KAAR across four evaluated LLMs. K is 10^3.

A.8 Cost Analysis

Table 4 reports the average token cost, including both prompts and LLM responses, for knowledge augmentation and the solver backbone (RSPC), when using KAAR as the ARC solver. For each ARC task, we consider the abstraction whose solution solves It; if none succeed, the one that passes Ir; otherwise, the abstraction with the lowest token usage is selected. Except for goal-directedness priors, all core knowledge priors in KAAR are generated offline using image processing algorithms from GPAR, resulting in comparable augmentation costs across all evaluated models. In contrast, token usage by the solver backbone varies substantially due to differences in the LLMs' abstract reasoning and generalization capabilities.
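The per-task abstraction selection used for this cost reporting can be sketched as a tiered lookup. The record format and the tie-breaking by lowest token usage within each tier are assumptions made for illustration; the paper only specifies the tier order.

```python
def reporting_abstraction(records):
    """Pick which abstraction's token cost to report for one ARC task.

    records: list of (name, solves_test, passes_train, tokens) tuples,
    a hypothetical format. Preference order: abstractions whose solution
    solves It, then those that only pass Ir, then lowest token usage
    overall; within a tier we assume the cheapest record is taken.
    """
    for tier in (
        [r for r in records if r[1]],   # solutions that solve It
        [r for r in records if r[2]],   # solutions that only pass Ir
        records,                        # fallback: lowest token usage
    ):
        if tier:
            return min(tier, key=lambda r: r[3])[0]
    return None
```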
GPT-o3-mini solves most tasks efficiently, with the lowest token consumption by the solver backbone; tokens used for knowledge augmentation account for approximately 62% of the solver backbone's token usage. However, the solver backbone consumes more tokens with QwQ-32B, as QwQ-32B consistently generates longer reasoning traces. In this case, tokens used for knowledge augmentation constitute only 19% of the solver backbone's token usage. Figure 14 illustrates the average token cost for augmenting priors at each level in KAAR.

A.9 Generalization

Figures 15 and 16 illustrate two ARC problems, 695367ec and b1fc8b8e, where both RSPC and KAAR successfully solve the training instances Ir but fail on the test instances It when using GPT-o3-mini. For problem 695367ec, the correct solution involves generating a fixed 15×15 output image by repeatedly copying the input image, changing its color to black, and adding internal horizontal and vertical lines colored with the original input image's color. However, the RSPC-generated code applies a distinct rule to each input image size without considering generalization. For problem b1fc8b8e, the solution requires accurate object recognition despite component contact, and correct placement of each component into one of the four corners. However, RSPC fails to recognize objectness, and its solution deviates from human intuition, being overfitted to Ir. For problems 695367ec and b1fc8b8e, KAAR exhibits the same limitations, although it adopts abstractions to
enable objectness. KAAR begins with the simplest abstraction, no abstraction, where KAAR degrades to RSPC. As a result, it generates the same solution as RSPC and terminates without attempting other abstractions, since the solution already solves Ir and is then evaluated on It, resulting in overfitting.

A.10 Problem Coverage across ARC Solvers

We report the relative problem coverage across nine ARC solvers based on successful test instance solutions using GPT-o3-mini (Figure 17), Gemini-2.0 (Figure 18), QwQ-32B (Figure 19), and DeepSeek-R1-70B (Figure 20). Each cell (i, j) indicates the proportion of problems solved by the row solver that are also solved by the column solver. This is computed as |Ai ∩ Aj| / |Ai|, where Ai and Aj are the sets of problems solved by the row and column solvers, respectively, following the same method used in Figure 5. Values close to 1 indicate that the column solver covers most problems solved by the row solver. GPT-o3-mini demonstrates the strongest overall coverage, with pairwise overlap consistently exceeding 0.55. Among all solvers, repeated sampling with standalone (P) and planning-aided code generation (PC) show the highest coverage, with column values consistently above 0.8 for GPT-o3-mini. This trend persists across Gemini-2.0, QwQ-32B, and DeepSeek-R1-70B. Under these models, repeated sampling with planning-aided code generation exhibits better alignment than its standalone code generation counterpart, generally yielding higher coverage values. However, planning-aided code generation under the direct generation setting shows weaker alignment, with column values around 0.40 for Gemini-2.0 and 0.35 for QwQ-32B. Among the four evaluated LLMs, DeepSeek-R1-70B demonstrates the lowest average off-diagonal coverage (i.e., i ≠ j) of 0.603, suggesting potential output instability and variation attributable to solver choice.
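The coverage cells are straightforward to recompute from per-solver solved sets. The sketch below uses illustrative solver names and solved-problem sets:

```python
def coverage_matrix(solved):
    """Asymmetric relative coverage: cell (i, j) = |A_i ∩ A_j| / |A_i|,
    where A_i is the set of problems solved by row solver i."""
    return {
        (row, col): len(solved[row] & solved[col]) / len(solved[row])
        for row in solved
        for col in solved
        if solved[row]  # rows with no solved problems are left undefined
    }

# Illustrative example with two hypothetical solvers:
solved = {"P": {1, 2, 3, 4}, "PC": {3, 4, 5}}
cov = coverage_matrix(solved)
```

A high value at (row, col) means the column solver subsumes most of the row solver's successes, which is exactly how the matrices in Figures 17 to 20 are read.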
A.11 Performance Analysis

Table 1 highlights performance variations across reasoning-oriented LLMs and ARC solvers with respect to both accuracy and generalization. Notably, the ARC solver repeated sampling with standalone code generation exhibits a substantial accuracy gap between Ir and Ir&It, indicating limited generalization capability when using GPT-o3-mini and Gemini-2.0. In contrast, repeated sampling with planning-aided code generation demonstrates markedly improved generalization by preventing solutions from directly replicating the output matrices of training instances, as illustrated in Figure 21. This output copying, observed under repeated sampling with standalone code generation, accounts for approximately 24% and 95% of the 83 and 101 overfitting problems with GPT-o3-mini and Gemini-2.0, respectively. When planning is incorporated, output copying is reduced to around 8% and 35% of the 25 and 20 overfitting problems with GPT-o3-mini and Gemini-2.0, respectively. Additionally, the incorporation of planning facilitates accurate code generation. For example, in Figure 22, repeated sampling with planning-aided code generation produces a correct solution using GPT-o3-mini by replicating the input image horizontally or vertically based on the presence of a uniform row or column, as specified in the plan and implemented accordingly in code. In contrast, without planning assistance, standalone code generation produces incomplete logic, considering only whether the first column is uniform to determine the replication direction, which leads to failure on the test instance. For the ARC benchmark, repeated sampling-based methods achieve higher accuracy on Ir, It, and Ir&It compared to refinement-based approaches when using GPT-o3-mini and Gemini-2.0. Figure 23 presents an ARC problem where repeated
sampling with planning-aided code generation yields a correct solution, whereas its refinement variant fails to correct the initial erroneous code, and the flawed logic persists across subsequent refinements when using GPT-o3-mini. Previous studies have shown that refinement can benefit from control flow graph information [17] and verified plans [18], which assist LLMs in locating and correcting bugs. However, these methods typically incur substantial token consumption, making them difficult to scale affordably.

A.12 Limitations

KAAR improves the performance of reasoning-oriented LLMs on ARC tasks by progressively prompting with core knowledge priors. Although this inevitably increases token usage, the trade-off can be justified, as the exploration of LLM generalization remains in its early stages. KAAR integrates diverse abstraction methods to enable objectness and iteratively applies abstractions in order of increasing complexity. In contrast, humans typically infer appropriate abstractions directly from training instances, rather than leveraging exhaustive search. To address this, we prompted different LLMs with the raw 2D matrices of each ARC problem to select one or three relevant abstractions, but the results were unsatisfactory. As previously discussed, accurate abstraction inference often depends on validation through viable solutions, thereby shifting the challenge back to solution generation. Additionally, KAAR augments core knowledge priors through prompting but lacks mechanisms to enforce LLM adherence to these priors during reasoning. While the KAAR-generated solutions generally conform to core knowledge priors, the intermediate reasoning processes may deviate from the intended patterns. Future work could explore fine-tuning or reinforcement learning to better align model behavior with the desired reasoning patterns.

Abstractions                              Definitions
No Abstraction                            -
Whole Image                               We consider the whole image as a component.
Middle-Vertical                           We vertically split the image into two equal parts, treating each as a distinct component.
Middle-Horizontal                         We horizontally split the image into two equal parts, treating each as a distinct component.
Multi-Lines                               We use rows or columns with a uniform color to divide the input image into multiple components.
4-Connected∗                              We consider the 4-adjacent pixels of the same color as a component.
4-Connected-Non-Background∗               We consider the 4-adjacent pixels of the same color as a component, excluding components with the background color.
4-Connected-Non-Background-Edge∗          We consider the 4-adjacent pixels of the same color as a component, retaining components with the background color when they are not attached to the edges of the image.
4-Connected-Multi-Color-Non-Background∗   We consider 4-adjacent pixels as a component, which may contain different colors, while excluding components with the background color.
4-Connected-Bounding-Box∗                 We consider 4-adjacent pixels of the same color, and treat all pixels within their bounding box as a component, which may include different colors.
4-Connected-With-Black∗                   We consider the 4-adjacent pixels of black color, represented by the value 0, as a component, excluding components with other colors.
Same-Color                                We consider pixels of the same color as a component, excluding components with the background color.

Table 5: Abstractions in KAAR. The superscript "∗" denotes that the 8-connected version is also considered. The background color is black if black exists; otherwise, it is the most frequent color in the image. Abstractions are listed according to their prioritization in KAAR, in order from top to bottom, with each 8-connected abstraction following its corresponding 4-connected counterpart at the end of the sequence. Abstractions highlighted in red are exclusive to KAAR.

Figure 12: Accuracy on test instances It for RSPC and KAAR across average image size intervals, evaluated with Gemini-2.0 and DeepSeek-R1-70B.

Figure 13: Variance in accuracy on Ir&It with increasing iterations for RSPC and KAAR using Gemini-2.0 and DeepSeek-R1-70B.
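As a concrete sketch of one objectness abstraction above, 4-connected same-color component extraction (excluding the background color) can be implemented with a standard BFS. This is an illustrative re-implementation, not the GPAR code:

```python
from collections import deque

def components_4_connected(grid, background=0):
    """4-Connected-Non-Background sketch: 4-adjacent same-color pixels form
    a component; components of the background color are excluded."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for r in range(h):
        for c in range(w):
            if seen[r][c] or grid[r][c] == background:
                continue
            color, cells = grid[r][c], []
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                cells.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and grid[ny][nx] == color):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            components.append((color, cells))
    return components
```

The 8-connected variants add the four diagonal offsets, and the multi-color variants drop the same-color constraint on neighbors.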
Geometry and Topology: Size (Width and Height); Color; Shape (One Pixel; Horizontal Line; Vertical Line; Diagonal Line; Square; Rectangle; Cross; Irregular Shape); Symmetry (Horizontal Symmetry; Vertical Symmetry; Diagonal Symmetry; Anti-Diagonal Symmetry; Central Symmetry); Bounding Box; Hole Count; Nearest Boundary; Different/Identical with Other Components; Touching; Inclusive; Spatial (Horizontally Aligned to the Right; Horizontally Aligned to the Left; Vertically Aligned Below; Vertically Aligned Above; Top-Left; Top-Right; Bottom-Left; Bottom-Right; Same Position)

Numbers and Counting: Component Size Counting; Components with Same Size; Components with Most Frequent Size; Components with Least Frequent Size; Components with Maximum Size; Components with Minimum Size; Component Color Counting; Components with Same Color; Components with Same Number of Colors; Components with Most Frequent Color; Components with Least Frequent Color; Component with Most Distinct Colors; Component with Fewest Distinct Colors; Component Shape Counting; Components with Same Shape; Components with Most Frequent Shape; Components with Least Frequent Shape; Component Hole Number Counting; Components with Same Number of Holes; Components with Maximum Number of Holes; Components with Minimum Number of Holes; Component Symmetry Counting

Goal-directedness: Color Change (modifying component value); Movement (shifting component's position); Extension (expanding component's area); Completing (filling in missing parts of a component); Resizing (altering component size); Selecting (isolating a component); Copying (duplicating a component); Flipping (mirroring a component); Rotation (rotating a component); Cropping (cutting part of a component)

Table 6: KAAR priors classified into geometry and topology, numbers and counting, and goal-directedness.
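The symmetry priors in Table 6 reduce to simple mirroring tests on a component's bounding-box patch. The sketch below is illustrative, and the axis naming convention is an assumption (here "vertical symmetry" means mirror symmetry across the vertical axis, i.e. left-right):

```python
def symmetry_priors(patch):
    """Check basic symmetry priors on a rectangular patch (list of row lists).

    Naming assumption: 'Vertical Symmetry' mirrors left-right,
    'Horizontal Symmetry' mirrors top-bottom.
    """
    flipped_lr = [row[::-1] for row in patch]
    flipped_tb = patch[::-1]
    result = {
        "Vertical Symmetry": patch == flipped_lr,
        "Horizontal Symmetry": patch == flipped_tb,
        # 180-degree rotation: flip top-bottom, then left-right.
        "Central Symmetry": patch == [row[::-1] for row in flipped_tb],
    }
    # Diagonal symmetry (transpose) only applies to square patches.
    square = len(patch) == len(patch[0])
    result["Diagonal Symmetry"] = square and patch == [list(col) for col in zip(*patch)]
    return result
```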
For goal-directedness, we incorporate ten predefined actions, with their corresponding action schemas detailed in Table 7.

Actions        Schemas (Implementation Details)
Color Change   Targets; Source and Target Colors
Movement       Targets; Direction; Start and End Locations; Pattern; Order; Overlapping
Extension      Targets; Direction; Start and End Locations; Pattern; Order; Intersection
Completing     Targets; Pattern
Resizing       Targets; Source and Target Sizes
Selecting      Targets
Copying        Targets; Locations; Overlapping
Flipping       Targets; Flipping Axis; Overlapping
Rotation       Targets; Degrees
Cropping       Targets; Subsets

Table 7: Actions in KAAR and their schemas (implementation details). Each action schema is presented according to its prompting order in KAAR (left to right). Some actions include a pattern schema that prompts the LLM to identify underlying logic rules, such as repeating every two steps in movement or extension, or completing based on three-color repetition. Targets denote the target components.

Figure 14: Average token cost for augmenting priors at each level (objectness; geometry, topology, numbers and counting; goal-directedness) across four LLMs. K is 10^3.

whole image:
    Geometry and Topology: Symmetry, Size
    Numbers and Counting: -
    Goal-directedness: Flipping; Rotation; Extension; Completing; Cropping
middle-vertical:
    Geometry and Topology: Size
    Numbers and Counting: -
    Goal-directedness: Flipping; Movement
middle-horizontal:
    Geometry and Topology: Size
    Numbers and Counting: -
    Goal-directedness: Flipping; Movement
multi-lines:
    Geometry and Topology: Size; Color; Shape; Symmetry; Bounding Box; Hole Count
    Numbers and Counting: ALL
    Goal-directedness: ALL
4-connected-multi-color-non-background∗:
    Geometry and Topology: ALL
    Numbers and Counting: ... Component Color Counting; Components with Same Number of Colors; Component with Most Distinct Colors; Component with Fewest Distinct Colors ...
    Goal-directedness: ALL

Table 8: Abstractions with their assigned knowledge priors. "-" denotes no priors, while "ALL" indicates all priors in the corresponding category, as defined in Table 6. The superscript "∗" indicates that the 8-connected version is also applicable. The highlighted priors apply exclusively to their corresponding abstractions. For the 4/8-connected-multi-color-non-background abstractions, we present color-counting priors specific to multi-colored components, while all other non-color-counting priors follow those in Table 6.

Task 695367ec, code solution (annotated in the figure with "Generate a rule for each input size."):

    def generate_output_image(input_image):
        # Determine the color value from the input image.
        v = input_image[0][0]
        # Get the input image dimension (assumed square).
        n = len(input_image)
        # Set the output image dimensions (always 15x15 as per the transformation rule).
        out_size = 15
        # Define grid-line indices for rows and columns based on the input dimension.
        # These mappings are inferred from the examples:
        #   For 2x2 and 5x5 inputs, grid-lines occur at indices: 2, 5, 8, 11, 14.
        #   For a 3x3 input, grid-lines occur at indices: 3, 7, 11.
        #   For a 4x4 input, grid-lines occur at indices: 4, 9, 14.
        if n in (2, 5):
            grid_indices = {2, 5, 8, 11, 14}
        elif n == 3:
            grid_indices = {3, 7, 11}
        elif n == 4:
            grid_indices = {4, 9, 14}
        else:
            # Default: evenly space n grid-lines over a 15-element dimension.
            # This is a fallback if the input size is not one of the above.
            block_size = out_size // (n + 1)
            grid_indices = {(i + 1) * block_size - 1 for i in range(n)}
        # Create the 15x15 output image based on the grid drawing rule.
        output = []
        for r in range(out_size):
            if r in grid_indices:
                # This is a separator (grid-line) row; paint the entire row with v.
                output.append([v] * out_size)
            else:
                # For a pattern row, only the pixels at grid-line column positions are painted.
                row = []
                for c in range(out_size):
                    if c in grid_indices:
                        row.append(v)
                    else:
                        row.append(0)
                output.append(row)
        return output

Figure 15: ARC problem 695367ec, where RSPC and KAAR generate the same code solution that passes the training instances but fails on the test instance using GPT-o3-mini.

Task b1fc8b8e, code solution (annotated in the figure with "No objective-centric reasoning. Rules are only applied to training instances."):

    def generate_output_image(input_image):
        # Determine the border pattern based on the top row of the 6x6 input.
        # Count how many 8's appear in the first row.
        count_eights = sum(1 for pixel in input_image[0] if pixel == 8)
        # If two or more 8's are present, use the "full-active" pattern for active rows.
        if count_eights >= 2:
            active_pattern = [8, 8, 0, 8, 8]
            top_active = active_pattern
            second_active = active_pattern
        else:
            # Use the "softer-border" pattern:
            top_active = [0, 8, 0, 0, 8]
            second_active = [8, 8, 0, 8, 8]
        # The blank row (middle row) is always all zeros.
        blank = [0, 0, 0, 0, 0]
        # Construct the final 5x5 output image.
        # The output consists of:
        #   - The first active row (top_active)
        #   - The second active row (second_active)
        #   - A middle blank row (blank)
        #   - The vertical mirror of the active rows (top_active then second_active).
        output_image = [
            top_active,
            second_active,
            blank,
            top_active,
            second_active
        ]
        return output_image

Figure 16: ARC problem b1fc8b8e, where RSPC and KAAR generate the same code solution that passes the training instances but fails on the test instance using GPT-o3-mini.

Figure 17: Asymmetric relative coverage matrix of nine ARC solvers using GPT-o3-mini, showing the proportion of problems whose test instances are solved by the row solver that are also solved by the column solver. P denotes the solution plan; C and PC refer to standalone and planning-aided code generation, respectively.
Figure 18: Asymmetric relative coverage matrix of nine ARC solvers using Gemini-2.0, showing the proportion of problems whose test instances are solved by the row solver that are also solved by the column solver. P denotes the solution plan; C and PC refer to standalone and planning-aided code generation, respectively.

Figure 19: Asymmetric relative coverage matrix of nine ARC solvers using QwQ-32B, showing the proportion of problems whose test instances are solved by the row solver that are also solved by the column solver. P denotes the solution plan; C and PC refer to standalone and planning-aided code generation, respectively.

Figure 20: Asymmetric relative coverage matrix of nine ARC solvers using DeepSeek-R1-70B, showing the proportion of problems whose test instances are solved by the row solver that are also solved by the column solver.
P denotes the solution plan; C and PC refer to standalone and planning-aided code generation, respectively.

Task 358ba94e, code solution (annotated in the figure with "Copy the output matrices."):

    def generate_output_image(input_image):
        freq = {}
        for row in input_image:
            for pix in row:
                if pix != 0:
                    freq[pix] = freq.get(pix, 0) + 1
        # If there is no non-zero pixel, return a 5x5 zero matrix.
        if not freq:
            return [[0, 0, 0, 0, 0] for _ in range(5)]
        # Determine the "digit" (nonzero value) that occurs most often.
        digit = max(freq, key=freq.get)
        # Return the fixed normalized 5x5 pattern corresponding to the detected digit.
        if digit == 7:
            return [[7, 7, 7, 7, 7],
                    [7, 0, 7, 7, 7],
                    [7, 7, 7, 0, 7],
                    [7, 0, 7, 7, 7],
                    [7, 7, 7, 7, 7]]
        elif digit == 8:
            return [[8, 8, 8, 8, 8],
                    [8, 0, 8, 0, 8],
                    [8, 8, 8, 8, 8],
                    [8, 0, 8, 0, 8],
                    [8, 8, 8, 8, 8]]
        elif digit == 6:
            return [[6, 6, 6, 6, 6],
                    [6, 0, 6, 0, 6],
                    [6, 6, 6, 6, 6],
                    [6, 0, 6, 6, 6],
                    [6, 6, 6, 6, 6]]
        elif digit == 2:
            return [[2, 2, 2, 2, 2],
                    [2, 2, 2, 2, 2],
                    [2, 0, 2, 2, 2],
                    [2, 2, 2, 2, 2],
                    [2, 2, 2, 2, 2]]
        elif digit == 1:
            # For the test image the drawn digit is 1.
            # We choose a pattern that follows a similar style to the others.
            return [[1, 1, 1, 1, 1],
                    [1, 0, 1, 0, 1],
                    [1, 1, 1, 1, 1],
                    [1, 0, 1, 1, 1],
                    [1, 1, 1, 1, 1]]
        else:
            # For any other digit, simply fill a 5x5 matrix with that digit.
            return [[digit] * 5 for _ in range(5)]

Figure 21: ARC problem 358ba94e, where repeated sampling with standalone code generation produces an incorrect solution using GPT-o3-mini.

Task 15696249.

Plan (repeated sampling with planning-aided code generation): "Look at the input's rows and columns to see whether any entire row is 'uniform' (all the same number) or any entire column is uniform. Check the examples: Example 1: Input row 2 is [3, 3, 3] (uniform), so the entire input is copied horizontally. Example 2: Input column 1 is [2, 2, 2] (uniform), so the input is 'stacked'. Example 3: Input row 1 is [1, 1, 1] (uniform), so the input is copied horizontally. Example 4: Input column 1 is uniform ([3, 3, 3]), so the input is stacked vertically."

Code solution from repeated sampling with planning-aided code generation (the plan identifies correct conditions and implements code accordingly, yielding the correct solution):

    def generate_output_image(input_image):
        n = len(input_image)
        m = len(input_image[0])
        uniform_row = None
        for i in range(n):
            if all(pixel == input_image[i][0] for pixel in input_image[i]):
                uniform_row = i
                break
        uniform_col = None
        if uniform_row is None:
            for j in range(m):
                col_pixels = [input_image[i][j] for i in range(n)]
                if all(pixel == col_pixels[0] for pixel in col_pixels):
                    uniform_col = j
                    break
        output_size = 9
        output = [[0 for _ in range(output_size)] for _ in range(output_size)]
        if uniform_row is not None:
            block_row = uniform_row  # 0-indexed: 0 for first, 1 for middle, 2 for last.
            start_row = 3 * block_row
            for i in range(3):
                new_row = input_image[i] * 3  # Repeat side-by-side three times.
                output[start_row + i] = new_row
        elif uniform_col is not None:
            block_col = uniform_col  # 0-indexed column position.
            start_col = 3 * block_col
            for r in range(9):
                in_row = input_image[r % 3]
                for c in range(3):
                    output[r][start_col + c] = in_row[c]
        return output

Code solution from repeated sampling with standalone code generation (incorrect condition):

    def generate_output_image(input_image):
        output = [[0 for _ in range(9)] for _ in range(9)]

        def is_uniform(lst):
            return all(x == lst[0] for x in lst)

        # Check vertical condition: if the first column is uniform.
        first_col = [row[0] for row in input_image]
        if is_uniform(first_col):
            for r in range(9):
                in_row = input_image[r % 3]
                for j in range(3):
                    output[r][j] = in_row[j]
        else:
            # Horizontal replication:
            # Decide which horizontal band to fill based on the uniformity of the first row.
            block_row_index = 0 if is_uniform(input_image[0]) else 1
            for i in range(3):
                for block in range(3):
                    for j in range(3):
                        output[block_row_index * 3 + i][block * 3 + j] = input_image[i][j]
        return output

Figure 22: ARC problem 15696249, where repeated sampling with standalone code generation produces an incorrect solution, whereas repeated sampling with planning-aided code generation yields the correct solution using GPT-o3-mini.

Task d19f7514.

Refinement with planning-aided code generation, initial incorrect code and incorrect code after refinements (the incorrect logic persists after refinements):

    def generate_output_image(input_image):
        cropped = input_image[:6]
        rows = len(cropped)
        cols = len(cropped[0])
        # Step 2. Build the base image: every nonzero becomes 4.
        base = [[4 if pixel != 0 else 0 for pixel in row] for row in cropped]
        # The output image starts with the first (top) row of the base.
        output = [base[0][:]]

        def is_active(row):
            return any(pixel != 0 for pixel in row)

        # Process each subsequent row.
        for r in range(1, rows):
            # Start with the base row.
            new_row = base[r][:]
            if is_active(cropped[r]):
                for c in range(cols):
                    # Only consider blank cells that were not drawn originally.
                    if new_row[c] == 0 and cropped[r][c] == 0:
                        # Propagate only if the cell directly above is 4.
                        if output[r-1][c] == 4:
                            # For a right border cell, propagate automatically.
                            if c == cols - 1:
                                new_row[c] = 4
                            else:
                                left_drawn = (c - 1 >= 0 and cropped[r][c - 1] != 0)
                                right_drawn = (c + 1 < cols and cropped[r][c + 1] != 0)
                                if left_drawn or right_drawn:
                                    new_row[c] = 4
            output.append(new_row)
        return output

    def generate_output_image(input_image):
        # Step 1: Discard the lower 6 rows to work only with the top 6 rows.
        cropped = [row[:] for row in input_image[:6]]
        # Step 2: Change every nonzero pixel to 4 (convert 3's or any nonzero values to 4).
        for r in range(len(cropped)):
            for c in range(len(cropped[r])):
                if cropped[r][c] != 0:
                    cropped[r][c] = 4
        # Step 3: Vertical propagation.
        # Process from the second row (index 1) downwards.
        for r in range(1, len(cropped)):
            for c in range(len(cropped[r])):
                # If this pixel is 0 and the pixel immediately above is 4, fill it with 4.
                if cropped[r][c] == 0 and cropped[r-1][c] == 4:
                    cropped[r][c] = 4
        return cropped

Code solution from repeated sampling with planning-aided code generation (correct logic):

    def generate_output_image(input_image):
        # Split the input image into two halves.
        num_rows = len(input_image)
        mid = num_rows // 2
        top_half = []
        bottom_half = []
        # Replace nonzero pixels with 4 in both halves.
        for i in range(mid):
            top_half.append([4 if pixel != 0 else 0 for pixel in input_image[i]])
        for i in range(mid, num_rows):
            bottom_half.append([4 if pixel != 0 else 0 for pixel in input_image[i]])
        # Combine the corresponding rows of the top and bottom halves
        # using pixelwise OR (nonzero becomes 4).
        output_image = []
        for i in range(mid):
            combined_row = []
            for j in range(len(top_half[i])):
                # If either pixel in the corresponding column is nonzero
                # (i.e., equals 4), output 4; otherwise 0.
                if top_half[i][j] or bottom_half[i][j]:
                    combined_row.append(4)
                else:
                    combined_row.append(0)
            output_image.append(combined_row)
        return output_image

Figure 23: ARC problem d19f7514, where repeated sampling with planning-aided code generation produces a correct solution, whereas its refinement variant fails to refine the initial erroneous code, and the incorrect logic
persists across subsequent refinements when using GPT-o3-mini.

A.13 Prompts for LLMs

We include all prompts used by KAAR and the nine ARC solvers described in Section 3. We adopt a bash-like notation for input arguments within the prompts; for example, ${test_inputs} denotes the test input 2D matrices. A brief description of the prompts used for each solver is provided below.
•Direct generation with solution plan: Prompt 1 describes how to generate the solution plan, and Prompt 2 uses the generated plan to produce the output images.
•Direct generation with standalone code: Prompt 3 describes how to generate the code that produces the output images.
•Direct generation with planning-aided code: It first generates a solution plan using Prompt 1, then uses Prompt 4 to produce code based on the generated plan.
•Repeated sampling with solution plan: An iterative version of direct generation with solution plan; it also uses Prompts 1 and 2.
•Repeated sampling with standalone code: An iterative version of direct generation with standalone code; it also uses Prompt 3.
•Repeated sampling with planning-aided code: An iterative version of direct generation with planning-aided code; it also uses Prompts 1 and 4.
•Refinement with solution plan: Prompt 5 describes the process of refining the generated solution plan with the validation samples. It uses Prompts 1 and 2 to generate the initial plan and the resulting image.
•Refinement with standalone code: Prompt 6 describes the process of refining the generated code with the validation samples. It uses Prompt 3 to produce the initial code solution.
•Refinement with planning-aided code: Prompt 7 describes the process of refining the generated plan and code with the validation samples. It uses Prompts 1 and 4 to generate the initial plan and to produce the initial code guided by the plan, respectively.
•KAAR: Prompt 8 describes the augmentation of objectness priors. Prompts 9 and 10 introduce the augmentation of geometry and topology priors, encoded as component attributes and relations, respectively. Prompt 11 outlines the augmentation of numbers and counting priors. Prompts 12 and 13 describe action selection and target component identification in the process of augmenting goal-directedness priors. For the implementation details of each action's prompts, please refer to our code.

Prompt 1: Direct generation with solution plan - solution plan generation.
================================ System ================================
You are an expert in analyzing grid-based image processing tasks. Your objective is to derive a text transformation plan (not Python code) from each given input-output image pair (both represented as 2D matrices), and then apply this plan to generate output image(s), represented as a 2D matrix, based on the given test input image(s) (2D matrix). Ensure that the derived plan generalizes across different cases while preserving consistency with the observed transformations.
================================= User =================================
The input data consists of a few pairs of input and output images, where the left image in each pair represents the input, and the right image represents the corresponding output. Each image can be represented as a 2D matrix:
${matrix}
Please note that each number in the matrix corresponds to a pixel, and its value represents the color. Derive a text transformation plan (not Python code) that maps each given input image (2D matrix) to its corresponding output image (2D matrix). Ensure that the plan generalizes across different cases and the test input image(s) (2D matrix) while maintaining consistency with the observed transformations.
The test input image(s): ${test_inputs}

Prompt 2: Direct generation with solution plan - output image(s) generation from the plan.
================================ System ================================
You are an expert in analyzing grid-based image processing tasks. Your objective is to generate output image(s), represented as a 2D matrix, based on the given input images (2D matrix) and a derived text transformation plan.
================================= User =================================
Please generate the output image(s) as a 2D matrix (not Python code) based on the given input image(s) (2D matrix) and the text transformation plan. Output only the test output image(s) in 2D matrix format (not Python code). For each test input image, start with [Start Output Image] and end with [End Output Image]. For example, if there is one test input image, the output image should be:
[Start Output Image]
[[0,0,0], [0,0,0], [0,0,0]]
[End Output Image]
If there are multiple (2) test input images, the output images should be output as:
[Start Output Image]
[[0,0,0], [0,0,0], [0,0,0]]
[End Output Image]
[Start Output Image]
[[1,1,1], [1,1,1], [1,1,1]]
[End Output Image]
The test input image(s): ${test_inputs}

Prompt 3: Direct generation with standalone code.
================================ System ================================
You are an expert in analyzing grid-based image processing tasks.
Your goal is to generate Python code that produces output image(s), represented as a 2D matrix, based on the given input image(s) (2D matrix).
================================= User =================================
The input data consists of a few pairs of input and output images, where the left image in each pair represents the input and the right image represents the corresponding output. Each image can be represented as a 2D matrix:
${matrix}
The test input image(s): ${test_inputs}
Please note that each number in the matrix corresponds to a pixel, and its value represents the color. Generate a Python script to map each input image (2D matrix) to the corresponding output image (2D matrix). Ensure that the Python script generalizes across different cases and test input image(s) while maintaining consistency with the observed input-output image pairs. Please output the Python program, starting with [Start Program] and ending with [End Program]. Include an assert statement with the function signature to verify that the generated output matches the expected result, starting with [Assert Statement]. Use placeholders like input_image and output_image for the variables representing the input and output images.
For example:
[Start Program]
def generate_output_image(input_image):
    rows = len(input_image)
    cols = len(input_image[0])
    def dfs(r, c):
        """Depth-first search to mark all 4-connected '1's to '2's."""
        if r < 0 or r >= rows or c < 0 or c >= cols or input_image[r][c] != 1:
            return
        # Change the current component from 1 to 2
        input_image[r][c] = 2
        # Explore neighbors (up, down, left, right)
        dfs(r - 1, c)  # Up
        dfs(r + 1, c)  # Down
        dfs(r, c - 1)  # Left
        dfs(r, c + 1)  # Right
    # Traverse the image to find all components with '1'
    for r in range(rows):
        for c in range(cols):
            if input_image[r][c] == 1:
                dfs(r, c)
    return input_image
[End Program]
[Assert Statement]
assert generate_output_image(input_image) == output_image
Please note, the assert statement should strictly follow the provided format, and the output image should be represented in list format! Please note, the script should not include an if __name__ == "__main__": block.

Prompt 4: Direct generation with planning-aided code - code generation based on the generated plan.
================================ System ================================
You are an expert in analyzing grid-based image processing tasks. Your goal is to generate Python code that produces output image(s) represented as a 2D matrix, based on the given input image(s) (2D matrix). This code should be generated using a text transformation plan inferred from a set of input-output image pairs (both represented as 2D matrices).
================================= User =================================
Generate a Python script based on your text transformation plan to map the input image (2D matrix) to the output image (2D matrix). Please output the Python program, starting with [Start Program] and ending with [End Program]. Include an assert statement with the function signature to verify that the generated output matches the expected result, starting with [Assert Statement]. Use placeholders like input_image and output_image for the variables representing the input and output images.
For example:
[Start Program]
def generate_output_image(input_image):
    rows = len(input_image)
    cols = len(input_image[0])
    def dfs(r, c):
        """Depth-first search to mark all 4-connected '1's to '2's."""
        if r < 0 or r >= rows or c < 0 or c >= cols or input_image[r][c] != 1:
            return
        # Change the current component from 1 to 2
        input_image[r][c] = 2
        # Explore neighbors (up, down, left, right)
        dfs(r - 1, c)  # Up
        dfs(r + 1, c)  # Down
        dfs(r, c - 1)  # Left
        dfs(r, c + 1)  # Right
    # Traverse the image to find all components with '1'
    for r in range(rows):
        for c in range(cols):
            if input_image[r][c] == 1:
                dfs(r, c)
    return input_image
[End Program]
[Assert Statement]
assert generate_output_image(input_image) == output_image
Please note, the assert statement should strictly follow the provided format, and the output image should be represented in list format! Please note, the script should not include an if __name__ == "__main__": block.

Prompt 5: Refinement with solution plan - plan refinement.
================================ System ================================
As an expert in analyzing grid-based image processing tasks, your objective is to refine your solution plan based on the provided feedback.
================================= User =================================
The problem description:
[start problem description]
The input data consists of a few pairs of input and output images, where the left image in each pair represents the input, and the right image represents the corresponding output. Each image can be represented as a 2D matrix:
${matrix}
Please note that each number in the matrix corresponds to a pixel, and its value represents the color.
[end problem description]
The INCORRECT text transformation plan fails to solve some example training input and output pairs in the above problem!
[start incorrect transformation plan]
${plan}
[end incorrect transformation plan]
The incorrect output(s) generated by the incorrect plan:
[start incorrect output]
${incorrect_output}
[end incorrect output]
The generated correct output(s):
[start correct output]
${correct_output}
[end correct output]
Please analyze the incorrect reasoning step-by-step, and then generate the revised correct transformation plan (text only), starting with [Start Revised Transformation Plan] and ending with [End Revised Transformation Plan]. Ensure that the revised transformation plan generalizes across different cases and the test input image(s), while maintaining consistency with the observed transformations.

Prompt 6: Refinement with standalone code - code refinement.
================================ System ================================
As an expert in analyzing grid-based image processing tasks, your objective is to refine your program based on the provided feedback.
================================= User =================================
The problem description:
[start problem description]
The input data consists of a few pairs of input and output images, where the left image in each pair represents the input, and the right image represents the corresponding output. Each image can be represented as a 2D matrix:
${matrix}
Please note that each number in the matrix corresponds to a pixel, and its value represents the color.
[end problem description]
The generated incorrect program fails to solve some example training input and output pairs in the above problem!
[start incorrect program]
${code}
[end incorrect program]
The incorrect output(s) generated by the incorrect program:
[start incorrect output]
${incorrect_output}
[end incorrect output]
The generated correct output(s):
[start correct output]
${correct_output}
[end correct output]
Please analyze the incorrect reasoning step-by-step, and then generate the revised program (Python program only), starting with [Start Revised Program] and ending with [End Revised Program]. Ensure that the revised program generalizes across different cases and the test input image(s), while maintaining consistency with the observed input and output image pairs. Please include an assert statement with the function signature to verify that the generated output matches the expected result, starting with [Assert Statement]. Use placeholders like input_image and output_image for the variables representing the input and output images.
For example:
[Start Revised Program]
def generate_output_image(input_image):
    rows = len(input_image)
    cols = len(input_image[0])
    def dfs(r, c):
        """Depth-first search to mark all 4-connected '1's to '2's."""
        if r < 0 or r >= rows or c < 0 or c >= cols or input_image[r][c] != 1:
            return
        # Change the current component from 1 to 2
        input_image[r][c] = 2
        # Explore neighbors (up, down, left, right)
        dfs(r - 1, c)  # Up
        dfs(r + 1, c)  # Down
        dfs(r, c - 1)  # Left
        dfs(r, c + 1)  # Right
    # Traverse the image to find all components with '1'
    for r in range(rows):
        for c in range(cols):
            if input_image[r][c] == 1:
                dfs(r, c)
    return input_image
[End Revised Program]
[Assert Statement]
assert generate_output_image(input_image) == output_image
Please note, the assert statement should strictly follow the provided format, and the output image should be represented in list format! Please note, the script should not include an if __name__ == "__main__": block.

Prompt 7: Refinement with planning-aided code - refinement on both generated plan and code.
================================ System ================================
As an expert in analyzing grid-based image processing tasks, your objective is to refine your transformation plan and program based on the provided feedback.
================================= User =================================
The problem description:
[start problem description]
The input data consists of a few pairs of input and output images, where the left image in each pair represents the input, and the right image represents the corresponding output. Each image can be represented as a 2D matrix:
${matrix}
Please note that each number in the matrix corresponds to a pixel, and its value represents the color.
[end problem description]
The generated incorrect transformation plan and program fail to solve some example training input and output pairs in the above problem!
[start incorrect transformation plan]
${plan}
[end incorrect transformation plan]
[start incorrect program]
${code}
[end incorrect program]
The incorrect output(s) generated by the incorrect transformation plan and program:
[start incorrect output]
${incorrect_output}
[end incorrect output]
The generated correct output(s):
[start correct output]
${correct_output}
[end correct output]
Please analyze the incorrect reasoning step-by-step, and then generate the revised transformation plan (text only) and program (Python program only). For the revised transformation plan, start with [Start Revised Transformation Plan] and end with [End Revised Transformation Plan]. Ensure that the revised transformation plan generalizes across different cases and the test input image(s), while maintaining consistency with the observed transformations. For the revised Python program, start with [Start Revised Program] and end with [End Revised Program].
Ensure that the revised program generalizes across different cases and the test input image(s), while maintaining consistency with the observed input and output image pairs. For the revised Python program, please include an assert statement with the function signature to verify that the generated output matches the expected result, starting with [Assert Statement]. Use placeholders like input_image and output_image for the variables representing the input and output images.
For example:
[Start Revised Program]
def generate_output_image(input_image):
    rows = len(input_image)
    cols = len(input_image[0])
    def dfs(r, c):
        """Depth-first search to mark all 4-connected '1's to '2's."""
        if r < 0 or r >= rows or c < 0 or c >= cols or input_image[r][c] != 1:
            return
        # Change the current component from 1 to 2
        input_image[r][c] = 2
        # Explore neighbors (up, down, left, right)
        dfs(r - 1, c)  # Up
        dfs(r + 1, c)  # Down
        dfs(r, c - 1)  # Left
        dfs(r, c + 1)  # Right
    # Traverse the image to find all components with '1'
    for r in range(rows):
        for c in range(cols):
            if input_image[r][c] == 1:
                dfs(r, c)
    return input_image
[End Revised Program]
[Assert Statement]
assert generate_output_image(input_image) == output_image
Please note, the assert statement should strictly follow the provided format, and the output image should be represented in list format! Please note, the script should not include an if __name__ == "__main__": block.

Prompt 8: Objectness priors augmentation
================================ System ================================
You are an expert in grid-based image analysis.
================================= User =================================
The training instances consist of several pairs of input and output images, where the left image in each pair represents the input and the right image represents the corresponding output. Please note that the test instance(s) only contain input image(s). Each image is represented as a 2D matrix:
${matrix}
Please note that each number in the matrix corresponds to a pixel and its value represents the color. We treat the color represented by the number ${background_color} as the background color.
${abstraction_rule}
The components in each input and output image pair are as follows:
${component_description}

Prompt 9: Geometry and topology priors augmentation - component attributes
================================ System ================================
You are an expert in geometry and topology analysis. Below is a summary of component attributes, including: Size (Width and Height); Color; Shape; Symmetry; Bounding Box; Hole Count; Nearest Boundary.
================================= User =================================
${geometry_and_topology_priors_attributes}

Prompt 10: Geometry and topology priors augmentation - component relations
================================ System ================================
You are an expert in geometry and topology analysis. Below is a summary of component relations, including: Different from/identical to other components; Inclusion; Touching or not touching other components; Spatial relations.
================================= User =================================
${geometry_and_topology_priors_relations}

Prompt 11: Numbers and counting priors augmentation
================================ System ================================
You are an expert in numbers and counting analysis. Below is a summary of component statistics, including: Symmetry numerical summary; Size numerical summary; Color numerical summary; Shape numerical summary; Hole counting summary.
================================= User =================================
${numbers_and_couting_priors}

Prompt 12: Goal-directedness priors augmentation - action selection
================================ System ================================
You are an expert in analyzing and categorizing grid-based image tasks.
================================= User =================================
Please determine which category or categories this task belongs to. Please select from the following:
1. color change: color change involves modifying the value of a component; the component size and position never change.
2. movement: movement involves shifting the position of a component to a new location within the image; the component size never changes.
3. extension: extending involves expanding the boundaries of a component to increase its size or reach within the image; the component size always changes.
4. completing: completing an image involves filling in missing or incomplete parts of a component to achieve a coherent and fully formed image.
5. resizing: resizing involves altering the dimensions of a component by expanding or shrinking its size within the image.
6. selecting: selecting involves identifying and isolating a specific component within the image as the output component; the component size and color never change.
7. copying: copying involves duplicating a component and either placing the duplicate in a new location or replacing the existing component within the image.
8. flipping: flipping involves mirroring a component along a specified axis to reverse its orientation within the image.
9. rotation: rotation involves turning a component around a fixed point or center by a specified angle within the image.
10. cropping: cropping involves cutting out a specific portion of a component.
Please select the most suitable one or multiple categories from the provided list that best describe the task. Format your response by starting with [start category] and ending with [end category], numbering each category selected. For example, if the task belongs only to "color change", your response should be:
[start category]
1. color change
[end category]
If the task belongs to both "selecting" and "extension", your response should be:
[start category]
1. selecting
2. extension
[end category]

Prompt 13: Goal-directedness priors augmentation - target component identification
================================ System ================================
You are an expert in analyzing grid-based image tasks, specifically in ${action} components.
================================= User =================================
If this task involves ${action}:
1. Begin by identifying WHICH COMPONENTS are to be ${action} in all input images (training and test pairs).
- Refer to these components as TARGET components (e.g., component 1 in the first input image, component 2 and component 3 in the second input image, etc.).
- List ALL target components in each training and test input image.
- For EACH target component, provide:
- Attribute analysis result
- Relation analysis result
- Numerical analysis result
2. Determine the CONDITIONS used to select these TARGET components for ${action} from each training and test input image.
- These conditions must be based on common priorities across all targeted components and must differ from the unselected components.
- For example: the size of all target components might be equal to 3 while the size of the unselected components is not 3.
2.1. Analyze whether these conditions are EMPTY or not.
2.2. Evaluate if these conditions are derived from attribute analysis, including:
2.2.1. Color
2.2.2. Size
2.2.3. Shape
2.2.4. Width
2.2.5. Height
2.2.6. The number of holes
2.2.7. Bounding box
2.2.8. Symmetry
2.2.9. Nearest boundary
2.3. Evaluate if these conditions are derived from relation analysis, including:
2.3.1. Relative position with other components
2.3.2. Touching with other components
2.3.3.
Whether they differ from or are identical with other components
2.3.4. Enclosure of other components
2.4. Evaluate if these conditions are derived from numerical analysis, including:
2.4.1. Symmetry numerical analysis
2.4.2. Size numerical analysis
2.4.3. Color numerical analysis
2.4.4. Shape numerical analysis
2.4.5. Hole counting analysis
You must evaluate each condition ONE by ONE and determine the best conditions.
Note:
- The conditions MUST work for ALL training and test input and output image pairs.
- Conditions CANNOT come from the output images!
- A condition can be EMPTY.
- If a condition is based on numerical features (e.g., size (width and height), or the number of holes), you may use the operators =, <, >, >=, or <=.
- For cropping or selecting tasks, consider using a bounding box to extract each component.
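All of the prompts above delimit machine-readable output with bracketed markers such as [Start Program] / [End Program] and [start category] / [end category]. As an illustration of how such responses can be post-processed, here is a minimal marker parser; the helper name and regex are our own sketch, not part of the paper's released code:

```python
import re

def extract_block(response: str, start: str, end: str) -> str:
    """Return the text between the first `start` and `end` markers."""
    pattern = re.escape(start) + r"(.*?)" + re.escape(end)
    match = re.search(pattern, response, flags=re.DOTALL)
    if match is None:
        raise ValueError(f"no block delimited by {start!r} and {end!r}")
    return match.group(1).strip()

# Example: recover the selected categories from a Prompt 12-style response.
reply = "[start category]\n1. selecting\n2. extension\n[end category]"
categories = extract_block(reply, "[start category]", "[end category]").splitlines()
```

The same helper could recover program text from a [Start Program] block before running the generated code against the validation samples.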
arXiv:2505.17485v1 [cs.CL] 23 May 2025

keepitsimple at SemEval-2025 Task 3: LLM-Uncertainty based Approach for Multilingual Hallucination Span Detection

Saketh Reddy Vemula (IIIT Hyderabad, saketh.vemula@research.iiit.ac.in)
Parameswari Krishnamurthy (IIIT Hyderabad, param.krishna@iiit.ac.in)

Abstract

Identification of hallucination spans in black-box language model generated text is essential for real-world applications. A recent attempt in this direction is SemEval-2025 Task 3, Mu-SHROOM, a Multilingual Shared Task on Hallucinations and Related Observable Overgeneration Errors. In this work, we present our solution to this problem, which capitalizes on the variability of stochastically sampled responses in order to identify hallucinated spans. Our hypothesis is that if a language model is certain of a fact, its sampled responses will be uniform, while hallucinated facts will yield different and conflicting results. We measure this divergence through entropy-based analysis, allowing for accurate identification of hallucinated segments. Our method does not depend on additional training and is hence cost-effective and adaptable. In addition, we conduct extensive hyperparameter tuning and perform error analysis, giving us crucial insights into model behavior.[1]

1 Introduction

Hallucination is a situation where Large Language Models (LLMs) produce outputs that are inconsistent with real-world facts or unverifiable, posing challenges to the trustworthiness of AI systems (Huang et al., 2025). Hallucination detection is the process of identifying the sections of text where a model generates content that is untrue, misleading, or unverifiable by any source. As LLMs are used to generate massive amounts of text across applications, it is essential to make sure their output is accurate (Bommasani et al., 2022).
Undetected hallucinations can propagate misinformation, lower confidence in AI systems, and have severe implications in applications such as healthcare and law. Identification of the particular spans of hallucinated text, as opposed to merely marking whole outputs, is critical for real-world application, as it enables accurate corrections and an improved understanding of where and why a model hallucinates.

[1] The code is available at https://github.com/SakethReddyVemula/semeval-2025_Mu-SHROOM

Figure 1: Architecture diagram describing the proposed method for detecting hallucination spans (Manakul et al., 2023).

In this paper, we describe an LLM-uncertainty based method for hallucination span detection. Our hypothesis builds upon Manakul et al. (2023): if an LLM is certain of a given concept, stochastically sampled responses are likely to be similar and contain consistent facts, whereas for hallucinated facts these sampled responses are likely to diverge and contradict one another. We utilize entropy information to identify the precise spans of hallucinated text using sampled responses (Xiao and Wang, 2021), allowing us to effectively identify inconsistencies that signal hallucination. Our approach works well in zero-resource and black-box environments without any extra training. In addition, since our approach is language-independent, it works equally well across a variety of languages. Our model ranks 18th on average among over 40 submissions, achieving its best rank of 10th in Chinese (Mandarin).[2]

2 Related Work

The problem of hallucination detection in Large Language Models (LLMs) has been a focus of much attention recently. Hallucinations are defined as cases where LLMs produce outputs that sound plausible but are factually false or
unsupported, compromising their validity for real-world usage. Farquhar et al. (2024) proposed a technique employing semantic entropy to identify such confabulations through uncertainty estimation in the semantic space of model outputs. This method calculates uncertainty at the meaning level as opposed to actual word sequences, and allows for recognizing arbitrary and poor-quality generations across different datasets and tasks without explicit domain knowledge.

Following this, Kossen et al. (2024) introduced Semantic Entropy Probes (SEPs), which estimate semantic entropy directly from a single generation's hidden states. SEPs are computationally efficient, avoiding repeated model samplings at inference time. Their experiments showed that SEPs perform well at hallucination detection and generalize to out-of-distribution test sets, indicating that model hidden states contain semantic uncertainty relevant to hallucinations.

In parallel, Manakul et al. (2023) introduced SelfCheckGPT, a zero-resource black-box method for fact-checking LLM responses independent of external databases. The technique exploits the consistency of stochastically generated responses, assuming that when an LLM has knowledge about a concept its sampled responses will be consistent and similar in content, while hallucinated facts result in diverse and contradictory responses. Their results show that SelfCheckGPT efficiently identifies non-factual sentences and evaluates the factuality of passages, providing an efficient solution for situations where model internals are not available.

These studies together highlight the need for effective and efficient techniques for hallucination detection in LLMs. Methods based on semantic entropy, model hidden states, and response consistency provide promising directions for improving the reliability of LLM outputs in different applications.
2 https://mushroomeval.pythonanywhere.com/submission/

3 Task Description

Mu-SHROOM3 (Multilingual Shared-task on Hallucinations and Related Observable Overgeneration Mistakes) focuses on detecting hallucinated spans in text output from instruction-tuned LLMs. The task includes 14 languages: Arabic (Modern Standard), Basque, Catalan, Chinese (Mandarin), Czech, English, Farsi, Finnish, French, German, Hindi, Italian, Spanish, and Swedish (Vázquez et al., 2025).

Evaluation is conducted separately for each language and is based on the following two character-level metrics:

• Intersection-over-Union (IoU): Measures the overlap between predicted and reference hallucination spans:

IoU = \frac{|P \cap G|}{|P \cup G|}

where P is the set of predicted hallucination characters and G is the set of gold reference hallucination characters.

• Probability Correlation (Cor): Evaluates how well the predicted hallucination probabilities match empirical annotator probabilities:

\rho = \mathrm{corr}(\hat{p}, p)

where \hat{p} are the predicted probabilities and p are the human-annotated probabilities.

The data format is described in Table 1. The hard_labels are used for intersection-over-union accuracy, while the soft_labels are used for correlation evaluation. Table 5 shows the number of samples in the task dataset.

Table 1: Data fields used from the Mu-SHROOM dataset.

Field              Description
lang               Language of the text.
model_input        Input query provided to the LLM.
model_output_text  Generated text from the LLM.
hard_labels        List of pairs (s_i, e_i) representing hallucination spans (start-inclusive, end-exclusive).
soft_labels        List of dictionaries, each containing:
                   • start: start index of the hallucination span.
                   • end: end index of the hallucination span.
                   • prob: probability of the span being a hallucination.

4 Methodology

In this section we describe our methodology for detecting hallucination spans. Given a generated text G and stochastically sampled responses S = {s'_1, s'_2, ..., s'_n} from models, our method predicts hallucination spans as follows.

Given a generated text G, we segment it into overlapping spans using a sliding-window approach. Each span s_i is extracted using a window size w and stride t such that:

s_i = G[(i-1)t : (i-1)t + w]    (1)

for all valid indices i with step size t. This ensures each part of the text is analyzed with sufficient context.

For each span s_i, we retrieve the most similar spans from the set of sampled responses S = {s'_1, s'_2, ..., s'_n} using a lexical matching function based on sequence similarity. The matching spans M_i are defined as:

M_i = \{ s'_j \in S \mid \mathrm{Similarity}(s_i, s'_j) > \tau \}    (2)

where \tau is a similarity threshold.

We compute the hallucination score for each span s_i using a combination of semantic entropy, lexical entropy, and frequency-based scoring.

Semantic Entropy. To measure semantic inconsistency, we compute the cosine similarity between the span s_i and each matched span s'_j, using a pretrained sentence embedding model:

\mathrm{sim}(s_i, s'_j) = \frac{E(s_i) \cdot E(s'_j)}{|E(s_i)|\,|E(s'_j)|}    (3)

where E(s) denotes the embedding representation of span s. The probability distribution over similarities is given by:

P(s'_j \mid s_i) = \frac{e^{\mathrm{sim}(s_i, s'_j)}}{\sum_k e^{\mathrm{sim}(s_i, s'_k)}}    (4)

The semantic entropy is then computed as:

H_s(s_i) = -\sum_{s'_j \in M_i} P(s'_j \mid s_i) \log P(s'_j \mid s_i)    (5)

Higher entropy values indicate greater semantic inconsistency.

Lexical Entropy. To measure lexical variability, we compute the Shannon entropy over the frequency distribution of matched spans:

H_l(s_i) = -\sum_{s'_j \in M_i} p(s'_j) \log p(s'_j)    (6)

where p(s'_j) is the probability of span s'_j appearing in the matched set M_i.

Frequency Score. The frequency-based confidence score is computed as:

F(s_i) = 1 - \frac{|M_i|}{|S|}    (7)

where a lower |M_i| suggests fewer matches and a higher likelihood of hallucination.

3 https://helsinki-nlp.github.io/shroom/
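The metrics of Section 3 and the scoring steps of Equations (1), (2), (6), and (7) can be sketched in plain Python. This is a minimal sketch with our own function names: `difflib`'s ratio stands in for the paper's unspecified lexical similarity function, and the semantic entropy of Eqs. (3)-(5) is omitted because it requires a pretrained sentence-embedding model.

```python
import math
from difflib import SequenceMatcher


def char_iou(pred_spans, gold_spans):
    """Section 3 IoU: character-level overlap of (start, end) spans, end-exclusive."""
    pred = {c for s, e in pred_spans for c in range(s, e)}
    gold = {c for s, e in gold_spans for c in range(s, e)}
    if not pred and not gold:
        return 1.0  # assumption: both empty counts as perfect agreement
    return len(pred & gold) / len(pred | gold)


def spans(text, w=5, t=3):
    """Eq. (1): overlapping spans from a sliding window of size w and stride t."""
    return [text[i:i + w] for i in range(0, max(len(text) - w + 1, 1), t)]


def matched_spans(span, samples, tau=0.6):
    """Eq. (2): spans of the sampled responses whose similarity exceeds tau."""
    return [cand for s in samples for cand in spans(s)
            if SequenceMatcher(None, span, cand).ratio() > tau]


def lexical_entropy(matches):
    """Eq. (6): Shannon entropy of the matched-span frequency distribution."""
    if not matches:
        return 0.0
    n = len(matches)
    counts = {}
    for m in matches:
        counts[m] = counts.get(m, 0) + 1
    return -sum(c / n * math.log(c / n) for c in counts.values())


def frequency_score(matches, samples):
    """Eq. (7): 1 - |M_i| / |S|; fewer matches suggest hallucination."""
    return 1 - len(matches) / len(samples)
```

For example, `lexical_entropy` is zero when every matched span is identical (a consistent, likely factual span) and grows as the matches diverge.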
The final hallucination score for each span s_i is computed as a weighted sum:

S_h(s_i) = \alpha H_s(s_i) + \beta H_l(s_i) + \gamma F(s_i)    (8)

where \alpha, \beta, \gamma are hyperparameters controlling the contribution of each component. For our submission, we heuristically choose \alpha = 0.4, \beta = 0.4, and \gamma = 0.2. We plan to tune these parameters in future work.

To ensure hallucination spans align with meaningful text units, we refine span boundaries using:

• Token boundaries: adjusting span edges to align with word boundaries.

• Phrase boundaries: ensuring spans do not split meaningful phrases.

• Named entity boundaries: avoiding incorrect segmentation of entity names.

The refined spans are selected by maximizing the entropy gradient at span boundaries.

Detected hallucination spans that overlap significantly are merged into a single span with an updated score:

S'_h(s) = \frac{\sum_{i \in O} S_h(s_i) \cdot |s_i|}{\sum_{i \in O} |s_i|}    (9)

where O is the set of overlapping spans. The final output is a set of hallucination spans H:

H = \{ (s_i, S_h(s_i)) \mid S_h(s_i) > \lambda \}    (10)

where \lambda is a threshold for hallucination detection.

5 Experiments

5.1 Models

Our experiments use the Llama-3.2-3B-Instruct model (Dubey et al., 2024), a 3-billion-parameter instruction-tuned language model. We generate responses using a temperature of 0.1 to maintain relatively deterministic outputs while allowing for some diversity, along with top-p (nucleus) sampling set to 0.9 and top-k sampling with k = 50. To avoid
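Equations (8)-(10) reduce to a few lines of code. The sketch below uses our own (start, end, score) span representation and a left-to-right merging order, details the paper does not fully specify.

```python
def hallucination_score(h_sem, h_lex, f, alpha=0.4, beta=0.4, gamma=0.2):
    """Eq. (8): weighted sum of semantic entropy, lexical entropy, frequency score."""
    return alpha * h_sem + beta * h_lex + gamma * f


def merge_overlapping(span_list):
    """Eq. (9): merge overlapping (start, end, score) spans; the merged score
    is the length-weighted average of the member scores."""
    merged = []
    for start, end, score in sorted(span_list):
        if merged and start < merged[-1][1]:
            ps, pe, psc = merged[-1]
            w1, w2 = pe - ps, end - start  # lengths used as weights
            merged[-1] = (ps, max(pe, end), (psc * w1 + score * w2) / (w1 + w2))
        else:
            merged.append((start, end, score))
    return merged


def select_spans(span_list, lam=0.6):
    """Eq. (10): the final output H keeps spans whose score exceeds lambda."""
    return [s for s in span_list if s[2] > lam]
```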
repetitive patterns of text, we use a 3-gram repetition penalty. We produce 20 candidate responses with a maximum of 64 tokens per input query. The model is executed in mixed precision (FP16) to save memory, with memory consumption limited to 6 GB of GPU memory and 8 GB of CPU memory via gradient offloading.

5.2 Hyperparameter Tuning

Given the various hyperparameters in our methodology, we perform extensive hyperparameter tuning on the validation split for each language. We observe that, while the same set of hyperparameters performs best for many languages, there are a few languages with notable differences. We summarize our hyperparameter choices in Table 2.

Table 2: Hyperparameters chosen for different languages. Notation: w: window size, t: stride, λ: entropy threshold, MSL: minimum span length, BT: boundary threshold.

Language  w  t  λ    MSL  BT
arabic    4  2  0.6  3    0.3
german    4  2  0.6  3    0.3
english   5  3  0.5  3    0.3
spanish   4  2  0.6  3    0.3
finnish   4  3  0.6  3    0.3
french    4  2  0.6  3    0.3
hindi     5  2  0.6  3    0.3
italian   4  2  0.7  3    0.3
swedish   4  2  0.5  3    0.3
chinese   7  3  0.6  3    0.3

6 Results and Analysis

Our submission demonstrated consistent performance across multiple languages, as shown in Table 3, achieving similar Intersection over Union (IoU) and Correlation (Cor) scores across languages. The system performed particularly well in Basque (IoU: 0.4193, Cor: 0.3525), Finnish (IoU: 0.4554, Cor: 0.3323), Italian (IoU: 0.4009, Cor: 0.386), and Hindi (IoU: 0.3598, Cor: 0.3508), indicating its effectiveness in identifying and handling hallucinated text. Similarly, for languages such as English (IoU: 0.3466, Cor: 0.2104), German (IoU: 0.3651, Cor: 0.2199), and Chinese (IoU: 0.4703, Cor: 0.1601), the system maintained consistent performance, demonstrating its adaptability to different linguistic structures.
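The decoding setup described in Section 5.1 maps directly onto Hugging Face `generate()` keyword arguments. This is a hedged sketch: the parameter names follow the transformers API, and `sample_responses` is our own illustrative wrapper, not the authors' code.

```python
# Generation settings from Section 5.1, expressed as transformers
# `generate()` keyword arguments.
GEN_KWARGS = dict(
    do_sample=True,
    temperature=0.1,          # near-deterministic sampling with some diversity
    top_p=0.9,                # nucleus sampling
    top_k=50,
    no_repeat_ngram_size=3,   # 3-gram repetition penalty
    num_return_sequences=20,  # 20 candidate responses per query
    max_new_tokens=64,
)


def sample_responses(model, tokenizer, query):
    """Sample stochastic responses for one input query (illustrative wrapper)."""
    inputs = tokenizer(query, return_tensors="pt")
    outputs = model.generate(**inputs, **GEN_KWARGS)
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
```

With `AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct", torch_dtype=torch.float16)`, this wrapper would produce the 20 sampled responses that feed the entropy computations of Section 4.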
The findings reveal that our model is well suited to detecting hallucinations across a wide variety of languages, including those with intricate morphological and syntactic features. The high correlation scores across numerous languages confirm that our system makes predictions that correlate well with the ground-truth annotations. Further, the high IoU values verify its capacity to localize hallucinated text accurately, making it a trustworthy model for addressing hallucination in multilingual environments.

6.1 Error Analysis

Table 4 reports a sample data point from the test split where our model successfully detects the hallucination span. However, it also labels other spans as hallucinated due to noise in the generated responses. This tendency toward false positives poses a significant challenge and must be handled. We plan to pinpoint why this happens and potentially fix it in future work.

7 Conclusion

In this paper, we presented an LLM-uncertainty-based method for hallucination span detection that works equally well across multiple languages. By using entropy-based uncertainty measures from sampled responses, our approach accurately detects hallucinated spans without the need for further training. Our model performed competitively across languages, ranking highly in Basque, Italian, and Hindi. The experiments emphasize the strength of our method,
as they show its effectiveness in coping with varied linguistic forms and in yielding precise hallucination span detection. Our error analysis also sheds light on typical failure instances, presenting opportunities for additional refinements.

Table 3: Performance comparison across different languages. IoU (⇑): Intersection over Union. Cor (⇑): Correlation. Baseline (neural) is the baseline provided in the participant kit, while Baseline (mark none) and Baseline (mark all) label no characters and all characters as hallucinated, respectively. ⇑ denotes higher is better.

Language              Arabic          Catalan         Czech           German          English
System                IoU     Cor     IoU     Cor     IoU     Cor     IoU     Cor     IoU     Cor
Baseline (neural)     0.0418  0.119   0.0524  0.0645  0.0957  0.0533  0.0318  0.1073  0.031   0.119
Baseline (mark none)  0.0467  0.0067  0.08    0.06    0.13    0.1     0.0267  0.0133  0.0325  0
Baseline (mark all)   0.3614  0.0067  0.2423  0.06    0.2632  0.1     0.3451  0.0133  0.3489  0
Our Submission        0.3631  0.2499  0.3161  0.3377  0.2895  0.2423  0.3651  0.2199  0.366   0.2104

Language              Spanish         Basque          Farsi           Finnish         French
System                IoU     Cor     IoU     Cor     IoU     Cor     IoU     Cor     IoU     Cor
Baseline (neural)     0.0724  0.0359  0.0208  0.1004  0.0001  0.1078  0.0042  0.0924  0.0022  0.0208
Baseline (mark none)  0.0855  0.0132  0.0101  0       0       0.01    0       0       0       0
Baseline (mark all)   0.1853  0.0132  0.3671  0       0.2028  0.01    0.4857  0       0.4543  0
Our Submission        0.2131  0.2335  0.4193  0.3525  0.3132  0.357   0.4554  0.3323  0.4651  0.2756

Language              Hindi           Italian         Swedish         Chinese
System                IoU     Cor     IoU     Cor     IoU     Cor     IoU     Cor
Baseline (neural)     0.0029  0.1429  0.0104  0.08    0.0308  0.0968  0.0236  0.0884
Baseline (mark none)  0       0       0       0       0.0204  0.0136  0.02    0
Baseline (mark all)   0.2711  0       0.2826  0       0.5373  0.0136  0.4772  0
Our Submission        0.3598  0.3508  0.4009  0.386   0.3967  0.217   0.4703  0.1601

Table 4: Hallucinated spans highlighted in red for sample data points in some languages. Predicted refers to the hallucinated spans predicted by our system.

Language: Italian
Input text: Chi ha doppiato in italiano l'attrice Catherine McCormack nel film Il sarto di Panama?
Ground truth: L'attrice Catherine McCormack è stata doppiata in italiano da Elisa Di Stefano nel film "Il sarto di Panama"
Predicted: L'attrice Catherine McCormack è stata doppiata in italiano da Elisa Di Stefano nel film "Il sarto di Panama"

Language: English
Input text: When did Cape Verde first participate in the Summer Olympics?
Ground truth: Cape Verde participated in its first Summer Olympic games in 2000 in Sydney, Australia.
Predicted: Cape Verde participated in its first Summer Olympic games in 2000 in Sydney, Australia.

Language: Swedish
Input text: Vad heter slottsruinen i Pennard?
Ground truth: Den kallades vid tiden för sitt största riktigt, Castle of the Dromon. Det är en liten ruin på ett berg nära stranden av St Brides Bay.
Predicted: Den kallades vid tiden för sitt största riktigt, Castle of the Dromon. Det är en liten ruin på ett berg nära stranden av St Brides Bay.

Although our approach is strong, it has limitations, specifically in not exploiting supervised learning to achieve better span prediction. Our future research might consider fine-tuning on the accessible training data to make performance even better while keeping our zero-resource model flexible. More context- and fact-based verification methods can be incorporated to improve hallucination detection
even further. With LLMs still evolving, creating scalable and accurate methods of hallucination detection remains a critical step in maintaining the integrity of AI-produced text across real-world use cases.

Limitations

Our method does not employ supervised learning for predicting exact spans. Under-utilization of the task's training splits is a major drawback of our system; utilizing the training split for any kind of supervised learning could potentially improve performance. Moreover, the lack of contextual and factual verification techniques poses a major challenge to our approach.

Acknowledgments

We would like to thank the Mu-SHROOM shared task organizers, Raúl Vázquez, Timothee Mickus, and their team, for their effort and commitment in organizing this task.

References

Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2022. On the opportunities and risks of foundation models. Preprint, arXiv:2108.07258.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.

Sebastian Farquhar, Jannik Kossen, Lorenz Kuhn, and Yarin Gal. 2024. Detecting hallucinations in large language models using semantic entropy. Nature, 630(8017):625–630.

Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2025. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems, 43(2):1–55.

Jannik Kossen, Jiatong Han, Muhammed Razzak, Lisa Schut, Shreshth Malik, and Yarin Gal. 2024. Semantic entropy probes: Robust and cheap hallucination detection in LLMs. arXiv preprint arXiv:2406.15927.
Potsawee Manakul, Adian Liusie, and Mark J. F. Gales. 2023. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. Preprint, arXiv:2303.08896.

Raúl Vázquez, Timothee Mickus, Elaine Zosa, Teemu Vahtola, Jörg Tiedemann, Aman Sinha, Vincent Segonne, Fernando Sánchez-Vega, Alessandro Raganato, Jindřich Libovický, Jussi Karlgren, Shaoxiong Ji, Jindřich Helcl, Liane Guillou, Ona de Gibert, Jaione Bengoetxea, Joseph Attieh, and Marianna Apidianaki. 2025. SemEval-2025 Task 3: Mu-SHROOM, the multilingual shared-task on hallucinations and related observable overgeneration mistakes.

Yijun Xiao and William Yang Wang. 2021. On hallucination and predictive uncertainty in conditional language generation. Preprint, arXiv:2103.15025.

A Mu-SHROOM Dataset Statistics

Table 5: Number of samples in the validation and test data in Mu-SHROOM. For hyperparameter tuning, we used the validation split for languages with validation data points; for the others, we heuristically approximated the parameters.

Language  Validation  Test
ar        50          150
ca        -           100
cs        -           100
de        50          150
en        50          154
es        50          152
eu        -           100
fa        -           100
fi        50          150
fr        50          150
hi        50          150
it        50          150
sv        50          150
zh        50          150
arXiv:2505.17492v1 [cs.AI] 23 May 2025

PD3: A Project Duplication Detection Framework via Adapted Multi-Agent Debate

Dezheng Bao (Zhejiang University, baodezheng@zju.edu.cn), Yueci Yang (Zhejiang University, baodezheng@zju.edu.cn), Xin Chen (Zhejiang University, xin.21@intl.zju.edu.cn), Zhengxuan Jiang (Zhejiang University, mystery_jiang@zju.edu.cn), Zeguo Fei (Zhejiang University, feizg@zju.edu.cn), Daoze Zhang (Zhejiang University, zhangdz@zju.edu.cn), Xuanwen Huang (Independent Contributor, xuanwhuang@gmail.com), Junru Chen (Zhejiang University, jrchen_cali@zju.edu.cn), Chutian Yu (State Grid Power Supply Co. Ltd., yu_chutian@zj.sgcc.com.cn), Xiang Yuan (State Grid Power Supply Co. Ltd., yuan_xiang@zj.sgcc.com.cn), Yang Yang† (Zhejiang University, yangya@zju.edu.cn)

Abstract

Project duplication detection is critical for project quality assessment, as it improves resource utilization efficiency by preventing investment in newly proposed projects that have already been studied. It requires the ability to understand high-level semantics and to generate constructive and valuable feedback. Existing detection methods rely on basic word- or sentence-level comparison or solely apply large language models, lacking valuable insights for experts and in-depth comprehension of project content and review criteria. To tackle this issue, we propose PD3, a Project Duplication Detection framework via adapted multi-agent Debate. Inspired by real-world expert debates, it employs a fair competition format to guide multi-agent debate to retrieve relevant projects. For feedback, it incorporates both qualitative and quantitative analysis to improve its practicality. Over 800 real-world power project records spanning more than 20 specialized fields are used to evaluate the framework, demonstrating that our method outperforms existing approaches by 7.43% and 8.00% in two downstream tasks.
Furthermore, we establish an online platform, Review Dingdang, to assist power experts, saving 5.73 million USD in initial detection on more than 100 newly proposed projects.

1 Introduction

Project duplication detection is important for quality assessment, comparing newly proposed projects with reference projects to avoid redundant research investments. In recent years, research and development funding and project volumes have grown continuously. For example, the State Grid Corporation of China invested over 5.25 billion USD in science and technology projects in 2023, obtaining 8,521 authorized patents [28]. The U.S. Department of Energy established the Smart Grid Grants funding, investing 300 billion USD from 2022 to 2026 to fund projects that improve the power system [29]. Therefore, the growing number of projects expands comparison requirements, making manual duplication detection increasingly difficult and unsustainable. Furthermore, the rapid surge in project volumes and the increasing investment of resources necessitate more comprehensive and valuable detection feedback to help with the rational allocation of resources and the improvement of newly proposed projects. Simple numerical metrics fail to provide experts with detailed information or help applicants optimize their projects, highlighting the need for more comprehensive feedback.

† Corresponding author. Preprint. Under review.

Figure 1: (a) Case on the lack of strict partial order. This case exhibits ranking intransitivity: reference projects A, B, and C form a pairwise comparison cycle because each demonstrates closer relevance than another in either research content, key technology, or application results when compared to the target project. (b) Top-5 coverage ratio versus the number of retrieved reference projects.

The primary objective of project duplication detection is to retrieve the most relevant reference projects. An ideal detection method requires a deep understanding of project semantics and domain knowledge, with recall capabilities tailored to domain-specific standards. Traditional approaches include word frequency-based methods (e.g., ROUGE [17], BM25 [23]) and vector distance-based methods (e.g., embedders at different parameter scales: BERT [7], gte [15]). Word frequency-based methods use word occurrences in the text to represent duplication among articles. However, these methods are overly dependent on tokenization, making them vulnerable to synonym substitution and other text manipulation, and thus ineffective at detecting intentional duplication avoidance. Also, their uniform treatment of all text
prevents them from prioritizing core project content in duplication detection. To address these limitations, other researchers have proposed vector distance-based methods that encode full-text semantics into vectors. However, their static semantic representations lack task-specific adaptability. Although some embedding models (e.g., gte, bge [4, 14]) employ prefix-guided pretraining (e.g., "Given a web search query, retrieve relevant passages that answer the query" in gte) to enhance task alignment, their highly compressed full-text embeddings capture only coarse-grained semantic relationships and lack more detailed semantic comparisons. Due to their inflexibility, coarse granularity, and the limited expressivity of the similarity function [27], these methods are primarily suited to preliminary retrieval.

For further fine-grained retrieval, Large Language Models (LLMs) offer a direct solution. For tasks requiring deep semantic comprehension (e.g., duplication detection or peer review), LLMs can serve as expert reviewers, leveraging their understanding and generation capabilities. For instance, LLMs have been successfully applied to automated peer review [24, 35]. Current LLM-based retrieval approaches typically either (1) utilize LLMs as judges to decide the ranking through point-wise scoring or pair-wise comparison [37], or (2) employ chain-of-thought (CoT) prompting or LLMs with reasoning ability [12] to enable task-aware data sorting. Benefiting from test-time scalability, these methods overcome rigid word or vector dependencies.

Although the aforementioned point-wise scoring and pair-wise comparison methods advance the field, project duplication detection remains challenging. From a scenario perspective, retrieving from reference projects that lack a strict partial order is difficult. The performance of existing methods is limited by the assumed existence of a partial order relationship within the dataset (where A ≻ B and B ≻ C ⇒ A ≻ C).
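As a concrete reference point for the vector distance-based methods discussed above, preliminary retrieval by embedding similarity takes only a few lines. This is a hedged sketch under our own naming: it assumes embeddings are already computed (the embedder itself, e.g. gte or bge, is out of scope here).

```python
import math


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def preliminary_retrieval(query_vec, project_vecs, k=30):
    """Screen a reference-project database by embedding similarity, keeping
    the top-k candidates (PD3 later uses k=30; see Figure 1 (b))."""
    order = sorted(range(len(project_vecs)),
                   key=lambda i: cosine(query_vec, project_vecs[i]),
                   reverse=True)
    return order[:k]
```

The coarse granularity criticized above is visible here: a single vector per project means one scalar similarity decides the ranking, with no way to weigh research content against key technologies or applications.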
However, the relevance relationship among projects in duplication detection is not a strict partial order. As shown in Figure 1 (a), relevance assessment must consider multiple factors (e.g., research content, key technologies, and applications); even experienced experts cannot provide a definitive ranking through point-wise or pair-wise comparisons. This indicates that more global information over the whole reference project set is required. Nevertheless, current point-wise or pair-wise scoring methods (as well as vector distance-based methods) incorrectly assume a partial ordering, limiting their effectiveness.

Figure 2: Review Dingdang, the power project duplication platform based on PD3.

From a methodological viewpoint, an effective detection method should support multi-perspective analysis to ensure comprehensive project comparisons. Conventional single-LLM approaches are inherently limited by their monolithic analytical perspective. The Multi-Agent Debate (MAD) paradigm [3, 16, 8] presents a promising solution by orchestrating multiple LLM agents to simulate deliberative debates. By considering global information across all candidates, MAD offers more comprehensive analysis incorporating diverse viewpoints from different agents. However, directly applying vanilla MAD becomes impractical when processing all candidate reference projects simultaneously, due to the negative performance impact caused by long context. Prior work [18] demonstrates that longer contexts degrade model performance because attention is lost in the middle. Figure 1 (b) implies that covering approximately 90% of the top-5 relevant projects requires at least 30 candidates. At such context lengths, the performance of both single-LLM and MAD-based methods decreases. While vector
distance-based methods help narrow the candidate pool, radically reducing the number of candidates harms final performance by prematurely eliminating relevant projects, failing to fundamentally solve the issue.

To overcome these limitations, we propose PD3, a Project Duplication Detection framework via adapted multi-agent Debate. Through an adapted MAD mechanism, PD3 optimizes context length by limiting concurrent project comparisons while maintaining essential contextual information through its round-robin competition structure. This balanced approach enables more accurate retrieval of the top-5 most relevant reference projects. In addition to retrieval, a helpful duplication detection framework should enhance human collaboration through valuable insights. Unlike traditional methods that output only numerical scores or verbatim matches, PD3 generates both qualitative summaries (with key similarity conclusions and original-text comparisons) and quantitative duplication scores. This dual feedback approach helps experts verify results efficiently while providing applicants with clear guidance for project refinement, making PD3 a human-centered framework.

Building upon PD3, we developed an online platform called Review Dingdang to help power system experts detect duplicate scientific projects. During the platform's live test in April 2025, it demonstrated significant efficacy, preventing approximately 5.73 million USD from being invested in duplicate projects.

Our key contributions include:

• We propose PD3, a project duplication detection framework via adapted multi-agent debate, which is, to the best of our knowledge, the first LLM-based framework for project duplication detection.

• In our framework, we introduce a novel round-robin competition format in the MAD-based retrieval method, enabling fairer and more comprehensive candidate evaluation.
Additionally, we are the first to propose an LLM-based dual feedback design for project duplication detection, integrating both quantitative and qualitative analysis to enhance human collaboration and support.

• Through validation with real-world power project data involving multiple professional domains, PD3 outperforms baseline methods by 7.43% and 8.00% in two downstream tasks. Based on PD3, we also implement an online platform, Review Dingdang, achieving practical impact by preventing millions of USD in redundant investments during live tests of new projects.

Figure 3: Overview of the PD3 framework. Following the I/O sequence of the project under detection, the framework consists of four parts: data pre-processing, database and preliminary retrieval, MAD-based round-robin retrieval, and LLM-as-a-Judge-based feedback.

2 PD3 Framework Design

2.1 Overview

Problem Formulation. Before presenting the overall framework, we first formalize the key problems addressed in this work. Let P denote a project, with two main project duplication detection tasks defined as follows: (1) Retrieve the top-K most relevant reference projects R = \{P^{*}_{j}\}_{j=1}^{K} \subseteq S for a given project under detection P_u, where S = \{P_i\}_{i=1}^{M} is the candidate set of reference projects.
(2) Provide a quantitative comprehensive duplication score s_u \in D and (optionally) qualitative evaluation output according to the retrieval result R = \{P_i\}_{i=1}^{K}, where D is
the score domain and the score function f : (P_u, R) \to D evaluates the overall duplication level of P_u against R.

As illustrated in Figure 3, the PD3 framework comprises four key components. We first briefly outline each component before delving into detailed discussions of the core components in subsequent sections:

1. Data Pre-processing. To handle heterogeneous project data with varying formats and sensitivity levels (whether for detection or reference), the framework first standardizes input through format unification, sensitive information desensitization (via regex matching), and text content extraction.

2. Database and Preliminary Retrieval. The framework maintains a vector database (offline updatable) for storing reference projects. The framework screens the database using the text content of the project under detection, retrieving 30 candidate reference projects (the optimal count, as validated in preliminary experiments; see Figure 1 (b)).

3. MAD-based Round-robin Retrieval. This module organizes the 30 candidate projects into several 5-out-of-M round-robin sub-competition tasks. Multiple expert agents debate to select the top-5 candidates per sub-competition, and a senior judge makes each group's final decision by analyzing the debate records. The final top-5 projects are determined through aggregated voting across all sub-tasks.

4. LLM-as-a-Judge-based Feedback. This module analyzes the project under detection by: (1) generating a similarity conclusion, (2) assigning a quantitative duplication score using an LLM judge (based on review criteria), and (3) enriching the output with a text-comparison agent that highlights specific similar expressions in the original texts. The combined feedback provides both measurable and actionable insights for human experts.

2.2 MAD-Based Round-robin Retrieval: Debate Makes the Truth Shine Through

Retrieving the most relevant reference projects is a critical step in project duplication detection.
High-quality retrieval significantly reduces downstream workload, while errors may compromise the entire detection process. Inspired by practical expert discussion, we propose an adapted multi-agent debate mechanism. Our approach enables more relevant projects to emerge through a limited number of debate rounds, where projects supported by stronger evidence naturally prevail. Unlike traditional MAD methods designed for QA tasks (e.g., MMLU-Pro [33], GPQA [22]), our scenario faces a unique challenge: the retrieval task must balance global information against context length constraints, requiring tailored decomposition rather than direct MAD application. To strike the best trade-off between these two conflicting constraints, we enhance vanilla MAD with a round-robin competition mechanism. Inspired by real-world tournaments, this format ensures fairness over intensity, unlike knockout or double-elimination systems. In our task, where equitable competition among candidates is crucial, the round-robin format proves more suitable. Specifically, we randomly partition the 30 candidate projects into 6 sets of 5 candidate reference projects each. We then organize these into $G = C_6^{M/5}$ unique 5-out-of-$M$ sub-competitions (non-repeating combinations), where each sub-competition consists of $M$ reference projects drawn from $M/5$ sets ($M = 20$ and $G = C_6^{M/5} = 15$ under our setting). In each group sub-competition, independent expert agents first select their top-5 candidates and briefly justify their choices. They then engage in a structured debate, critiquing proposals, answering questions, or revising selections, while reviewing all prior debate records before speaking. After a fixed number of debate rounds, a senior expert makes the final group decision based
on the discourse. Once all group competitions finish, we aggregate the results via voting (inspired by voting for knowledge-based tasks in [13]) to select the global top-5. Another advantage of this format is that the group competitions are independent and can run in parallel, significantly reducing execution time. In summary, our adapted MAD-based round-robin retrieval outperforms traditional methods by leveraging LLMs to deeply analyze project content and review criteria, enabling more accurate horizontal comparison and candidate selection. Compared with other LLM-based approaches, namely LLM-as-a-Judge, R1-like methods, and vanilla MAD, it offers three key advantages: (1) eliminating the cost uncertainty caused by uncontrolled reasoning length; (2) better balancing non-strict partial-order relationships with context length; (3) enhancing efficiency through parallel task decomposition, while retaining the benefits of test-time scaling.

2.3 LLM-as-a-Judge Based Feedback: Quantitative and Qualitative Output

Through literature review and expert consultation, we identify a key limitation of existing duplicate detection: the lack of constructive feedback. Effective feedback should bridge human-system interaction by delivering actionable insights that improve both review efficiency and project quality. Quantitatively, most algorithms fail to generate comprehensive duplication scores for reviewed content, and inconsistent scoring ranges and distributions undermine result credibility. Qualitatively, few algorithms detect duplication beyond continuous word matches, neglecting overlaps in research topics, applied technologies, and focus areas. This feedback deficiency hampers review efficiency, forcing experts and applicants to rely on rigid duplication thresholds. Consequently, non-core repetitions in high-quality projects risk misjudgment, while meritorious projects miss improvement opportunities.
To address these limitations, we develop an LLM-based expert-agent feedback module that delivers comprehensive quantitative and qualitative feedback. For quantitative feedback, we employ a full-project-level LLM-as-a-Judge approach with well-defined criteria (a 1-10 scale, higher scores indicating greater duplication) and task-specific prompts. Unlike point-wise or pair-wise scoring, our agent assesses the target project using all top-5 relevant reference projects as context, avoiding reliance on strict ordinal comparisons while leveraging richer global information. This method also offers flexibility: scoring rules can be easily adapted by modifying the prompt; for instance, the agent can be instructed to assign a high score whenever it detects any single highly relevant reference project. Besides, we design two key qualitative feedback outputs: (1) Similarity Conclusion. A human-readable conclusion comparing the project under detection with the top-5 relevant reference projects, highlighting similarities in main content, key technologies, and application achievements. Derived from pairwise analysis, this summary not only aids repetition review but also enhances quantitative assessment performance when integrated into the input. (2) Original Text Comparison. Leveraging LLMs, this output identifies semantically similar text segments in the original content based on the similarity conclusion. Unlike traditional word-based duplication detection, this method effectively detects relevant content while mitigating the effects of deception tactics such as word substitution. As the module that directly interacts with people, we prioritize designs that enhance efficiency and deliver tangible benefits. By optimizing quantitative results and addressing gaps in qualitative analysis, PD3 significantly strengthens its ability to support users, establishing a more human-centered framework for project duplication detection.
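The quantitative-feedback step can be sketched as follows. The prompt wording, the "Score: X/10" response format, and all helper names are illustrative assumptions, not the authors' template; only the parsing and averaging are executed here, and the LLM call itself is omitted:

```python
import re
from statistics import mean

def build_judge_prompt(target_text, reference_texts):
    """Assemble a full-project-level judging prompt: the target is scored
    against ALL top-5 references at once (1-10, higher = more duplication)."""
    refs = "\n\n".join(f"[Reference {i + 1}]\n{t}"
                       for i, t in enumerate(reference_texts))
    return ("You are a project review expert. Rate the duplication level of "
            "the project under detection against all references below on a "
            "1-10 scale (higher means greater duplication). End your answer "
            "with a line 'Score: X/10'.\n\n"
            f"[Project under detection]\n{target_text}\n\n{refs}")

def parse_score(response):
    """Extract the numeric score from a 'Score: X/10' line."""
    m = re.search(r"Score:\s*([0-9]+(?:\.[0-9]+)?)\s*/\s*10", response)
    if m is None:
        raise ValueError("no score found in response")
    return float(m.group(1))

def aggregate(responses):
    """Average several independent ratings, as in 'Average of 3 ratings'."""
    return round(mean(parse_score(r) for r in responses), 2)

print(aggregate(["... Score: 7/10", "Score: 6/10", "ok Score: 7/10"]))  # 6.67
```

Because scoring rules live entirely in the prompt string, the flexibility described above (e.g., forcing a high score on any single highly relevant reference) amounts to editing `build_judge_prompt`.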
3 Experiments

3.1 Dataset

To evaluate PD3's real-world performance, we analyze a dataset of 833 scientific and technological projects (from 2022 to
2024), with an average length of 957 tokens, sourced from the State Grid Corporation of China (SGCC). These projects span 22 broad domains, including dispatching, distribution networks, transmission and transformation, digitalization, and informatization. Common topics include AI-based power consumption forecasting, line icing prediction and deicing, and carbon emission detection. Human experts primarily assess research content, key technologies, and application outcomes for duplication detection. See Section A.2 for more details.

3.2 Experiment Settings

Tasks Utilizing the symbols described in Section 2.1, we define two evaluation tasks with expert-annotated data from the power field.

Task 1: Most relevant top-5 retrieval. Given a project under detection $P_u$ and the set $S = \{P_i\}_{i=1}^{30}$ of 30 candidate reference projects obtained by preliminary vector distance-based retrieval from the database, the task outputs a result set $R = \{P_j^*\}_{j=1}^{5} \subseteq S$ consisting of the top-5 most relevant reference projects. For cost efficiency, we randomly select 331 scientific and technological projects as test items. Review experts annotate the optimal result $\hat{R} = \{\hat{P}_j^*\}_{j=1}^{5}$ for each test item. We employ a cross-repetitive detection setting: when a project is under detection, all other 832 projects serve as reference candidates. Unlike comparing only against designated reference projects, this approach better reflects real-world scenarios where projects from the same batch are detected together.

Task 2: Comprehensive duplication score assessment. Given two projects under detection $P_{u\text{-}A}$ and $P_{u\text{-}B}$, along with their top-5 relevant reference project sets $R_A = \{P_{A_j}\}_{j=1}^{5}$ and $R_B = \{P_{B_j}\}_{j=1}^{5}$, the task outputs duplication scores $s_{u\text{-}A}, s_{u\text{-}B} \in \mathcal{D}$, where $\mathcal{D}$ is the score range. Since different algorithms use distinct score ranges $\mathcal{D}$ and scoring functions $f: (P_u, R) \rightarrow \mathcal{D}$ with different distributions, we set up Task 2 in this way to ensure fair comparison.
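The preliminary vector-distance retrieval that produces $S$ can be sketched with plain cosine similarity. The embedding model (gte-1.5B) is treated as a black box here, so random vectors stand in for real embeddings, and the function names are our own:

```python
import random
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def preliminary_retrieval(query_vec, ref_vecs, k=30):
    """Return indices of the k reference projects closest to the query,
    standing in for the vector-database screening step."""
    sims = [(cosine(query_vec, v), i) for i, v in enumerate(ref_vecs)]
    return [i for _, i in sorted(sims, reverse=True)[:k]]

rng = random.Random(0)
refs = [[rng.gauss(0, 1) for _ in range(64)] for _ in range(832)]  # 832 candidates
query = [rng.gauss(0, 1) for _ in range(64)]
candidates = preliminary_retrieval(query, refs)
print(len(candidates))  # 30
```

In the cross-repetitive setting above, `refs` would hold the embeddings of all 832 other projects in the batch.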
For the test set, we randomly sample 100 project pairs $C = \{(P_{u\text{-}A_j}, P_{u\text{-}B_j})\}_{j=1}^{100}$ from the 331 human-annotated projects. Three experts independently vote on which project of each pair has the higher duplication level, yielding annotations $\hat{H} = \{H_j\}_{j=1}^{100}$, where $H_j \in \{u\text{-}A, u\text{-}B\}$.

Baselines For a comprehensive comparison, we evaluate methods from four categories: word frequency-based, vector distance-based, LLM-based, and MAD-based.

• Word frequency-based methods (WF): ROUGE-L [17]: measures text similarity via the longest common subsequence (LCS), commonly used for summarization evaluation. BM25 [23]: an enhanced TF-IDF approach that calculates query-document relevance through term frequency and inverse document frequency.

• Vector distance-based methods (VD): gte-Qwen2-1.5B-instruction [15] (gte-1.5B): a Transformer-based embedding model (with/without instruction for query enhancement), which is also the model used for preliminary retrieval from the database in our experiments. Reranker: employs larger models (e.g., gte-Qwen2-7B-instruction [15] (gte-7B)) or pretrained rerankers (jina-reranker-v2-base-multilingual [26] (jina), bge-reranker-v2-m3 [14, 4] (bge)) for candidate reranking.

• LLM-based methods (LLM): DeepSeek V3 [6]: a standard LLM that generates responses directly from prompts. DeepSeek R1 [12]: a reasoning-enhanced LLM that performs self-critique before responding. LLM-as-a-Judge [3, 16, 8] (using DeepSeek V3 as the base model): uses a single LLM as judge and evaluates the task through generative or scoring methods.

• MAD-based methods with different competition formats (MAD): Direct: vanilla MAD for 5-out-of-30 retrieval. Traversal: each round selects 5 winners from 10 candidates, then merges the winners with 5 new candidates until completion. Random: matches the round-robin sub-event count but randomly selects $M$ participants per round. Sliding Window: maintains the round-robin sub-event count but selects $M$ participants via a sliding window (step = 1).

Table 1: Experimental results of Task 1: most relevant top-5 retrieval. Each Match@K cell reports the count and, in parentheses, its ratio to the test-set size (331); Match@1 is identical to Hit Rate@5.

Method | Classification | Precision@5 | Match@1 | Match@2 | Match@3 | Match@4 | Match@5
Random | Random | 0.1680 | 209 (0.6314) | 64 (0.1934) | 5 (0.0151) | 0 (0.0000) | 0 (0.0000)
ROUGE-L | WF | 0.2399 | 248 (0.7492) | 125 (0.3776) | 23 (0.0695) | 1 (0.0030) | 0 (0.0000)
BM25 | WF | 0.2870 | 273 (0.8248) | 145 (0.4381) | 52 (0.1571) | 5 (0.0151) | 0 (0.0000)
gte-1.5B | VD | 0.3813 | 296 (0.8943) | 219 (0.6166) | 97 (0.2931) | 18 (0.0544) | 1 (0.0030)
gte-1.5B with instruction | VD | 0.3782 | 294 (0.8882) | 213 (0.6435) | 96 (0.2900) | 21 (0.0636) | 2 (0.0060)
gte-7B as reranker | VD | 0.3897 | 298 (0.9003) | 212 (0.6405) | 105 (0.3172) | 29 (0.0876) | 1 (0.0030)
jina as reranker | VD | 0.2798 | 275 (0.8308) | 141 (0.4260) | 42 (0.1269) | 4 (0.0121) | 1 (0.0030)
DeepSeek V3 | LLM | 0.3686 | 297 (0.8973) | 194 (0.5861) | 88 (0.2659) | 27 (0.0816) | 4 (0.0121)
DeepSeek R1 | LLM | 0.3952 | 298 (0.9003) | 214 (0.6465) | 110 (0.3323) | 29 (0.0876) | 3 (0.0091)
LLM-as-a-Judge | LLM | 0.3849 | 303 (0.9154) | 211 (0.6375) | 99 (0.2991) | 21 (0.0634) | 3 (0.0091)
MAD Direct | MAD | 0.3964 | 307 (0.9275) | 229 (0.6918) | 129 (0.3897) | 42 (0.1269) | 3 (0.0091)
MAD Traversal | MAD | 0.4211 | 304 (0.9184) | 224 (0.6727) | 125 (0.3776) | 40 (0.1208) | 4 (0.0121)
MAD Random | MAD | 0.4272 | 309 (0.9335) | 234 (0.7069) | 125 (0.3776) | 35 (0.1057) | 4 (0.0121)
MAD Sliding Window | MAD | 0.4344 | 307 (0.9275) | 233 (0.7039) | 133 (0.4018) | 42 (0.1269) | 4 (0.0121)
PD3 MAD Round-Robin (Ours) | MAD | 0.4423 | 310 (0.9366) | 238 (0.7190) | 133 (0.4018) | 41 (0.1239) | 10 (0.0302)

Settings We employ gte-Qwen2-1.5B-instruction (without instruction) as the embedding model for vector-database retrieval to obtain the preliminary 30 candidate projects in Task 1.
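As a concrete reference point for the WF baselines, ROUGE-L reduces to a longest-common-subsequence computation over tokens; a minimal sketch (β = 1 F-score over whitespace tokens, our simplification of the metric):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence, via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x == y
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    """ROUGE-L F1 over whitespace tokens (beta = 1 for simplicity)."""
    c, r = candidate.split(), reference.split()
    l = lcs_len(c, r)
    if l == 0:
        return 0.0
    p, rec = l / len(c), l / len(r)
    return 2 * p * rec / (p + rec)

print(round(rouge_l_f1("ice disaster early warning",
                       "early warning of ice disaster"), 3))  # 0.444
```

Because the LCS must preserve word order, reordering ("early warning" vs. "ice disaster") caps the overlap at 2 tokens here, which illustrates why such word-frequency methods underperform on the retrieval task.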
In Task 2, for methods lacking a direct duplication score output, we report both the maximum and the average score over the top-5 candidates to ensure a fair and balanced comparison. In addition, Task 2 evaluates scoring methods under two settings that use different top-5 reference sets $R$: (1) Human Retrieval: the uniform expert-annotated reference set from Task 1, allowing direct method comparison. (2) Method Retrieval: each method's own top-5 reference set, assessing end-to-end review performance. Since the expert annotations for Task 2 are derived from those of Task 1, they can be used to evaluate overall performance. For PD3, we choose DeepSeek V3 as the base model. As an open-source model, it excels in Chinese support and benchmark performance [30]. Further details are provided in Appendix A.3. In MAD round-robin retrieval, we set the number of agents to 3 and the number of debate rounds (excluding the initial round) to 2, following the settings in [8].

Evaluation Metrics In Task 1, we adopt Precision@5 and Match@K. $\text{Precision@5} = |R \cap \hat{R}| / 5$ measures the selection overlap with the experts. $\text{Match@}K = \sum I(|R \cap \hat{R}| \geq K)$, $K = 1, 2, \ldots, 5$,
counts the test items whose overlap with the expert selection is greater than or equal to $K$, where $I(\cdot)$ is the indicator function (1 if true, else 0). In particular, Match@1 ≡ Hit Rate@5. For Match@K, we additionally report its ratio to the test-set size in the form Match@K (Match@K / test-set size). In Task 2, we use Accuracy on the 100 test pairs, with two evaluation approaches: (1) Origin Group: the strict expert majority serves as ground truth. (2) Weighted Group: expert votes serve as weights (e.g., under a 2A:1B vote, choosing B scores 0.33). This reflects the consideration that expert disagreement indicates sample comparison difficulty and potentially multi-dimensional repetition rates.

3.3 Experiment Results

Analysis of Task 1 Table 1 presents the experimental results for Task 1. Our proposed MAD method with round-robin competition achieves superior results across all metrics. Specifically, it improves Precision@5 by 7.43% on average and demonstrates consistent gains in Match@K (K = 1-5), with improvements of 5.14%, 11.62%, 11.74%, 5.09%, and 2.32%, respectively. Traditional methods relying on word frequency and vector distance show limited effectiveness, suggesting these simplistic methods are inadequate for duplicate-detection retrieval tasks.

Table 2: Experimental results of Task 2: comprehensive duplication score assessment of the project. Each cell reports Accuracy.

Method | Classification | Human Retrieval: Origin Group | Human Retrieval: Weighted Group | Method Retrieval: Origin Group | Method Retrieval: Weighted Group
ROUGE-L MAX | WF | 0.6200 | 0.5867 | 0.5600 | 0.5733
ROUGE-L AVG | WF | 0.6400 | 0.5933 | 0.6500 | 0.6233
BM25 MAX | WF | 0.5300 | 0.5500 | 0.5300 | 0.5500
BM25 AVG | WF | 0.5300 | 0.5500 | 0.5400 | 0.5533
gte-1.5B MAX | VD | 0.5800 | 0.4900 | 0.6500 | 0.5100
gte-1.5B AVG | VD | 0.6500 | 0.5400 | 0.6100 | 0.5100
gte-1.5B with reranker MAX | VD | - | - | 0.6300 | 0.4900
gte-1.5B with reranker AVG | VD | - | - | 0.6200 | 0.5200
LLM-as-a-Judge MAX | LLM | 0.5000 | 0.4867 | 0.4700 | 0.4500
LLM-as-a-Judge AVG | LLM | 0.5800 | 0.5767 | 0.6400 | 0.6200
PD3 feedback (Ours) | MAD | 0.6400 | 0.6267 | 0.6600 | 0.6300
PD3 feedback with conclusion (Ours) | MAD | - | - | 0.6700 | 0.6467
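The evaluation metrics defined above reduce to set overlaps and vote fractions; a minimal sketch with toy data (function names are ours):

```python
def precision_at_5(retrieved, annotated):
    """Precision@5 = |R ∩ R̂| / 5."""
    return len(set(retrieved) & set(annotated)) / 5

def match_at_k(results, k):
    """Count test items whose retrieved top-5 overlaps the expert top-5
    in at least k projects; `results` holds (retrieved, annotated) pairs."""
    return sum(len(set(r) & set(a)) >= k for r, a in results)

def weighted_group_accuracy(predictions, votes):
    """Task 2 'Weighted Group' accuracy: each pair contributes the fraction
    of expert votes agreeing with the method's pick (2A:1B, pick B -> 1/3)."""
    return sum(v[p] / sum(v.values())
               for p, v in zip(predictions, votes)) / len(predictions)

results = [(["p1", "p2", "p3", "p4", "p5"], ["p1", "p2", "p9", "p8", "p7"]),
           (["p1", "p6", "p7", "p8", "p9"], ["p1", "p2", "p3", "p4", "p5"])]
print(match_at_k(results, 1), match_at_k(results, 2))               # 2 1
print(round(sum(precision_at_5(r, a) for r, a in results) / 2, 3))  # 0.3

votes = [{"A": 2, "B": 1}, {"A": 3, "B": 0}, {"A": 0, "B": 3}]
print(round(weighted_group_accuracy(["B", "A", "B"], votes), 4))    # 0.7778
```

Match@1 equals Hit Rate@5 because an overlap of at least one project is exactly a "hit" within the top-5.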
Among LLM-based methods, DeepSeek R1 and LLM-as-a-Judge outperform the others (including DeepSeek V3), indicating that enhanced reasoning during inference better exploits LLM capabilities. The superior performance of MAD-based methods demonstrates their effectiveness in integrating global information through debate mechanisms when strict partial-order relationships are absent. We evaluate various competition formats for MAD methods to analyze their impact on performance. MAD Direct, which performs direct 5-out-of-30 retrieval without decomposition, shows the poorest results. Among the remaining formats, MAD Traversal creates unfairness by increasing the final winning chances of later candidates, while MAD Random and MAD Sliding Window improve candidate exposure but fail to ensure equal competition. Our round-robin format guarantees equal participation opportunities for all candidates, achieving the best performance. Varying $K$ in the Match@K metric further reveals our method's growing advantage as task difficulty increases (higher $K$): compared with the baselines, it achieves average improvements of 5.81% (K=1), 19.27% (K=2), 41.26% (K=3), 69.78% (K=4), and 332.86% (K=5). This highlights that our method is more competitive and has greater application value.

Analysis of Task 2 Table 2 presents the experimental results for Task 2.¹ Our method outperforms the baselines in both evaluation settings under all retrieval settings. Under the "Human Retrieval" setting, it achieves average improvements of 6.13% and 8.00% across the two groups, demonstrating more reasonable quantitative feedback generation given identical input. The larger gain in the "Weighted Group"
(8.00%) particularly indicates that our method aligns more closely with the preferences of human review experts. Under the "Method Retrieval" setting, the performance gains increase to 7.00% and 9.00%, confirming that better retrieval enhances final scoring quality and validating PD3's end-to-end effectiveness. Notably, when incorporating similarity summaries as prior knowledge, the advantages expand further to 8.00% and 10.67%, highlighting how intermediate agent processing can improve quantitative feedback without reference answers.

¹ In the Human Retrieval setting, reranker scores are omitted because of the standardized use of manually annotated top-5 relevant documents, while "PD3 feedback with conclusion" scores are excluded as they represent composite performance specific to the Method Retrieval setting.

Figure 5: A case study from the Review DingDang detection process conducted by PD3, for a project on ice-disaster early-warning technology. In one sub-competition phase, three expert agents first independently select their top-5 choices. After debating their differing perspectives, Expert A ultimately accepts the views of Experts B and C, and a senior judge then makes the final decision. The voted top-5 reference projects proceed to the feedback stage, which produces a similarity conclusion, a duplication score (6.67/10, the average of 3 ratings), and an original-text comparison.

3.4 Hyperparameter Analysis

Figure 4: (a) Performance comparison across different group sizes. (b) Performance comparison across different numbers of debate rounds. (c) Performance comparison across different numbers of debate agents.

Building on prior findings regarding the parameter sensitivity of MAD [25], we conduct systematic ablation experiments on three core hyperparameters of the MAD framework: the number of candidate items $M$ from preliminary retrieval, the number of debate rounds, and the agent count. As illustrated in Figure 4 (a), experimental results across varying values of $M$ demonstrate a characteristic "peak-then-decline" trend in reasoning performance, with the optimum achieved at $M = 20$. This phenomenon suggests that while increasing $M$ enhances the agents' access to global information, excessively large values introduce longer contexts, ultimately compromising reasoning quality. Complete results are provided in Table 5. To control computational costs, we conduct the following experiments on a 60-sample random subset. As shown in Figure 4 (b), when the number of agents is fixed at 3, model performance improves with additional debate rounds but declines beyond 3 rounds. Figure 4 (c) reveals that the optimal configuration is 3 agents with 3 debate rounds. These findings highlight two competing factors: (1) Sufficient
debate rounds and agents facilitate multi-perspective analysis and comprehensive reasoning. (2) Excessive rounds or agents yield diminishing returns due to error propagation and context overload, as discussed in [9]. While our PD3 implementation does not adopt the empirically optimal configuration (3 agents + 3 rounds, versus our 3 agents + 2 rounds setting) due to resource constraints, the results suggest potential for further improvement. Meanwhile, it is worth balancing effectiveness against computational cost in practical applications. See detailed results in Table 6 and Table 7.

4 Application

Building on the PD3 framework, we develop Review DingDang, an online system for project duplication detection in the power field. As illustrated in Figure 2, it clearly presents all key duplication-detection details, including information on the project under detection, the relevant reference projects, and the quantitative-qualitative feedback. As a human-centered system, Review DingDang also enables experts to guide system optimization, for example by modifying the debate prompt rules.

Case Study Review DingDang's workflow starts by retrieving 30 candidate reference projects based on vector distance. Following the round-robin competition format, the 30 candidates are divided into 15 parallel sub-competition tasks. Figure 5 details one such sub-competition: three expert agents first independently select their top-5 choices, then debate until the specified number of rounds is reached. In this case, Expert A ultimately accepts the opinions of Experts B and C, and the senior expert makes the final decision. After all sub-competitions complete, a voting mechanism determines the final top-5 projects. In the feedback stage, specialized agents then produce both quantitative scores and qualitative assessments for the human expert.
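Because the sub-competitions are independent, the 15 debate tasks can run concurrently; a sketch with a thread pool, where the debate-plus-senior-judge step is stubbed out by a dummy scoring function (all names here are hypothetical, not the system's code):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def run_sub_competition(group):
    """Stub for one sub-competition: the real system runs a multi-agent
    debate plus a senior-judge decision; here a dummy score picks the top-5."""
    return sorted(group, key=lambda p: p["score"], reverse=True)[:5]

def global_top5(groups):
    """Run independent sub-competitions in parallel, then aggregate the
    group winners by vote count to select the global top-5."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        winners = list(pool.map(run_sub_competition, groups))
    votes = Counter(p["id"] for top5 in winners for p in top5)
    return [pid for pid, _ in votes.most_common(5)]

projects = [{"id": i, "score": (i * 17) % 29} for i in range(30)]
groups = [projects[i:i + 20] for i in (0, 5, 10)]  # 3 toy overlapping groups
print(global_top5(groups))
```

Threads suffice here because each task is dominated by waiting on LLM responses rather than local computation; projects that win several sub-competitions accumulate votes and surface in the global top-5.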
Application Impact PD3 is tested online via the Review DingDang platform, assisting experts in detecting duplicates among 118 newly proposed projects (totaling 43.28 million USD) applying for SGCC funding. The platform helped experts identify 20 ineligible projects (16.95%, 5.73 million USD), demonstrating its effectiveness and its potential for positive social impact.

5 Conclusion

We present PD3, a project duplication detection framework built on adapted multi-agent debate. Its novel round-robin-competition MAD-based retrieval method balances context length against global information. Designed as a human-centered platform, PD3 provides both quantitative duplication scores and qualitative feedback. Experiments on real-world power project data demonstrate its superiority, and our deployed platform Review DingDang already delivers social impact. This work has some limitations. First, the test-set size is constrained by expert annotation costs and computational resources, so validation on broader domain data is needed despite our comprehensive evaluation metrics. Second, while the framework is agnostic to the choice of base model (we use DeepSeek V3 for its open-source advantage), testing with alternative LLMs (including proprietary models or different size variants) would strengthen the validation. Future directions include enhancing review performance through LLM-based reinforcement learning and expanding to multi-modal project analysis.

References

[1] Imene Bensalem, Paolo Rosso, and Salim Chikhi. Intrinsic plagiarism detection using n-gram classes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1459-1464, 2014.

[2] ByteDance. volcengine, 2025. URL https://www.volcengine.com/. Accessed: 2025-05-15.

[3] Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. Chateval: Towards better llm-based evaluators through multi-agent debate.
arXiv preprint arXiv:2308.07201, 2023.

[4] Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. Bge m3-embedding: Multi-lingual, multi-functionality, multi-granularity text embeddings through self-knowledge distillation. arXiv preprint arXiv:2402.03216, 2024.

[5] Xi Chen, Mao Mao, Shuo Li, and Haotian Shangguan. Debate-feedback: A multi-agent framework for efficient legal judgment prediction. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 462-470, 2025.

[6] DeepSeek-AI. Deepseek-v3 technical report, 2024. URL https://arxiv.org/abs/2412.19437.

[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, 2019.

[8] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. In Forty-first International Conference on Machine Learning, 2023.

[9] Andrew Estornell and Yang Liu. Multi-llm debate: Framework, principals, and interventions. Advances in Neural Information Processing Systems, 37:28938-28964, 2024.

[10] Juraj Gottweis, Wei-Hung Weng, Alexander Daryin, Tao Tu, Anil Palepu, Petar Sirkovic, Artiom Myaskovsky, Felix Weissenberger, Keran Rong, Ryutaro Tanno, et al. Towards an ai co-scientist. arXiv preprint arXiv:2502.18864, 2025.

[11] Jiawei Gu, Xuhui Jiang, Zhichao Shi, Hexiang Tan, Xuehao Zhai, Chengjin Xu, Wei Li, Yinghan Shen, Shengjie Ma, Honghao Liu, et al. A survey on llm-as-a-judge. arXiv preprint arXiv:2411.15594, 2024.
[12] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[13] Lars Benedikt Kaesberg, Jonas Becker, Jan Philip Wahle, Terry Ruas, and Bela Gipp. Voting or consensus? decision-making in multi-agent debate. arXiv preprint arXiv:2502.19130, 2025.

[14] Chaofan Li, Zheng Liu, Shitao Xiao, and Yingxia Shao. Making large language models a better foundation for dense retrieval. arXiv preprint arXiv:2312.15503, 2023.

[15] Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281, 2023.

[16] Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. Encouraging divergent thinking in large language models through multi-agent debate. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 17889-17904, 2024.

[17] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, 2004.

[18] Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172, 2023.

[19] Tongxuan Liu, Xingyu Wang, Weizhe Huang, Wenjiang Xu, Yuting Zeng, Lei Jiang, Hailong Yang, and Jing Li. Groupdebate: Enhancing the efficiency of multi-agent debate using group discussion. arXiv preprint arXiv:2409.14051, 2024.

[20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient
estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

[21] Chau Pham, Boyi Liu, Yingxiang Yang, Zhengyu Chen, Tianyi Liu, Jianbo Yuan, Bryan A Plummer, Zhaoran Wang, and Hongxia Yang. Let models speak ciphers: Multiagent debate through embeddings. In The Twelfth International Conference on Learning Representations, 2024.

[22] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling, 2024.

[23] Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389, 2009.

[24] Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, and Luo Si. Mred: A meta-review dataset for structure-controllable text generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2521-2535, 2022.

[25] Andries Smit, Nathan Grinsztajn, Paul Duckworth, Thomas D Barrett, and Arnu Pretorius. Should we be going mad? a look at multi-agent debate strategies for llms. In Proceedings of the 41st International Conference on Machine Learning, pages 45883-45905, 2024.

[26] Saba Sturua, Isabelle Mohr, Mohammad Kalim Akram, Michael Günther, Bo Wang, Markus Krimmel, Feng Wang, Georgios Mastrapas, Andreas Koukounas, Nan Wang, et al. jina-embeddings-v3: Multilingual embeddings with task lora. arXiv preprint arXiv:2409.10173, 2024.

[27] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. Beir: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.

[28] The State Grid Corporation of China. The State Grid Corporation of China 2023 Social Responsibility Report, 2024.
URL http://www.sgcc.com.cn/u/cms/sgcc_main/other/202408/20240806163601853978745.pdf. [Online; accessed 2025-05-15].
[29] The U.S. Department of Energy. Smart Grid Grants | Department of Energy, 2024. URL https://www.energy.gov/gdo/smart-grid-grants. [Online; accessed 2025-05-15].
[30] TIGER AI Lab. MMLU-Pro Leaderboard, 2025. URL https://huggingface.co/spaces/TIGER-Lab/MMLU-Pro. [Online; accessed 2025-05-15].
[31] Qineng Wang, Zihao Wang, Ying Su, Hanghang Tong, and Yangqiu Song. Rethinking the bounds of LLM reasoning: Are multi-agent discussions the key? In 62nd Annual Meeting of the Association for Computational Linguistics, ACL 2024, pages 6106–6131. Association for Computational Linguistics (ACL), 2024.
[32] Sijia Wang and Lifu Huang. Debate as optimization: Adaptive conformal prediction and diverse retrieval for event extraction. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 16422–16435, 2024.
[33] Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, et al. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024.
[34] Yang Yu, Lei Liu, Xiangyi Xu, and Shaoqian Bai. Text detection methods, devices, computing equipment, and computer-readable storage media, CN108829780B edition, 2022.
[35] Daoze Zhang, Zhijian Bao, Sihang Du, Zhiyi Zhao, Kuangling Zhang, Dezheng Bao, and Yang Yang. Re2: A consistency-ensured dataset for full-stage peer review and multi-turn rebuttal discussions. arXiv preprint arXiv:2505.07920, 2025.
[36] Zhenhai Zhang and Xiongyong Sun. A method and
system for automatically detecting academic misconduct literature, CN101833579B edition, 2012.
[37] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623, 2023.

A APPENDIX

A.1 Related work

A.1.1 Textual Duplication Detection

Textual duplication detection is a core computational linguistics task for identifying content replication across documents, and it plays a vital role in protecting academic integrity and intellectual property. Early methods relied primarily on lexical-level analysis, such as n-gram overlap quantification [1] and dynamic programming algorithms like Smith-Waterman for local alignment. Notably, Yu et al. [34] introduced syntactic-level processing, including sentence segmentation, lexical decomposition, and TF-IDF differential computation, for similarity matrix construction, as implemented in the Wanfang duplication detection platform.

While these character-matching methods achieve high precision in verbatim detection, they lack semantic comprehension, making them susceptible to evasion through paraphrasing (e.g., synonym substitution and syntactic restructuring). Later advances adopt distributed semantic representations, employing topic modeling (e.g., Latent Dirichlet Allocation) and word embeddings (e.g., Word2Vec [20]) for document similarity. Zhang and Sun [36] proposed a hierarchical detection framework combining document-level keyword ranking with sentence-level synonymy detection, widely used in the CNKI duplication detection platform. Although these approaches capture surface-level semantic relationships, they struggle with complex semantic transformations.
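The n-gram overlap quantification mentioned above can be sketched in a few lines. This is an illustrative example, not the cited platforms' implementations; the sample strings and the Dice-style scoring are our own choices. It also shows why paraphrasing evades such lexical matching:

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Multiset of character n-grams (word n-grams work analogously)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def ngram_overlap(a, b, n=3):
    """Dice-style overlap of n-gram multisets, in [0, 1]."""
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    shared = sum((ga & gb).values())            # multiset intersection counts
    total = sum(ga.values()) + sum(gb.values())
    return 2 * shared / total if total else 0.0

original = "smart grid fault detection based on deep learning"
verbatim = "smart grid fault detection based on deep learning"
paraphrase = "power network anomaly identification via neural models"

print(ngram_overlap(original, verbatim))           # 1.0
print(ngram_overlap(original, paraphrase) < 0.5)   # True: paraphrasing evades lexical matching
```

Verbatim copies score 1.0, while a semantics-preserving paraphrase scores near zero, which is exactly the weakness that motivated the semantic methods described next.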
The rise of large language models (LLMs) has revolutionized reasoning tasks [11], yet their mechanisms inherently conflict with the non-strict partial order reality of project duplication detection.

A.1.2 Multi-Agent Debate Systems

Multi-Agent Debate (MAD) [16, 8] implements the "society of minds" framework through collaborative interactions among LLM agents. This approach addresses key limitations of single-model reasoning, such as confirmation bias, hallucinations, and logical inconsistencies, through iterative adversarial knowledge refinement. Empirical studies show that MAD improves reasoning via three mechanisms: (1) collective error correction, (2) perspective diversification, and (3) systematic reasoning reinforcement.

MAD has proven effective in diverse applications. In scientific discovery, Gottweis et al. [10] use tournament-style debates to generate and refine biomedical hypotheses. For legal judgment prediction, Chen et al. [5] combine MAD with reliability assessment to reduce reliance on large datasets. In event extraction, the Debate as Optimization (DAO) system [32] iteratively improves outputs without parameter tuning. However, MAD remains unexplored for duplication detection, a gap our work bridges.

Recent advancements in MAD have primarily focused on three key dimensions: (1) communication optimization, where Pham et al. [21] demonstrate the superiority of embedding-based interaction over natural language debate; (2) role specialization, with Chan et al. [3] establishing that heterogeneous agent personas significantly outperform homogeneous configurations; and (3) decision-making efficiency, where Liu et al. [19] introduce grouped debates to reduce computational overhead, while Kaesberg et al. [13] systematically evaluate voting versus consensus protocols across task types.
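The iterative debate loop that such MAD systems share can be sketched generically. This is a minimal illustration, not any cited system's implementation; the `debate` function, the agent personas, and the `call_llm` helper are hypothetical placeholders:

```python
def debate(question, agents, call_llm, rounds=3):
    """Generic multi-agent debate loop: each agent sees the shared transcript
    and may rebut or revise; a judge aggregates at the end.
    `call_llm(persona, prompt)` is a hypothetical LLM-call helper."""
    transcript = []
    for r in range(rounds):
        for persona in agents:
            prompt = (f"Question: {question}\n"
                      "Debate so far:\n" + "\n".join(transcript) +
                      f"\nRound {r + 1}: respond, rebut, or revise your answer.")
            transcript.append(f"{persona}: {call_llm(persona, prompt)}")
    judge_prompt = "Summarize the experts' consensus:\n" + "\n".join(transcript)
    return call_llm("judge", judge_prompt)
```

With a real backend, `call_llm` would wrap an API call, and the final judge step would parse a structured decision out of the summary.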
However, these methodological refinements have predominantly targeted single-knowledge question answering scenarios, leaving their applicability to complex, multi-faceted tasks like project duplication detection largely unexplored.

Despite its strengths, MAD faces criticism. Studies show it outperforms single-agent reasoning mainly in zero-shot settings [31], and debates may amplify biases when
agents share training data [9]. For example, Wang et al. [31] find that well-prompted single agents can match MAD with demonstrations, while Estornell and Liu [9] show that debates often converge to majority opinions, reinforcing misconceptions. These limitations underscore the need for careful role diversity and consensus design: challenges our work tackles via tailored agent roles and task-specific voting. Our empirical results in project duplication detection further confirm MAD's superiority over a single LLM.

A.2 Dataset details

The data used in this work consists of real projects from the State Grid Corporation of China. We anonymize sensitive information (e.g., applicant details) and randomly generate IDs, retaining only project titles and text content. More information is shown in Table 3.

Table 3: Dataset details of power scientific projects

Year    Number of projects   Average length (in tokens)
2022    223                  835.0404
2023    292                  997.1678
2024    318                  1004.3459
Total   833                  956.5054

A.3 Experiment settings details

Experiment settings details on task 1. For the methods based on word frequency, vector distance retrieval, and LLM-as-a-Judge, we directly calculate the scores of the projects under detection and the preliminary retrieval results and then select the top-5 results with the highest scores. For DeepSeek V3 and DeepSeek R1, we directly input the project under detection with all preliminary retrieval results as prompts to generate outputs.

Experiment settings details on task 2. For all LLM-as-a-Judge methods (including LLM-as-a-Judge MAX, LLM-as-a-Judge AVG, and PD³'s LLM-as-a-Judge-based feedback with or without conclusion), we perform three independent generations and use the average value as the final score. While we employ the average of three scores as the final evaluation metric, instances may occur where the LLM assigns identical scores to both Pu-A and Pu-B in the test set.
For fairness, our scoring protocol differs between evaluation settings in such cases: (1) under the "Origin Group" setting, these cases receive a score of 0; (2) under the "Weighted Group" setting, we assign the score corresponding to the less frequent label in the annotation (e.g., a 2:1 ratio would yield 0.33 points).

A.4 Experiment results details

Table 4: Additional experiment results on task 1. Match@K cells report "count | rate"; the K=1 column corresponds to Hit Rate@5.

Method                                    Precision@5   K=1            K=2            K=3            K=4           K=5
gte-7B as reranker + Task INST            0.3770        293 | 0.8852   210 | 0.6344   93 | 0.2810    26 | 0.0785   2 | 0.0060
gte-7B as reranker (w/o INST)             0.3897        298 | 0.9003   212 | 0.6405   105 | 0.3172   29 | 0.0876   1 | 0.0030
gte-1.5B + EN default INST                0.3692        290 | 0.8761   209 | 0.6314   92 | 0.2779    19 | 0.0574   1 | 0.0030
gte-1.5B + CN default INST                0.3722        295 | 0.8912   207 | 0.6254   94 | 0.2840    19 | 0.0574   1 | 0.0030
gte-1.5B with instruction (+ Task INST)   0.3782        294 | 0.8882   213 | 0.6435   96 | 0.2900    21 | 0.0636   2 | 0.0060
bge as reranker                           0.2435        260 | 0.7855   113 | 0.3413   25 | 0.0755    4 | 0.0121    1 | 0.0030
jina as reranker                          0.2798        275 | 0.8308   141 | 0.4260   42 | 0.1269    1 | 0.0030    1 | 0.0030

Experiment on task 1 results details. For vector distance-based methods, we
conduct additional experiments beyond those reported in the main text. Due to space limitations, Table 1 presents only the top-performing method from each category, while Table 4 in the appendix provides complete experimental results. Within each category, the bold entries indicate the methods selected for Table 1 based on superior performance.

Table 5: Hyperparameter analysis experiment results on the group size. Match@K cells report "count | rate".

Round-robin initial item count (M)   Precision@5   K=5           K=4           K=3            K=2            K=1
10                                   0.4230        6 | 0.0181    39 | 0.1178   119 | 0.3595   229 | 0.6918   307 | 0.9275
15                                   0.4344        4 | 0.0121    45 | 0.1360   132 | 0.3988   229 | 0.6918   309 | 0.9335
20 (ours)                            0.4423        10 | 0.0302   41 | 0.1239   133 | 0.4018   238 | 0.7190   310 | 0.9366
25                                   0.4290        3 | 0.0091    42 | 0.1269   129 | 0.3897   229 | 0.6918   307 | 0.9275
30                                   0.3964        3 | 0.0091    42 | 0.1269   129 | 0.3897   229 | 0.6918   307 | 0.9275

Table 6: Hyperparameter analysis experiment results on the number of debate rounds. Match@K cells report "count | rate".

Debate rounds   Precision@5   K=5          K=4           K=3           K=2           K=1
1               0.4667        2 | 0.0333   9 | 0.1500    28 | 0.4667   45 | 0.7500   56 | 0.9333
2               0.4767        2 | 0.0333   8 | 0.1333    29 | 0.4833   47 | 0.7833   57 | 0.9500
3               0.4933        1 | 0.0167   11 | 0.1833   33 | 0.5500   45 | 0.7500   58 | 0.9667
4               0.4633        1 | 0.0167   8 | 0.1333    30 | 0.5000   43 | 0.7161   57 | 0.9500

Table 7: Hyperparameter analysis experiment results on the number of debate agents. Match@K cells report "count | rate".

Agent count   Precision@5   K=5          K=4           K=3           K=2           K=1
2             0.4733        1 | 0.0167   12 | 0.2000   25 | 0.4167   48 | 0.8000   56 | 0.9333
3             0.4933        1 | 0.0167   11 | 0.1833   33 | 0.5500   45 | 0.7500   58 | 0.9667
4             0.4800        1 | 0.0167   10 | 0.1667   30 | 0.5000   46 | 0.7667   57 | 0.9500
5             0.4867        1 | 0.0167   10 | 0.1667   34 | 0.5667   45 | 0.7500   56 | 0.9333

Experiment on hyperparameters results details. Table 5, Table 6, and Table 7 respectively show the detailed experimental results of the analysis for the hyperparameters: the number of candidate items M from preliminary retrieval, the number of debate rounds, and the agent count.
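The metrics in Tables 4-7 can be reproduced from per-query hit counts, under the interpretation that Match@K counts queries whose top-5 list contains at least K relevant projects (this is consistent with the monotonically decreasing counts across K and with the reported Precision@5 values; K=1 then coincides with Hit Rate@5). A small sketch with hypothetical counts:

```python
def match_at_k(hit_counts, k):
    """Number of queries whose top-5 list contains at least k relevant projects."""
    return sum(hits >= k for hits in hit_counts)

def precision_at_5(hit_counts):
    """Average fraction of relevant projects among each query's top-5 results."""
    return sum(hit_counts) / (5 * len(hit_counts))

# Hypothetical per-query relevant-hit counts within the top 5 (4 queries).
hit_counts = [2, 1, 0, 5]
print(precision_at_5(hit_counts))                        # 0.4
print([match_at_k(hit_counts, k) for k in range(1, 6)])  # [3, 2, 1, 1, 1]
```

Note that summing Match@K over K = 1..5 recovers the total hit count, which is why Precision@5 equals the sum of the Match@K counts divided by 5N.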
A.5 Runtime Performance

This section evaluates the runtime performance of Review Dingdang, executed in parallel via Python's concurrent module using Volcengine's Model API [2]. Our results demonstrate an average execution time of 3 minutes per project duplication detection. As detailed in Table 8, token consumption averages 4.42 million tokens per project (4.36 million prompt tokens / 57.98 thousand completion tokens) during debates, while the feedback stage requires 35.36 thousand tokens per project (31.75 thousand prompt tokens / 3.61 thousand completion tokens). Based on Volcengine's Model API pricing [2], it costs around 1.34 USD per project. These metrics quantify the temporal and computational cost of the PD³ framework and the Review Dingdang platform.

Table 8: Running time and token consumption analysis (average token usage).

Process stage   Metric                                  Prompt        Completion   Total
Debate          Average per project                     4,358,515.9   57,978.6     4,416,494.5
Debate          Average per debate (per project / 15)   290,567.7     3,865.2      294,432.9
Feedback        Average per project                     31,753.2      3,607.2      35,360.4

A.6 Prompt Templates

This section provides several prompt templates used in PD³.

Prompt template for MAD round-robin retrieval – initial round

You are an expert in the field of power conducting project duplication detection named {expert_name}. Please select the five most relevant projects to the project under detection based on the following review criteria and project information. As an independent expert, you have no preconceived biases towards the research content of each project and focus solely on determining the best choice.

## Review Objective:
Based on predefined review criteria and discussion procedures, strictly discuss and determine five candidate projects that are most relevant to the project under detection.

## Review Criteria:
{review_criteria}

## Project Under Detection Information:
{project_under_detection_info}

## Candidate Relevant Reference Project Information:
{candidates_project_info}

Please select the five most relevant projects you believe and briefly explain the reasons for your selection in a complete sentence.

Figure 6: Prompt template used in the initial round of MAD-based round-robin retrieval.

Prompt template for MAD round-robin retrieval – debate round

You are an expert in the field of power conducting project duplication detection named {expert_name}. You are participating in a project duplication detection debate involving fellow experts, aiming to select the five most relevant candidate projects to the project under detection from several options. Currently, it is the {round_num}th round of debate. I will provide you with the review objectives, review criteria, relevant project information, and records of previous rounds of debate. Please make a statement based on the previous discussions.
You can: respond to the opinions of other experts, presenting new arguments to support your choices; adopt the opinions of other experts, modifying your previous choices and explaining the reasons; or question the choices and statements of other experts.

## Review Objectives:
Based on predefined review criteria and discussion procedures, strictly discuss and determine five candidate projects that are most relevant to the project under detection.

## Review Criteria:
{review_criteria}

## Project Under Detection Information:
{project_under_detection_info}

## Candidate Relevant Reference Project Information:
{candidates_project_info}

## Records of previous debate:
{debate_records}

Please make a brief statement in a complete sentence to express your views.

Figure 7: Prompt template used in the debate round of MAD-based round-robin retrieval.

Prompt template for MAD round-robin retrieval – senior expert

As a senior expert in the field of power for project duplication detection, you are responsible for organizing the expert debate to select the five most relevant projects among several candidate related projects. You will serve as a discussion reviewer in this debate, evaluating the experts' debate and determining the final five selected projects.

## Review Objective:
Based on predefined review criteria and discussion procedures, strictly discuss and determine five candidate projects that are most relevant to the project under detection.

## Review Criteria:
{review_criteria}

## Project Under Detection Information:
{project_under_detection_info}

## Candidate Relevant Reference Project Information:
{candidates_project_info}

## Records of debate:
{debate_records}

Please analyze the experts' consensus, and finally, output the list of project numbers in order of relevance after [RESULT].

Figure 8: Prompt template used to set the senior expert round in MAD-based round-robin retrieval.

Prompt template for LLM-as-judge-based feedback – duplication scoring

You are an expert in the field of power conducting project duplication detection. For a given project under detection, you are provided with the five most relevant historical projects from the provided database, as well as the review experts' conclusion on the relevant content of the reference projects and the project under detection. Your task is to score the degree of duplication of the project under review according to the review criteria.

## Review Criteria
- Scoring is on a 10-point scale: 1 is the lowest, indicating that all historical projects are basically unrelated to the project under review; 4-6 is in the middle, indicating that multiple projects from the historical projects have duplication with the project under review in some dimension; 10 is the highest, indicating that one or more reference projects are completely identical to the project under detection. Encourage scores with differentiation.
- When scoring, it is necessary to comprehensively consider the similarity of the candidate relevant projects in terms of research themes, core technologies, and application scenarios.
- It should be noted that if one of the five reference projects is highly relevant to the project under detection, a higher score should be given. Only when all five reference projects are not sufficiently relevant should a lower score be given.
- It should be noted that the five reference projects provided may not be highly relevant to the project under review.
## Project Under Detection Information:
{project_under_detection_info}

## Candidate Relevant Reference Project Information:
{candidates_project_info}

## Conclusion of Review Experts
{expert_conclusion}

You need to provide the analysis reasons first, and then give the score in the form of '[RESULT]score', where score is an integer from 1 to 10.

Figure 9: Prompt template used to generate the duplication score of LLM-as-a-Judge-based feedback.
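The templates above ask the model to emit its decision after a [RESULT] tag. A minimal, hypothetical parser for such outputs (not the paper's code; the clamping to the 1-10 range is our own defensive choice) might look like:

```python
import re

def parse_result_score(text):
    """Extract the integer score following the last '[RESULT]' tag.
    Returns None if no tag is found; clamps the score to the 1-10 scale
    the template prescribes (a defensive choice, not from the paper)."""
    matches = re.findall(r"\[RESULT\]\s*(\d+)", text)
    if not matches:
        return None
    return min(max(int(matches[-1]), 1), 10)

reply = "The projects overlap in core technology and scenario. [RESULT]7"
print(parse_result_score(reply))  # 7
```

Taking the last tag tolerates models that restate the instruction before answering; returning None lets the caller trigger a re-generation when the format is violated.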
arXiv:2505.17495v1 [cs.LG] 23 May 2025

ProxySPEX: Inference-Efficient Interpretability via Sparse Feature Interactions in LLMs

Landon Butler* (Department of EECS, UC Berkeley, landonb@berkeley.edu), Abhineet Agarwal* (Department of Statistics, UC Berkeley, aa3797@berkeley.edu), Justin Singh Kang* (Department of EECS, UC Berkeley, justin_kang@berkeley.edu), Yigit Efe Erginbas (Department of EECS, UC Berkeley, erginbas@berkeley.edu), Bin Yu (Departments of Statistics and EECS, UC Berkeley, binyu@berkeley.edu), Kannan Ramchandran (Department of EECS, UC Berkeley, kannanr@berkeley.edu)

Abstract

Large Language Models (LLMs) have achieved remarkable performance by capturing complex interactions between input features. To identify these interactions, most existing approaches require enumerating all possible combinations of features up to a given order, causing them to scale poorly with the number of inputs n. Recently, Kang et al. (2025) proposed SPEX, an information-theoretic approach that uses interaction sparsity to scale to n ≈ 10^3 features. SPEX greatly improves upon prior methods but requires tens of thousands of model inferences, which can be prohibitive for large models. In this paper, we observe that LLM feature interactions are often hierarchical: higher-order interactions are accompanied by their lower-order subsets, which enables more efficient discovery. To exploit this hierarchy, we propose ProxySPEX, an interaction attribution algorithm that first fits gradient boosted trees to masked LLM outputs and then extracts the important interactions. Experiments across four challenging high-dimensional datasets show that ProxySPEX more faithfully reconstructs LLM outputs by 20% over marginal attribution approaches while using 10× fewer inferences than SPEX. By accounting for interactions, ProxySPEX identifies features that influence model output over 20% more than those selected by marginal approaches. Further, we apply ProxySPEX to two interpretability tasks.
Data attribution, where we identify interactions among CIFAR-10 training samples that influence test predictions, and mechanistic interpretability, where we uncover interactions between attention heads, both within and across layers, on a question-answering task. ProxySPEX identifies interactions that enable more aggressive pruning of heads than marginal approaches.

1 Introduction

Large language models (LLMs) have achieved great success in natural language processing by capturing complex interactions among input features. Modeling interactions is not only crucial for language, but also in domains such as computational biology, drug discovery, and healthcare, which require reasoning over high-dimensional data. In high-stakes contexts, responsible decision-making based on model outputs requires interpretability. For example, in healthcare, a physician relying on LLM diagnostic assistance must be able to intelligibly explain their decision to a patient.

Post-hoc feature explanation methods such as SHAP [1] and LIME [2] focus on marginal attributions and do not explicitly capture the effect of interactions. To address this limitation, recent work has proposed interaction indices, such as Faith-Shap [3], that attribute all interactions up to a given order d by exhaustively enumerating them.

*Equal contribution. Order determined by coin flip. Preprint. Under review.

Figure 1: ProxySPEX requires ~10× fewer inferences to achieve equally faithful explanations as SPEX for a sentiment classification and an image-captioning task, using a BERT and a CLIP model respectively. LASSO faithfulness plateaus, indicating the limits of marginal approaches.

With n features, enumerating
O(n^d) interactions quickly becomes infeasible for even small n and d. Kang et al. [4] recently introduced SPEX, the first interaction attribution method capable of scaling up to n = 1000 features. SPEX scales with n by observing that LLM outputs are driven by a small number of interactions. It exploits this sparsity by utilizing a sparse Fourier transform to efficiently search for influential interactions without enumeration. For example, with n = 100 features, SPEX requires approximately 2×10^4 model inferences to learn order-5 interactions, a small fraction of all possible 100^5 interactions. Nonetheless, 2×10^4 inferences is prohibitively expensive for large models. Hence, the question naturally arises: Can we identify additional structural properties among interactions to improve inference-efficiency?

We show empirically that local (i.e., input-specific) LLM feature interactions are often hierarchical: for an order-d interaction, an LLM includes lower-order interactions involving subsets of those d features (see Figure 2). We use this to develop ProxySPEX, an interaction attribution algorithm that reduces the number of inferences compared to SPEX by 10× while achieving equally faithful explanations. ProxySPEX exploits this local hierarchical structure by first fitting gradient boosted trees (GBTs) as a proxy model to predict the output of LLMs on masked input sequences. Then, ProxySPEX extracts important interactions from the fitted GBTs [5].

Evaluation overview. We compare ProxySPEX to marginal feature attributions and SPEX across four high-dimensional datasets with hundreds of features. Results are summarized below:
1. Faithfulness. ProxySPEX learns more faithful representations of LLM outputs than marginal approaches (≈15% to 25% on average across datasets) as we vary the number of inferences. Figure 1 compares the explanation faithfulness of ProxySPEX to marginal attributions and SPEX.
2. Feature identification.
By accounting for interactions, ProxySPEX identifies influential features that impact model outputs more significantly than marginal approaches.
3. Case study 1: Data attribution. Data attribution is the problem of identifying training points responsible for a given test prediction. On CIFAR-10 [6], ProxySPEX identifies the interactions between training samples that most significantly impact classification performance.
4. Case study 2: Model component attribution. We use ProxySPEX to study interactions between attention heads, both within and across layers, on MMLU [7] for Llama-3.1-8B-Instruct [8]. We observe that intra-layer interactions become more significant for deeper layers. ProxySPEX identifies interactions that allow it to prune more heads than the LASSO.

2 Related work and applications

Feature and interaction attribution. SHAP [1] and LIME [2] are widely used for model-agnostic feature attribution. SHAP uses the game-theoretic concept of Shapley values [9] for feature attribution, while LIME fits a sparse linear model [10]. Cohen-Wang et al. [11] also consider fitting a sparse linear model for feature attribution. Chen et al. [12] use an information-theoretic approach for feature attributions. Other methods [13, 14] study model structure to derive feature attributions. Tsai et al. [3], Sundararajan et al. [15], and Bordt and von Luxburg [16] define extensions to Shapley values that consider interactions. Fumagalli et al. [17] provide a framework for computing several interaction attribution scores, but their approach does not scale past n ≈ 20 features, which prevents them from being applied to
modern ML problems that often consist of hundreds of features.

Fourier transforms and deep learning explainability. Several works theoretically study the spectral properties of transformers. Ren et al. [18] show transformers have sparse spectra, and Hahn and Rofin [19] and Abbe et al. [20] establish that they are low degree. Abbe et al. [21, 22] study the bias of networks learning interactions via a "staircase" property, i.e., using lower-order terms to learn high-order interactions. Sparsity and low-degree structure is also empirically studied in [23, 24]. Kang et al. [25] show that under sparsity in the Möbius basis [26], a representation closely related to Shapley values and the Fourier transform, interaction attributions can be computed efficiently. Kang et al. [4] use these insights to propose SPEX, the first robust interaction attribution algorithm to scale to the order of n ≈ 1000 features. Gorji et al. [5] apply sparse Fourier transforms [27-30] for computing Shapley values. They also provide an algorithm to extract the Fourier transform of tree-based models using a single forward pass.

Mechanistic Interpretability (MI). MI seeks to uncover the underlying mechanisms of neural networks and transformers [31] in order to move past treating these models as black boxes. ProxySPEX answers the question "what combinations of inputs matter?", which is a vital precursor and complement to MI investigations that subsequently address "how does the model compute based on those specific inputs?" Some closely related MI work attempts to recover circuits to explain underlying model behavior [32, 33]. Hsu et al. [34] use MI for interaction attribution. See Sharkey et al. [35] for a review of open problems and recent progress in MI.

3 ProxySPEX

In this section, we first empirically justify our premise that significant interactions affecting LLM output are hierarchical: influential high-order interactions imply important lower-order ones.
Next, we introduce ProxySPEX, which aims to identify feature interactions for a given input x while minimizing the number of expensive calls to an LLM.

3.1 Preliminaries

Value function. Let x be the input to the LLM, consisting of n features². For S ⊆ [n], where [n] = {1, ..., n}, denote x_S as the masked input where we retain the features indexed in S and replace all others with the [MASK] token. For example, in the sentence x = "The sequel truly elevated the original", if S = {1, 2, 5, 6}, then x_S = "The sequel [MASK] [MASK] the original". Masks can more generally be applied to any type of input, such as image patches in a vision-language model. For a masked input x_S and LLM f, let f(x_S) ∈ R denote the output of the LLM under masking pattern S. The value function f is problem dependent. For classification tasks, a common choice is the logit of the predicted class for the unmasked input, f(x). In generative tasks, f(x_S) can represent the perplexity of generating the original output for the unmasked input. Since we focus on providing input-specific explanations, we suppress notation on x and denote f(x_S) as f(S).

Fourier transform of value function. Let 2^[n] be the powerset of the index set. The value function f can be equivalently thought of as a set function f: 2^[n] → R. Every such function admits a Fourier transform F: 2^[n] → R of f, related
as follows:

Transform: F(T) = (1/2^n) Σ_{S⊆[n]} (−1)^{|S∩T|} f(S);   Inverse: f(S) = Σ_{T⊆[n]} (−1)^{|T∩S|} F(T).   (1)

The parameters F(T) are known as Fourier coefficients and capture the importance of an interaction of the features in a subset T. Equation (1) represents an orthonormal transform onto a parity (XOR) basis [36]. For the rest of the paper, we use the terms Fourier coefficient and interaction interchangeably. Further, we refer to the set of Fourier coefficients {(T, F(T)) : T ⊆ [n]} as the spectrum.

Interpretable approximation of value function. We aim to learn an interpretable approximate function f̂ that satisfies the following:

² Features refer to inputs at a given granularity, e.g., tokens in an LLM or image patches in a vision model.

Figure 2: We observe that LLM feature interactions are often hierarchical: higher-order interactions are accompanied by their lower-order subsets.

1. Faithful representation. To characterize how well the surrogate function f̂ approximates the true function, we define faithfulness [37]:

R² = 1 − ‖f̂ − f‖² / ‖f − f̄‖²,   where ‖f‖² = Σ_{S⊆[n]} f(S)² and f̄ = (1/2^n) Σ_{S⊆[n]} f(S).   (2)

Faithfulness measures how well f̂ predicts model output. High faithfulness implies accurate approximation of F(T) (this follows from the orthonormality of (1)).
2. Sparse representation. f̂ should be succinct. Previous works [4, 25, 38-40] have shown that a sparse and low-degree f̂ can achieve high R². That is, F(T) ≈ 0 for most T (sparsity), and |F(T)| is only large when |T| ≪ n (low degree).
3. Efficient computation. Without any additional assumptions on the spectrum, learning f is exponentially hard since there are 2^n possible subsets T. ProxySPEX relies on the sparse, low-degree Fourier transform along with the hierarchy property to reduce LLM inferences.

A faithful and sparse f̂ allows straightforward computation of all popular feature or interaction attribution scores defined in the literature, e.g., Shapley, Banzhaf, Influence Scores, Faith-Shapley.
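Equation (1) can be checked by brute force on a small toy value function. This is an illustration only (it enumerates all 2^n subsets, so it works only for tiny n), and the toy f below is our own:

```python
from itertools import chain, combinations

def powerset(n):
    """All subsets of {0, ..., n-1} as frozensets."""
    idx = range(n)
    return [frozenset(s) for s in chain.from_iterable(combinations(idx, r) for r in range(n + 1))]

def fourier(f, n):
    """Eq. (1), forward: F(T) = 2^-n * sum over S of (-1)^{|S ∩ T|} f(S)."""
    subsets = powerset(n)
    return {T: sum((-1) ** len(S & T) * f(S) for S in subsets) / 2 ** n for T in subsets}

def inverse(F, S, n):
    """Eq. (1), inverse: f(S) = sum over T of (-1)^{|T ∩ S|} F(T)."""
    return sum((-1) ** len(T & S) * F[T] for T in powerset(n))

n = 3
f = lambda S: len(S) + (2.0 if {0, 1} <= S else 0.0)  # toy value function of our choosing
F = fourier(f, n)
print(F[frozenset()])  # 2.0, the mean of f over all 8 subsets
```

The round trip `inverse(fourier(f, n), S, n) == f(S)` holds for every subset S, which is the orthonormality property the text invokes.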
Closed-form formulas for converting F to various attribution indices are provided in Appendix A.1.

3.2 Empirical evidence of spectral hierarchies

To quantify the degree of hierarchical structure in LLMs, we introduce the following definition, called the Direct Subset Rate (DSR),³ defined for any value function f and integer k:

DSR(f, k) = (1/k) Σ_{S∈F_k} (1/|S|) Σ_{i∈S} 1{S \ {i} ∈ F_k},   where F_k denotes the k largest Fourier coefficients of f.   (3)

For the top k coefficients (i.e., interactions), DSR measures the average fraction of Fourier coefficients F(S \ {i}) that exclude only one of the features. For example, an f with F_4 = {∅, {1}, {2}, {1,3}} would have a DSR of (1/4)(1 + 1 + 1 + 1/2) = 7/8. A high DSR implies that significant high-order interactions have corresponding significant lower-order Fourier coefficients. Figure 2 visualizes hierarchical interactions.

Next, we show that two LLM-based value functions have high DSR. We take 20 samples from a sentiment analysis task and an image captioning task [41]; see Section 4 for a detailed description and our choice of value function. We generate masks S and apply SPEX until our learned value function has faithfulness (R²) of more than 0.9. Figure 3 visualizes the DSR for various values of k, i.e., the number of top interactions. The DSR is consistently larger than 80%, indicating strong hierarchical structure. In Appendix B.2, we consider two additional metrics measuring hierarchical structure, and demonstrate that the top-k interactions are faithful.

Using GBTs to capture hierarchical interactions.
https://arxiv.org/abs/2505.17495v1
Tan et al. [42] proved that decision trees learn "staircase" functions, e.g., $f = x_1 + x_1x_2 + x_1x_2x_3$, effectively due to their greedy construction procedure. We empirically confirm this by comparing the performance of various proxy models on a synthetic hierarchical function (i.e., a sum of staircase functions resembling Figure 2) as well as the Sentiment dataset in Appendix Figure 12. Appendix B.4 details the simulation set-up. GBTs vastly outperform other proxy models, indicating their natural ability to identify hierarchical interactions with limited training data. Interestingly, GBTs outperform random forests as well. This is because random forests are ineffective at learning hierarchical functions [43], i.e., sums of staircases, while GBT-like algorithms disentangle sums effectively [44].

³For $S = \emptyset$, we set $\frac{0}{0} = 1$.

Figure 3: The top-$k$ interactions in both a sentiment analysis and image captioning task have high DSR, indicating strong hierarchical structure.
Figure 4: (1) PROXYSPEX masks subsets of words and queries the LLM using this masked input. (2) It then fits GBTs as a proxy model to learn the LLM's hierarchical interactions. (3) An interpretable sparse representation is extracted from the fitted GBT, which captures the influential interactions.

3.3 PROXYSPEX via Gradient Boosted Trees to fit hierarchies

The PROXYSPEX algorithm (see Figure 4):

Figure 5: Relative faithfulness as a function of Fourier sparsity. Only ≈200 coefficients are required to achieve equivalent faithfulness. Sparsity for sentiment is higher since inputs have larger $n$.

Step 1 - Sampling and querying. Given an LLM $f$ and input instance $x$ to explain, generate a dataset $\mathcal{D} = \{(S_i, f(S_i))\}_{i=1}^{\ell}$ for training the proxy. The inputs $S_i$ represent the masks of $x$. Each mask $S_i$ is sampled uniformly from the subsets of $[n]$. The labels $f(S_i)$ are obtained by querying the LLM.

Step 2 - Proxy training. Fit GBTs to $\mathcal{D}$ with 5-fold cross-validation (CV).

Step 3 - Fourier extraction. We use Gorji et al. [5] to extract the Fourier representation of the fitted GBTs in a single forward pass; see Appendix A.2. With $T$ trees of depth $d$ there are at most $O(T4^d)$ non-zero Fourier coefficients. To improve interpretability, we sparsify the extracted representation by keeping only the top $k$ Fourier coefficients. Fig. 5 shows that only ≈200 Fourier coefficients are needed to achieve equivalent
faithfulness for a sentiment classification and image captioning (MS-COCO) dataset. Additional results regarding the sparsity of Fourier spectra learned by GBTs are in Appendix B.3.

Figure 6: Comparison of faithfulness of different attribution methods with $\alpha \cdot n\log_2(n)$ training masks for different inference multipliers $\alpha \in \{2, 4, 6, 8\}$. While SPEX is only competitive with LASSO for large $\alpha$, the gap between PROXYSPEX and LASSO increases with $\alpha$.

Step 4 (Optional): Coefficient refinement via regression. Optionally, we regress the extracted, sparsified Fourier coefficients on the collected data $\mathcal{D}$ to improve the estimation. This step is included if it leads to lower CV error.

4 Results

Datasets and models

1. Sentiment is a classification task composed of the Large Movie Review Dataset [45], which consists of positive and negative IMDb movie reviews. We use words as input features and restrict to samples with $n \in [256, 512]$. We use the encoder-only fine-tuned DistilBERT model [46, 47], and the logit of the positive class as the value function.
2. HotpotQA [48] is a generative question-answering task over Wikipedia articles. Sentences are input features, and we restrict to samples with $n \in [64, 128]$. We use Llama-3.1-8B-Instruct, and the perplexity of the unmasked output as the value function.
3. Discrete Reasoning Over Paragraphs (DROP) [49] is a paragraph-level question-answering task. We use words as input features and restrict to samples with $n \in [256, 512]$. We use Llama-3-8B-Instruct and the perplexity of the unmasked output as the value function.
4. MS-COCO [41] contains images and corresponding text captions. Image patches and words are the input features, with $n \in [60, 85]$.
We use CLIP-ViT-B/32, a joint vision-language encoder, with the value function defined as the contrastive loss over all datapoints.

Baselines and hyperparameters. For marginal feature attributions, we use the LASSO. We use the same datasets as [4] and add MS-COCO for an additional modality. It was shown in [4] that popular marginal metrics such as SHAP are significantly less faithful than the LASSO, e.g., have $R^2 < 0$. We use the LASSO implementation from scikit-learn, and choose the $\ell_1$ regularization parameter via 5-fold CV. For interaction indices, we compare PROXYSPEX to SPEX. Due to the scale of $n$ in our experiments, we cannot compare against methods for computing interaction indices such as Faith-Shapley, Faith-Banzhaf, and Shapley-Taylor using SHAP-IQ [17] and SVARM-IQ [50], because they enumerate all possible interactions, making them computationally infeasible. For PROXYSPEX, the list of GBT hyper-parameters we tune over is in Appendix B.

Figure 7: By accounting for interactions, PROXYSPEX identifies more influential features across datasets than the LASSO.
Apart from the sentiment analysis task (top left), SPEX does not collect enough training masks to outperform LASSO.

4.1 Faithfulness

We compare attribution method faithfulness by varying the number of training masks. For each sample with $n$ features, we generate $\alpha \cdot n\log_2(n)$ masks, varying $\alpha \in \{2, 4, 6, 8\}$, to normalize difficulty across inputs of varying lengths (some differing by over 100 tokens). This $n\log(n)$-type scaling is heuristically guided by compressed sensing bounds [51]. These suggest the number of samples required grows with sparsity (assumed $\propto n$) and logarithmically with problem dimensionality (if the dimensionality for degree-$d$ interactions is $\approx n^d$, this yields a $\log(n^d) = d\log(n)$ factor). Together, these factors support an $n\log(n)$ scaling. While not directly applicable, these bounds offer a useful heuristic for how sampling complexity scales with $n$.

Figure 6 shows average faithfulness over 1,000 test masks per sample. PROXYSPEX outperforms LASSO with limited inferences and continues to improve where LASSO plateaus, indicating that it is learning influential interactions. While SPEX is often faster for the same number of masks, SPEX needs additional inference time to match $R^2$, making PROXYSPEX faster overall. For the smaller DistilBERT model under the sentiment analysis task, the wall-clock speedup is ∼3×, while with the bigger CLIP-ViT-B/32 model on MS-COCO we see a ∼5× speedup (see Appendix B.5).

4.2 Feature Identification

We measure the ability of methods to identify the top $r$ influential features that influence LLM output:

$$\Delta\,\text{LLM Output}(r) = \frac{|f([n]) - f(S^*)|}{|f([n])|}, \qquad S^* = \operatorname*{argmax}_{|S| = n - r} |\hat{f}([n]) - \hat{f}(S)|. \quad (4)$$

Solving Eq. 4 for an arbitrary $\hat{f}$ presents a challenging combinatorial optimization problem. However, PROXYSPEX and SPEX represent $\hat{f}$ as a sparse Fourier transform. This representation facilitates solving the optimization as a tractable linear integer program.
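For intuition, when $n$ is small, Eq. (4) can be solved by exhaustive search over all size-$(n-r)$ subsets. The following stdlib-only sketch uses a made-up toy spectrum; the actual method solves the integer program of Appendix A.3 instead of enumerating.

```python
from itertools import combinations

def f_hat(S, F):
    """Evaluate a sparse Fourier surrogate: f_hat(S) = sum_T (-1)^{|S∩T|} F(T)."""
    return sum((-1) ** len(S & T) * coef for T, coef in F.items())

def top_r_removal(F, n, r):
    """Brute-force Eq. (4): pick the r features whose removal changes the
    surrogate output the most, i.e., the best S* with |S*| = n - r."""
    full = frozenset(range(n))
    best = max((frozenset(c) for c in combinations(range(n), n - r)),
               key=lambda S: abs(f_hat(full, F) - f_hat(S, F)))
    return full - best  # the r removed features

# Toy sparse spectrum (made up for illustration): a strong pairwise
# interaction between features 0 and 1 dominates the linear terms.
F = {frozenset(): 1.0, frozenset({0}): -0.5,
     frozenset({0, 1}): 2.0, frozenset({3}): 0.1}
removed = top_r_removal(F, n=5, r=2)
# Removing feature 0 (plus one inert feature) flips both the linear term
# F({0}) and the large pair term F({0,1}): the largest change for r = 2.
```

A purely marginal method scoring features by their linear coefficients alone would not see that removing feature 0 while keeping feature 1 flips the large pair term, which is exactly the effect the interaction-aware search exploits.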
The sparsity of the extracted Fourier representation ensures that the time required to solve this program is negligible compared to sampling the LLM and fitting the GBTs. Full details of the construction of this program are given in Appendix A.3. Under LASSO, Eq. 4 is easily solved by selecting features by the size of their coefficients.

We measure the removal ability of different attribution methods when we collect $8n\log_2(n)$ training masks and plot the results in Figure 7. By accounting for interactions, PROXYSPEX identifies significantly more influential features than the LASSO. Apart from the sentiment analysis task, SPEX does not collect enough training masks to outperform the LASSO.

Figure 8: Synergistic interactions: data that are more valuable together than the sum of their parts and aid in classification. Redundant interactions: data that may contain similar information; their combined influence is less than the sum of the parts.

5 Case studies

We now present two case studies of PROXYSPEX for two different interpretability problems: data attribution [52] and model component attribution [53], a key problem in mechanistic interpretability. We first show how both of these tasks can be reformulated as feature attribution tasks; recent work has highlighted the connections between feature, data, and model component attribution [54].

5.1 Data Attribution
via Non-Linear Datamodels

Data attribution for classification is the problem of understanding how fitting a model $g_\theta$ on a subset $S$ of training samples affects the prediction of a test point $z$ of class $c$. This problem can be converted into our framework by defining an appropriate value function $f$:

$$f(S) \triangleq (\text{logit for } c \text{ on } z) - (\text{highest incorrect logit on } z), \quad \text{when } g_\theta \text{ is trained on } S. \quad (5)$$

The value function $f$ quantifies the impact of a subset $S$ on the classification of $z$. Sampling $f$ is very expensive since it involves training a new model $g_\theta$ for every subset $S$. As a result, most data attribution approaches do not consider the impact of interactions. Notably, Ilyas et al. [52] use LASSO to learn $f$ when training a ResNet model on the CIFAR-10 dataset [6]. As a case study, we apply PROXYSPEX to understand the impact of interactions between CIFAR-10 training samples.

Defining data interactions. Interactions between samples can be either redundant interactions or synergistic interactions. Redundant interactions arise when the influence of a subset $S$ is not additive. Redundancy typically occurs between highly correlated samples, e.g., semantic duplicates [55]. Synergistic interactions occur when a subset $S$ influences a prediction by shaping a decision boundary that no individual sample in $S$ could shape by itself. That is, the model needs the combined effect of training samples in $S$ to correctly classify $z$.

Results. We visualize interactions learned by PROXYSPEX in Figure 8 for randomly selected CIFAR-10 test points. Experimental details are in Appendix C.1. PROXYSPEX identifies highly similar training samples (redundancies) as well as synergistic interactions between samples of different classes. See Appendix C.1 for examples of other randomly selected test samples.

5.2 Model Component Attribution

We study the role of attention heads for a question-answering task using Llama-3.1-8B-Instruct and MMLU (high-school-us-history), which is a multiple-choice dataset.
We treat each attention head as a feature and aim to identify interactions among heads using PROXYSPEX. Let $L$ represent the number of layers in an LLM and let $\mathcal{L} \subseteq [L]$ represent a subset of the layers. Let $H_\mathcal{L}$ denote the set of attention heads within these layers. For a subset of heads $S \subseteq H_\mathcal{L}$, we set the output of the heads in $H_\mathcal{L} \setminus S$ to $0$ and denote the ablated LLM as $\mathrm{LLM}_S(\cdot)$. Define $f$ as:

$$f_\mathcal{L}(S) \triangleq \text{Accuracy of } \mathrm{LLM}_S \text{ on the training set of MMLU}. \quad (6)$$

Figure 9: Attention head pruning for Llama-3.1-8B-Instruct for MMLU (high-school-us-history). Top: test accuracy vs. percentage of heads retained, comparing PROXYSPEX, LASSO, and Best-of-$N$ across layer groups (1-3, 14-16, 30-32). Unpruned accuracy is shown by the dashed line. Bottom: the distribution of PROXYSPEX's learned spectral energy into linear effects, within-layer interactions, and across-layer interactions per layer group.

Pruning results. We use the LASSO and PROXYSPEX to identify the most important heads for various sparsity levels (i.e., the number of retained heads) across different sets of layers. We also compare to a Best-of-$N$ baseline, where we take the best of $N = 5000$ different randomly chosen $S$; further details are in Appendix C.2. We use the procedure detailed in Section 4.2 to identify heads to remove for both PROXYSPEX and LASSO. Test accuracies for each method are presented in Figure 9 at three different sparsity levels, and with three different layer ranges: initial (1-3), middle (14-16), and final (30-32). We observe that PROXYSPEX consistently outperforms both baselines, with a higher test accuracy on the pruned models identified using PROXYSPEX.

Characterizing interactions between attention heads. Analyzing the Fourier spectrum learned by PROXYSPEX offers insights into the nature of the internal mechanisms of the LLM. As shown in Figure 9 (bottom), the spectral energy attributed to interactions, particularly within-layer interactions, markedly increases in deeper layers of Llama-3.1-8B-Instruct. There are many works that look at the differing functional roles of attention heads across layers [56]. PROXYSPEX provides an exciting new quantitative approach to further investigate these phenomena.

6 Discussion

Conclusion. We introduce PROXYSPEX, an inference-efficient interaction attribution algorithm that efficiently scales with $n$ by leveraging an observed hierarchical structure among significant interactions in the Fourier spectrum of the model. Experiments across 4 high-dimensional datasets show that PROXYSPEX exploits hierarchical interactions via a GBT proxy model to reduce inferences by ∼10× over SPEX [4] while achieving equally faithful explanations. We demonstrate the importance of efficient interaction discovery by applying PROXYSPEX to data and model component attribution.

Limitations.
GBTs effectively capture hierarchical interactions but may not perform as well when interactions have a different structure. For example, simulations in Appendix B.4 empirically confirm that GBTs suffer in the case of sparse but non-hierarchical functions. More generally, in cases where the proxy GBT model is not faithful, the interactions identified by PROXYSPEX might not be representative of the model's reasoning. Another limitation is the degree of human interpretability that can be gleaned from computed interactions. While interactions can offer richer insights, they are more difficult to parse than marginal alternatives. Further improvements in visualization and post-processing of interactions are needed to fully harness the advances of PROXYSPEX.

Future work. Inference efficiency could be further improved by exploring alternative proxy models, additional Fourier spectral structures, or adaptive masking pattern designs. Integrating PROXYSPEX with internal model details, such as via hybrid approaches with MI or by studying its connection to sparsity in transformer attention [57], offers another promising avenue. Finally, further deepening and improving applications of PROXYSPEX in data attribution and mechanistic interpretability, as well as potentially exploring more complex value functions or larger-scale component interactions, remains interesting future work.

Acknowledgments and Disclosure of Funding

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No.
DGE-2146752. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. This work used NCSA DeltaAI at UIUC through allocation CIS250245 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by U.S. National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296. B.Y. gratefully acknowledges partial support from NSF grant DMS-2413265, NSF grant DMS-2209975, NSF grant 2023505 on Collaborative Research: Foundations of Data Science Institute (FODSI), the NSF and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning through awards DMS-2031883 and 814639, NSF grant MC2378 to the Institute for Artificial CyberThreat Intelligence and OperatioN (ACTION), and NIH grant R01GM152718.

References

[1] S. M. Lundberg and S.-I. Lee, “A unified approach to interpreting model predictions,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS'17. Red Hook, NY, USA: Curran Associates Inc., 2017, pp. 4768–4777.
[2] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135–1144.
[3] C.-P. Tsai, C.-K. Yeh, and P. Ravikumar, “Faith-shap: The faithful Shapley interaction index,” Journal of Machine Learning Research, vol. 24, no. 94, pp. 1–42, 2023.
[4] J. S. Kang, L. Butler, A. Agarwal, Y. E. Erginbas, R. Pedarsani, K. Ramchandran, and B. Yu, “SPEX: Scaling feature interaction explanations for LLMs,” arXiv preprint arXiv:2502.13870, 2025.
[5] A. Gorji, A. Amrollahi, and A. Krause, “Amortized SHAP values via sparse Fourier function approximation,” arXiv preprint arXiv:2410.06300, 2024.
[6] A. Krizhevsky, G.
Hinton et al., “Learning multiple layers of features from tiny images,” 2009, Technical Report.
[7] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt, “Measuring massive multitask language understanding,” in International Conference on Learning Representations, 2021.
[8] A. Grattafiori et al., “The Llama 3 herd of models,” 2024. [Online]. Available: https://arxiv.org/abs/2407.21783
[9] L. S. Shapley, A Value for N-Person Games. Santa Monica, CA: RAND Corporation, 1952.
[10] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society Series B: Statistical Methodology, vol. 58, no. 1, pp. 267–288, 1996.
[11] B. Cohen-Wang, H. Shah, K. Georgiev, and A. Madry, “Contextcite: Attributing model generation to context,” 2024. [Online]. Available: https://arxiv.org/abs/2409.00729
[12] J. Chen, L. Song, M. Wainwright, and M. Jordan, “Learning to explain: An information-theoretic perspective on model interpretation,” in International Conference on Machine Learning. PMLR, 2018, pp. 883–892.
[13] M. Sundararajan, A. Taly, and Q. Yan, “Axiomatic attribution for deep networks,” in International Conference on Machine Learning. PMLR, 2017, pp. 3319–3328.
[14] A. Binder, G. Montavon, S. Bach, K.-R. Müller, and W. Samek, “Layer-wise relevance propagation for neural networks with local renormalization layers,” 2016. [Online]. Available: https://arxiv.org/abs/1604.00825
[15] M. Sundararajan, K. Dhamdhere, and A. Agarwal, “The Shapley Taylor interaction index,” in International Conference on Machine Learning
, Jul 2020, pp. 9259–9268. [Online]. Available: https://proceedings.mlr.press/v119/sundararajan20a.html
[16] S. Bordt and U. von Luxburg, “From Shapley values to generalized additive models and back,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2023, pp. 709–745.
[17] F. Fumagalli, M. Muschalik, P. Kolpaczki, E. Hüllermeier, and B. E. Hammer, “SHAP-IQ: Unified approximation of any-order Shapley interactions,” in Conference on Neural Information Processing Systems, 2023. [Online]. Available: https://openreview.net/forum?id=IEMLNF4gK4
[18] Q. Ren, J. Gao, W. Shen, and Q. Zhang, “Where we have arrived in proving the emergence of sparse interaction primitives in DNNs,” in International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=3pWSL8My6B
[19] M. Hahn and M. Rofin, “Why are sensitive functions hard for transformers?” arXiv preprint arXiv:2402.09963, 2024.
[20] E. Abbe, S. Bengio, A. Lotfi, and K. Rizk, “Generalization on the unseen, logic reasoning and degree curriculum,” Journal of Machine Learning Research, vol. 25, no. 331, pp. 1–58, 2024.
[21] E. Abbe, E. Boix-Adsera, M. S. Brennan, G. Bresler, and D. Nagaraj, “The staircase property: How hierarchical structure can guide deep learning,” Advances in Neural Information Processing Systems, vol. 34, pp. 26989–27002, 2021.
[22] E. Abbe, E. B. Adsera, and T. Misiakiewicz, “The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks,” in Conference on Learning Theory. PMLR, 2022, pp. 4782–4887.
[23] D. Tsui and A. Aghazadeh, “On recovering higher-order interactions from protein language models,” in ICLR 2024 Workshop on Generative and Experimental Perspectives for Biomolecular Design, 2024. [Online]. Available: https://openreview.net/forum?id=WfA5oWpYT4
[24] J. Ren, Z. Zhou, Q. Chen, and Q.
Zhang, “Can we faithfully represent absence states to compute Shapley values on a DNN?” in International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=YV8tP7bW6Kt
[25] J. S. Kang, Y. E. Erginbas, L. Butler, R. Pedarsani, and K. Ramchandran, “Learning to understand: Identifying interactions via the Möbius transform,” arXiv preprint arXiv:2402.02631, 2024.
[26] J. C. Harsanyi, “A bargaining model for the cooperative n-person game,” Ph.D. dissertation, Department of Economics, Stanford University, Stanford, CA, USA, 1958.
[27] X. Li, J. K. Bradley, S. Pawar, and K. Ramchandran, “The SPRIGHT algorithm for robust sparse Hadamard Transforms,” in IEEE International Symposium on Information Theory (ISIT), 2014, pp. 1857–1861.
[28] A. Amrollahi, A. Zandieh, M. Kapralov, and A. Krause, “Efficiently learning Fourier sparse set functions,” Advances in Neural Information Processing Systems, vol. 32, 2019.
[29] Y. E. Erginbas, J. Kang, A. Aghazadeh, and K. Ramchandran, “Efficiently computing sparse Fourier transforms of q-ary functions,” in IEEE International Symposium on Information Theory (ISIT), 2023, pp. 513–518.
[30] R. Scheibler, S. Haghighatshoar, and M. Vetterli, “A fast Hadamard transform for signals with sublinear sparsity in the transform domain,” IEEE Transactions on Information Theory, vol. 61, no. 4, pp. 2115–2132, 2015.
[31] C. Olah, N. Cammarata, L. Schubert, G. Goh, M. Petrov, and S. Carter, “Zoom in: An introduction to circuits,” Distill, 2020, https://distill.pub/2020/circuits/zoom-in.
[32] A. Conmy, A. Mavor-Parker, A. Lynch, S. Heimersheim, and A. Garriga-Alonso, “Towards automated circuit discovery for
mechanistic interpretability,” Advances in Neural Information Processing Systems, vol. 36, pp. 16318–16352, 2023.
[33] A. Syed, C. Rager, and A. Conmy, “Attribution patching outperforms automated circuit discovery,” arXiv preprint arXiv:2310.10348, 2023.
[34] A. R. Hsu, G. Zhou, Y. Cherapanamjeri, Y. Huang, A. Y. Odisho, P. R. Carroll, and B. Yu, “Efficient automated circuit discovery in transformers using contextual decomposition,” 2024. [Online]. Available: https://arxiv.org/abs/2407.00886
[35] L. Sharkey, B. Chughtai, J. Batson, J. Lindsey, J. Wu, L. Bushnaq, N. Goldowsky-Dill, S. Heimersheim, A. Ortega, J. Bloom et al., “Open problems in mechanistic interpretability,” arXiv preprint arXiv:2501.16496, 2025.
[36] R. O’Donnell, Analysis of Boolean Functions. Cambridge University Press, 2014.
[37] Y. Zhang, H. He, Z. Tan, and Y. Yuan, “Trade-off between efficiency and consistency for removal-based explanations,” Advances in Neural Information Processing Systems, vol. 36, pp. 25627–25661, 2023.
[38] G. Valle-Perez, C. Q. Camargo, and A. A. Louis, “Deep learning generalizes because the parameter-function map is biased towards simple functions,” arXiv preprint arXiv:1805.08522, 2018.
[39] G. Yang and H. Salman, “A fine-grained spectral perspective on neural networks,” arXiv preprint arXiv:1907.10599, 2019.
[40] Q. Ren, Y. Xu, J. Zhang, Y. Xin, D. Liu, and Q. Zhang, “Towards the dynamics of a DNN learning symbolic interactions,” 2024. [Online]. Available: https://arxiv.org/pdf/2407.19198
[41] T.-Y. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in European Conference on Computer Vision, 2014, pp. 740–755.
[42] Y. S. Tan, J. M. Klusowski, and K. Balasubramanian, “Statistical-computational trade-offs for greedy recursive partitioning estimators,” arXiv preprint arXiv:2411.04394, 2024.
[43] Y. S. Tan, A. Agarwal, and B.
Yu, “A cautionary tale on fitting decision trees to data from additive models: generalization lower bounds,” 2021. [Online]. Available: https://arxiv.org/abs/2110.09626
[44] Y. S. Tan, C. Singh, K. Nasseri, A. Agarwal, J. Duncan, O. Ronen, M. Epland, A. Kornblith, and B. Yu, “Fast interpretable greedy-tree sums,” Proceedings of the National Academy of Sciences, vol. 122, no. 7, p. e2310151122, 2025.
[45] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, “Learning word vectors for sentiment analysis,” in Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Portland, Oregon, USA: Association for Computational Linguistics, June 2011, pp. 142–150. [Online]. Available: http://www.aclweb.org/anthology/P11-1015
[46] V. Sanh, L. Debut, J. Chaumond, and T. Wolf, “DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter,” ArXiv, vol. abs/1910.01108, 2019.
[47] K. Odabasi, “DistilBERT Finetuned Sentiment,” accessed January. [Online]. Available: https://huggingface.co/lyrisha/distilbert-base-finetuned-sentiment
[48] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning, “HotpotQA: A dataset for diverse, explainable multi-hop question answering,” arXiv preprint arXiv:1809.09600, 2018.
[49] D. Dua, Y. Wang, S. Dasigi, S. Singh, M. Gardner, and T. Kwiatkowski, “DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs,”
in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 2368–2378.
[50] P. Kolpaczki, M. Muschalik, F. Fumagalli, B. Hammer, and E. Hüllermeier, “SVARM-IQ: Efficient approximation of any-order Shapley interactions through stratification,” arXiv preprint arXiv:2401.13371, 2024.
[51] E. Candes and T. Tao, “Decoding by linear programming,” IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[52] A. Ilyas, S. M. Park, L. Engstrom, G. Leclerc, and A. Madry, “Datamodels: Understanding predictions with data and data with predictions,” in International Conference on Machine Learning. PMLR, 2022, pp. 9525–9587.
[53] H. Shah, A. Ilyas, and A. Mądry, “Decomposing and editing predictions by modeling model computation,” in Proceedings of the 41st International Conference on Machine Learning, 2024, pp. 44244–44292.
[54] S. Zhang, T. Han, U. Bhalla, and H. Lakkaraju, “Building bridges, not walls–advancing interpretability by unifying feature, data, and model component attribution,” arXiv preprint arXiv:2501.18887, 2025.
[55] A. K. M. Abbas, K. Tirumala, D. Simig, S. Ganguli, and A. S. Morcos, “SemDeDup: Data-efficient learning at web-scale through semantic deduplication,” in ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023.
[56] W. Shi, S. Li, T. Liang, M. Wan, G. Ma, X. Wang, and X. He, “Route sparse autoencoder to interpret large language models,” 2025. [Online]. Available: https://arxiv.org/abs/2503.08200
[57] B. Chen, T. Dao, E. Winsor, Z. Song, A. Rudra, and C. Ré, “Scatterbrain: Unifying sparse and low-rank attention,” Advances in Neural Information Processing Systems, vol. 34, pp. 17413–17426, 2021.
[58] M. Li and Q. Zhang, “Defining and quantifying and-or interactions for faithful and concise explanation of DNNs,” arXiv preprint arXiv:2304.13312, 2023.
[59] E. Kushilevitz and Y. Mansour, “Learning decision trees using the Fourier spectrum,” in Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, 1991, pp. 455–464.
[60] Y. Mansour, “Learning boolean functions via the fourier transform,” in Theoretical Advances in Neural Computation and Learning. Springer, 1994, pp. 391–424.

Appendices

A Method Details
  A.1 Fourier Conversions
  A.2 Fourier Extraction
  A.3 Sparse Fourier Optimization
B Experimental Details
  B.1 Implementation Details
    B.1.1 Hyper-parameters
    B.1.2 Sentiment Analysis
    B.1.3 HotpotQA
    B.1.4 DROP
    B.1.5 MS-COCO
  B.2 Measuring Spectral Hierarchies
  B.3 Sparsification
  B.4 Proxy Model Selection
  B.5 Practical Implications
C Case Study Details
  C.1 Data Attribution via Non-Linear Datamodels
  C.2 Model Component Attribution

A Method Details

A.1 Fourier Conversions

Banzhaf: $\psi_i = -2F(\{i\})$
Shapley: $\phi_i = -2\sum_{S \supseteq \{i\},\, |S|\ \text{odd}} \frac{F(S)}{|S|}$
Influence: $\xi_i = \sum_{S \ni i} F(S)^2$
Möbius: $I_M(T) = (-2)^{|T|} \sum_{S \supseteq T} F(S)$
Or: $I_O(\emptyset) = \sum_{S \subseteq [n]} F(S)$; for $T \neq \emptyset$, $I_O(T) = -(-2)^{|T|} \sum_{S \supseteq T} (-1)^{|S|} F(S)$
Banzhaf Interaction: $I_B(T) = (-2)^{|T|} F(T)$
Shapley Interaction: $I_S(T) = (-2)^{|T|} \sum_{S \supseteq T:\ (-1)^{|S|} = (-1)^{|T|}} \frac{F(S)}{|S| - |T| + 1}$
Shapley Taylor: $I^{ST}_\ell(T) = I_M(T)$ for $|T| < \ell$, and $I^{ST}_\ell(T) = \sum_{S \supseteq T} \binom{|S|}{\ell}^{-1} I_M(S)$ for $|T| = \ell$
Faith-Banzhaf: $I^{FB}_\ell(T) = (-2)^{|T|} \sum_{S \supseteq T,\ |S| \le \ell} F(S)$
Faith-Shapley: $I^{FS}_\ell(T) = I_M(T) + (-1)^{\ell - |T|} \frac{|T|}{\ell + |T|} \binom{\ell}{|T|} \sum_{S \supset T,\ |S| > \ell} F(S)\, \gamma(S, T, \ell)$, where $\gamma(S, T, \ell) = \sum_{T \subset R \subseteq S,\ |R| > \ell} \binom{|R|-1}{\ell} \binom{|R|+\ell-1}{\ell+|T|}^{-1} (-2)^{|R|}$

The relationship between Fourier coefficients and influence scores is provided in [36]. We derive the conversion between Fourier and the OR interaction index [58] in this
work. All remaining conversions are derived in Appendix C of [4].

A.2 Fourier Extraction

The exact Fourier transform of a decision tree can be computed recursively [5, 59, 60]. Because the Fourier transform is linear, the Fourier transform of each boosted tree can be computed separately and the results summed. Algorithm 1, provided by [5], traverses the nodes of each tree and merges the resulting Fourier mappings.

Algorithm 1 Fourier Extraction from Gradient Boosted Trees [5]
Require: Gradient boosted model M
Ensure: Fourier mapping F
1: Initialize F ← ∅
2: for tree T in M do
3:   F ← F.merge(ExtractTree(T.root))   ▷ Add mappings of the individual trees
4: end for
5: return F
6: procedure ExtractTree(node n)
7:   if n is a leaf then
8:     return {∅ ↦ n.value}
9:   else
10:    N_L ← ExtractTree(n.leftChild)
11:    N_R ← ExtractTree(n.rightChild)
12:    N ← ∅
13:    for S in (N_L.keys ∪ N_R.keys) do
14:      v_L ← N_L[S]   ▷ Mapping returns 0 if not contained
15:      v_R ← N_R[S]
16:      N[S] ← (v_L + v_R)/2
17:      N[S ∪ {n.featureSplit}] ← (v_L − v_R)/2
18:    end for
19:  end if
20:  return N
21: end procedure

A.3 Sparse Fourier Optimization

We assume $\hat{f}(S)$ is a sparse, low-degree function with support $\mathcal{K}$:
$$\hat{f}(S) = \sum_{T \in \mathcal{K}} (-1)^{|S \cap T|} \hat{F}(T)$$
Equivalently, the function can be represented (and efficiently converted) under the Möbius transform. Converting Fourier to Möbius (via Appendix A.1), letting $\mathcal{K}^+ = \{R \subseteq T : T \in \mathcal{K}\}$, and applying the inverse Möbius transform:
$$\hat{f}(S) = \sum_{R \in \mathcal{K}^+,\, R \subseteq S} \hat{I}_M(R)$$
The optimization problem can then be expressed as a polynomial over $\{0,1\}$. Let $x$ be a binary vector of length $n$ and $S = \{i \in [n] \mid x_i = 1\}$. We focus on the maximization problem (minimization follows analogously).
$$\max_{S \subseteq [n]} \hat{f}(S) = \max_{x \in \{0,1\}^n} \sum_{R \in \mathcal{K}^+} \hat{I}_M(R) \prod_{i \in R} x_i$$
To reduce the problem to a linear integer program, each monomial $\prod_{i \in R} x_i$ can be replaced with a decision variable $y_R$ and the following constraints:
$$\max_{y \in \{0,1\}^{|\mathcal{K}^+|}} \sum_{R \in \mathcal{K}^+} \hat{I}_M(R)\, y_R \quad (7)$$
$$\text{s.t.} \quad y_R \le y_Q \quad \forall Q \subset R,\ R \in \mathcal{K}^+ \quad (8)$$
$$\sum_{i \in R} y_{\{i\}} < |R| + y_R \quad \forall R \in \mathcal{K}^+ \quad (9)$$
The first constraint guarantees that whenever a monomial is activated (i.e.
$x_i = 1\ \forall i \in R$), all of its subsets are also activated. The second constraint ensures that if a monomial is deactivated (i.e., $\exists i \in R$ s.t. $x_i = 0$), at least one of its constituent terms ($y_{\{i\}}$) is likewise deactivated. After the optimization is solved, the solution can be read off from the univariate monomials $y_{\{i\}}$. These monomial terms can also be used to impose cardinality constraints on the solution, as was done in Section 4.2 and Section 5.2.

B Experimental Details

B.1 Implementation Details

B.1.1 Hyper-parameters

We performed 5-fold cross-validation over the following hyper-parameters for each of the models:

LASSO: L1 regularization parameter λ (100 values with λ_min/λ_max = 0.001)
SPEX: L1 regularization parameter λ (100 values with λ_min/λ_max = 0.001)
PROXYSPEX: max. tree depth [3, 5, None]; number of trees [500, 1000, 5000]; learning rate [0.01, 0.1]; L1 regularization parameter λ (100 values with λ_min/λ_max = 0.001)
Random Forest: max. tree depth [3, 5, None]; number of trees [100, 500, 1000, 5000]
Neural Network: hidden layer sizes [(n/4), (n/4, n/4), (n/4, n/4, n/4)]; learning rate [constant, adaptive]; learning rate init. [0.001, 0.01, 0.1]; number of trees [100, 500, 1000, 5000]

B.1.2 Sentiment Analysis

We used 20 movie reviews from the Large Movie Review Dataset [45] with n ∈ [256, 512] words. To measure
the sentiment of each movie review, we utilize a DistilBERT model [46] fine-tuned for sentiment analysis [47]. When masking, we replace the word with the [UNK] token. We construct a value function over the output logit associated with the positive class.

B.1.3 HotpotQA

We consider 50 examples from the HotpotQA [48] dataset with n ∈ [64, 128] sentences. We use a Llama-3.2-3B-Instruct model with 8-bit quantization. When masking, we replace with the [UNK] token, and we measure the log-perplexity of generating the original output. Since HotpotQA is a multi-document dataset, we use the following prompt format.

Title: {title_1}
Content: {document_1}
...
Title: {title_m}
Content: {document_m}
Query: {question}. Keep your answers as short as possible.

B.1.4 DROP

We consider 50 examples from the DROP [48] dataset with n ∈ [256, 512] words. We use the same model as HotpotQA and mask in a similar fashion. We use the following prompt format.

Context: {context}
Query: {question}. Keep your answers as short as possible.

B.1.5 MS-COCO

We utilize the Microsoft Common Objects in Context (MS-COCO) dataset [41], which comprises images paired with descriptive text captions. For our experiments, we treat image patches (there are 48 patches per image) and individual words from the captions as the input features. We used the first 50 examples from the test set, which had n (image patches + words) in the range [60, 85]. To model the relationship between images and text, we employed the CLIP-ViT-B/32 model, a vision-language encoder designed to learn joint representations of visual and textual data. In our PROXYSPEX framework, when masking input features (either image patches or words), we replace them with a generic placeholder token suitable for the CLIP architecture (e.g., a zeroed-out patch vector, or the [MASK] token for words). The value function f(S) for a given subset of features S was defined as the contrastive loss among the other image/caption pairs.
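The masking constructions in B.1.2–B.1.5 all follow the same pattern: pick a subset S of input features, replace everything outside S with a placeholder token, and score the masked input. A minimal sketch of that pattern, with a toy word-counting scorer standing in for the actual DistilBERT/CLIP models (the scorer and names here are illustrative, not the paper's code):

```python
MASK = "[UNK]"  # placeholder token; the CLIP setup uses a zeroed patch / [MASK] instead

def apply_mask(features, keep):
    """Replace every feature outside the kept subset S with the mask token."""
    return [tok if i in keep else MASK for i, tok in enumerate(features)]

def value_function(features, keep, scorer):
    """f(S): model score of the input with only the features in S left intact."""
    return scorer(apply_mask(features, keep))

# Toy scorer standing in for the fine-tuned model's positive-class logit:
# it simply counts sentiment-bearing words that survived masking.
def toy_scorer(tokens):
    positive = {"great", "wonderful"}
    return float(sum(t in positive for t in tokens))

review = ["a", "great", "and", "wonderful", "film"]
f_full = value_function(review, keep=set(range(len(review))), scorer=toy_scorer)
f_masked = value_function(review, keep={0, 2, 4}, scorer=toy_scorer)
print(f_full, f_masked)  # 2.0 0.0
```

Evaluating f on many such subsets S is exactly the sampling step whose cost PROXYSPEX aims to reduce.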
By measuring the change in this contrastive loss upon masking different feature subsets, we can attribute importance to individual features and their interactions in the context of joint image-text understanding.

B.2 Measuring Spectral Hierarchies

To quantify the hierarchical structure observed in the Fourier spectra of the LLMs under study, we introduce and analyze two key metrics: the Staircase Rate (SCR) and the Strong Hierarchy Rate (SHR). These metrics are computed over $\mathcal{F}_k$, the set of the k largest (in magnitude) Fourier coefficients.

The Staircase Rate is defined as
$$\mathrm{SCR}(f, k) = \frac{1}{k} \sum_{S \in \mathcal{F}_k} \mathbf{1}\left\{ \exists (e_1, \ldots, e_{|S|}) \in \mathrm{Perm}(S) \text{ s.t. } \forall j \in \{1, \ldots, |S|\}: \bigcup_{l=1}^{j} \{e_l\} \in \mathcal{F}_k \right\}, \quad (10)$$
where $\mathcal{F}_k$ denotes the k largest Fourier coefficients of f, and $\mathrm{Perm}(S)$ is the set of all ordered sequences of the elements in S.

The SCR measures the proportion of top-k Fourier coefficients F(S) for which there exists an ordering of the constituent elements $(e_1, \ldots, e_{|S|})$ such that all initial subsets (i.e., $\{e_1\}, \{e_1, e_2\}, \ldots, S$ itself) are also among the top-k coefficients. A high SCR indicates that significant high-order interactions are built up from significant lower-order interactions in a
step-wise or "staircase" manner.

The Strong Hierarchy Rate is defined as
$$\mathrm{SHR}(f, k) = \frac{1}{k} \sum_{S \in \mathcal{F}_k} \mathbf{1}\{\forall S' \subseteq S:\ S' \in \mathcal{F}_k\}, \quad (11)$$
where $\mathcal{F}_k$ denotes the k largest Fourier coefficients of f.

The SHR is a stricter measure, quantifying the proportion of top-k coefficients F(S) for which all subsets of S (not just initial subsets, as in the SCR) are also present in $\mathcal{F}_k$. A high SHR suggests a very robust hierarchical structure, in which the significance of an interaction implies the significance of all its underlying components.

Figure 10 visualizes these rates alongside faithfulness (R²) for the Sentiment Analysis and MS-COCO datasets. These empirical results demonstrate that LLM feature interactions exhibit significant hierarchical structure. The high SCR and SHR scores support the core motivation for PROXYSPEX: that important interactions are often built upon their lower-order subsets, a structure that Gradient Boosted Trees (GBTs) are well-suited to capture and exploit.

B.3 Sparsification

Sparsification is crucial for enhancing the interpretability of the explanations generated by PROXYSPEX. By retaining only the top k Fourier coefficients, we can achieve a more concise and understandable representation of the model's behavior without significantly compromising the faithfulness of the explanation. As demonstrated in Figure 5, a relatively small number of Fourier coefficients (approximately 200) is often sufficient to achieve faithfulness comparable to using a much larger set of coefficients for tasks like sentiment classification and image captioning (MS-COCO).
Figure 10: (top row) We run SPEX until R² > 0.9 and report the faithfulness when the spectrum is truncated to keep just the top k coefficients, for a range of k. We include results from Sentiment, n ∈ [256, 512], and MS-COCO, n ∈ [60, 85]. In both cases faithfulness steadily increases as we increase k. (middle row) We report the SCR (10) for the same top-k Fourier-truncated functions. In all cases, the SCR is nearly 100%. (bottom row) We also report the SHR (11), the strongest of the metrics we consider. Even though the SHR decreases somewhat as k grows, it remains strongly in favor of the hierarchy hypothesis.

Further results in Figure 11 illustrate the relationship between relative faithfulness and Fourier sparsity for both the Sentiment and MS-COCO datasets across different inference multipliers (α). These plots show that faithfulness generally increases with k, plateauing after a certain number of coefficients, reinforcing the idea that a sparse representation can effectively capture the essential dynamics of the LLM's decision-making process.

B.4 Proxy Model Selection

The choice of GBTs as the proxy model within PROXYSPEX is motivated by their inherent ability to identify and learn hierarchical interactions from limited training data. This is a critical characteristic, as LLM feature interactions often exhibit a hierarchical structure in which higher-order interactions are built upon their lower-order subsets. As
indicated in the main text, GBTs have been shown to vastly outperform other proxy models, including random forests, in particular because random forests are less effective at learning hierarchical functions. GBT-like algorithms, on the other hand, are adept at disentangling sums of these hierarchical components.

Figure 12 provides a comparative view of proxy model performance. Figures 12a and 12b illustrate the faithfulness (R²) of different proxy models (LASSO, Random Forest, Neural Network, and GBTs) on a synthetic dataset with a complete hierarchy (defined below) and on the Sentiment Analysis dataset, respectively, across various inference parameters (α). These results empirically support the superiority of GBTs in capturing these complex interaction structures. It is also important to acknowledge limitations: GBTs may not perform as well when interactions possess a different, non-hierarchical sparse structure, as empirically confirmed by simulations such as the Synthetic-Peak example (which lacks hierarchical structure) shown in Figure 12c.

Figure 11: We plot faithfulness (R²) as a function of Fourier sparsity. Only ≈ 200 coefficients are required to achieve equivalent faithfulness.

Synthetic Peak: $f_{SP}(S) = \sum_{T \in \mathcal{P}} (-1)^{|S \cap T|} F(T)$, where $\mathcal{P}$ is a set of 10 uniformly sampled sets of cardinality 5 and $F(T) \sim \mathrm{Uniform}(-1, 1)$ for $T \in \mathcal{P}$.
Synthetic Complete Hierarchy: $f_{SCH}(S) = \sum_{R \in \mathcal{H}} (-1)^{|S \cap R|} F(R)$, where $\mathcal{H} = \{R \subseteq T \mid T \in \mathcal{P}\}$ and $F(R) \sim \mathrm{Uniform}(-1, 1)$ for $R \in \mathcal{H}$.

B.5 Practical Implications

The practical implications of PROXYSPEX are significant, primarily revolving around its inference efficiency and the resulting speedups in generating faithful explanations for LLMs.
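The Synthetic-Peak and Synthetic-Complete-Hierarchy value functions defined in B.4 above can be sketched as follows. The support sizes, cardinalities, and Uniform(−1, 1) coefficients follow the definitions in the text; the seed and the choice n = 20 are arbitrary, purely illustrative assumptions:

```python
import itertools
import random

def sample_peak_support(n, num_sets=10, card=5, seed=0):
    """P: uniformly sampled interaction sets of fixed cardinality (Synthetic Peak)."""
    rng = random.Random(seed)
    P = set()
    while len(P) < num_sets:
        P.add(frozenset(rng.sample(range(n), card)))
    return P

def hierarchy_closure(P):
    """H = {R subset-of T : T in P}: close the support downward (Complete Hierarchy)."""
    H = set()
    for T in P:
        for r in range(len(T) + 1):
            H.update(frozenset(R) for R in itertools.combinations(T, r))
    return H

def make_value_function(support, seed=1):
    """f(S) = sum over T in support of (-1)^{|S & T|} * F(T), F(T) ~ Uniform(-1, 1)."""
    rng = random.Random(seed)
    F = {T: rng.uniform(-1.0, 1.0) for T in support}
    return lambda S: sum((-1) ** len(S & T) * c for T, c in F.items())

n = 20
P = sample_peak_support(n)
H = hierarchy_closure(P)
f_sp = make_value_function(P)   # non-hierarchical peak: only 10 high-order terms
f_sch = make_value_function(H)  # complete hierarchy: every subset also carries weight
```

The only structural difference between the two functions is the downward closure of the support, which is what makes f_SCH easy and f_SP hard for a GBT proxy.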
A major challenge with existing interaction attribution methods, such as SPEX, is the substantial number of model inferences required, which can be computationally expensive and time-consuming for large models. PROXYSPEX addresses this by leveraging a GBT proxy model, which dramatically reduces the number of inferences needed while maintaining or even improving explanation faithfulness.

Figure 13 presents the practical benefits in terms of wall-clock time for achieving different levels of faithfulness (R²) on the Sentiment Analysis (Figure 13a) and MS-COCO (Figure 13b) datasets. These plots clearly demonstrate the speedups achieved by PROXYSPEX. For example, in the sentiment analysis task using the smaller DistilBERT model, PROXYSPEX offers a speedup of approximately 3x, while for the larger CLIP-ViT-B/32 model with MS-COCO, the speedup is around 5x when compared to methods that require more extensive sampling. This increased efficiency makes PROXYSPEX a more viable tool for interpreting complex LLMs in real-world scenarios where computational resources and time are often constrained.

Figure 12: Comparison of proxy model faithfulness in capturing function structures. (a) Faithfulness of LASSO, Random Forest, Neural Network, and GBTs on a synthetic dataset with a complete hierarchical structure, across varying inference parameters (α). (b) Faithfulness of the same proxy models on the Sentiment
Analysis dataset across varying α. (c) Faithfulness on a synthetic dataset with a sparse, non-hierarchical peak function, across varying α, illustrating a limitation of GBTs for non-hierarchical structures.

Figure 13: Wall-clock time demonstrating PROXYSPEX's efficiency. Comparison of the wall-clock time (seconds) required to achieve different levels of faithfulness (R²) for PROXYSPEX, showing the breakdown of inference time and attribution computation time. (a) Results on the Sentiment Analysis dataset with the DistilBERT model (panel annotations: 3.7×, 2.7×, and 2.7× speed-ups). (b) Results on the MS-COCO dataset with the CLIP-ViT-B/32 model (panel annotations: 6.5×, 6.9×, and 5.3× speed-ups), highlighting the speedups achieved by PROXYSPEX.

C Case Study Details

C.1 Data Attribution via Non-Linear Datamodels

The training masks and margin outputs were provided by [52], corresponding to their subsampling rate of 50% (i.e., half the training images were used to fit each model). See [52] for the hyper-parameters selected. With n = 50,000 training samples, 300,000 training masks (model retrainings) were provided. This corresponds to α ≈ 0.38, which underscores the inference efficiency of PROXYSPEX in identifying strong interactions. Utilizing these masks and margins, we randomly selected 60 test images (6 from each class) for analysis with PROXYSPEX. Below, in Figure 14 and Figure 15, we present the strongest second-order interactions for the first thirty of these selected test images. Figure 8 visualizes the six test images exhibiting the most significant third-order interactions identified through this analysis. After fitting PROXYSPEX, we convert the Fourier interactions to Möbius using Appendix A.1.
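The conversion used here is the identity from Appendix A.1: $I_M(T) = (-2)^{|T|} \sum_{S \supseteq T} F(S)$. A small self-contained sketch over a sparse Fourier dictionary (an illustration of the identity, not the paper's implementation):

```python
from itertools import combinations

def fourier_to_mobius(F):
    """I_M(T) = (-2)^{|T|} * sum_{S superset-of T} F(S), computed for every T
    in the downward closure of the Fourier support."""
    closure = set()
    for S in F:
        for r in range(len(S) + 1):
            closure.update(frozenset(R) for R in combinations(S, r))
    return {T: (-2) ** len(T) * sum(c for S, c in F.items() if T <= S)
            for T in closure}

def eval_fourier(F, S):
    """f(S) in the Fourier basis: sum_T F(T) * (-1)^{|S & T|}."""
    return sum(c * (-1) ** len(S & T) for T, c in F.items())

def eval_mobius(IM, S):
    """f(S) in the Möbius basis: sum of I_M(R) over all R subset-of S."""
    return sum(c for R, c in IM.items() if R <= S)

F = {frozenset(): 1.0, frozenset({0}): 0.5, frozenset({0, 1}): -0.25}
IM = fourier_to_mobius(F)
print(IM[frozenset({0, 1})])  # (-2)^2 * (-0.25) = -1.0
```

Both representations evaluate identically on every subset, which is what makes the post-hoc conversion safe after fitting.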
Since target and non-target images affect the test margin in opposite directions, we partition the interaction space into the following categories:

• Target-class interactions T: interactions composed exclusively of training images that share the same label as the held-out test image.
• Non-target-class interactions T^c: interactions where at least one training image in the set has a label different from that of the held-out test image.

Synergistic Interactions: The top synergistic interaction R* of order r is defined as:
$$S^* = \operatorname*{argmax}_{S \in \mathcal{T},\, |S| = r} I_M(S), \quad T^* = \operatorname*{argmin}_{T \in \mathcal{T}^c,\, |T| = r} I_M(T), \quad R^* = \begin{cases} S^* & \text{if } |I_M(S^*)| \ge |I_M(T^*)| \\ T^* & \text{otherwise} \end{cases} \quad (12)$$

Visually, as presented in Figure 14 for r = 2, the interactions R* identified by this rule often involve training images that appear to work together to reinforce or clarify the classification of the held-out image, frequently by contributing complementary features or attributes. It is important to acknowledge that this definition is a heuristic and does not perfectly isolate synergy; for example, the first frog image contains redundant bird images due to strong higher-order interactions involving these bird images.

Redundant Interactions: The top redundant interaction R* of order r is defined as:
$$S^* = \operatorname*{argmin}_{S \in \mathcal{T},\, |S| = r} I_M(S), \quad T^* = \operatorname*{argmax}_{T \in \mathcal{T}^c,\, |T| = r} I_M(T), \quad R^* = \begin{cases} S^* & \text{if } |I_M(S^*)| \ge |I_M(T^*)| \\ T^* & \text{otherwise} \end{cases} \quad (13)$$

Figure 15 demonstrates that this definition identifies redundant training images that are similar to the held-out image.
Figure 14: For 30 random held-out images, their corresponding top second-order synergistic interaction (green box).

Figure 15: For 30 random held-out images, their corresponding top second-order redundant interaction (red box).

C.2 Model Component Attribution

We study the influence of specific model components on task performance using a controlled ablation methodology. Our experiments are conducted on Llama-3.1-8B-Instruct, evaluated on the high-school-us-history subset of the MMLU dataset, a benchmark comprising multiple-choice questions. MMLU includes 231 questions in the high-school-us-history subset. To perform pruning and then evaluate the ablated models, we split this data into two sets: a training split D_train consisting of the first 120 questions and a test split D_test with the remaining questions. We use accuracy as the evaluation metric, computed as the proportion of correctly answered multiple-choice questions on a given data split. For an L-layer LLM, we let [L] denote the set of layers and let H_ℓ denote the set of attention heads in layer ℓ ∈ [L].
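The per-layer head ablation used in this subsection zeroes the outputs of ablated heads and rescales the retained ones by |H_ℓ|/|S ∩ H_ℓ|. A NumPy sketch of that step, with illustrative shapes and names (the actual hook into the Llama forward pass is not shown):

```python
import numpy as np

def ablate_heads(head_outputs, keep):
    """Zero out ablated attention heads, then rescale the retained ones by
    |H_l| / |S & H_l| so the concatenated latent keeps its expected scale."""
    num_heads = head_outputs.shape[0]
    kept = sorted(keep)
    if not kept:
        raise ValueError("each layer must retain at least one head")
    out = np.zeros_like(head_outputs)
    out[kept] = head_outputs[kept] * (num_heads / len(kept))
    return out.reshape(-1)  # concatenate head outputs into one latent vector

rng = np.random.default_rng(0)
layer = rng.normal(size=(32, 128))                 # 32 heads, head dim 128 (illustrative)
latent = ablate_heads(layer, keep=set(range(16)))  # retain half the heads: scale 2.0
```

The rescaled latent is then fed to the layer's feed-forward network in place of the original concatenation.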
For each experiment, we focus on a particular group of layers $\mathcal{L} \subseteq [L]$ within the model and denote the corresponding set of attention heads as $\mathcal{H}_{\mathcal{L}} = \bigcup_{\ell \in \mathcal{L}} H_\ell$. The Llama-3.1-8B-Instruct model consists of L = 32 layers, each with 32 attention heads. At each layer ℓ of the LLM, the outputs of the attention heads are concatenated into a latent representation, which is then passed to the feed-forward network of layer ℓ. To study the contribution of specific heads, we define an ablated model LLM_S for any subset $S \subseteq \mathcal{H}_{\mathcal{L}}$. In LLM_S, the outputs of the heads in $\mathcal{H}_{\mathcal{L}} \setminus S$ are set to zero before the concatenation step. After concatenation, we apply a rescaling factor to the resulting latent vector at each layer ℓ ∈ L, equal to the inverse of the proportion of retained heads in that layer, i.e., $\frac{|H_\ell|}{|S \cap H_\ell|}$. This modified latent representation is then passed to the feed-forward network as usual. We define f_L as
$$f_{\mathcal{L}}(S) \triangleq \text{Accuracy of } \mathrm{LLM}_S \text{ on } D_{\mathrm{train}}, \quad (14)$$
and we interpret f_L(S) as a proxy for the functional contribution of head subset S to model performance, enabling quantitative analyses of attribution and interaction effects among attention heads.

Pruning. We perform pruning experiments across three different layer groups L: initial layers (L = {1, 2, 3}), middle layers (L = {14, 15, 16}), and final layers