Can reasoning models comprehend mathematical problems in Chinese ancient texts? An empirical study based on data from Suanjing Shishu

Chang Liu1, Dongbo Wang1, Liu Liu1, Zhixiao Zhao1
1 College of Information Management, Nanjing Agricultural University, Nanjing, China

Abstract: This study addresses the challenges in intelligent processing of Chinese ancient mathematical classics by constructing Guji_MATH, a benchmark for evaluating classical texts based on Suanjing Shishu (《算经十书》). It systematically assesses the mathematical problem-solving capabilities of mainstream reasoning models under the unique linguistic constraints of classical Chinese. Through machine-assisted annotation and manual verification, 538 mathematical problems were extracted from 8 canonical texts, forming a structured dataset centered on the "Question-Answer-Solution" framework, supplemented by problem types and difficulty levels. Dual evaluation modes, closed-book (autonomous problem-solving) and open-book (reproducing classical solution methods), were designed to evaluate the performance of six reasoning models on ancient Chinese mathematical problems. Results indicate that reasoning models can partially comprehend and solve these problems, yet their overall performance remains inferior to benchmarks on modern mathematical tasks. Enhancing models' classical Chinese comprehension and cultural knowledge should be prioritized for optimization. This study provides methodological support for mining mathematical knowledge from ancient texts and disseminating traditional culture, while offering new perspectives for evaluating the cross-linguistic and cross-cultural capabilities of reasoning models.

Keywords: Chinese mathematical classics, artificial intelligence, natural language processing, large-scale reasoning models, ancient mathematics

0 Introduction

As one of the earliest civilizations to develop mathematics, China accumulated a vast repository of mathematical knowledge over its long history. From the emergence of numerical symbols and the heavenly stems and earthly branches system in ancient times, to the birth of mature mathematical frameworks in the Qin and Han dynasties, followed by the flourishing of mathematical theories during the Song and Yuan periods, and the integration of Eastern and Western mathematical practices in the Ming and Qing dynasties, ancient Chinese mathematics consistently exhibited a unique interplay of practical application and theoretical exploration. Effectively leveraging this critical cultural heritage of ancient mathematical texts is essential for understanding China's historical mathematical wisdom and fostering international exchange of classical mathematical ideas. However, due to the inherent complexity of classical Chinese and the distinct computational methods employed in ancient mathematics, modern scholars struggle to interpret these texts, risking the loss of traditional calculation logic and techniques. In recent years, rapidly advancing large language models (LLMs) and LLM-based reasoning models have demonstrated exceptional performance in tasks requiring complex reasoning, such as mathematical computation and code generation. These advancements offer new momentum for the intelligent processing of ancient texts.
Yet, existing research on AI-driven classical text analysis predominantly focuses on literary and historical documents, with limited attention to solving mathematically oriented problems expressed in classical Chinese, an interdisciplinary domain integrating the humanities and STEM. Furthermore, current benchmarks for evaluating mathematical reasoning in LLMs are almost exclusively built on English texts, leaving a gap in understanding how linguistic characteristics influence model performance. To address these limitations, this study pioneers the use of mathematical problems from Suanjing Shishu (《算经十书》), the most renowned collection of Chinese mathematical works, to construct a reasoning model evaluation benchmark.
We systematically evaluate and analyze the performance of mainstream reasoning models on ancient mathematical tasks. Through empirical investigation, this study aims to address three key questions:

Q1: How can the "Question-Answer-Solution" structure in Chinese mathematical classics be utilized to formulate problems that enable reasoning models to think, respond, and verify solutions?
Q2: How can the Guji_MATH benchmark assess the accuracy and efficiency of general-purpose reasoning models in solving ancient mathematical problems?
Q3: How can an analytical framework be constructed to identify mathematical problem types that reasoning models excel at and quantitatively diagnose error causes?

Using machine-assisted annotation, we developed the Guji_MATH benchmark comprising 538 mathematical problems. We deployed mainstream reasoning models to solve these problems and analyzed the experimental results. By systematically addressing these questions, this study reveals the potential and limitations of large-scale reasoning models in tackling ancient mathematical challenges, a domain bridging the humanities and technical disciplines. It provides a methodological foundation for AI-driven knowledge mining from classical mathematical texts and introduces a novel perspective for evaluating mathematical reasoning capabilities: assessing models through problems described in non-dominant languages. This approach also offers insights into how linguistic features influence mathematical reasoning. The dataset, code, and experimental results generated in this study have been open-sourced (https://github.com/Xunzi-LLM-of-Chinese-classics/Guji_Math).

1 Related works

1.1 Large language model and Reasoning model

Large language models (LLMs) refer to language models based on deep neural network architectures, possessing billions or more learnable parameters, and trained on massive text corpora (W. X. Zhao et al., 2025). Early exploratory LLMs such as T5 (Raffel et al., 2020) and GPT-3 (Brown et al., 2020) were built upon the Transformer architecture (Vaswani et al., 2017). During this period, standardized training paradigms had not yet been established, and researchers predominantly relied on empirical scaling laws (Kaplan et al., 2020; Hoffmann et al., 2022) to progressively increase model parameters and training dataset sizes. Through the combination of pre-training, fine-tuning, and prompt learning (Liu et al., 2023), these models acquired language processing capabilities and world knowledge, enabling them to understand user queries and generate appropriate responses. However, LLMs from this era still struggled with solving complex problems. In late 2022, OpenAI released ChatGPT (ChatGPT, 2022), which demonstrated substantially enhanced linguistic capabilities and knowledge utilization compared to previous models, marking a milestone in LLM development. While generating significant societal impact, the construction, capability evaluation, and domain applications of LLMs gradually attracted widespread research attention. Generally, ChatGPT-style LLMs established stable and standardized training processes, dividing LLM development into three key stages:

(1) Pre-training: Training models on massive unannotated text data.
(2) Instruction tuning (J. Wei, Bosma, et al., 2022): Enhancing models' ability to properly respond to user instructions through supervised training on human-annotated instruction-response pairs.
(3) Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022): Further optimizing supervised fine-tuned (SFT) models using reinforcement learning algorithms with preference data to improve response safety, usefulness, and robustness.
This three-stage optimization framework significantly enhanced LLMs' question-answering performance. Numerous institutions subsequently adopted this approach to develop and release their own LLMs, including Llama (Touvron et al., 2023), Qwen (Bai et al., 2023), and GLM (GLM et al., 2024), providing high-quality linguistic infrastructure for open-source communities. Through post-training, retrieval-augmented generation (RAG) (P. Zhao et al., 2024), and advanced AI agent (Xi et al., 2025) development, LLMs can now deeply integrate with both general and vertical industry scenarios to solve problems beyond their native capabilities.

Traditional large language models have achieved success in many fields, yet they still exhibit significant limitations in scenarios requiring complex logical reasoning and multi-step problem-solving. To address these challenges, researchers have begun exploring specialized training approaches using mathematical and code datasets to enhance step-by-step problem-solving capabilities. The chain-of-thought (CoT) method (J. Wei, Wang, et al., 2022) has emerged as a key technique to extend models' reasoning functions by forcing them to decompose complex tasks into intermediate steps for gradual deduction. While this approach effectively improves LLMs' performance on complex reasoning tasks (Sprague et al., 2024), the shallow and linear nature of short reasoning chains limits models' ability to explore alternative solution paths and makes them susceptible to error propagation. Consequently, models employing short reasoning chains still struggle to simulate sophisticated human cognitive processes and achieve expert-level performance in testing scenarios. Researchers are now investigating tree-structured reasoning processes (both explicit and implicit) to encourage models to explore more beneficial solution paths during reasoning while reflecting on previous steps to optimize final outcomes. Representative achievements in this direction include OpenAI's O1 model ("Learning to Reason with LLMs", n.d.) and DeepSeek's R1 model (DeepSeek-AI et al., 2025), which demonstrate that long-chain reasoning models can achieve human-expert-level performance across various challenging benchmarks. Recent studies (Chen et al., 2025) have conducted in-depth analyses of long-chain reasoning phenomena in LLMs, investigating aspects such as the emergence of extended reasoning chains, test-time reasoning extensions, and overthinking patterns. These investigations aim to refine existing reasoning models through theoretical insights and evaluate their performance across general and domain-specific evaluation benchmarks.

1.2 Benchmark for Evaluating the Mathematical Capabilities of Large Language Models

The ability to accurately and efficiently understand and solve mathematical problems is a critical metric for assessing the reasoning capabilities of large language models and reasoning models. Current mathematical evaluation benchmarks primarily consist of two types of tasks: mathematical problem-solving and theorem proving (Ahn et al., 2024). Within mathematical problem-solving tasks, subcategories include arithmetic problems, mathematical word problems, geometry problems, and math problems in visual-language contexts.
Typically, these benchmarks are constructed by manually or automatically annotating existing math problems, including their questions, reasoning steps, and answers. Large models are then prompted to generate responses, from which answers are extracted and evaluated for accuracy.
Existing benchmarks for evaluating unimodal large language models and reasoning models predominantly focus on mathematical word problems, where problems are described in natural language rather than symbolic notation, requiring models to comprehend the problem's context and provide solutions. Notable examples include MATH (Hendrycks et al., 2021), which compiles 12,500 math problems from U.S. high school competitions, each accompanied by complete solution steps, final answers, and difficulty ratings. GSM8K (Cobbe et al., 2021) collects 8.5K elementary-level math problems created by human writers, with each problem requiring 2-8 steps involving basic arithmetic operations. PRM800K divides the MATH dataset into a 12K-problem training set and a 500-problem test set. The training set includes 75K generated solutions and 800K step-level annotations, forming the basis for training process- and result-based reward models. The remaining 500 test problems are known as the MATH-500 benchmark (HuggingFaceH4/MATH-500 · Datasets at Hugging Face, 2025), widely used recently to evaluate reasoning models. The AIME benchmark (HuggingFaceH4/Aime_2024 · Datasets at Hugging Face, 2025) aggregates annual problems from the American Invitational Mathematics Examination. Despite its limited problem count, its high difficulty and timeliness have garnered significant institutional attention, establishing it as a key benchmark for assessing reasoning capabilities.

Among Chinese mathematical benchmarks, Cmath (T. Wei et al., 2023) compiles 1,700 math problems from primary school exercises and exams, each annotated with correct answers to evaluate models' understanding of Chinese math problems. TAL-SCQ5K (Math-Eval/TAL-SCQ5K · Datasets at Hugging Face, n.d.) gathers 5K Chinese-English math problems from competitions across grade levels, translated into English to train and evaluate large language models. GAOKAO-Bench (Zhang et al., 2024) compiles questions from China's national college entrance examination (Gaokao), covering both subjective and objective math problems. As China's most standardized, comprehensive, and widely recognized exam, Gaokao-based benchmarks effectively assess models' logical reasoning abilities.

Overall, existing mathematical evaluation benchmarks exhibit the following notable characteristics: (1) Most benchmarks prioritize modern math problems, drawing from English-language standardized tests or competitions. Their phrasing and problem-solving logic reflect contemporary mathematical education systems. (2) While Chinese benchmarks have made progress, they predominantly cover conventional problems from basic education. They lack systematic coverage of ancient math problems expressed in classical Chinese and traditional solution methods. (3) Research on cross-linguistic mathematical reasoning remains insufficient, particularly regarding how non-Latin scripts (e.g., classical Chinese) affect model performance. This oversight risks cultural bias in current evaluation frameworks, limiting their ability to fully capture models' math comprehension and application across diverse cultural and communicative contexts.

1.3 The Ten Mathematical Classics and the Compilation of Ancient Chinese Mathematical Texts

Throughout their long history of struggle against nature and social practice, the ancient Chinese people accumulated rich mathematical experience and established a unique, practice-oriented mathematical system.
However, China's historical decline in the modern era limited academic exchanges between East and West, leading to widespread neglect of ancient Chinese mathematical achievements. Starting in the 1950s, the renowned sinologist Joseph Needham systematically compiled Science and Civilisation in China (Needham, J., 1959) based on extensive research, introducing a series of ancient Chinese scientific and technological achievements, including mathematics, to the West.
This work overturned prevailing misconceptions about China's mathematical heritage. In Volume 3 of the series, Needham cataloged mathematical literature and accomplishments from prehistoric times to the Qing Dynasty, dividing Chinese mathematical history into three periods: from antiquity to the Three Kingdoms period, the Three Kingdoms to the early Song Dynasty, and the Song-Yuan-Ming eras, corresponding respectively to the origin, development, and maturation of the Chinese mathematical system. The Suanjing Shishu texts used in this study originate from the first two periods. Compiled and annotated by Tang Dynasty scholars, these texts became textbooks for the mathematics department of the National Imperial Academy, representing the most significant works reflecting ancient Chinese mathematical thought.

Among these, The Nine Chapters on the Mathematical Art (《九章算术》), the earliest mathematical text, compiled over 2,000 years ago, stands as the cornerstone of the Chinese mathematical system. Its problems are structured in a "Question-Answer-Method" format, covering nine categories: Fangtian (land area calculations), Sumi (commodity trade calculations), Cuifen (allocation algorithms), Shaoguang (square root extraction), Shanggong (construction-related problems), Junshu (taxation calculations), Yingbuzu (the excess-deficit method), Fangcheng (linear equations), and Gougu (Pythagorean theorem applications). The text pioneered global advancements in fractional systems, negative numbers, linear equation solutions, and the Pythagorean theorem. Contemporaneously, The Zhou Bi Suan Jing (《周髀算经》) integrated astronomical calculations with mathematics, providing the first proof of the Pythagorean theorem. The Sea Island Mathematical Classic (《海岛算经》), originally a continuation of the Pythagorean chapter in The Nine Chapters by the Three Kingdoms mathematician Liu Hui, describes nine problems on measuring heights and depths using repeated observations and difference calculations with gnomon tools. Texts like Sunzi Suanjing (《孙子算经》), Zhang Qiujian Suanjing (《张邱建算经》), Xiahou Yang Suanjing (《夏侯阳算经》), and Wucao Suanjing (《五曹算经》) emerged during the Wei-Jin and Southern and Northern Dynasties, documenting cutting-edge mathematical techniques applied to area, volume, and quantity calculations. Wujing Suanshu (《五经算术》), closely tied to Confucian studies, provided annotations on mathematical problems in classical texts, offering value for Confucian scholarship. The final text, Jigu Suanjing (《缉古算经》), written in the Tang Dynasty, addressed complex problems in volume calculation, Pythagorean geometry, cubic equations, and biquadratic equations. These solutions were among the most advanced globally at the time. Zhuishu (《缀术》) and Shushu Jiyi (《数术记遗》) are unique within the collection: the former, authored by the renowned mathematician Zu Chongzhi, contained profound mathematical theories but has since been lost. The latter records 14 ancient Chinese calculation methods and tools, 13 of which are now extinct. Modern scholars have attempted to reconstruct these lost techniques. Since Zhuishu is irretrievable, this study adopts the 1963 edition edited by Qian Baocong, substituting Shushu Jiyi for Zhuishu to align with the standard Ten Mathematical Classics.
In recent decades, historians of mathematics have established a modern interpretive framework for ancient Chinese mathematics through textual collation, annotation, and cross-cultural comparative studies. For example, Qian Baocong (Qian, B. C., 1963) and Guo Shuchun (Guo, S. C. & Liu, D., 1998) systematically annotated variants in different editions of the Suanjing Shishu, revealing the original forms of high-quality ancient texts to the public.
In the era of rapid information technology and AI development, researchers have explored the digitization and intelligent processing of classical texts. For instance, the 2024 academic symposium in Inner Mongolia (Yang et al., 2024) discussed innovative digital humanities approaches to organizing and promoting mathematical classics, employing technologies like image recognition, knowledge graph construction, and LLM-based agents to automatically structure knowledge based on the unique features of ancient mathematical texts, achieving promising results. This study similarly leverages state-of-the-art AI technologies, but from an evaluation perspective, to explore reasoning models' comprehension of ancient Chinese mathematical texts.

2 Methods

The research framework of this study is illustrated in Figure 1.

Figure 1. Research Framework

The framework encompasses three sequential phases: dataset construction, reasoning model testing, and result analysis. (1) Dataset construction phase: High-quality texts from Suanjing Shishu are first collected. The Xunzi series of models are employed to annotate punctuation marks, enabling the extraction of all problem descriptions, standard answers, and solution steps. Following existing protocols for mathematical benchmark construction, the extracted content is organized into a "Question-Answer-Solution" triplet structure. Subsequently, manual review is conducted to correct errors and supplement missing conditions, while a hybrid approach combining machine and human effort assigns difficulty levels and mathematical methods to each problem. The final processed dataset is saved in JSON format for the evaluation of reasoning models. (2) Reasoning model testing phase: Chinese closed-source reasoning models are accessed via API, while open-source reasoning models are deployed offline. Two types of prompts are used to guide the models in answering ancient mathematical problems. (3) Result analysis phase: An LLM is utilized as a judge by inputting the models' predictions and ground-truth answers to compute accuracy metrics. This enables an analysis of how different reasoning models perform on mathematical problems, with further exploration of which types of problems from classical texts each model is better suited to handle, based on problem difficulty and category. Finally, manual annotation and quantitative analysis of incorrectly answered problems are conducted to identify factors hindering the models' ability to solve ancient mathematical problems. Through these steps, we constructed the classical Chinese mathematical evaluation benchmark Guji_MATH, which provides a comprehensive understanding of how reasoning models perform on mathematical problems and explores the feasibility of applying such models to interpret ancient mathematical texts. All prompts used to invoke large language models for the data processing described in this section are included in the appendix of this article.

2.1 Data Construction

2.1.1 Data collection and preprocessing

This study first collected the complete text of Suanjing Shishu as the corpus source. To obtain high-quality data, we downloaded the contents of the ten mathematical classics from the official website of the Chinese Text Project (Ctext) (Sturgeon, n.d.).
On this platform, The Nine Chapters on the Mathematical Art, Sunzi Suanjing, The Sea Island Mathematical Classic, and Zhou Bi Suan Jing had already undergone manual collation and punctuation annotation, making their texts directly usable. The remaining six texts lacked syntactic-level punctuation markers. Figure 2 displays the content of Xiahouyang Suanjing from the Ctext platform:

Figure 2. Xiahouyang Suanjing on Ctext

As shown in Figure 2, the texts of these six remaining mathematical classics originate from the digitized version of the Siku Quanshu database. After OCR (optical character recognition), the resulting text is segmented and displayed on the website frontend. Since punctuation-free text hinders a model's ability to understand and segment meaningful structures, this study employed the Xunzi series dialogue models to add appropriate punctuation. The Xunzi series large language models, developed and open-sourced by a team from Nanjing Agricultural University (Xunzi-LLM-of-Chinese-Classics/XunziALLM, 2023/2025), are specialized for the intelligent processing of classical texts and demonstrate strong performance in tasks such as text translation, punctuation annotation, and entity labeling. In this study, we used the recommended punctuation task prompt from the Xunzi model in combination with unpunctuated text, treating each paragraph as a processing unit to generate punctuation results.

2.1.2 Extraction and Structuring of Mathematical Problems from Classical Texts

The typical structure of mathematical problems in Suanjing Shishu follows a "Question-Answer-Solution" ("问—答—术") format, where problem descriptions, standard answers, and corresponding solution methods appear adjacently and consecutively in the text. While this structure provides a natural foundation for automated extraction, simply extracting adjacent triplets of text often leads to incomplete problem conditions, reducing the accuracy of model responses. For example, in the Sumi chapter of The Nine Chapters on the Mathematical Art, the author initially outlines grain conversion ratios, but subsequent problems and answers do not restate this critical premise. Additionally, problems within the same section may share implicit assumptions mentioned only in the first question. For instance, the fourth and fifth problems in the "Weight Measurement" section of Xiahouyang Suanjing share the same premise: during smelting, every jin ("斤", Chinese pound) of yellow iron is reduced by three liang ("两", Chinese ounces). However, the fifth problem's description omits this premise, merely referencing "according to the previously mentioned consumption rate." Extracting such problems without considering shared or implicit premises inevitably results in missing conditions.

Figure 3. Utilizing Default Premise Conditions in Mathematical Problems from Classical Texts

Figure 3 illustrates two scenarios of implicit and shared premise conditions in classical mathematical texts. In The Nine Chapters on the Mathematical Art, a problem involves converting millet into hemp and calculating the resulting quantity of hemp. However, the original text only includes the problem description and answer section, omitting the previously mentioned grain exchange ratio. In Xiahouyang Suanjing, two adjacent problems describe the conversion of yellow iron into steel during metallurgy. The premise condition appears in the fourth problem: when smelting yellow iron into steel, each jin (Chinese pound) is reduced by three liang (Chinese ounces).
Yet, the fifth problem's description omits this premise, merely referencing "according to the previously mentioned consumption rate." If such problems with default conditions are automatically extracted without contextual analysis, critical information will inevitably be lost. To address this issue, we first employed the Qwen2.5 model (Qwen et al., 2025) to extract all "Question-Answer-Solution" ("问—答—术") triplet texts from the full corpus. After saving the results in JSON format and conducting manual verification, human annotators analyzed each text in conjunction with the full context to identify hidden or shared premises. These premises were then appended as new JSON attributes to the corresponding mathematical problem triplets, ensuring that every problem theoretically contained sufficient information for resolution by humans or machines. Through this process, we extracted 538 mathematical problems and their answers from the ten classics, including 511 problems with solution steps and 115 problems requiring explicitly noted premises. Table 1 summarizes the number of extracted problems, solution methods, and premise conditions across the ten texts.

Table 1 Statistical Results of Math Problem Extraction

Classic Name | Extracted Problems | Proportion (%) | Solution Methods | Premise Conditions
The Nine Chapters on the Mathematical Art | 199 | 36.99% | 172 | 22
Zhou Bi Suan Jing | 0 | 0.00% | 0 | 0
The Sea Island Mathematical Classic | 9 | 1.67% | 9 | 0
Zhang Qiujian Suanjing | 82 | 15.24% | 82 | 14
Xiahouyang Suanjing | 80 | 14.87% | 80 | 67
Wujing Suanshu | 19 | 3.53% | 19 | 0
Jigu Suanjing | 20 | 3.72% | 20 | 0
Wucao Suanjing | 66 | 12.27% | 66 | 0
Sunzi Suanjing | 63 | 11.71% | 63 | 8
Shushu Jiyi | 0 | 0.00% | 0 | 0

The extraction process and statistical results indicate that the Suanjing Shishu do not uniformly contain mathematical problems, nor do all problems include corresponding solution methods. Since The Zhou Bi Suan Jing and Shushu Jiyi originally only present introductions to mathematical methods and proofs of mathematical theories, rather than fixed-answer arithmetic or word problems, we excluded their contents from the dataset. Among the remaining eight classics, a significant number of mathematical problems were extractable; however, 27 problems in the original texts lacked explicit solution steps. This was primarily because their solution methods closely resembled those of adjacent problems, and the authors omitted repetitive explanations. Premise-dependent problems are concentrated in the Sumi chapter of The Nine Chapters on the Mathematical Art and in Xiahouyang Suanjing, with most premises being implicit and primarily related to unit conversion explanations.

2.1.3 Problem Type and Difficulty Level Annotation

To systematically analyze the performance differences of reasoning models across mathematical problems of varying complexity, this study combined characteristics of the ancient Chinese mathematical system with modern mathematical classification standards. A closed-source Qwen reasoning model with the largest parameter count was invoked to structurally annotate the 538 extracted mathematical problems along two dimensions: difficulty level and problem type.

2.1.3.1 Question type Annotation

Given the static nature of the dataset and the finite scope of mathematical theories and methods in ancient Chinese mathematical texts, these can be exhaustively enumerated. Joseph Needham's Science and Civilisation in China previously classified the mathematical techniques and methods in ancient Chinese texts and provided examples corresponding to each category.
This study referenced Needham's classification framework, categorizing the mathematical methods reflected in Suanjing Shishu into 15 types. Using a reasoning model, we instructed it to output all mathematical methods involved in each problem, resulting in the classification shown in Table 2:

Table 2 Mathematical Method Classification System and Results

Mathematical Method Type | Number of Problems | Proportion (%) | Typical Example
Basic Arithmetic Operations | 121 | 14.32% | Land consolidation problems in The Nine Chapters on the Mathematical Art
Ratio and Fraction Calculations | 262 | 31.01% | Engineering problems in Zhang Qiujian Suanjing
Area Calculation | 96 | 11.36% | Land area calculation problems in The Nine Chapters on the Mathematical Art
Volume Calculation | 82 | 9.70% | Granary volume calculation problems in Sunzi Suanjing
Pythagorean Theorem | 39 | 4.62% | Right triangle problems in The Nine Chapters on the Mathematical Art
Basic Geometry | 80 | 9.47% | Double-difference distance measurement in The Sea Island Mathematical Classic
Basic Algebra | 21 | 2.49% | Cup-washing problem in Sunzi Suanjing
Elementary Number Theory | 16 | 1.89% | Unknown quantity problem in Sunzi Suanjing
Linear Equations or Systems | 67 | 7.93% | Chickens-and-rabbits problem in Sunzi Suanjing
Quadratic Equations | 22 | 2.60% | Square city problem in Jigu Suanjing
Cubic Equations | 13 | 1.54% | Dimensions of a granary in Jigu Suanjing
Higher-Degree Equations | 1 | 0.12% | Rectangular and circular storage depth calculation in Jigu Suanjing
Linear Programming | 1 | 0.12% | Hundred coins chicken problem in Zhang Qiujian Suanjing
Square Root Extraction | 22 | 2.60% | Area-to-volume conversion in The Nine Chapters on the Mathematical Art
Arithmetic or Geometric Sequences | 2 | 0.24% | Money distribution problem in Zhang Qiujian Suanjing

Due to the multifaceted nature of some problems, the total number of mathematical method labels reached 845 (i.e., an average of 1.57 methods per problem). This highlights a characteristic of ancient Chinese mathematical problems: the integration of multiple methodologies. For example, the "Hundred Coins Buy Chickens" problem (百钱买鸡) in Zhang Qiujian Suanjing involves algebraic equations, optimization theory (linear programming), and integer constraints (number theory), while cubic equation problems in Jigu Suanjing often accompany volume calculations. Overall, problems in Suanjing Shishu are predominantly elementary mathematics, with the required knowledge fully covered in Chinese secondary education. Among them, problems requiring basic arithmetic, ratio/fraction calculations, area/volume computations, foundational geometry, the Pythagorean theorem, or linear equations dominate.
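To make this annotation step concrete, the following is a minimal sketch of the multi-label type-annotation call. The endpoint, model name, and prompt wording are illustrative placeholders; the study's actual prompts are given in its appendix.

```python
# Sketch of the multi-label problem-type annotation step (illustrative only:
# the real prompts are in the paper's appendix; endpoint/model are placeholders).
import json
from openai import OpenAI

METHOD_TYPES = [
    "Basic Arithmetic Operations", "Ratio and Fraction Calculations",
    "Area Calculation", "Volume Calculation", "Pythagorean Theorem",
    "Basic Geometry", "Basic Algebra", "Elementary Number Theory",
    "Linear Equations or Systems", "Quadratic Equations", "Cubic Equations",
    "Higher-Degree Equations", "Linear Programming", "Square Root Extraction",
    "Arithmetic or Geometric Sequences",
]

client = OpenAI(base_url="https://example.com/v1", api_key="sk-...")  # placeholder

def annotate_problem_types(question: str) -> list[str]:
    """Ask a reasoning model for every method type a problem involves."""
    prompt = (
        "The following is a math problem from a classical Chinese text. "
        "List ALL mathematical methods it involves, chosen strictly from "
        f"this taxonomy: {METHOD_TYPES}. Answer with a JSON array of names.\n\n"
        f"Problem: {question}"
    )
    reply = client.chat.completions.create(
        model="qwen-reasoning-placeholder",
        messages=[{"role": "user", "content": prompt}],
    )
    # A production version would need to strip code fences / stray prose first.
    labels = json.loads(reply.choices[0].message.content)
    # Discard anything outside the 15-type taxonomy before saving.
    return [label for label in labels if label in METHOD_TYPES]
```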
2.1.3.2 Difficulty Level Annotation

Difficulty levels were annotated by referencing the MATH dataset's methodology and adapting it to the mathematical ideas in ancient texts. A four-tier difficulty standard was designed:

Level 1: The most intuitive arithmetic calculations, solvable in 1-2 steps using basic operations.
Level 2: Extended applications of arithmetic, including fractions, ratios, the Pythagorean theorem, and area/volume calculations.
Level 3: More complex problems requiring advanced techniques (e.g., historical context knowledge, linear equations, or algebra).
Level 4: Highly complex problems demanding the integration of multiple mathematical methods, historical knowledge, and potentially geometry, algebra, or number theory, or equations beyond the linear or quadratic.

The difficulty criteria were embedded in prompts, instructing the closed-source reasoning model to assign a level to each problem. The final annotations yielded 27, 284, 185, and 42 problems for Levels 1-4, respectively.
Table 3 shows the distribution of problems across the eight classics:

Table 3 Distribution of Problems by Difficulty Level in Ancient Texts

Classic Name | Level 1 | Level 2 | Level 3 | Level 4
The Nine Chapters on the Mathematical Art | 6 | 92 | 88 | 13
The Sea Island Mathematical Classic | 0 | 0 | 3 | 6
Zhang Qiujian Suanjing | 3 | 35 | 38 | 6
Xiahouyang Suanjing | 4 | 60 | 16 | 0
Wujing Suanshu | 0 | 5 | 12 | 2
Jigu Suanjing | 0 | 0 | 5 | 15
Wucao Suanjing | 9 | 51 | 6 | 0
Sunzi Suanjing | 5 | 41 | 17 | 0

The data in Table 3 reveal two key patterns:

(1) Practical orientation. The overall difficulty distribution approximates a normal distribution, with low- and high-difficulty problems being less frequent and mid-level problems (Levels 2 and 3) dominating. Level 2 problems, in particular, are most prevalent and concentrated in texts like The Nine Chapters on the Mathematical Art, Xiahouyang Suanjing, and Wucao Suanjing, works closely tied to bureaucratic governance. These problems involve taxation, land measurement, and resource allocation, reflecting the practical focus of ancient Chinese mathematics.

(2) Impact of a text's research domain on difficulty. Texts addressing administrative tasks (e.g., Wucao Suanjing) exhibit higher proportions of simple problems (Levels 1-2 account for 90.9% of its problems). In contrast, texts focused on engineering and measurement (e.g., Jigu Suanjing and The Sea Island Mathematical Classic) contain more high-difficulty problems (Level 4 constitutes 75% and 66.67% of their problems, respectively).

Fields annotated with problem type and difficulty information were appended to the JSON files described earlier, generating a 7-element tuple for each problem: source, question description, standard answer, solution method, premise conditions, difficulty level, and problem type. This data guided model responses. Figure 4 displays an example tuple from the dataset:

Figure 4. Example of Evaluation Data

2.2 Evaluation Methods and Model Selection

2.2.1 Evaluation Methods

For evaluation, we modified the prompts and adopted two distinct strategies to assess the performance of open-source and closed-source reasoning models:

(1) Closed-Book Mode: Only the problem source, question description, and premise conditions from the JSON tuple were included in the prompt, requiring the model to directly output an answer after reasoning. This mode simulates human problem-solving without external knowledge, focusing on the model's foundational mathematical reasoning capabilities and its ability to interpret classical mathematical concepts.

(2) Open-Book Mode: The prompt was expanded to include the problem source, question description, premise conditions, and the solution method from the ancient classic as recorded in the JSON tuple. The model was instructed to first reference and understand the provided solution from the classical text, replicate all steps of the original solution, and then derive the answer. This mode evaluates the model's ability to transfer ancient methods to modern contexts, particularly its capacity to express traditional algorithmic logic using contemporary mathematical frameworks.

These two modes simulate human approaches to understanding classical mathematical problems and reflect common user practices of leveraging reasoning models to aid comprehension of ancient texts. Employing both strategies effectively measures a model's ability to interpret and solve classical mathematical problems. A sketch of how the two prompts can be assembled from one dataset record follows.
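In the minimal sketch below, the JSON key names and prompt wording are illustrative assumptions (the released dataset defines the exact schema and the appendix gives the exact prompts); the sample record paraphrases the first Sumi exchange problem from The Nine Chapters on the Mathematical Art.

```python
# Sketch of closed-book vs. open-book prompt construction from one
# Guji_MATH-style record. Key names and prompt wording are illustrative.
record = {
    "source": "《九章算术》粟米",
    "question": "今有粟一斗，欲为粝米。问得几何？",
    "answer": "为粝米六升。",
    "solution": "以粟求粝米，三之，五而一。",
    "premise": "粟率五十，粝米三十。",
    "difficulty": 2,
    "types": ["Ratio and Fraction Calculations"],
}

def closed_book_prompt(r: dict) -> str:
    # Closed-book: source, question, and premises only; the model must
    # reason out the answer on its own.
    parts = [f"出处：{r['source']}", f"题目：{r['question']}"]
    if r.get("premise"):
        parts.append(f"前提条件：{r['premise']}")
    parts.append("请推理并给出最终答案。")
    return "\n".join(parts)

def open_book_prompt(r: dict) -> str:
    # Open-book: additionally supplies the classical solution ("术") and asks
    # the model to replicate its steps before answering.
    return (
        closed_book_prompt(r)
        + f"\n原文解法：{r['solution']}"
        + "\n请先复现原文解法的每一步，再据此给出最终答案。"
    )

print(open_book_prompt(record))
```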
2.2.2 Model Selection

The open-sourcing of DeepSeek R1 has advanced the democratization of large-scale reasoning models, and recent research has demonstrated that strong long-context reasoning capabilities can be replicated even in mid-sized language models. This study selected several reasoning models with comparable token budgets from recent academic and industrial releases for experimentation. The models are described as follows:

(1) DeepSeek R1, a 671B-parameter reasoning model developed by DeepSeek, renowned for robust Chinese comprehension and reasoning.
(2) Qwen-Plus-2025-04-28, a closed-source reasoning model by Alibaba Cloud based on the Qwen3 series (Qwen3/Qwen3_Technical_Report.pdf at main · QwenLM/Qwen3, n.d.), achieving superior performance over DeepSeek R1 on benchmarks like AIME and MATH-500.
(3) QwQ-32B (Tongyi Qianwen QwQ-32B, n.d.), a 32B-parameter reasoning model by Alibaba Cloud trained on Qwen2.5 with reinforcement learning focused on mathematics and programming tasks, delivering performance comparable to DeepSeek R1 on specific subtasks.
(4) Skywork-OR1-32B-Preview (Skywork-OR1-32B-Preview, n.d.), a reasoning model by Kunlun Tech trained on the Qwen2.5 series.
(5) DeepSeek-R1-Distill-Qwen-32B, a model developed by DeepSeek using 800,000 instruction samples to fine-tune the Qwen2.5-32B base model, optimized for the computational resources of general users.
(6) Light-R1-32B (Wen et al., 2025), a 32B-parameter model by 360 Research, the first full-featured reimplementation of DeepSeek R1 from scratch, trained with two-stage learning and reinforcement learning to acquire reasoning capabilities.

Because the parameter count of DeepSeek R1 precluded local deployment, we accessed it and Qwen-Plus-2025-04-28 via API. The remaining four models were deployed locally on two A800 GPUs using the vLLM framework, operating in offline inference mode.

2.3 Evaluation Metrics and Answer Extraction

The study uses accuracy as the metric to evaluate mathematical problem-solving performance, calculated as the ratio of the number of correctly answered questions to the total number of questions. For generative models, due to the inherent unpredictability of their outputs, further extraction and processing of the results are required before comparing them with the standard answers to determine correctness. In this experiment, the non-reasoning part of the model's response is first extracted, and then the QwQ-32B model is used to derive the model's answer, which is subsequently compared with the standard answer. The most challenging aspect of this process is unit conversion between the two answers. The measurement units in the answers from the Suanjing Shishu are often inconsistent, requiring the establishment of a system based on historical metrological standards. We referenced the conversion relationships between measurement units listed in Science and Civilisation in China, combined with all units mentioned in the text, to construct a comprehensive numerical conversion table. This table was then included as part of the input prompt for the QwQ-32B model, instructing the model to first extract the answer and then standardize it to determine whether the model's output could be converted into the numerical value of the original text's answer. The judgment results were used for the final accuracy calculation, thereby quantifying the performance of the reasoning model in solving ancient mathematical problems. A sketch of this judging step follows.
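The following is a minimal sketch of the judging step, assuming QwQ-32B is served behind an OpenAI-compatible endpoint; the unit table shown is a tiny illustrative subset of the full conversion table compiled in the study.

```python
# Sketch of LLM-as-judge accuracy scoring with unit normalization.
# Endpoint/model names are placeholders; the unit table is a small
# illustrative subset of the full table built from Science and
# Civilisation in China.
from openai import OpenAI

client = OpenAI(base_url="https://example.com/v1", api_key="sk-...")  # placeholder

UNIT_TABLE = "1斛 = 10斗；1斗 = 10升；1斤 = 16两"  # illustrative subset

JUDGE_PROMPT = (
    "下面给出模型的最终答案和古籍的标准答案。请依据单位换算表（{units}）"
    "判断两者数值是否一致，只回答“正确”或“错误”。\n"
    "模型答案：{prediction}\n标准答案：{reference}"
)

def judge(prediction: str, reference: str) -> bool:
    """Return True if the judge model deems the two answers equivalent."""
    reply = client.chat.completions.create(
        model="qwq-32b",  # the study used QwQ-32B as the judge
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            units=UNIT_TABLE, prediction=prediction, reference=reference)}],
    )
    return "正确" in reply.choices[0].message.content

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Accuracy = correctly answered questions / total questions."""
    verdicts = [judge(p, r) for p, r in zip(predictions, references)]
    return sum(verdicts) / len(verdicts)
```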
3 Experimental Results

3.1 Experimental Environment and Hyperparameter Design

The models were executed on a Linux server equipped with two A800 GPUs. The system specifications included CUDA version 12.4, PyTorch version 2.5.1, and vLLM version 0.7.2. The following key hyperparameters were selected for model loading:

Table 4 Hyperparameter Settings for Evaluation Experiments

Hyperparameter Name | Description | Value
Max_len | Maximum token length allowed by the model | 32678
temperature | Temperature coefficient controlling randomness (lower values yield more deterministic outputs) | 0.6
top_p | Nucleus sampling probability threshold (retains vocabulary up to cumulative probability) | 0.95
top_k | Sampling candidate word count (only considers top-k words) | 20
repetition_penalty | Repetition penalty factor suppressing redundant content generation | 1

These hyperparameter values were informed by guidelines from the DeepSeek R1 and Qwen models, which recommend maintaining a temperature coefficient within 0.5-0.7 and using high nucleus sampling thresholds and candidate counts to optimize reasoning performance. Additionally, during deployment, we observed that longer model outputs tend to produce repetitive content. To mitigate this, a higher repetition penalty factor was applied when setting the maximum truncation length to 32678, ensuring output quality and preventing the degradation of reasoning effectiveness due to redundancy.

3.2 Closed-Book Mode Evaluation Results

The experiment adapted the prompts from the appendix to align with each model's native dialogue template, using a for loop to iterate over the JSON tuples and construct complete input prompts. Notably, although all models were trained with reinforcement learning to enclose reasoning processes in paired <think>…</think> tags, deployment revealed that some models occasionally bypassed deep reasoning and directly generated answers. To address this, we followed DeepSeek R1's recommendations by appending the <think> token to each prompt to explicitly trigger reasoning capabilities, ensuring every model engaged in some level of analysis before answering. Table 5 summarizes the overall performance of the six models in closed-book mode, along with their accuracy across the four difficulty levels:

Table 5 Performance of Reasoning Models on Classical Mathematical Problems in Closed-Book Mode

Model | Total Correct Answers | Overall Accuracy | Level 1 Accuracy | Level 2 Accuracy | Level 3 Accuracy | Level 4 Accuracy
DeepSeek R1 | 339 | 63.01% | 77.78% | 65.85% | 61.08% | 42.86%
Qwen-Plus-2025-04-28 | 326 | 60.59% | 85.19% | 63.38% | 57.30% | 40.48%
QwQ-32B | 294 | 54.65% | 81.48% | 59.51% | 48.65% | 30.95%
Skywork-OR1-32B-Preview | 228 | 42.38% | 62.96% | 44.37% | 39.46% | 28.57%
DeepSeek-R1-Distill-Qwen-32B | 152 | 28.25% | 55.56% | 30.28% | 24.32% | 14.29%
Light-R1-32B | 236 | 43.87% | 59.26% | 46.83% | 40.54% | 28.57%

As shown in Table 5, DeepSeek R1 achieved the highest overall accuracy (63.01%), outperforming all 32B-parameter open-source models, particularly on high-difficulty Level 3-4 problems. This highlights the impact of parameter scale on performance. Among the 32B models, QwQ-32B demonstrated the most balanced performance, while the others lagged by at least 10 percentage points, likely due to QwQ-32B's superior training data diversity and techniques.
DeepSeek-R1-Distill-Qwen-32B performed poorly; as noted in DeepSeek R1's technical report, its limited training (only instruction fine-tuning with minimal mathematical prompts) likely caused overfitting.
Across all models, performance peaked at Level 1 (simple arithmetic), with accuracy declining as difficulty increased, indicating struggles with complex classical problems. Furthermore, all models underperformed relative to their results on the English MATH-500 benchmark (≥70%), despite MATH-500 covering only high-school-level math. This suggests that models trained on modern Chinese or English struggle to interpret classical mathematical concepts as effectively as they do contemporary language. To better understand model strengths, we visualized performance across the 15 problem types using bar charts.

Figure 5. Bar Chart of Mathematical Problem-Solving Performance by Reasoning Models in Closed-Book Mode

Figure 5 illustrates the performance of the reasoning models across different mathematical problem types. Excluding problem types with extremely low sample sizes (e.g., linear programming, higher-degree equations, and arithmetic/geometric sequences), all models demonstrated their strongest performance in basic arithmetic operations, area/volume calculations, and linear equations/systems. Nearly all models achieved their highest accuracy on linear equations/systems, even surpassing their performance on simpler arithmetic problems. This may be attributed to the practical relevance of linear equations, which are easier for models to interpret and require fewer complex computational steps, reducing error rates. The two largest models (DeepSeek R1 and Qwen-Plus) outperformed the smaller models across nearly all problem types, underscoring their superior understanding capabilities. Smaller models also showed significant performance gaps on challenging problems like quadratic/cubic equations, consistent with the results in Table 5.

3.3 Open-Book Mode Evaluation Results

In open-book mode, prompts differed from closed-book mode by including the solution steps and procedural requirements from the original texts. With a slightly reduced dataset size (adjusted to Level 1: 25, Level 2: 266, Level 3: 178, Level 4: 42), Table 6 summarizes performance in open-book mode:

Table 6 Performance of Reasoning Models on Classical Mathematical Problems in Open-Book Mode

Model | Total Correct Answers | Overall Accuracy | Level 1 Accuracy | Level 2 Accuracy | Level 3 Accuracy | Level 4 Accuracy
DeepSeek R1 | 354 | 69.28% | 84.00% | 71.05% | 67.42% | 57.14%
Qwen-Plus-2025-04-28 | 359 | 70.25% | 80.00% | 72.18% | 70.79% | 50.00%
QwQ-32B | 333 | 65.17% | 96.00% | 66.17% | 64.04% | 45.24%
Skywork-OR1-32B-Preview | 283 | 55.38% | 84.00% | 57.14% | 52.81% | 38.10%
DeepSeek-R1-Distill-Qwen-32B | 197 | 38.55% | 72.00% | 39.85% | 35.39% | 23.81%
Light-R1-32B | 296 | 57.93% | 76.00% | 62.78% | 52.81% | 38.10%

In terms of the overall trend, the distribution of answer accuracy under open-book conditions differs little from closed-book mode: models generally excel at solving simple questions while accuracy declines as problem difficulty increases. Despite a reduction of 27 data points, all models demonstrate significant improvement across nearly all difficulty levels and in total correct answers compared to closed-book mode. Light-R1-32B exhibits the most substantial growth in correct responses, solving 60 additional math problems correctly. For all models, Level 4 questions show the greatest performance enhancement, with DeepSeek R1 and QwQ-32B in particular achieving nearly 15 percentage point improvements.
Since the number of Level 4 questions remained unchanged between modes, this indicates that explanatory problem-solving text significantly aids models in addressing complex problems. From the model perspective, Qwen-Plus and QwQ-32B demonstrate notable overall performance gains.
Qwen-Plus achieves optimal performance, correctly solving 5 more medium-difficulty math problems than DeepSeek R1. Meanwhile, QwQ-32B answers nearly all simple questions correctly, narrowing its overall performance gap with DeepSeek R1. This reflects both the converging effect that providing problem-solving methodology has on model performance and the particular sensitivity of the Qwen series models to medium-difficulty mathematical problems. We also visualized performance in open-book mode across the 15 problem types using bar charts.

Figure 6. Bar Chart of Mathematical Problem-Solving Performance by Reasoning Models in Open-Book Mode

As shown in Figure 6, the inclusion of solution steps led to improved performance across nearly all mathematical problem categories. After excluding problem types with insufficient sample sizes, the models demonstrated significantly higher accuracy on number theory problems and basic arithmetic operations. Notably, the accuracy on elementary number theory problems even surpassed that on linear equations or systems of equations in some cases. This phenomenon may be attributed to the fact that the descriptive solution steps for basic arithmetic and number theory problems more effectively facilitated the models' comprehension of the tasks, whereas the solution methods for linear equations align closely with the models' inherent reasoning patterns, making their inclusion in prompts less influential on the models' cognitive processes.

3.4 Analysis of Error Causes in Problem-Solving

The experimental results in the preceding sections demonstrate that accurately answering classical mathematical problems with reasoning models remains a highly challenging task. Compared to the MATH-500 dataset, which likewise restricts problem difficulty to the high school level, even the top-performing models, DeepSeek R1 and Qwen-Plus, achieve only approximately 70% accuracy in open-book mode, significantly lower than their reported performance of over 95% on MATH-500. What factors contribute to the poor performance of reasoning models in solving classical mathematical problems? This section provides a quantitative and case-based analysis of the DeepSeek R1 model, which exhibited the most stable and relatively optimal results. To achieve this, we first invoked the closed-source model's API to compare DeepSeek R1's full reasoning process against the standard solution, requesting the closed-source model to provide a comprehensive analysis of error causes. Subsequently, the reasoning model summarized these errors into nine categories, which were then annotated by both the model and human annotators using a multi-label classification system (allowing multiple error types per problem). The resulting classification framework and annotation results are presented in Table 7:

Table 7 Classification Framework and Annotation Results for Model Answer Errors

Category ID | Error Type Description | Annotation Count | Proportion
1 | Misuse of formulas or models | 56 | 10.94%
2 | Calculation errors | 22 | 4.30%
3 | Format or expression errors | 29 | 5.66%
4 | Lack of specific historical/cultural context | 119 | 23.24%
5 | Misunderstanding of mathematical concepts | 96 | 18.75%
6 | Incorrect problem-solving methods/steps | 42 | 8.20%
7 | Misinterpretation of the problem statement | 40 | 7.81%
8 | Data processing and conversion errors | 105 | 20.51%
9 | Other issues | 3 | 0.59%
Since the multi-label classification system permits multiple error types per problem, merely counting labels fails to capture the intrinsic relationships between error categories. To address this, we further analyzed co-occurrence patterns among the error labels, visualizing the results as a heatmap (see Figure 7).
Figure 7. Co-occurrence Network Map of Error Labels
As shown in Table 7, the primary sources of errors in the model's mathematical problem-solving are the lack of specific historical and cultural context, misunderstanding of the mathematical concepts within the problem, and data processing and conversion errors. Combined with the heatmap analysis, these factors exhibit high co-occurrence frequencies, particularly for label pairs 4–5 (61 co-occurrences) and 4–8 (73 co-occurrences). The inherent characteristics of these labels indicate that when models lack specific cultural context, often tied to unit conversion rules, they frequently misinterpret classical mathematical concepts. The misuse of formulas/models (10.94%) and incorrect problem-solving methods/steps (8.20%), ranking fourth and fifth in error frequency, also show the highest co-occurrence with conceptual misunderstandings in the heatmap. This suggests that the inability to grasp ancient mathematical terminology is a core issue preventing accurate reasoning. Calculation errors (4.30%) arise not only from computational mistakes in complex operations (e.g., quadratic equations) but also from the model's failure to adhere to ancient mathematical conventions. For instance, ancient Chinese mathematics did not incorporate irrational numbers, yet models sometimes force approximations (e.g., π ≈ 3.14 or √2 ≈ 1.414) into calculations, resulting in discrepancies with the original texts. Additionally, 29 problems contained errors in the final result conversion, often due to insufficient understanding of ancient unit conversion rules or difficulties in following traditional numerical notation. In the Nine Chapters on the Mathematical Art's calculation of circular segment area, for example, the model's reasoning might produce a numerically correct value (e.g., 56/81) but incorrectly express it as "五十六分步之八十一" (81/56) instead of the correct "八十一分步之五十六" (56/81), leading to a final answer mismatch.
These findings underscore that misunderstanding of mathematical concepts, unit conversion rules, traditional problem-solving logic, and ancient numerical notation are the primary challenges for reasoning models in solving classical Chinese mathematical problems. At their core, these issues stem from a deficiency in specific historical and cultural knowledge. To improve performance on such tasks, models must not only enhance Classical Chinese comprehension but also integrate contextual knowledge about ancient mathematical theories and practices.

4 Discussion
This study systematically reveals the performance patterns of mainstream reasoning models in solving classical mathematical problems expressed in Classical Chinese, through the construction of the Guji_MATH benchmark based on the Suanjing Shishu. Experimental results indicate that while current reasoning models demonstrate significant capabilities in modern tasks involving mathematical computation or code generation in Mandarin or English, their ability to interpret mathematical problems in Classical Chinese contexts remains notably limited.
The following analysis elaborates on the effectiveness and challenges of reasoning models in solving classical mathematical problems along three dimensions: model quality, problem type, and linguistic characteristics.
(1) Model Quality Dimension
In both closed-book and open-book modes, trillion-parameter models (e.g.,
DeepSeek R1, Qwen-Plus) consistently outperformed 32B-parameter models in overall accuracy. This performance gap widened significantly on high-difficulty Level 3–4 problems. This phenomenon confirms the critical role of parameter scale effects in complex reasoning tasks. Among 32B-parameter models, those trained with higher-quality data and more advanced training stages achieved better accuracy, suggesting that, at equivalent parameter scales, improving data quality and optimizing training algorithms can enhance a model's reasoning generalization to low-resource linguistic tasks.
(2) Problem Type Dimension
Through the construction of difficulty and type labels, this study analyzed reasoning models' performance across diverse mathematical problem categories. Results show that models achieve higher accuracy on simpler problems, such as basic arithmetic operations and linear equations (typically low-difficulty tasks). However, they struggle with more complex modeling and computational challenges, such as quadratic/cubic equations and volume calculations. Future research should focus on optimizing reasoning models to better understand and solve these complex classical mathematical problems.
(3) Linguistic Characteristics Dimension
Experimental results highlight the significant constraints that Classical Chinese texts place on model reasoning capabilities, primarily due to comprehension limitations and a lack of contextual knowledge. First, Classical Chinese is more concise than Modern Chinese, often omitting mathematical concepts in the original text, which makes it difficult for models to infer problem intent. Second, classical mathematical problems are not merely computational exercises but also embed complex cultural phenomena and unique ancient scientific systems. For example, the problem of "determining an infant's gender" in the Sunzi Suanjing involves China's ancient Yin-Yang theory and numerological traditions, while the calculation of the Moon's declination in the Jigu Suanjing is deeply tied to ancient astronomy. Due to insufficient training data, current reasoning models fail to address such problems effectively. Future research must develop models capable of not only mastering Classical Chinese but also deeply understanding Chinese traditional culture and scientific systems in order to accurately solve ancient mathematical challenges.

5 Conclusion and Outlook
This study presents Guji_MATH, the first benchmark based on the Suanjing Shishu, and systematically evaluates reasoning models' performance in solving mathematical problems expressed in Classical Chinese. Key findings include:
(1) Model performance exhibits significant scale effects and mode differences. Trillion-parameter models demonstrate clear advantages in handling complex problems, while open-book mode offers smaller models opportunities for localized improvements.
(2) Current reasoning models remain stronger on simpler tasks. Overall problem-solving effectiveness lags behind modern Mandarin- or English-based mathematical benchmarks.
(3) Models struggle with deep mathematical integration. Replication of classical algorithms and ancient mathematical reasoning remains superficial, hindered by a lack of cultural context and foundational knowledge.
This research uncovers the coupling mechanism between linguistic structure and mathematical reasoning, breaking through the cultural homogeneity of existing mathematical evaluation frameworks.
Methodologically, the proposed "Question-Answer-Method" structured processing framework and dual-mode evaluation system provide transferable solutions for the intelligent processing of non-Latin-script classical texts. The results confirm the strong potential of reasoning models for extracting mathematical knowledge from classical texts, with applications in algorithm recreation and in aiding comprehension of ancient mathematical works. However,
our research also has some limitations. Firstly, the dataset is derived solely from the Ten Mathematical Classics, covering Han to Tang Dynasty mathematics. Peak achievements of Chinese mathematics during the Song-Yuan-Ming periods (e.g., binomial theorems, trigonometric functions, Tianyuan Shu [天元术], Duoji Shu [垛积术]) remain unaddressed in the current benchmark. Future work will expand the dataset to evaluate models on these advanced topics. Secondly, problem processing lacks standardization, relying directly on raw text input without exploring whether post-training on mathematical datasets enhances model performance. Subsequent studies will investigate whether translating problems into pseudo-code or training with domain-specific data improves model outputs.
Overall, leveraging reasoning models to solve classical Chinese mathematical problems represents a promising avenue. This approach not only aids modern researchers and enthusiasts in utilizing China's ancient scientific heritage but also facilitates the international dissemination of traditional Chinese mathematical culture.

Acknowledgements
The research was funded by the National Social Science Foundation of China (No. 21&ZD331).

Data availability
Data is applicable.

Declarations
Consent to Publish: declaration not applicable.
Consent to Participate: declaration not applicable.
Competing interests: The authors declare no competing interests.

References
Ahn, J., Verma, R., Lou, R., Liu, D., Zhang, R., & Yin, W. (2024). Large Language Models for Mathematical Reasoning: Progresses and Challenges (arXiv:2402.00157). arXiv. https://doi.org/10.48550/arXiv.2402.00157
Bai, J., Bai, S., Chu, Y., Cui, Z., Dang, K., Deng, X., Fan, Y., Ge, W., Han, Y., Huang, F., Hui, B., Ji, L., Li, M., Lin, J., Lin, R., Liu, D., Liu, G., Lu, C., Lu, K., … Zhu, T. (2023). Qwen Technical Report (arXiv:2309.16609). arXiv. https://doi.org/10.48550/arXiv.2309.16609
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., … Amodei, D. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901. https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
OpenAI. (2023). ChatGPT (May 2, 2025) [Large language model]. https://chatgpt.com/
Chen, Q., Qin, L., Liu, J., Peng, D., Guan, J., Wang, P., Hu, M., Zhou, Y., Gao, T., & Che, W. (2025). Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models (arXiv:2503.09567). arXiv. https://doi.org/10.48550/arXiv.2503.09567
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., Hesse, C., & Schulman, J. (2021). Training Verifiers to Solve Math Word Problems (arXiv:2110.14168). arXiv. https://doi.org/10.48550/arXiv.2110.14168
DeepSeek-AI, Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., Zhang, X., Yu, X., Wu, Y., Wu, Z. F., Gou, Z., Shao, Z., Li, Z., Gao, Z., … Zhang, Z. (2025). DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning (arXiv:2501.12948). arXiv. https://doi.org/10.48550/arXiv.2501.12948
GLM, T., Zeng, A., Xu, B., Wang, B., Zhang, C., Yin, D., Zhang, D., Rojas, D., Feng, G., Zhao, H., Lai, H., Yu, H., Wang, H., Sun, J., Zhang, J., Cheng, J., Gui, J., Tang,
J., Zhang, J., … Wang, Z. (2024). ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools (arXiv:2406.12793). arXiv. https://doi.org/10.48550/arXiv.2406.12793
Guo, S. C., & Liu, D. (1998). Suan jing shi shu [算经十书]. Liaoning Education Press.
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., & Steinhardt, J. (2021). Measuring Mathematical Problem Solving With the MATH Dataset (arXiv:2103.03874). arXiv. https://doi.org/10.48550/arXiv.2103.03874
Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., Casas, D. de L., Hendricks, L. A., Welbl, J., Clark, A., Hennigan, T., Noland, E., Millican, K., Driessche, G. van den, Damoc, B., Guy, A., Osindero, S., Simonyan, K., Elsen, E., … Sifre, L. (2022). Training Compute-Optimal Large Language Models (arXiv:2203.15556). arXiv. https://doi.org/10.48550/arXiv.2203.15556
HuggingFaceH4/aime_2024 · Datasets at Hugging Face. (2025, April 6). https://huggingface.co/datasets/HuggingFaceH4/aime_2024
HuggingFaceH4/MATH-500 · Datasets at Hugging Face. (2025, February 15). https://huggingface.co/datasets/HuggingFaceH4/MATH-500
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., & Amodei, D. (2020). Scaling Laws for Neural Language Models (arXiv:2001.08361). arXiv. https://doi.org/10.48550/arXiv.2001.08361
Learning to reason with LLMs. (n.d.). Retrieved May 3, 2025, from https://openai.com/index/learning-to-reason-with-llms/
Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2023). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys, 55(9), 195:1–195:35. https://doi.org/10.1145/3560815
Math-eval/TAL-SCQ5K · Datasets at Hugging Face. (n.d.). Retrieved May 4, 2025, from https://huggingface.co/datasets/math-eval/TAL-SCQ5K
Needham, J. (1959). Science and civilisation in China: Vol. 3. Mathematics and the sciences of the heavens and the earth. Cambridge University Press.
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744.
Qian, B. C. (1963). Suan jing shi shu [算经十书]. Zhonghua Book Company.
Qwen, Yang, A., Yang, B., Zhang, B., Hui, B., Zheng, B., Yu, B., Li, C., Liu, D., Huang, F., Wei, H., Lin, H., Yang, J., Tu, J., Zhang, J., Yang, J., Yang, J., Zhou, J., Lin, J., … Qiu, Z. (2025). Qwen2.5 Technical Report (arXiv:2412.15115). arXiv. https://doi.org/10.48550/arXiv.2412.15115
Qwen3/Qwen3_Technical_Report.pdf at main · QwenLM/Qwen3. (n.d.). GitHub. Retrieved May 20, 2025, from https://github.com/QwenLM/Qwen3/blob/main/Qwen3_Technical_Report.pdf
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2020). Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21(140), 1–67.
Skywork-OR1-32B-Preview. (n.d.). Retrieved May 20, 2025, from //www.modelscope.cn/models/Skywork/Skywork-OR1-32B-Preview
Sprague, Z., Yin, F., Rodriguez, J. D., Jiang, D., Wadhwa, M., Singhal, P., Zhao, X., Ye, X., Mahowald, K., & Durrett, G. (2024). To CoT or not to CoT?
Chain-of-thought helps mainly on math and symbolic reasoning (arXiv:2409.12183). arXiv. https://doi.org/10.48550/arXiv.2409.12183
Sturgeon, D. (n.d.). 中國哲學書電子化計劃 [Chinese Text Project]. Retrieved
May 20, 2025, from https://ctext.org/zh
Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., Rodriguez, A., Joulin, A., Grave, E., & Lample, G. (2023). LLaMA: Open and Efficient Foundation Language Models (arXiv:2302.13971). arXiv. https://doi.org/10.48550/arXiv.2302.13971
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is All you Need. Advances in Neural Information Processing Systems, 30. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Wei, J., Bosma, M., Zhao, V. Y., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., & Le, Q. V. (2022). Finetuned Language Models Are Zero-Shot Learners (arXiv:2109.01652). arXiv. https://doi.org/10.48550/arXiv.2109.01652
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q. V., & Zhou, D. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems, 35, 24824–24837.
Wei, T., Luan, J., Liu, W., Dong, S., & Wang, B. (2023). CMATH: Can Your Language Model Pass Chinese Elementary School Math Test? (arXiv:2306.16636). arXiv. https://doi.org/10.48550/arXiv.2306.16636
Wen, L., Cai, Y., Xiao, F., He, X., An, Q., Duan, Z., Du, Y., Liu, J., Tang, L., Lv, X., Zou, H., Deng, Y., Jia, S., & Zhang, X. (2025). Light-R1: Curriculum SFT, DPO and RL for Long COT from Scratch and Beyond (arXiv:2503.10460). arXiv. https://doi.org/10.48550/arXiv.2503.10460
Xi, Z., Chen, W., Guo, X., He, W., Ding, Y., Hong, B., Zhang, M., Wang, J., Jin, S., Zhou, E., Zheng, R., Fan, X., Wang, X., Xiong, L., Zhou, Y., Wang, W., Jiang, C., Zou, Y., Liu, X., … Gui, T. (2025). The rise and potential of large language model based agents: A survey. Science China Information Sciences, 68(2), 121101. https://doi.org/10.1007/s11432-024-4222-0
Xunzi-LLM-of-Chinese-classics/XunziALLM. (2025). [Python]. Xunzi-LLM-of-Chinese-classics. https://github.com/Xunzi-LLM-of-Chinese-classics/XunziALLM (Original work published 2023)
Zhang, X., Li, C., Zong, Y., Ying, Z., He, L., & Qiu, X. (2024). Evaluating the Performance of Large Language Models on GAOKAO Benchmark (arXiv:2305.12474). arXiv. https://doi.org/10.48550/arXiv.2305.12474
Zhao, P., Zhang, H., Yu, Q., Wang, Z., Geng, Y., Fu, F., Yang, L., Zhang, W., Jiang, J., & Cui, B. (2024). Retrieval-Augmented Generation for AI-Generated Content: A Survey (arXiv:2402.19473). arXiv. https://doi.org/10.48550/arXiv.2402.19473
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., … Wen, J.-R. (2025). A Survey of Large Language Models (arXiv:2303.18223). arXiv. https://doi.org/10.48550/arXiv.2303.18223
杨承, 白胡日查, & 董杰. (2024). 数字人文视野下的中国数学古籍研究新探索——"新趋势、新议题、新期待:中国数学史研究的理论与实践"学术研讨会综述. 数字人文研究, 4(4), 125–128.
通义千问 QwQ-32B. (n.d.). Retrieved May 20, 2025, from //www.modelscope.cn/models/Qwen/QwQ-32B

Appendix
Table 9.
Summary of Model Prompts Used in the Study Task Type Prompt Model Prompt Example Text Punctuation Xunzi - Qwen1.5 - 7B_chat 请为以下文本添加标点符号: {text} Text Structuring Qwen2.5 - 14B-Instruct 接下来将给你一段中国古代数学典籍文本,里面可能涉及到多组数学 问题,及其对应的答案和解析,请你使用将每组问题、答案和解析都 抽取出来 .其中问题往往以‘问曰’开头,答案通常以‘答曰’开头, 解析通常以‘术曰’开头,请你按照 json格式输出多组抽取结果, json 的属性是:问题、答案和解析。如果某项不存在则输出为空 \n给定的 文本为 :{text} Difficulty Level Annotation qwen -plus- 2025 -04-28 现在将给你一道来自中国古代典籍的数学问题。请你判断 这道题目的难度,并在最终给出一个 1-4分之间的整数作为 难度评分,数字越大越难,注意由于是古代数学问题,整 体难度可能不高,请酌情打分,最终得分用 [[]]包裹,比如
得[[1]]分,得[[2]]分,你可以参考如下评分等级 :1分题为直 观的数学计算,只需适用简单的四则运算可迅速解决 \n2分 题多为四则运算的应用,包括简单的求周长和面积等 \n3分 题目是更加复杂的应用,需要结合历史知识并使用初次方 程等较高级的技术 \n4分题为更复杂的数学问题,可能涉及 到几何、代数或数论等领域的综合知识,或者需要二元、 二次以上方程解决 :请根据以上逻辑进行判断。这道题目 是:{text} Problem Type Annotation qwen -plus- 2025 -04-28 现在将给你一道来自中国古代典籍的数学问题。请你给出这 道题目所涉及的数学方法,如:基础四则运算、体积计算、 面积计算、几何、初等数论、线性方程组、二次方程、三次 方程、高次方程、线性规划、开方计算、比例计算、代数、 勾股计算、排列组合等等,你需要使用 [[]]包裹最终的判断结 果,例如: [[二元一次方程组 ]],[[线性规划 ]],[[比例计算 ]], [[面积计算、 三次方程 ]],[[勾股计算、 二次方程、 几何 ]]等(如 涉及多项数学技术请使用顿号分开 ),这道题目是 :{text} Closed - Book Mode Evaluation The reasoning model presented in Section 2.2.2 现在我会向你输入一段来自于古代典籍的数学问题,请你 回答该数学问题,在你给出回答时,如果问题涉及到古代 计量单位,则你需要结合题目回忆该时代使用的量制单 位、度制单位、衡制单位、货币单位、干支历法等类型的 计量单位,并将最终的答案需要转化为古代单位的进制, 例如:一百亩又四十二步、二丈八尺三寸四分、十斤十二 两三铢、正月乙丑日等等。如果题目不涉及明确的计量单 位要求则不需要修改进制,回答可为:一万五千零四十 个,三分之一只,十五天、四分之一天等等。我给你的文 本来自于 {source},可参考的前提条件是 :{premise },回答 以下问题 :{question} ,请一步一步思考,你的思考过程放 在一对 <think></think> 标签之间,格式如 :<think> { 用 ‘\n\n’ 分隔步骤的思考过程 } </think>, 在你思考完成后, 要将解决方案放在一对 <answer></answer> 标签之间,格式 如: <answer> { 最终的、格式化的、精确且清晰的解决方 案} </answer> 。请你按照上述要求给出思考过程和最终的 解决方案。 Open -Book Mode Evaluation The reasoning model presented in Section 2.2.2 现在我会向你输入一段来自于古代典籍的数学问题和对应 的解题方法,请你先理解题目和解题方法,并根据解题方 法复现原题的解法,解决这道数学问题。在你给出回答 时,如果问题涉及到古代计量单位,则你需要结合题目回 忆该时代使用的量制单位、度制单位、衡制单位、货币单 位、干支历法等类型的计量单位,并将最终的答案需要转 化为古代单位的进制,例如:一百亩又四十二步、二丈八 尺三寸四分、十斤十二两三铢、正月乙丑日等等。如果题 目不涉及明确的计量单位要求则不需要修改进制,回答可 为:一万五千零四十个,三分之一只,十五天、四分之一 天等等。我给你的文本来自于 {source},可参考的前提条件 是:{premise },回答以下问题 :{question} ,本题在原书中 的解题方法为 : {analysis} ,请一步一步思考,你的思考过程 放在一对 <think></think> 标签之间,格式如 :<think> { 用 ‘\n\n’ 分隔步骤的思考过程 } </think>, 在你思考完成后, 要将解决方案放在一对 <answer></answer> 标签之间,格式 如: <answer> { 最终的、格式化的、精确且清晰的解决方 案} </answer> 。请你按照上述要求给出思考过程和最终的 解决方案。 Answer Consistency Judgment QwQ -32B 现在我会向你输入一段涉及古代典籍的数学计算问题,问 题的标准答案,以及由某个大语言模型对该问题的输出, 你需要从大语言模型的输出中提取出答案文本,然后再和 标准答案进行对比,判断是否答对了这个问题。如果答对 了判断为正确得 1分,否则就是错误的得 0分,问题是 : {question} \n标准答案是 : {answer} \n模型输出是 : {pre} \n,请 你先从模型的输出文本中提取出最终答案,然后再对比标 准答案看是否作答准确,你的回答模板是:问题是 xxx\n标 准的答案是 xxx\n从模型输出文本中提取的答案是 xxx\n, xxx(分析文字 ),所以是正确 /错误的, \n\n因此最终得 [[0]]或 [[1]]分。注意得分一定要使用两个中括号括起来。如果不能 从模型输出中提取出答案,可以直接认为错误。注意有时 模型输出的形式可能与标准答案形式有所差别,如发现经 过简单换算或者添加单位后二者相等,或者精确到的单位 差异很小时,则同样认为正确,但计算错误时则不可认为 正确。另外,请注意古文中一些特殊的数字表述,比如假 设存在五位数,第四位为 0时,则可以忽略。如一万五百 等于一万零五百,不可认为这二者不相等。太半、少半分 别代表 2/3和1/3。在转换时,可以参考如下单位换算: 1 丝 = 10忽,\ ,1毫 = 10丝\ ,1氂 = 10毫\ ,1分 = 10氂\ ,1寸 = 10分\ ,1尺 = 10寸\ ,1丈 = 10尺\ ,1引 = 10丈\ ,1匹/疋 = 4丈\ ,1端 = 5丈\ ,1步 = 6尺\ ,1顷 = 100亩\ ,1亩 = 240步\ ,1里 = 300步\ ,1圭 = 10粟\ ,1撮 = 10圭\ ,1抄 = 10撮\ ,1勺 = 10抄\ ,1合 = 10勺\ ,1升 = 10合\ ,1斗 = 10升\ ,1斛 = 10斗\ ,1絫 = 10黍\ ,1銖 = 10絫\ ,1两 = 24銖\ ,1斤 = 16两\ ,1钧 = 30斤\ ,1石 = 4钧\ ,1贯 = 1000文。 Error Cause Analysis Deepseek R1 现在我会向你输入一段来自于古代典籍的数学问题 ,真实答 案,以及由模型输出的思考过程和答案 ,但模型对这道题目的 最终回答却是错误的,现在你需要分析为什么模型做错了 这道问题。我给你的文本来自于 {source},这道题目对应的 前提条件是: {premise },这道问题为: {question} ,这道题 目的正确答案是 : {answer} ,本题在原书中的解题方法为 : {analysis} ,模型的解题思路和结果是: {pre},请一步一步 思考分析模型的思考过程和错误的原因,你的思考过程放 在一对 <think></think> 标签之间,格式如 :<think> { 用 ‘\n\n’ 分隔步骤的思考过程 } </think>,在你思考完成后, 要将错误原因的分析放在 <answer></answer> 标签之间,格 式如: <answer> { 错误原因分析 :xxx\n可将错误原因总结 为:[[xxx]](中括号中为简短的错误归因) } </answer> 。请 你按照上述要求给出错误分析过程和模型回答的错误原 因。
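All of the annotation and judging prompts above instruct the model to wrap its final verdict in double square brackets (e.g., [[1]] or [[0]]). A minimal sketch of parsing such a verdict is shown below; the function name and the zero fallback for unparseable outputs (which the consistency-judgment prompt treats as incorrect) are our own illustrative choices, not the authors' released code.

```python
import re

def extract_bracketed_score(judge_output: str) -> int:
    """Return the last [[...]]-wrapped integer in a judge response, or 0 if none is found.

    The consistency-judgment prompt in Table 9 asks the judge to emit [[1]] for a
    correct answer and [[0]] otherwise, and to treat unextractable answers as wrong,
    hence the fallback of 0 here.
    """
    matches = re.findall(r"\[\[(\d+)\]\]", judge_output)
    return int(matches[-1]) if matches else 0

print(extract_bracketed_score("……因此最终得[[1]]分。"))      # -> 1
print(extract_bracketed_score("无法从模型输出中提取出答案"))  # -> 0
```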
arXiv:2505.16661v1 [cs.CL] 22 May 2025A Japanese Language Model and Three New Evaluation Benchmarks for Pharmaceutical NLP Issey Sukeda1,2, Takuro Fujii*, Kosei Buma1,3, Shunsuke Sasaki1,2, Shinnosuke Ono1,2, 1EQUES Inc.,2The University of Tokyo,3University of Tsukuba, Correspondence: issey.sukeda@eques.co.jp Abstract We present a Japanese domain-specific lan- guage model for the pharmaceutical field, de- veloped through continual pretraining on 2 billion Japanese pharmaceutical tokens and 8 billion English biomedical tokens. To en- able rigorous evaluation, we introduce three new benchmarks: YakugakuQA , based on na- tional pharmacist licensing exams; NayoseQA , which tests cross-lingual synonym and ter- minology normalization; and SogoCheck , a novel task designed to assess consistency rea- soning between paired statements. We evalu- ate our model against both open-source med- ical LLMs and commercial models, includ- ing GPT-4o. Results show that our domain- specific model outperforms existing open mod- els and achieves competitive performance with commercial ones, particularly on terminology- heavy and knowledge-based tasks. Interest- ingly, even GPT-4o performs poorly on So- goCheck, suggesting that cross-sentence con- sistency reasoning remains an open challenge. Our benchmark suite offers a broader diagnos- tic lens for pharmaceutical NLP, covering fac- tual recall, lexical variation, and logical consis- tency. This work demonstrates the feasibility of building practical, secure, and cost-effective language models for Japanese domain-specific applications, and provides reusable evaluation resources for future research in pharmaceuti- cal and healthcare NLP. Our model, codes, and datasets will be released upon acceptance. 1 Introduction Large Language Models (LLMs) have achieved remarkable performance across a wide range of natural language processing (NLP) tasks. How- ever, their effectiveness remains limited in domain- specific settings such as manufacturing, finance, and medicine (Islam et al., 2023; Hager et al., 2024; Zhang et al., 2024), where deep contextual *Independent researcher. Figure 1: JPHARMATRON and JPHARMA BENCH . The pipeline for data curation, continued pretraining, and evaluation of JP HARMATRON . understanding and precise terminology handling are required. In these domains, general-purpose LLMs often fall short due to inadequate domain knowledge and difficulty handling complex or spe- cialized queries. Moreover, while domain-specific fine-tuning can enhance surface-level performance, it has been shown that this does not necessarily lead to genuine knowledge acquisition (Zhou et al., 2023). The pharmaceutical domain is no exception. In particular, the Japanese pharmaceutical industry faces significant administrative overhead in tasks such as document preparation, verification, and reg- ulatory compliance—often governed by standards such as GMP (Chaloner-Larsson et al., 1999) and ICH guidelines1. Despite these challenges, little work has been done to develop LLMs tailored for pharmaceutical operations, especially in Japanese. In this work, we present JPHARMATRON , a Japanese language LLM series specialized for pharmaceutical operations. To build JPHARMA - TRON , we perform continual pretraining of the 1https://www.ich.org/page/ich-guidelines 1 Figure 2: Performance Comparison with Meditron. JPHARMATRON consistently achieves higher scores than Meditron across JPHARMA BENCH , IgakuQA, and JMMLU. 
Qwen2.5 (Yang et al., 2024) model using a curated corpus consisting of Japanese pharmaceutical journals, web resources, and synthetic data (Appendix C). Unlike prior work focusing on drug discovery (Chaves et al., 2024; Tsuruta et al., 2024), our model targets
real-world operational tasks, such as document standardization and terminology nor- malization. To evaluate pharmaceutical reasoning and gener- ation capabilities, we introduce three novel bench- marks: (1) YakugakuQA (§3.2): a multiple-choice QA dataset based on the Japanese National Pharmacist Examination; (2) NayoseQA (§3.3): a paraphrasing bench- mark for standardizing drug names and active sub- stances; (3) SogoCheck (§3.4): a document consistency- check task reflecting real administrative workflows. These benchmarks, collectively referred to as JPHARMA BENCH , are designed to reflect practical scenarios encountered in pharmaceutical compa- nies, particularly in regulatory and clerical opera- tions. To the best of our knowledge, this is the first benchmark suite for evaluating LLMs in Japanese pharmaceutical applications. We evaluate JPHARMATRON using in-context learning across JPHARMA BENCH and two exist- ing benchmarks additionally. Without task-specific fine-tuning, our model outperforms competitive LLMs including Meditron (§2.2), showing gains of 7.9% on YakugakuQA (Ours) and 5.9% onIgakuQA (Kasai et al., 2023). These results sug- gest that domain-adaptive continual pretraining can significantly enhance LLM performance in special- ized pharmaceutical and medical settings. Our contributions are threefold: •We introduce the first LLMs and evalua- tion benchmarks specifically designed for Japanese pharmaceutical NLP. •We develop tasks aligned with real-world workflows, ensuring practical relevance in pharmaceutical operations. •We provide a complete methodology — from data collection to evaluation — that serves as a replicable and secure framework for domain- specific LLM development in regulated indus- tries. 2 Related works 2.1 Domain-specific LLMs and benchmarks in healthcare With the emergence of GPTs (Radford et al., 2018; Brown et al., 2020), domain-specific adaptations for healthcare have rapidly gained attention. Sev- eral English-centric LLMs have been developed to infuse medical knowledge into general-purpose models. For instance, Med-PaLM 2 (Singhal et al., 2023b), a specialized version of PaLM 2 (Anil et al., 2023), is fine-tuned on curated medical datasets and achieves performance comparable to medical professionals on exams. Benchmarking has evolved in parallel. Multi- MedQA (Singhal et al., 2023a) combines datasets to evaluate both factual knowledge and clin- ical reasoning. Other benchmarks, such as MedQA (Jin et al., 2020) and the medical subset of MMLU (Hendrycks et al., 2021), are commonly used to assess instruction-following and medical understanding. In the Japanese context, GPT-style healthcare LLMs are still emerging. Recent projects (Sukeda et al., 2023, 2024a,b) have focused on adapting LLMs for Japanese medical question answering. The standard benchmarks are also being devel- oped (Sukeda, 2024; Jiang et al., 2024), exempli- fied by IgakuQA (Kasai et al., 2023), based on the Japanese national medical licensing exam. These developments in both English and Japanese highlight a global trend toward aligning LLMs with clinical expertise across languages and contexts. While significant progress has been made 2 in the medical field, efforts in the pharmaceutical domain remain limited, and the few existing mod- els (Chen et al., 2024; Chaves et al., 2024) are not publicly available. 2.2 Meditron Among existing domain-specific medical LLMs, Meditron (Chen et al., 2023) is particularly relevant to our work. 
Meditron is a family of open-source LLMs with 7B and 70B parameters, built upon Llama 2 (Touvron et al., 2023),
and adapted with medical con- tinual pretraining and supervised fine-tuning using curated English medical corpus. It demonstrates strong performance in MedQA (Jin et al., 2020), making it a prominent example of an open medi- cal LLM. The work is further extended by Open Meditron Initiative2. In contrast, our work focuses on the Japanese language and the pharmaceutical domain, both of which remain underexplored. With strong perfor- mance on YakugakuQA, our model serves as a Japanese-pharmaceutical counterpart to Meditron. This parallel extends to benchmarks as well: Med- itron is evaluated on MedQA (Jin et al., 2020), while our model is evaluated on YakugakuQA (ours) and IgakuQA (Kasai et al., 2023), which are all based on national licensing exams in their respective languages and domains. 3 Benchmark construction Pharmaceutical domain has not received as much attention for LLM applications, resulting in a lim- ited number of evaluation benchmarks, especially in Japanese. When the focus is solely on therapeu- tics data, a comprehensive benchmark for therapeu- tics machine learning called the Therapeutic Data Commons (Huang et al., 2022) can be applied to LLM evaluations (Chaves et al., 2024). However, the performance of LLMs in the broader pharma- ceutical domain has only been evaluated on the North American Pharmacist Licensure Examina- tion (NAPLEX) (Ehlert et al., 2024; Chen et al., 2024), with no evaluations conducted in Japanese. Although MMLU (Hendrycks et al., 2021) and JMMLU (Yin et al., 2024) cover related health- care domains, neither includes pharmaceutics as a distinct category. 2https://huggingface.co/OpenMeditron  Which of the following is not an ideal property of a dilute solution? Choose one. 1.Vapor pressure lowering 2.Freezing point depression 3.Boiling point elevation 4.Surface tension reduction 5.Osmotic pressure   Figure 3: An example question from the Japanese National Pharmacist Licensing Examination. The model is required to output “4” in this case. The ques- tion is originally in Japanese, but translated into English by ChatGPT for readability. 3.1 Overview of JP HARMA BENCH To evaluate language models in the Japanese phar- maceutical domain, we constructed three novel benchmarks, each reflecting a different type of rea- soning or knowledge required in real-world phar- maceutical practice: factual recall, terminology nor- malization, and inconsistency detection (Table 1). All benchmarks are based on publicly available data and are structured as question-answering tasks, making them compatible with various LLMs. 3.2 YakugakuQA: National Licensing Exam YakugakuQA is a question-answering dataset based on the Japanese national pharmacist licensing ex- aminations (NPLE) administered by the Ministry of Health, Labour and Welfare. As illustrated in Figure 3, each question requires selecting one or two correct answers from five or six choices. As summarized in Table 2, YakugakuQA serves as a pharmaceutical counterpart to IgakuQA. We have collected the exam data from the past 13 years, from 2012 to 2024. All questions, answers, and commentaries have been obtained from the website yakugaku lab3and manually pro- cessed. The category varies among pharmacy and eight other related areas: pharmacy, pharmacology, chemistry, pathology, hygiene, physics, practice, law, and biology. Some questions in the NPLE require responses based on a provided image — for example, iden-
tifying a chemical reaction depicted in the image. However, such image-based questions are excluded from our experiments, as our study focuses on LLMs designed for text input. The number of questions by year and category used in our experiments is shown in Table 6.
3 https://yakugakulab.info/

Table 1: An overview of JPharmaBench, the three pharmaceutical benchmarks for evaluation. Each task is designed to assess different capabilities of LLMs in domain-specific settings.
Benchmark    Format             Main Skill                  Source                   #Examples   Language(s)
YakugakuQA   4-to-6-choice QA   Factual recall              Licensing exams          3,021       Japanese
NayoseQA     5-choice QA        Terminology normalization   KEGG DRUG Database       34,769      Japanese / English
SogoCheck    Sentence pair      Inconsistency detection     Japanese Pharmacopoeia   200         Japanese

Table 2: National licensing exams. These are typically used as benchmarks when evaluating domain-specific LLMs in medical-related fields.
            English                    Japanese
Medicine    MedQA (Jin et al., 2020)   IgakuQA (Kasai et al., 2023)
Pharmacy    NAPLEX (not structured)    YakugakuQA (Ours)

3.3 NayoseQA: Synonym and Terminology Normalization in the Pharmaceutical Domain
NayoseQA is our original benchmark designed to evaluate LLMs' ability to handle lexical variation and term normalization in pharmaceutical texts written in Japanese. The task focuses on resolving different surface forms of the same underlying drug or chemical entity, including:
• Japanese name ↔ English name
• brand name ↔ generic name (e.g., Ganaton ↔ Itopride hydrochloride)
• chemical name ↔ common name (e.g., Prostaglandin E2 ↔ PGE2)
This type of normalization is commonly referred to as "nayose" in Japanese, a term used in information systems to describe the process of identifying and consolidating records that refer to the same real-world entity. In our context, it involves linguistic and domain-specific reasoning to recognize synonymous terms for pharmaceutical compounds. In real-world pharmaceutical documents and practice in Japan, such variations are common due to regulatory terminology, manufacturer-specific branding, and historical naming conventions. Accurately interpreting and normalizing these variations is essential for drug interaction checks, medical record standardization, and multilingual information retrieval.

Text A: Storage method: sealed container. Temperature below 25°C. Humidity below 60%.
Text B: Storage method: sealed container. Temperature below 26°C. Humidity below 61%.
Label: Change in temperature and humidity
Figure 4: A simple example from SogoCheck. The numbers are inconsistent across the two inputs. Originally in Japanese, but translated for readability.

3.4 SogoCheck: Inconsistency Detection in Paired Pharmaceutical Statements
SogoCheck is a novel benchmark we introduce to evaluate an LLM's ability to detect logical or factual inconsistencies (referred to as "sogo" in Japanese) between two pieces of text in the pharmaceutical domain. Unlike factual question-answering benchmarks, which assess whether a synthetic text contains any factual errors (Zhao et al., 2023), SogoCheck focuses on cross-text consistency. The task is inspired by a common practice in pharmaceutical quality assurance in Japan, where experts conduct consistency reviews to cross-validate information across documents such as package inserts, internal quality assurance logs, and regulatory submissions.
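Figure 4 above shows the simplest kind of SogoCheck pair, where the two statements differ only in surface-level numbers. As a point of reference for how such clear-cut cases could be caught without any model at all, the sketch below flags token-level differences between two aligned statements; it is an illustrative baseline under our own assumptions, not part of the paper's evaluation (the actual task setup is described next).

```python
import re

def surface_diff(text_a: str, text_b: str) -> list[tuple[str, str]]:
    """Return pairs of aligned tokens that differ between two statements.

    This only catches surface-level mismatches (e.g., 25 vs. 26), the kind of
    clear-cut inconsistency shown in Figure 4; subtler contradictions still
    require pharmacological reasoning.
    """
    pattern = r"\d+(?:\.\d+)?|\w+|[^\w\s]"
    tokens_a = re.findall(pattern, text_a)
    tokens_b = re.findall(pattern, text_b)
    if len(tokens_a) != len(tokens_b):
        return [("<length>", f"{len(tokens_a)} vs {len(tokens_b)} tokens")]
    return [(a, b) for a, b in zip(tokens_a, tokens_b) if a != b]

text_a = "Storage method: sealed container. Temperature below 25°C. Humidity below 60%."
text_b = "Storage method: sealed container. Temperature below 26°C. Humidity below 61%."
print(surface_diff(text_a, text_b))  # [('25', '26'), ('60', '61')]
```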
In this task, the model is presented with a pair of short Japanese texts, typically drawn from regulatory documents, drug descriptions, or quality assurance manuals. The model is asked to determine whether the two
statements are consistent, either explicitly or implicitly. Some examples are clear-cut (e.g., numerical mistakes, see Figure 4), while others require pharmacological reasoning or recognition of subtle semantic contradictions. The final dataset includes 200 examples, synthesized with an LLM to balance clarity and realism. This benchmark is particularly valuable because inconsistency detection is crucial in practical workflows such as regulatory review, where conflicting information can lead to severe medical or legal consequences.

4 Model & Training
We developed a domain-specific language model, JPharmatron, through continual pretraining at three different data scales on Qwen2.5-7B (Yang et al., 2024), a multilingual open-source language model that also supports Japanese input, followed by evolutionary merging. This base model was chosen for its strong general performance, multilingual capacity, and availability under a commercially permissible license.
To inject domain-specific knowledge while preserving general language capabilities, we adopted continual pretraining rather than training from scratch. We prepared three variations of the training corpus:
2B tokens: Approximately 2B Japanese tokens sourced from pharmaceutical-related documents such as journal papers and drug package inserts;
10B tokens: The above 2B Japanese tokens combined with an additional 8B English tokens from PubMed Abstracts;
9B tokens: Based on the 10B-token corpus, further augmented with 1.2B tokens from the CC100 multilingual dataset. After removing duplicates, the final count was 9B tokens (see Appendix C for details).
Training was conducted using standard autoregressive language modeling objectives with the original tokenizer of Qwen2.5. Table 3 provides an overview of the training configuration and data composition. In addition, model merging was performed to attach instruction-following ability to the model. Further details on data collection, cleaning, and preprocessing pipelines are deferred to Appendix C.

Table 3: Details of model training settings.
Method          Continual pretraining
Base model      Qwen2.5-7B
Japanese data   2B tokens (pharma-related)
English data    8B tokens (mainly PubMed Abstracts)
Tokenizer       Qwen2.5 tokenizer
Steps           67171
Batch size      16
Optimizer       hybridadam
Learning rate   1.0 × 10^-5
GPU             8 × NVIDIA H100
Framework       Pai-Megatron-Patch
GPU hours       444

We emphasize that our goal was not to outperform proprietary LLMs like GPT-4o, but to develop a practically deployable model as a first baseline that balances accuracy, efficiency, and privacy for real-world use in Japanese pharmaceutical contexts. This lightweight domain adaptation strategy enables enterprises to build specialized models without large-scale resources (§6.2).

5 Evaluation
5.1 Experimental Setups
We evaluated our domain-specific model against three types of baseline models: (1) a general-purpose Japanese LLM (Swallow series or equivalent), (2) a medical LLM (Meditron; we use Meditron3-Qwen2.5-7B from OpenMeditron, as the older version (Chen et al., 2023) lacks sufficient Japanese support and our model is also based on Qwen2.5-7B, ensuring a fair evaluation), and (3) GPT-4o via the OpenAI API. Evaluation was conducted across the three newly proposed benchmarks (YakugakuQA, NayoseQA, and SogoCheck) as well as two existing Japanese medical benchmarks: IgakuQA and a pharmaceutical subset of JMMLU. This setup enables direct comparison with prior work. To ensure fairness, all models were prompted with consistent formatting (details provided in Appendix B).
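As a rough illustration of what this setup involves for the GPT-4o baseline, the sketch below sends one formatted multiple-choice question through the OpenAI chat completions API. The prompt template and helper name are our own placeholders, since the actual prompts are given in the paper's Appendix B, which is not reproduced here.

```python
from openai import OpenAI  # assumes the official openai Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt template; the wording actually used is in the paper's
# Appendix B, and the real benchmark questions are in Japanese.
TEMPLATE = (
    "Answer the following exam question. "
    "Output only the number(s) of the correct option(s).\n"
    "Question: {question}\nOptions: {options}"
)

def ask_gpt4o(question: str, options: str) -> str:
    """Send one multiple-choice question to GPT-4o and return its raw answer text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0.0,
        messages=[{"role": "user", "content": TEMPLATE.format(question=question, options=options)}],
    )
    return response.choices[0].message.content.strip()
```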
For multiple-choice questions, models were instructed to select one or more answer options as appropriate, where the accuracy was
measured based on exact match.

5.2 Quantitative results
Table 4 shows the accuracy of each model on each benchmark. While GPT-4o achieved the highest accuracy overall, as expected from a frontier commercial LLM, our domain-specific model consistently outperformed both Meditron and the general-purpose Japanese model across all tasks. This highlights the effectiveness of domain-specific continual pretraining in Japanese, and establishes our model as the strongest open alternative for pharmaceutical NLP tasks in the Japanese language.
Breaking down by benchmark, on YakugakuQA our model achieved an accuracy of 62.0%, outperforming Meditron3-Qwen2.5-7B by 7.9 points. This result suggests that factual pharmaceutical knowledge can be effectively captured through continual pretraining, even without training from scratch. In addition, it suggests that medical domain specialization alone may be insufficient for handling pharmaceutical tasks effectively. The accuracy results by category are listed in Table 5, along with additional larger models for reference: Llama-3.1-Swallow-70B (Fujii et al., 2024), Qwen2.5-72B-Instruct (Yang et al., 2024), and o1-preview via the OpenAI API.
In NayoseQA, which tests synonym normalization and cross-lingual terminology mapping, the performance gap between our domain-specific model and the general-purpose model (Llama-3.1-Swallow) was surprisingly small. This suggests that the task primarily requires lexical and semantic matching capabilities rather than deep domain-specific pharmaceutical knowledge. While domain adaptation improved performance modestly, it appears that general LLMs with strong multilingual and synonym handling capabilities can already perform well on such terminology normalization tasks. This indicates that future pharmaceutical LLM development efforts may benefit more from enhancing complex reasoning and factual recall abilities rather than focusing solely on terminology alignment.
Finally, SogoCheck proved to be challenging for all models. While one of our models outperformed Meditron by 7.1 points, the absolute accuracy remained low. Notably, even GPT-4o achieved only 39.1% accuracy, suggesting that subtle consistency detection in specialized domains remains an open research challenge. Interestingly, many SogoCheck examples were intentionally designed to be solvable by simple textual comparison, identifying surface-level differences without requiring deep reasoning (see Figure 4). Despite this, LLMs often failed to detect such inconsistencies, indicating that current models still struggle with fine-grained semantic alignment even when superficial textual clues are available. This gap between human intuition and model behavior highlights a critical limitation in today's LLM architectures.

5.3 Error analysis
We analyze the 16.4% of incorrectly answered questions on YakugakuQA to identify common failure patterns and inform future improvements in domain-specific LLMs such as JPharmatron.
Positional Bias. Consistent with previous works (Marchisio et al., 2024; Trung et al., 2024), we observed a positional bias in GPT-4o's responses on YakugakuQA, where the model exhibited a tendency to favor the first answer choice.
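The two bookkeeping steps just described, exact-match scoring over the selected option set and the per-option tallies quantified next (Figures 5a and 5b), might look like the following sketch; the record structure and example values are illustrative, not the authors' evaluation code.

```python
from collections import Counter

def exact_match(predicted: set[str], gold: set[str]) -> bool:
    """A prediction counts as correct only if the selected options equal the gold set."""
    return predicted == gold

# Illustrative records: the gold option set and the options a model selected.
records = [
    {"gold": {"4"}, "pred": {"4"}},
    {"gold": {"2", "5"}, "pred": {"2"}},  # partial selections are scored as wrong
    {"gold": {"3"}, "pred": {"1"}},
]

accuracy = sum(exact_match(r["pred"], r["gold"]) for r in records) / len(records)

# Positional-bias bookkeeping: how often each option number is selected, and the
# error rate among responses selecting that option (cf. Figures 5a and 5b).
selected = Counter(opt for r in records for opt in r["pred"])
wrong_selected = Counter(
    opt for r in records if not exact_match(r["pred"], r["gold"]) for opt in r["pred"]
)
error_rate = {opt: wrong_selected[opt] / selected[opt] for opt in selected}
print(accuracy, selected, error_rate)
```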
Specifically, the number of responses selecting option “1” exceeded the total number of questions (Figure 5a), and the error rate
for option “1” was the lowest among all choices (Figure 5b). Single vs. Multiple-Choice Question. GPT-4o exhibited a 4.4% higher error rate on multiple- choice questions compared to single-answer ques- tions (Figure 5c). Question category. Figure 5d shows that error rates for chemistry and physics are around 25%, while those for biology and pathology are be- low 10%. This indicates that GPT-4o performs better in biology and pathology, but struggles with calculation-heavy questions in chemistry and physics (Ahn et al., 2024; Li et al., 2024b). The higher performance in biology and pathology may be attributed to the prevalence of fact-based, single- answer questions in these domains. This pattern is commonly observed across various LLMs, as shown in Table 5, and also in JMMLU as shown in Table 8. Complex questions. Based on the previous observation, we employed Qwen2.5-72B- Instruct (Yang et al., 2024) to annotate questions requiring complex reasoning or calculations, following the LLM-as-a-Judge framework (Li et al., 2024a). Although such questions accounted for fewer than 500 out of approximately 3000, they exhibited an error rate of 34.1% (Figure 5e). These results suggest that top-tier LLMs still struggle with calculation-intensive tasks within the pharmaceutical domain. 6 Discussion 6.1 Impact of our Benchmark Suite Our benchmark suite is designed to evaluate a di- verse range of language capabilities required for pharmaceutical NLP. While prior datasets such as IgakuQA and JMMLU primarily focus on factual recall, our benchmarks target additional competen- cies that better reflect the demands of real-world pharmaceutical decision-making. Evaluation results confirm that this broader scope offers meaningful insights. YakugakuQA and NayoseQA showed consistent improvements across most models, suggesting that domain- specific pretraining effectively enhances factual recall and term-level understanding. In contrast, SogoCheck presented a more difficult challenge. 6 Model YakugakuQA NayoseQA SogoCheck IgakuQA JMMLU (1) TinySwallow-1.5B-Instruct 37.2 35.3 3.1 39.0 32.1 sarashina2.2-3b-instruct 46.2 45.6 0.66 41.6 37.8 Llama-3-Swallow-8B-Instruct-v0.1 42.6 29.8 - 41.5 20.6 Llama-3.1-Swallow-8B-Instruct-v0.3 48.2 57.6 - 45.2 44.0 (2) Meditron3-Qwen2.5-7B 54.1 58.3 19.6 58.8 31.7 (3) GPT-4o 83.6 86.0 39.1 86.6 79.1 Ours JP HARMATRON -7B /2B tokens 60.7 58.3 12.5 62.3 55.0 JPHARMATRON -7B /10B tokens 54.8 62.6 22.0 60.1 48.7 JPHARMATRON -7B /9B tokens 62.0 60.9 26.7 64.7 53.2 Table 4: Performance of our LLMs in five pharmaceutical-related benchmarks , compared to (1) a general- purpose Japanese LLM (Swallow series, or equivalent), (2) a medical LLM (Meditron), and (3) GPT-4o. Each value shows the accuracy (%). “-” denotes the lack of instruction-following capability to solve each task. The top two models for each task are highlighted in bold. Model Biology Chemistry Hygiene Law Pathology Pharmacology Pharmacy Physics Practice Overall TinySwallow-1.5B-Instruct 41.1 21.9 34.4 46.5 44.3 27.8 36.9 32.4 38.0 37.2 sarashina2.2-3b-instruct 46.3 36.7 45.8 56.2 56.6 37.8 41.5 29.2 48.6 46.2 Qwen2.5-7B-Instruct 69.1 18.2 52.9 54.3 65.0 46.6 47.4 49.4 55.7 53.9 Meditron3-Qwen2.5-7B 69.1 24.0 54.4 57.5 63.8 47.4 49.1 45.1 54.0 54.1 Llama-3-Swallow-8B-Instruct-v1 46.0 26.4 45.6 56.1 47.3 31.8 34.6 30.2 46.5 42.6 Llama-3.1-Swallow-8B-Instruct-v3 56.4 18.8 48.5 57.5 56.9 42.1 39.4 34.6 49.7 48.2 Llama-3.1-Swallow-70B-Instruct-v1 81.7 41.4 71.2 70.0 82.1 71.1 66.5 55.5 68.6 70.9 Qwen2.5-72B-Instruct 89.8
51.5 72.2 72.5 84.4 76.4 68.7 62.8 70.0 73.6 GPT-4o 94.4 76.1 80.9 83.4 92.1 88.7 81.8 72.6 78.6 83.6 o1-preview 93.3 88.3 88.1 83.3 93.2 90.8 85.0 89.1 84.5 87.9 JPHARMATRON -7B /2B tokens 80.9 28.4 55.9 66.6 71.5 55.7 55.1 55.2 58.6 60.7 JPHARMATRON -7B /10B tokens 70.8 19.3 53.6 57.3 66.9 46.2 48.8 51.7 55.3 54.8 JPHARMATRON -7B /9B tokens 80.5 45.7 57.9 63.8 73.8 58.4 54.9 51.6 61.3 62.0 Table 5: Accuracy of YakugakuQA comparison by category. Each value shows the accuracy (%). The top two categories for each model are highlighted in bold. Most models excel in biology and pathology. Some models showed minor gains, while others failed to improve. As previously shown, the supris- ingly low accuracy of GPT-4o indicates that current LLMs — even the state-of-the-art — struggle with subtle consistency checks in Japanese pharmaceu- tical contexts. These findings highlight the diagnostic value of SogoCheck. Rather than being a standard QA task, it probes semantic understanding capabili- ties that go beyond surface-level knowledge. This suggests that inconsistency detection, especially in high-stakes domains like pharmacovigilance, re- quires capabilities not well-captured by general LLMs. 6.2 Deployable Domain-Specific Models: Challenges and Prospects This study demonstrates the feasibility of build- ing a high-performing, domain-specific LLM in Japanese without relying on commercial APIs. In pharmaceutical settings, where both data sensitiv- ity and operational cost are critical concerns, lo- cally trainable models such as ours present a prac-tical and privacy-conscious alternative. Our open- source setup offers a replicable framework for en- terprises and research groups seeking to train or fine-tune specialized models within secure envi- ronments. Moreover, our benchmark suite lays the groundwork for more practical evaluations of language models in healthcare and pharmaceuti- cal contexts. In particular, tasks like SogoCheck capture practical detection abilities that are not as- sessed by conventional QA benchmarks, thereby suggesting promising directions for future model and dataset development. Despite these advances, the deployment of domain-specific models faces a critical scalability- performance tradeoff. On one hand, 7B-parameter models such as JPharmatron are relatively feasible to deploy using a small cluster of GPUs. On the other hand, such models inevitably fall short of the performance levels achieved by larger models (e.g., 70B). Bridging this gap without compromis- ing deployability remains an open challenge, and we believe our work represents a meaningful first step toward addressing this dilemma. 7 (a) Positional bias (count) (b) Positional bias (error rate) (c) Single-choice vs. Multiple-choice (d) Category-wise accuracy (e) Complex questions Figure 5: Error analysis on GPT-4o’s responses in YakugakuQA. Our ultimate goal in this field is to achieve a strong and useful pharmaceutical LLM. To this end, we need to further strengthen open models, as commercial models are often unavailable or re- stricted by regulations. Our experimental results, particularly those discussed in §5.3, suggest three directions for future work, listed in order of priority: (i) improving performance in core subjects to reach parity with commercial models, (ii) enhancing the overall capabilities of LLMs, and (iii) addressing weaknesses in lower-performing subjects. While the best open models already achieve acceptable performance, they
still lag clearly behind commer- cial counterparts (Table 5). As a next step, it is essential to evaluate how much performance can be improved in targeted subject areas, depending on the intended application of the model, by sim- ply incorporating a substantial amount of relevant training data. For the lower-performing subjects, in- cluding the improvement in chemistry and physics, both domain knowledge and reasoning ability must be significantly strengthened. However, consider- ing development costs, we argue that addressingthese weaknesses may not be a high priority in prac- tice, as they can often be circumvented by limiting the task scope from application sides. 7 Conclusion We presented JPharmatron , a Japanese domain- specific LLM for the pharmaceutical field, trained via continual pretraining on a bilingual pharmaceu- tical corpus. Alongside the model, we introduced JPharmaBench , the first benchmark suite covering diverse pharmaceutical language tasks. Our model outperforms existing open medical LLMs across diverse pharmaceutical tasks, highlighting that gen- eral medical specialization alone is insufficient for pharmaceutical applications. Notably, the bench- mark includes tasks such as SogoCheck, which reflect real-world document validation workflows unique to the pharmaceutical domain. Beyond re- leasing a domain-specific model and benchmark, our work demonstrates the feasibility of building cost-effective, specialized LLMs deployable in se- cure, resource-constrained environments, which is 8 critical for real-world use in privacy-sensitive do- mains like pharmaceuticals. 8 Limitations Lack of Complete Instruction-Following Ability in LLMs Some smaller models tend to deviate from instruc- tions, often generating output that includes extra- neous text beyond the expected format. A com- mon error is the inclusion of additional phrases or explanations following a colon or line break. To ensure a fair comparison in our experiments, we post-processed model outputs by extracting only the selected choice and discarding any extra text. Limitations of YakugakuQA Firstly, questions with images should be addressed. In particular, the chemistry category lacks sufficient coverage due to the high proportion of image-based questions. While the rise of multimodal models, especially vision-language models, is an important development, this study focuses exclusively on text- only large language models. Therefore, image- based questions were excluded from our evaluation. In the future, this limitation should be revisited when assessing multimodal models. Moreover, YakugakuQA is a simple five-choice question-answering task, which may not be suf- ficient for practical implementation, although it could serve as a minimum requirement. Last but not least, the prompting strategy can also be improved. In our work, we used a simple setup as an initial step in this field. It should be noted that in-context learning of LLMs has the po- tential to boost performance, as demonstrated by Medprompt (Nori et al., 2023) in medical question- answering for example. This point remains contro- versial (Nori et al., 2024) and was not addressed in this study. Limitations of NayoseQA Although we introduce a novel benchmark NayoseQA, its current format is limited to multiple- choice QA. While this format enables controlled evaluation, it may not fully reflect the practical needs of real-world entity normalization systems, where open-ended or instruction-following formats are more appropriate. To address this, we have
sep- arately released an instruction-style (SQuAD (Ra- jpurkar et al., 2016)-type) variant of NayoseQA, which is not included in the main results but mayserve as a valuable resource for future work on more realistic applications. Limitations of SogoCheck SogoCheck is currently limited in scale, with only a small number of consistency pairs included in the benchmark. This restricts the statistical robust- ness of evaluation and may limit its confidence across different model types and domains. In ad- dition, generating realistic inconsistencies is inher- ently challenging. While we employed LLM-based generation methods to create contradictory state- ment pairs, it remains difficult to simulate subtle, human-like inconsistencies that naturally occur in real-world pharmaceutical texts. Many automat- ically generated inconsistencies tend to be either too trivial or too artificial, reducing their diagnos- tic value. Developing more authentic and diverse inconsistency examples remains an open challenge for future work. 9 Acknowledgement This paper is based on results obtained from GE- NIAC (Generative AI Accelerator Challenge, a project to strengthen Japan’s generative AI devel- opment capabilities), a project implemented by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO). We thank Yuki Kobiyama, Kouta Hiroe, Masabumi Ishihara, and Miyuki Toyoi for help- ful supports. We used ChatGPT-4o for the support of proofreading. References Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, and Ari S Morcos. 2023. Semdedup: Data- efficient learning at web-scale through semantic dedu- plication. arXiv preprint arXiv:2303.09540 . Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin. 2024. Large language models for mathematical reasoning: Progresses and challenges. In Proceedings of the 18th Conference of the European Chapter of the Association for Com- putational Linguistics: Student Research Workshop , pages 225–237, St. Julian’s, Malta. Association for Computational Linguistics. Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, and David Ha. 2025. Evolutionary optimization of model merging recipes. Nature Machine Intelligence , pages 1–10. 9 Loubna Ben Allal, Anton Lozhkov, Elie Bak- ouch, Gabriel Martín Blázquez, Guilherme Penedo, Lewis Tunstall, Andrés Marafioti, Hynek Kydlí ˇcek, Agustín Piqueres Lajarín, Vaibhav Srivastav, et al. 2025. Smollm2: When smol goes big–data-centric training of a small language model. arXiv preprint arXiv:2502.02737 . Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin John- son, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gau- rav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. 
Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Gar- cia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur- Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hur- witz, Michael Isard, Abe Ittycheriah, Matthew Jagiel- ski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan,
Katherine Lee, Ben- jamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nys- trom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Au- rko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran V odrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wiet- ing, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. PaLM 2 Technical Report. Preprint , arXiv:2305.10403. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems , 33:1877–1901. Gillian Chaloner-Larsson, Roger Anderson, Anik Egan, Manoel Antonio Da Fonseca Costa Filho, Jorge F Gomez Herrera, World Health Organization. Vaccine Supply, and Quality Unit. 1999. A WHO guide to good manufacturing practice (GMP) requirements / written by Gillian Chaloner-Larsson, Roger An- derson, Anik Egan; in collaboration with Manoel Antonio da Fonseca Costa Filho, Jorge F. Gomez Herrera.Juan Manuel Zambrano Chaves, Eric Wang, Tao Tu, Eeshit Dhaval Vaishnav, Byron Lee, S. Sara Mahdavi, Christopher Semturs, David Fleet, Vivek Natara- jan, and Shekoofeh Azizi. 2024. Tx-LLM: A Large Language Model for Therapeutics. Preprint , arXiv:2406.06316. Linqing Chen, Weilei Wang, Zilong Bai, Peng Xu, Yan Fang, Jie Fang, Wentao Wu, Lizhi Zhou, Ruiji Zhang, Yubin Xia, et al. 2024. PharmaGPT: Domain-Specific Large Language Models for Bio- Pharmaceutical and Chemistry. arXiv preprint arXiv:2406.18045 . Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, et al. 2023. Meditron-70b: Scaling medical pretraining for large language mod- els.arXiv preprint arXiv:2311.16079 . Alexa Ehlert, Benjamin Ehlert, Binxin Cao, and Kathryn Morbitzer. 2024. Large Language Models and the North American Pharmacist Licensure Examination (NAPLEX) Practice Questions. American Journal of Pharmaceutical Education , 88(11):101294. Kazuki Fujii, Taishi Nakamura, Mengsay Loem, Hi- roki Iida, Masanari Ohi, Kakeru Hattori, Hirai Shota, Sakae Mizuki, Rio Yokota, and Naoaki Okazaki. 2024. Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Ca- pabilities. In Proceedings of the First Conference on Language Modeling , COLM, page (to appear), University of Pennsylvania, USA. Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vladimir Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz. 2024. Arcee’s MergeKit: A toolkit for merging large lan- guage models. In Proceedings of the 2024 Confer- ence on Empirical Methods in Natural Language Processing: Industry Track , pages 477–485, Miami, Florida, US. Association for Computational Linguis- tics. P. Hager, F. Jungmann, R. Holland, et al. 2024. Eval- uation and mitigation of the limitations of large lan- guage models in clinical decision-making. Nature Medicine , 30(11):2613–2622. 
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring Massive Multitask Language Un- derstanding. Proceedings of the International Con- ference on Learning Representations (ICLR) . Kexin Huang, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. 2022. Artifi- cial intelligence foundation for therapeutic science. Nature Chemical Biology . Pranab Islam, Anand Kannappan, Douwe Kiela, Re- becca Qian, Nino Scherrer, and Bertie Vidgen. 2023. Financebench: A new benchmark for financial ques- tion answering. arXiv preprint arXiv:2311.11944 . 10 Junfeng Jiang, Jiahao Huang, and Akiko Aizawa. 2024. JMedBench: A Benchmark for Evaluating Japanese Biomedical Large Language Models. Preprint , arXiv:2409.13317. Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020. What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams. arXiv preprint arXiv:2009.13081 . Jungo Kasai, Yuhei Kasai, Keisuke Sakaguchi, Yutaro Yamada, and Dragomir Radev. 2023. Evaluating GPT-4 and ChatGPT on Japanese medical licensing examinations. Preprint , arXiv:2303.18027. Haitao Li, Qian Dong, Junjie Chen, Huixue Su, Yu- jia Zhou, Qingyao Ai, Ziyi Ye, and Yiqun Liu. 2024a. Llms-as-judges: a comprehensive survey on llm-based evaluation methods. arXiv preprint arXiv:2412.05579 . Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi. 2024b. GSM-plus: A compre- hensive benchmark for evaluating the robustness of LLMs as mathematical problem solvers. In Proceed- ings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers) , pages 2961–2984, Bangkok, Thailand. Associ- ation for Computational Linguistics. Kelly Marchisio, Saurabh Dash, Hongyu Chen, Den- nis Aumiller, Ahmet Üstün, Sara Hooker, and Se- bastian Ruder. 2024. How does quantization affect multilingual LLMs? In Findings of the Association for Computational Linguistics: EMNLP 2024 , pages 15928–15947, Miami, Florida, USA. Association for Computational Linguistics. Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carig- nan, Richard Edgar, Nicolo Fusi, Nicholas King, Jonathan Larson, Yuanzhi Li, Weishung Liu, et al. 2023. Can generalist foundation models outcom- pete special-purpose tuning? case study in medicine. arXiv preprint arXiv:2311.16452 . Harsha Nori, Naoto Usuyama, Nicholas King, Scott Mayer McKinney, Xavier Fernandes, Sheng Zhang, and Eric Horvitz. 2024. From Medprompt to o1: Exploration of Run-Time Strategies for Medi- cal Challenge Problems and Beyond. arXiv preprint arXiv:2411.03590 . Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language under- standing by generative pre-training. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250 . Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mah- davi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. 2023a. Large language models encode clinical knowledge. Nature , 620(7972):172–180.Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, et al. 2023b. Towards expert-level medical question an- swering with large language models. arXiv preprint arXiv:2305.09617 . 
Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari Morcos. 2022. Beyond neural scal- ing laws: beating power law scaling via data pruning. Advances in
Neural Information Processing Systems , 35:19523–19536. Issey Sukeda. 2024. Development and bilingual evalua- tion of Japanese medical large language model within reasonably low computational resources. arXiv preprint arXiv:2409.11783 . Issey Sukeda, Risa Kishikawa, and Satoshi Kodera. 2024a. 70B-parameter large language models in Japanese medical question-answering. arXiv preprint arXiv:2406.14882 . Issey Sukeda, Masahiro Suzuki, Hiroki Sakaji, and Satoshi Kodera. 2023. JMedLoRA: medical domain adaptation on Japanese large language models using instruction-tuning. arXiv preprint arXiv:2310.10083 . Issey Sukeda, Masahiro Suzuki, Hiroki Sakaji, and Satoshi Kodera. 2024b. Development and analy- sis of medical instruction-tuning for Japanese large language models. Artificial Intelligence in Health , 1(2):107–116. Kushal Tirumala, Daniel Simig, Armen Aghajanyan, and Ari Morcos. 2023. D4: Improving llm pretrain- ing via document de-duplication and diversification. Advances in Neural Information Processing Systems , 36:53983–53995. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288 . Luong Trung, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, and Hang Li. 2024. ReFT: Reasoning with reinforced fine-tuning. In Proceedings of the 62nd Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers) , pages 7601–7614, Bangkok, Thailand. Association for Computational Linguistics. Hirofumi Tsuruta, Hiroyuki Yamazaki, Ryota Maeda, Ryotaro Tamura, and Akihiro Imura. 2024. A SARS- cov-2 interaction dataset and VHH sequence corpus for antibody language models. In The Thirty-eight Conference on Neural Information Processing Sys- tems Datasets and Benchmarks Track . Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. 2023. Ties-merging: Re- solving interference when merging models. Ad- vances in Neural Information Processing Systems , 36:7093–7115. 11 An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024. Qwen2.5 tech- nical report. arXiv preprint arXiv:2412.15115 . Ziqi Yin, Hao Wang, Kaito Horio, Daisuke Kawa- hara, and Satoshi Sekine. 2024. Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance. arXiv preprint arXiv:2402.14531 . Jingqing Zhang, Kai Sun, Akshay Jagadeesh, Paras- too Falakaflaki, Elena Kayayan, Guanyu Tao, Mahta Haghighat Ghahfarokhi, Deepa Gupta, Ashok Gupta, Vibhor Gupta, et al. 2024. The potential and pit- falls of using a large language model such as chat- gpt, gpt-4, or llama as a clinical assistant. Journal of the American Medical Informatics Association , 31(9):1884–1891. Yiran Zhao, Jinghan Zhang, I Chern, Siyang Gao, Pengfei Liu, Junxian He, et al. 2023. Felm: Bench- marking factuality evaluation of large language mod- els.Advances in Neural Information Processing Sys- tems, 36:44502–44523. Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for align- ment. Advances in Neural Information Processing Systems , 36:55006–55021. 
A Ethical considerations
While JPHARMATRON is designed to complete pharmaceutical tasks resembling real tasks in pharmaceutical companies, it has not yet been confirmed to accomplish such tasks at a professionally acceptable quality. This raises several ethical considerations
that must be addressed to ensure responsible development and deployment. Importantly, the model may still generate fac- tually incorrect or misleading content. We rec- ommend to further finetune our model with the company’s real data and conduct additional use- case alignment and testing before deploying it in real-world practice. We further emphasize that the model is not intended for clinical use. Instead, it is suitable for document processing tasks, where po- tential risks can be mitigated through human review and validation of the generated content. The training data may contain biases related to demographics, geographic representation, or com- mercial interests. Additionally, if any data were to originate from patents, proprietary databases, or unpublished sources, there would be a risk of inadvertently disclosing protected content or facil- itating unauthorized reuse. Although all training data used in this study were sourced from publiclyavailable datasets, we acknowledge that this issue was not directly addressed in the current work. B Supplementary information on our benchmarks B.1 YakugakuQA The number of YakugakuQA is listed in Table 6. Among the available questions online, only those with texts were extraceted. Prompt Below are the three-shot examples in- cluded in the prompt throughout our experiments. All of them are originally in Japanese, but trans- lated into English by ChatGPT-4o mini for this article. Question: Which of the following insomnia medications inhibits the orexin receptor? Please select exactly one from the options 1, 2, 3, 4, or 5. 1: Brotizolam 2: Flunitrazepam 3: Eszopiclone 4: Ramelteon 5: Lemborexant Answer: 5 Question: Which two mechanisms of action describe the effects of sacubi- tril/valsartan? Please select exactly two from the options 1, 2, 3, 4, or 5. 1: Inhibits neprilysin, thereby preventing the breakdown of endogenous natriuretic peptides, resulting in vasodilation and diuretic effects. 2: Inhibits angiotensin II receptors, suppressing aldosterone secretion from the adrenal cortex, thereby causing vasodilation. 3: Acts on ANP receptors in the blood vessels and kidneys, activating guanylate cyclase, resulting in vasodilation and diuretic effects. 4: Blocks aldosterone receptors in the collecting ducts, leading to diuretic effects. 5: Inhibits angiotensin-converting enzyme, thereby preventing the for- mation of angiotensin II, resulting in vasodilation. Answer: 1,2 Question: Which of the following migraine prophylactic drugs inhibits 12 Biology Chemistry Hygiene Law Pathology Pharmacology Pharmacy Physics Practice Total 2012 17 4 30 29 37 38 36 17 65 273 2013 16 3 32 28 36 34 33 11 63 256 2014 15 4 28 29 35 37 28 13 63 252 2015 8 3 26 27 35 35 31 9 60 234 2016 10 3 30 27 37 40 29 12 50 238 2017 11 2 28 26 37 36 27 10 54 231 2018 11 4 31 27 36 35 25 10 53 232 2019 9 1 28 28 32 33 26 12 46 215 2020 12 4 25 26 33 33 17 12 42 204 2021 6 2 30 27 35 30 19 10 55 214 2022 9 3 25 27 33 33 24 15 48 217 2023 10 3 23 25 27 33 22 15 47 205 2024 11 11
33 23 28 36 31 18 59 250 Table 6: The number of questions used in our experiments by year and category. The questions that include images have been excluded from the original NPLE. calcitonin gene-related peptide (CGRP)? Please select exactly one from the options 1, 2, 3, 4, or 5. 1: Basiliximab 2: Trastuzumab 3: Benralizumab 4: Galcanezumab 5: Tocilizumab Answer: 4 B.2 Pharmaceutical-related subset of JMMLU The number of questions included in each category of JMMLU which was used in our evaluation ex- periments is listed in Table 7. The category-wise accuracy is shown in Table 8. Consistent with the results in YakugakuQA (Table 5), the overall trend that biology tends to score higher than chemistry and physics is observed. C Model & Training C.1 Data accumulation The continual pretraining corpus used for JPHAR- MATRON is composed of five categories of text, col- lected from publicly available sources. Each data type was selected to contribute domain-relevant knowledge or general linguistic fluency. An overview is provided below: Journal Articles Academic papers and review articles related to pharmacology, pharmacy prac- tice, and clinical medicine. These texts provide rich domain-specific vocabulary and formal written structures.Category The number of questions clinical_knowledge 150 college_biology 143 college_chemistry 99 college_medicine 150 college_physics 100 high_school_biology 148 high_school_chemistry 149 high_school_physics 150 high_school_statistics 150 medical_genetics 99 nutrition 149 professional_medicine 150 virology 150 Total 1787 Table 7: The number of questions by categories in- cluded in pharmaceutical-related JMMLU. PubMed Abstract Subset A curated selection of English abstracts from the PubMed database, focusing on drug-related publications. This source contributes approximately 8 billion tokens and pro- vides a biomedical foundation to complement the Japanese data. Package Inserts approved by PMDA Texts pub- lished by Japan’s Pharmaceuticals and Medical De- vices Agency (PMDA), such as drug approval sum- maries, review reports, and safety alerts. These documents contribute approximately 87 million to- kens and reflect regulatory terminology. Official Documents from Governmental Insti- tutes Documents from government-affiliated or- ganizations including the Pharmaceuticals and Medical Devices Act. 13 Model clinical_ college_ college_ college_ college_ high_school_ high_school_ high_school_ high_school_ medical_ nutrition professional_ virology Over knowledge biology chemistry medicine physics biology chemistry physics statistics genetics medicine -all TinySwallow-1.5B-Instruct 41.3 28.0 29.3 36.0 28.0 40.5 26.8 25.3 28.7 31.3 34.2 30.7 34.0 32.1 sarashina2.2-3b-instruct 39.3 45.5 29.3 42.0 35.0 52.7 26.2 27.3 34.0 40.4 47.7 44.7 24.7 37.8 Qwen2.5-7B-Instruct 52.7 46.9 30.3 41.3 37.0 50.7 36.2 28.7 32.7 48.5 57.7 49.3 41.3 42.9 Meditron3-Qwen2.5-7B 48.7 27.3 19.2 26.7 33.0 37.8 23.5 28.7 34.7 28.3 44.3 33.3 22.0 31.7 Llama-3-Swallow-8B-Instruct-v0.1 30.7 12.6 17.2 25.3 11.0 26.4 20.1 21.3 27.3 11.1 16.1 30.0 11.3 20.6 Llama-3.1-Swallow-8B-Instruct-v0.3 52.0 45.5 35.4 47.3 37.0 55.4 35.6 30.0 36.7 55.6 53.7 44.7 42.0 44.0 GPT-4o 82.7 93.0 60.6 81.3 69.0 85.1 76.5 70.0 82.0 88.9 82.6 94.7 56.7 79.1 Ours (best) 58.7 64.3 44.4 48.7 50.0 65.5 48.3 46.0 64.7 59.6 62.4 58.7 40.7 55.0 Table 8: Accuracy comparison on JMMLU across different subject categories and different LLMs. General-Domain Corpus A part of FineWeb5 and Swallow Dataset6. 
C.2 Data Filtering
We constructed a high-quality, domain-specific corpus for the pharmaceutical domain by leveraging a multi-stage filtering pipeline built upon large language models (LLMs) and trained classifiers. Following SmolLM2 (Allal et al., 2025), the overall procedure consists of three steps:
1. We first sampled a subset of documents from the Common Crawl dataset (CC100). A high-performing LLM (Qwen2.5-72B) was prompted to assign each page a pharmaceutical relevance score ranging from 0 (irrelevant) to 5 (highly relevant).
2. Using 54,056 LLM-labeled samples, we trained a classifier to predict the pharmaceutical relevance score of input documents. Pages scoring 1 or higher were retained.
3. The retained documents were further evaluated using the same LLM to assign an educational quality score (0-5). A second classifier, trained on 5,478 LLM-labeled samples, was used to filter out documents with an educational quality score of 3 or lower. This ensured that the resulting data not only pertains to pharmaceutical content but is also of pedagogical value.
All training data for both classifiers were generated using high-confidence outputs from the Qwen2.5-72B model. Both classifiers were trained following the configuration of the finemath-classifier framework (https://huggingface.co/HuggingFaceTB/finemath-classifier). As a result of this filtering pipeline, we collected 904,651 high-quality, pharmaceutical-related documents (totalling 1.2 billion tokens) from the deduplicated Common Crawl (llm-jp-corpus-v3, https://gitlab.llm-jp.nii.ac.jp/datasets/llm-jp-corpus-v3).
5 https://huggingface.co/datasets/HuggingFaceFW/fineweb
6 https://huggingface.co/datasets/tokyotech-llm/swallow-magpie-ultra-v0.1
C.3 Data cleansing
In this study, we employed the D4 algorithm (Tirumala et al., 2023) to perform data deduplication, aiming to reduce redundant information. D4 is primarily composed of SemDeDup (semantic deduplication) (Abbas et al., 2023) and SSL Prototypes (self-supervised learning prototypes) (Sorscher et al., 2022). The former uses k-means clustering to eliminate texts whose cosine similarity exceeds 1−ϵ. We set ϵ = 3×10⁻⁸ as the discarding threshold in SemDeDup and R = 0.95 as the discarding proportion in SSL Prototypes, respectively. In summary, the total number of tokens was reduced from 10B to 9B.
C.4 Base model selection
Discussions of industrial applications often lead to cost considerations. Unlike development for research purposes, the operational cost in the inference phase must also be taken into account; otherwise, no institution can afford to use the trained model. Training a model from scratch to learn Japanese was deemed prohibitively costly. Therefore, in selecting the base model, we prioritized a pretrained model that had already been trained on Japanese data, and we also sought a model with a commercially viable license that would facilitate its adoption within the pharmaceutical industry. We restricted the model size to around 7B for better usability, considering both training and inference costs. Based on these criteria, we chose Qwen2.5-7B (Yang et al., 2024) as the base model.
Merge method          YakugakuQA (%)
TIES (weight 8:2)     57.2
TIES (weight 7:3)     59.0
TIES (weight 6:4)     60.4
DARE TIES by EvoLLM   60.7
Table 9: Accuracy comparison on YakugakuQA across different merging methods. Qwen2.5-7B-Instruct was used as the base model and JPHARMATRON-7B (Ours) was used as the auxiliary model.
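As a rough illustration of the SemDeDup stage of the D4 pipeline described in C.3 above, the sketch below clusters document embeddings with k-means and, within each cluster, discards documents whose cosine similarity to an already kept document exceeds 1−ϵ. This is not the implementation used in this work: the embedding source and the number of clusters are placeholder assumptions, and only ϵ is taken from C.3.

```python
import numpy as np
from sklearn.cluster import KMeans

def semdedup(embeddings: np.ndarray, n_clusters: int = 100, eps: float = 3e-8) -> list:
    """Return indices of documents kept after SemDeDup-style deduplication.

    embeddings: (N, d) document embeddings from any encoder (placeholder choice).
    eps: a document is dropped if its cosine similarity to an already kept
         document in the same cluster exceeds 1 - eps (value from C.3).
    """
    # Normalize rows so that dot products equal cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(normed)

    keep = []
    for c in range(n_clusters):
        cluster_kept = []
        for i in np.where(labels == c)[0]:
            sims = normed[cluster_kept] @ normed[i]  # similarities to already kept docs
            if sims.size == 0 or sims.max() <= 1.0 - eps:
                cluster_kept.append(int(i))
        keep.extend(cluster_kept)
    return sorted(keep)
```

In the actual D4 pipeline, the surviving documents are further pruned with SSL Prototypes (keeping a proportion R = 0.95 by distance to cluster prototypes), which is omitted from this sketch.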
C.5 Enhancing Instruction Following via Model Merging
Our domain-specific model trained through continued pretraining exhibited poor instruction-following capabilities. As a result, such models struggle to answer multiple-choice questions correctly, rendering them ineffective for standard benchmark evaluations, which rely heavily on such tasks. Instead of applying supervised fine-tuning (SFT), which can be resource-intensive and requires carefully aligned datasets, we adopt a lightweight approach by leveraging model merging. Specifically, we aim to endow a domain-adapted model with strong instruction-following and reasoning capabilities by merging it with a general-purpose instruction-tuned model. To this end, we designate Qwen2.5-7B-Instruct as the base model, given its demonstrated strength in instruction adherence and task generalization. The domain-specific model, pretrained on 2B tokens of pharmaceutical texts, serves as the knowledge-rich counterpart in the merge. We employ the TIES merging strategy (Yadav et al., 2023) provided by mergekit (Goddard et al., 2024), and assign a weight to balance the retention of domain knowledge while preserving the core reasoning and output structure of the instruction-tuned base model. Table 9 shows the superiority of EvoLLM (Akiba et al., 2025) coupled with DARE TIES merging.
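For readers unfamiliar with TIES, the following is a simplified, self-contained sketch of the trim / elect-sign / disjoint-merge steps it performs over task vectors (parameter deltas relative to a shared base). It stands in for the mergekit implementation actually used and does not reproduce its exact configuration; the function name, the density value, and the 0.7/0.3 example weights (echoing the 7:3 setting in Table 9) are illustrative assumptions.

```python
import torch

def ties_merge(base: dict, models: list, weights: list, density: float = 0.5) -> dict:
    """Simplified TIES merging (Yadav et al., 2023) over PyTorch state dicts.

    base    : state dict of the shared base model (all models assumed to share its keys).
    models  : state dicts of the fine-tuned models to merge.
    weights : per-model mixing weights, e.g. [0.7, 0.3].
    density : fraction of largest-magnitude delta entries kept per model ("trim").
    """
    merged = {}
    for name, base_param in base.items():
        deltas = []
        for sd, w in zip(models, weights):
            # 1. Trim: keep only the top-`density` fraction of each task vector by magnitude.
            delta = (sd[name] - base_param).float()
            k = max(1, int(density * delta.numel()))
            threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
            delta = torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta))
            deltas.append(w * delta)
        stacked = torch.stack(deltas)                      # (num_models, *param_shape)
        # 2. Elect sign: per-entry majority sign of the weighted deltas.
        elected = torch.sign(stacked.sum(dim=0))
        # 3. Disjoint merge: average only entries whose sign agrees with the elected sign.
        agree = (torch.sign(stacked) == elected) & (stacked != 0)
        num = (stacked * agree.to(stacked.dtype)).sum(dim=0)
        den = agree.sum(dim=0).clamp(min=1)
        merged[name] = base_param + num / den
    return merged
```

A call such as `ties_merge(base_sd, [instruct_sd, domain_sd], weights=[0.7, 0.3])` would weight the instruction-tuned deltas more heavily, mirroring how the weight in Table 9 trades domain knowledge against instruction adherence.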
R1-ShareVL: Incentivizing Reasoning Capability of Multimodal Large Language Models via Share-GRPO Huanjin Yao2,3*, Qixiang Yin4*, Jingyi Zhang1, Min Yang2, Yibo Wang3, Wenhao Wu5 Fei Su4, Li Shen1, Minghui Qiu2, Dacheng Tao1, Jiaxing Huang1 1Nanyang Technological University2ByteDance3Tsinghua University 4Beijing University of Posts and Telecommunications5The University of Sydney ∗Equal Contribution Corresponding Author Abstract In this work, we aim to incentivize the reasoning ability of Multimodal Large Language Models (MLLMs) via reinforcement learning (RL) and develop an ef- fective approach that mitigates the sparse reward and advantage vanishing issues during RL. To this end, we propose Share-GRPO, a novel RL approach that tackle these issues by exploring and sharing diverse reasoning trajectories over expanded question space. Specifically, Share-GRPO first expands the question space for a given question via data transformation techniques, and then encourages MLLM to effectively explore diverse reasoning trajectories over the expanded question space and shares the discovered reasoning trajectories across the expanded questions dur- ing RL. In addition, Share-GRPO also shares reward information during advantage computation, which estimates solution advantages hierarchically across and within question variants, allowing more accurate estimation of relative advantages and im- proving the stability of policy training. Extensive evaluations over six widely-used reasoning benchmarks showcase the superior performance of our method. Code will be available at https://github.com/HJYao00/R1-ShareVL . 1 Introduction The recent success of Reinforcement Learning (RL) in Large Language Models (LLMs), such as Kimi-K1.5 [ 1] and DeepSeek-R1 [ 2], shows its promise in incentivizing model’s long-chain reasoning capability, enabling LLMs to tackle complex tasks such as mathematical and scientific reasoning. The core design of these advances ( e.g., GRPO [ 3] in Deepseek-R1) lies in online reinforcement learning without the need of reward models, which encourages an LLM to generate a group of reasoning paths and iteratively refine its reasoning process with a group relative advantage estimation mechanism based on rule-based reward functions. Typically, a simple reward strategy is adopted: reasoning paths leading to correct answers receive higher rewards, while those leading to incorrect answers receive lower ones, where the model is optimized via the group relative advantages estimated from the rewards. Inspired by these advancements, we aim to develop a simple and effective reinforcement learning method for Multimodal LLMs (MLLMs) to incentivize their long-chain reasoning ability. A simple way is to directly apply these LLM online reinforcement learning methods like GRPO on MLLMs. However, we empirically observe that directly applying GRPO on MLLMs suffers from sparse reward and advantage vanishing issues, leading to degraded performance in enhancing MLLM’s reasoning capability [4, 5, 6]: (1) Sparse reward: Most current MLLMs, especially smaller ones, exhibit very limited long-chain reasoning capability. As a result, only a few generated reasoning paths receive positive rewards, Preprint.arXiv:2505.16673v1 [cs.CV] 22 May 2025 Expanded Solution SpaceUse parallelogram 𝐽𝐾𝐿𝑀to find b if 𝐽𝐾=2𝑏+3and 𝐽𝑀=3𝑎 Use parallelogram 𝐽𝐾𝐿𝑀to find b if 𝐽𝐾=2𝑏+3and 𝐽𝑀=3𝑎 For parallelogram 𝐽𝐾𝐿𝑀, if the measure of 𝐽𝐾is2𝑏+3and the measure of 𝐽𝑀is3𝑎, calculate the value of𝑏. This image has been rotated by 90 degrees. 
Please mentally rotate it back and solve the problem. For parallelogram 𝐽𝐾𝐿𝑀, if the measure of 𝐽𝐾is2𝑏+3and the
measure of 𝐽𝑀is3𝑎, calculate the value of𝑏.𝑄𝑆!𝑄=𝐼,𝑇 𝑄!=𝐼",𝑇 𝑄#=𝐼,𝑇"𝑄"𝑆!!𝑄#𝑆!"𝑄$𝑆!#𝑸Question (Image-Text)𝑺𝑸 Solution SpaceSemantically Consistent Transformation 𝑄$=𝐼",𝑇"(a) Explore and share diverse reasoning trajectories over expanded question space(b) Reward Curve (Left) and Valid Advantage Ratio Curve (Right) (c) Comparison with other Reasoning MLLMs. 204060800.450.550.65 Step204060800.60.70.8 StepRewardValid Advantage RatioShare-GRPOGRPOShare-GRPOGRPO 405060708090 MathVistaMMStarMathVerse 52.86775.450.365.17347.963.270.249.263.968.2Qwen2.5-VL-7BOpenVLThinker-7BMM-Eureka-7BR1-Share-7BResponseSharingFigure 1: (a)Share-GRPO expands the question space via semantically consistent transformations, and then explores diverse reasoning trajectories from different question variants and shares the discovered trajectories among them. (b)Share-GRPO provides denser rewards and higher valid advantage ratios compared to GRPO, demonstrating its effectiveness in mitigating sparse reward and advantage vanishing issues. (c)Share-GRPO outperforms the baseline and other SOTA RL-based reasoning MLLMs on both mathematical and general reasoning benchmarks. especially on challenging questions and particularly during the early stage of training. This leads to sparse rewarding, inefficient exploration and instable training in GRPO-like methods. (2) Advantage vanishing: GRPO-like methods compute relative advantages by comparing the rewards of a group of responses sampled from a given question, leading to advantage vanishing when receiving homogeneous responses. Specifically, along reinforcement learning process, the model tends to gradually predict similar and all correct responses for well-learned questions, and similar and all incorrect responses for poor-learned questions. In this way, the relative advantages tend to approach zero when the group of responses become more homogeneous, and collapse to zero when all responses receive identical rewards ( e.g., all correct or all incorrect), resulting ineffective reinforcement learning. Motivated by these observations, we propose Share-GRPO, a novel approach that introduces the concept of information sharing into MLLM reinforcement learning to mitigate sparse reward and advantage vanishing issues. The core idea of Share-GRPO lies in exploring and sharing diverse reasoning trajectories over expanded question space as shown in Fig. 1 (a). Specifically, Share- GRPO first expands the question space for a given question via data transformation techniques, and then encourages MLLM to effectively explore diverse reasoning trajectories over the expanded question space and shares the discovered reasoning trajectories across the expanded questions during reinforcement learning. In this way, each expanded question variant can both contribute and benefit from the reasoning trajectories generated by others in the expand question space, allowing the model to jointly explore and learn from a shared solution space across expanded questions. In addition, Share-GRPO also shares reward information during advantage computation, which estimates solution advantages hierarchically across and within question variant, allowing more accurate estimation of relative advantages and improving the stability of policy training. Specifically, we estimate advantages at two levels: a local level, which consists of responses generated from each individual question variant, and a global level, which aggregates responses across all variants of the same seed question. 
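A minimal sketch of this two-level estimate, under our own notation (rewards arranged as an m-variant by n-response array for one seed question, as formalized later in Sec. 3.2.2); the handling of cross-variant responses follows the description above:

```python
import numpy as np

def shared_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Hierarchical (global + local) advantage estimation for Share-GRPO.

    rewards : (m, n) array with the rewards of n responses for each of the
              m variants of the same seed question.
    The global term normalizes each reward against all m*n responses of the
    seed question; the local term, added when a response is scored against
    its own variant, normalizes it against that variant's n responses only.
    """
    global_adv = (rewards - rewards.mean()) / (rewards.std() + eps)
    local_adv = (rewards - rewards.mean(axis=1, keepdims=True)) / (
        rewards.std(axis=1, keepdims=True) + eps
    )
    # Returned values correspond to the j == k case; when a response is reused
    # under a different variant (j != k), only global_adv is applied.
    return global_adv + local_adv

# Example: 2 variants x 3 responses with binary correctness rewards.
print(shared_advantages(np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 0.0]])))
```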
This hierarchical advantage estimation enables more robust and fine-grained relative advantage computation: the local level captures intra-variant structure and variance, while the global level exploits cross-variant diversity and complementarity and stabilizes reward signals. In this way, Share-GRPO effectively mitigates the sparse reward and advantage vanishing issues: (1) Share-GRPO expands the question space and enables a more diverse solution space for each given question, which effectively increases the likelihood of generating
a successful reasoning response and thus mitigates the sparse rewarding issue as illustrated in the left curve of Fig. 1 (b). (2) Share-GRPO allows the model to explore diverse reasoning trajectories from the expanded question space and 2 shares the discovered reasoning trajectories, ultimately mitigating the advantage vanishing issue effectively as illustrated in the right curve in Fig. 1 (b). (3) Share-GRPO estimates solution advantages hierarchically across and within question variant, which enables more accurate estimation of relative advantages and stable reinforcement learning process. In summary, the main contributions of this work are summarized as follows: First, we introduce the concept of information sharing into MLLM reinforcement learning, and propose Share-GRPO which explores and shares diverse reasoning trajectories over expanded question space, effectively mitigating the sparse reward and advantage vanishing issues. To the best of our knowledge, this is the first work that explores information sharing for MLLM reasoning reinforcement learning. Second, we design a hierarchical advantage estimation method by sharing reward information, which estimates solution advantages hierarchically across and within question variant, allowing accurate and robust advantage estimation. Third, extensive experiments on 6 MLLM reasoning benchmarks demonstrate the superiority of our proposed methods as illustrated in Fig. 1 (c). 2 Related Work 2.1 Multimodal Large Language Model Multimodal Large Language Models (MLLMs) [ 7,8,9,10,11,12,13,14,15,16] demonstrate outstanding performance in semantic understanding of cross-domain visual content and multimodal reasoning. Early research on MLLMs primarily focused on text-image alignment and the integration of multiple modalities [ 17,18,19,20,21]. Subsequently, models like GPT-4V [ 22] achieved breakthroughs in cross-modal understanding through multimodal instruction fine-tuning, enabling them to support simple tasks such as image captioning, visual question answering and OCR. More complex tasks, such as mathematical reasoning, document understanding, etc., require MLLMs to be able to perform complex logical deductions. For MLLM reasoning, models such as Multimodal- CoT [ 23] and LLaV A-CoT [ 24] employ chain-of-thought (CoT) reasoning, breaking down the multimodal reasoning process into step-by-step inference steps while leveraging multimodal data to improve the model’s reasoning capabilities. Additionally, Mulberry [ 25] proposes CoMCTS to generate effective reasoning paths through multi-model collaboration. Different from these studies, this work focuses on reinforcement learning to improve MLLM reasoning capability. 2.2 Reinforcement Learning for Multimodal Large Language Model Reasoning Reinforcement learning has become an essential technology for enhancing the capabilities of MLLMs. Early research primarily focused on Reinforcement Learning from Human Feedback(RLHF) [ 26,27, 28,29], which aligns the outputs of multimodal models with human preferences by incorporating human feedback signals. Recently, DeepSeek-R1 [ 2] utilizes a simple rule-based reward function to provide effective and reliable reward signals during the RL process. This indicates that the Group Relative Policy Optimization (GRPO) with result-level rewards effectively enhances the reasoning ability of LLMs. In the multimodal domain, researchers have begun exploring the use of RL to enhance the visual reasoning capabilities of MLLMs. 
Recent works such as Vision-R1 [30] and MM-Eureka [5] have open-sourced large-scale SFT cold-start data and RL data. R1-V [31], Reason-RFT [32], R1-VL [4], and other methods [33, 34, 35, 36, 37] have designed various
rule-based reward functions to enhance the reasoning abilities of MLLMs, such as geometric understanding and spatial perception. Unlike these methods, our ShareGRPO explores information sharing for MLLM reasoning reinforcement learning to mitigate sparse reward and advantage vanishing issues. 2.3 Information Sharing in Deep Learning Information sharing is a key strategy in deep learning, enabling more effective learning through the exchange of signals across modalities, tasks, or hierarchical model components. In multi- modal learning, models such as ViLBERT [ 38] and LXMERT [ 39] employ cross-modal attention to achieve fine-grained information fusion between vision and language streams. In contrastive learning ( e.g., SimCLR [ 40], MoCo [ 41]), shared representations across augmented views enhance feature robustness. This concept extends to reinforcement learning, especially in multi-task and multi-agent settings, where information sharing improves sample efficiency and mitigates sparse rewards. Methods like Distral [ 42] and PopArt [ 43] promote shared policy structures, while agents 3 in multi-agent RL benefit from shared value functions or communication protocols [ 44,45]. [46] further demonstrate that shared representations enhance generalization in multi-task RL. Unlike prior work, we introduce information sharing into MLLM reasoning reinforcement learning to mitigate sparse rewards and advantage vanishing for more effective reasoning learning. 3 Method This section first provides the preliminary of Group Relative Policy Optimization (GRPO), and then presents the proposed Share-GRPO that introduces the concept of information sharing into MLLM reinforcement learning. Further details are elaborated in the subsequent subsections. 3.1 Preliminary Group Relative Policy Optimization (GRPO) . GRPO [ 3] is a variant of Proximal Policy Optimiza- tion (PPO) [ 47], designed to enhance the performance of LLMs on complex reasoning tasks, such as mathematical and scientific reasoning. Starting with a pretrained MLLM to be optimized, GRPO first uses it to initialize a policy model πθand a reference model πold. For a given image-text pair (I, T), the reference policy model πθoldgenerates a set of responses {o1, o2, ..., o G}. A group-based reward function then computes the corresponding rewards {R1, R2, ..., R G}, which are subsequently used to estimate the advantage ˆAifor each response relative to the group: ˆAi=Ri−mean {Ri}G i=1 std {Ri}G i=1 . (1) Similar to PPO, GRPO employs a clipped objective with a KL penalty term: JGRPO (θ) =E(I,T)∼pD,o∼πθold(·|I,T)" 1 nnX i=1min πθ(oi|I, T) πθold(oi|I, T)ˆAi,clipπθ(oi|I, T) πθold(oi|I, T),1−ϵ,1 +ϵ ˆAi−βDKL(πθ||πref)!# . (2) Sparse Reward and Advantage Vanishing Issues. Despite the effectiveness of GRPO, it generally faces two challenges when applied to MLLMs: the sparse reward issue and the advantage vanishing issue. Sparse rewarding arises due to the limited reasoning ability of current MLLMs, where only a few reasoning paths receive positive rewards, leading to inefficient exploration and instable training. To alleviate this, prior work such as R1-VL [ 4] introduces step-wise reward signals to provide dense rewards throughout the reasoning process. Advantage vanishing occurs when MLLMs generate homogeneous responses for the same question and receive identical rewards, causing the relative advantages to collapse to zero and resulting in ineffective reinforcement learning. To tackle this issue, VL-Rethinker [ 6] and Skywork R1 [ 48] select the
samples with large magnitudes of advantages and reuse them in RL process, while MM-Eureka [ 5] employs an online filtering strategy to remove the samples with zero advantage. Different from the prior works, our Share-GRPO effectively addresses both of these two challenges by exploring and sharing diverse reasoning trajectories over expanded question space, therefore encouraging reward diversity and stable policy optimization. 3.2 Share-GRPO We propose Share-GRPO, a novel online MLLM reinforcement learning framework that mitigates the sparse reward and advantage vanishing issues via exploring and sharing diverse reasoning trajectories over expanded question space. Specifically, for a given question, Share-GRPO first applies semantically consistent transformation to generate a set of varied but semantically equivalent questions, thereby expanding the question space. It then encourages the MLLM to explore diverse reasoning paths over the expanded question space and facilitates the sharing of discovered reasoning trajectories and their rewards across the expanded questions during the reinforcement learning process, as illustrated in Fig. 2. 4 Policy Model 𝜋/𝑶𝟏𝑸𝟏𝑶𝒏𝑸𝟏𝑶𝟏𝑸𝒎𝑶𝒏𝑸𝒎 𝑹𝟏𝑸𝟏𝑹𝒏𝑸𝟏𝑹𝒏𝑸𝒋𝑹𝒏𝑸𝒋Reward Model𝑨𝑶𝟏𝑸𝟏,𝑸𝟏𝑨𝑶𝒏𝑸𝟏,𝑸𝟏𝑨𝑶𝟏𝑸𝒎,𝑸𝟏𝑨𝑶𝒏𝑸𝒎,𝑸𝟏Sharing RewardShared Policy Optimization Q'={𝒯(𝑇),𝜓(𝐼)}Expanded Question Space𝑨𝑶𝒏𝑸𝒎,𝑸𝒎𝑨𝑶𝟏𝑸𝒎,𝑸𝒎𝑨𝑶𝒏𝑸𝟏,𝑸𝒎𝑨𝑶𝟏𝑸𝟏,𝑸𝒎 Response 2*3 Q0Q"𝑹𝟏𝑸𝟏𝑹𝒏𝑸𝟏𝑹𝟏𝑸𝒎𝑹𝒏𝑸𝒎 Information Sharing𝝅𝜽(𝒐𝟏𝑸𝟏|𝑸𝟏)𝝅𝜽(𝒐𝒏𝑸𝟏|𝑸𝟏)𝝅𝜽(𝒐𝟏𝑸𝒎|𝑸𝟏)𝝅𝜽(𝒐𝒏𝑸𝒎|𝑸𝒎)𝝅𝜽(𝒐𝟏𝑸𝒎|𝑸𝒎)𝝅𝜽(𝒐𝒏𝑸𝟏|𝑸𝒎)𝝅𝜽(𝒐𝟏𝑸𝟏|𝑸𝒎) 𝝅𝜽(𝒐𝒏𝑸𝒎|𝑸𝟏)Sharing ResponseRewardShared Policy Optimization Computation3. 中⼼xie着的,竖着的横过来1 Semantically Consistent Transformation虚线 把灰⾊去掉Sharing Reward/Shared AdvantageEstimationSharing ResponseShared policy optimization Q = (I, T) j改为mFigure 2: Overview of the proposed Share-GRPO. For a given question, Share-GRPO first applies se- mantically consistent transformation to generate a set of varied but semantically equivalent questions, thereby expanding the question space. It then encourages the MLLM to explore diverse reasoning paths over the expanded question space and facilitates the sharing of discovered reasoning trajectories and their rewards across the expanded questions during the reinforcement learning process. 3.2.1 Reasoning Space Expansion Question Space Expansion. To expand the question space for a given question, we introduce Semantically Consistent Transformation (SCT) which generates a group of question variant Q= {Q1, Q2, ..., Q m}for each given question Qori={Tori, Iori}. Specifically, we propose two types of transformation techniques, i.e., offline textual SCT and online multimodal SCT, for more diverse, comprehensive and flexible question space expansion. (1) Offline Textual Semantically Consistent Transformation. Prior to online reinforce learning, we first employ offline textual SCT ϕ(·)to rewrite the textual prompt Torifor each give question. Specifically, we prompt GPT-4o to generate msemantically consistent variants, resulting in an expanded question set. The textual prompts of the generated variants differ from that of the original question Toriin syntactic structure and lexical expressions, while preserving the original intent and the corresponding correct answer: Qoffline={ϕ(Tori), Iori}. (3) (2) Online Multimodal Semantically Consistent Transformation. During online reinforcement learning, we introduce a multimodal SCT strategy to further expand the question space on the fly. Given an image Ioriin the input question, we apply visual transformations ψ(·)to alter its visual content. 
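To make the online variant concrete, below is a minimal sketch of one possible form of this step; the transform pool, hint wording, and helper name are illustrative assumptions (the probability p = 0.3 matches the setting reported in Sec. 4.1, and the transform-selection details are described in the next paragraph).

```python
import random
from PIL import Image

# Illustrative pool of semantics-preserving transforms psi(.) and matching textual hints tau(.).
TRANSFORMS = {
    "rotate90":  lambda im: im.rotate(90, expand=True),
    "rotate270": lambda im: im.rotate(270, expand=True),
}
HINTS = {
    "rotate90":  "This image has been rotated by 90 degrees. Please mentally rotate it back and solve the problem.",
    "rotate270": "This image has been rotated by 270 degrees. Please mentally rotate it back and solve the problem.",
}

def online_multimodal_sct(text: str, image: Image.Image, p: float = 0.3):
    """With probability p, apply one randomly chosen visual transform and append
    the matching hint to the textual prompt; otherwise return the question unchanged."""
    if random.random() < p:
        name = random.choice(list(TRANSFORMS))
        return text + " " + HINTS[name], TRANSFORMS[name](image)
    return text, image
```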
Specifically, we carefully select transformations (e.g., rotation, noise injection) that preserve critical visual cues necessary for reasoning, and avoid transformations (e.g., cropping, color distortion) that may disrupt key information. Each image undergoes one randomly selected transformation with probability p. In addition, to mitigate potential semantic inconsistencies between the visual and textual inputs after visual changes, we perform a manual textual transformation τ that appends a transformation-
specific prompt to the corresponding textual prompt, providing contextual guidance aligned with the visual modification: Qonline={τ(ϕ(Tori)), ψ(Iori)}. (4) Solution Space Expansion. With the expanded question space Q={Q1, Q2, ..., Q m}, Share-GRPO enables to explore diverse reasoning trajectories in an enlarged solution space for each given question. Specifically, for each question Qi∈Q, the policy model πθgenerates ncandidate reasoning responses, resulting in an expanded response set: O={{oQ1 1, ..., oQ1n}, ...,{oQm 1, ...oQmn}}. 3.2.2 Shared Advantage Estimation With the expanded reasoning space, Share-GRPO shares reward information during advantage computation, which estimates reasoning trajectory advantages hierarchically across and within question variant. 5 Following GRPO [ 3], we adopt rule-based reward functions to compute the reward for each gener- ated reasoning trajectory, i.e.,R={{rQ1 1, ..., rQ1n}, ...,{rQm 1, ...rQmn}}. Specifically, we adopt an outcome-level accuracy reward, which assigns higher rewards to reasoning paths that lead to correct answers and lower rewards to those leading to incorrect ones. In addition, we employ a format reward that encourages the reasoning trajectory to follow a detailed step-by-step process before providing the final answer. With the computed rewards R, we propose a hierarchical advantage estimation approach that computes advantage at two levels: a global level, which aggregates responses across all variants of the same original question; and a local level, which considers responses generated from each individual question variant. (1) Global-level Advantage Estimation. We first estimate the advantage from a global perspective, where the relative advantage is computed using the rewards obtained from all question variants Q={Q1, Q2, ..., Q m}: ˆAglobal i,j,k=RQj i−mean {{rQ1 1, ..., rQ1n}, ...,{rQm 1, ...rQmn}} std {{rQ1 1, ..., rQ1n}, ...,{rQm 1, ...rQmn}} . (5) (2) Local-level Advantage Estimation. We also estimate the advantage at a local level, where the relative advantage is computed within the responses generated from each individual question variant Qj∈Q. Specifically, for each question variant Qj, the local advantage is estimated as follows: ˆAlocal i,j,k=RQj i−mean {rQj 1, ...rQjn} std {rQj 1, ...rQjn} . (6) With the global-level advantage and local-level advantage estimated via Eqs. 5 and 6, we can obtain the final advantage as follow: ˆAhier i,j,k=(ˆAglobal i,j,k+ˆAlocal i,j,k, j=k, ˆAglobal i,j,k, j ̸=k,(7) where the local advantage ˆAlocal i,j,kis only computed when the responses are generated from the same question variant, i.e., when j=k. By incorporating hierarchical advantage estimation, Share-GRPO achieves more accurate relative advantage computation, leading to more stable and effective policy training. 3.2.3 Shared Policy Optimization With the expanded reasoning space and the shared advantage estimation, Share-GRPO enables to explore and share diverse reasoning trajectories and allows more accurate advantage estimation for each given question. Then, we optimize policy model πθby sharing diverse reasoning trajectories O={{oQ1 1, ..., oQ1n}, ...,{oQm 1, ...oQmn}}across question variants Q={Q1, Q2, ..., Q m}: L(θ) =E(Q)∼pD,o∼πθold(·|Q)" 1 nnX i=11 m2mX j=1 k=1min πθ(oQj i|Qk) πθold(oQj i|Qk)ˆAhier i,j,k,clipπθ(oQj i|Qk) πθold(oQj i|Qk),1−ϵ,1 +ϵ ˆAhier i,j,k!# . (8) 4 Experiments In this section, we first provide implementation details in Sec. 4.1, and then present main results in Sec. 
4.2 that demonstrate the effectiveness of Share-GRPO. In Sec. 4.3, we conduct comprehensive ablation studies
to examine the impact of each design in Share-GRPO. Sec. 4.4 provides more discussion and analysis of Share-GRPO. More details are elaborated in the subsequent subsections. 6 Table 1: Main Results. To examine the effectiveness of Share-GRPO, we compare our R1-ShareVL which is trained by Share-GRPO without cold-start supervised fine-tuning against SOTAs across multiple reasoning tasks, including both domain-specific and general-purpose tasks.∗denotes evaluation on official weights using VLMEvalKit [51]. Model MathVista MMStar MMMU MathVerse MathVision AI2D Avg. GPT-4o[52] 63.8 65.1 70.7 50.8 30.4 84.9 60.9 Claude3.7-Sonnet[53] 66.8 – 71.8 52.0 41.3 – – Kimi1.5[1] 70.1 – 68.0 – 31.0 – – LLaV A-Reasoner-8B [54] 50.6 54.0 40.0 – – 78.5 – LLaV A-CoT-11B[24] 54.8 57.6 – – – 78.7 – Mulberry-7B[25] 63.1 61.3 55.0 – – – – Qwen2.5-VL-7B [55] (Base Model) 68.2 63.9 58.6 49.2 25.1 83.9 58.1 X-REASONER-7B [56] 69.0 – 56.4 – 29.6 – – R1-Onevision-7B[33] 64.1 – – 47.1 29.9 – – Vision-R1-7B[30] 73.5 64.3∗54.2∗52.4 29.4∗84.2∗59.7 OpenVLThinker-7B[35] 70.2 63.2 51.9 47.9 29.6 82.7 57.6 MM-Eureka-7B[5] 73.0 65.1∗55.3∗50.3 26.9 84.1∗59.1 ThinkLite-7B [57] 74.3 63.7 53.1 52.2 29.9 83.0 59.3 R1-ShareVL-7B 75.4 67.0 58.1 52.8 29.5 84.5 61.2 Scaling to Larger Models Qwen2.5-VL-32B [55] (Base Model) 74.7 69.5 70.0 49.9 38.4 84.6∗64.5 MM-Eureka-32B[5] 74.8 67.3∗64.6∗56.5 34.4 85.4∗63.8 R1-ShareVL-32B 77.6 70.2 70.1 59.0 40.3 86.2 67.2 4.1 Implementation Details In this work, we adopt Qwen2.5-VL-7B and Qwen2.5-VL-32B [ 49] as our base models. For training data, we randomly sample 52K multimodal data from MM-Eureka [ 5]. Model optimization is carried out using EasyR1 [ 50] codebase, with training conducted on 8 NVIDIA H100 GPUs for the 7B model and 32 H100 GPUs for the 32B model. For the rollout parameter, we use a question variant mof 2, a sample number nof 6 per question, and a probability pof 0.3. For RL–related hyperparameters, we use a global batch size of 128, a rollout batch size of 512, a rollout temperature of 0.7, and a learning rate of 1e-6. 4.2 Main Results To comprehensively examine the effectiveness of our proposed Share-GRPO, we conduct experiments on models of different sizes ( i.e., 7B and 32B). Notably, unlike prior studies [ 4,33,30], we do not involve an additional cold-start stage with supervised fine-tuning. As shown in Table 1, we provide an extensive comparison against state-of-the-art models across 6 widely used and challenging benchmarks, covering a diverse range of reasoning tasks from specialized domains to general-purpose reasoning. A detailed description of the benchmarks can be found in the appendix. Comparison with baselines. We first compare our R1-ShareVL 7B and R1-ShareVL 32B trained by Share-GRPO with the corresponding base models, i.e., Qwen2.5-VL-7B and Qwen2.5-VL-32B. As presented in Table 1, Share-GRPO effectively improves the long-chain reasoning capabilities of MLLMs by large margins. For example, on the challenging mathematical benchmarks like MathVista and MathVerse, R1-ShareVL-7B achieves improvements of +7.2% and +3.6%, respectively. It is worth noting that, based on previous studies, RL can enhance MLLMs’ long-chain reasoning ability on mathematical tasks, but it often comes at the cost of degraded performance on multi- discipline and general benchmarks. For instance,
ThinkLite-7B drops -0.2% and -5.5% on MMStar and MMMU, respectively. In contrast, our R1-ShareVL-7B model achieves a +3.1% improvement on MMStar and comparable accuracy on MMMU, demonstrating Share-GRPO’s generalization capability in enhancing reasoning across diverse tasks. When scaling our method to larger models (i.e., Qwen2.5-VL-32B) with stronger foundational capabilities, our method remains robust and consistently improves performance. In particular, R1-ShareVL-32B achieves a +9.1% improvement over the baseline model on MathVerse, along with an average performance gain of +2.7%. 7 Comparison with MLLMs trained via RL. We then compare R1-ShareVL with other state-of-the- art MLLMs trained by reinforcement learning approaches. Our R1-ShareVL-7B using the same base model and fewer training data outperforms MM-Eureka-7B with an average performance gain of +2.1%, especially a notable improvement of +1.4% on MathVista. Notably, beyond its capability in long-chain mathematical reasoning, R1-ShareVL also exhibits stronger reasoning generalization to multi-discipline and general reasoning tasks. Specifically, compared to ThinkLite-7B which also excels in mathematical reasoning, R1-ShareVL achieves better performance on the multi-discipline benchmark MMMU and the general benchmark MMStar, outperforming it by +5.0% and +3.3%, respectively. Besides, a similar conclusion can be observed on larger models: our R1-ShareVL 32B further improves overall performance compared with MM-Eureka-32B by +3.4%, demonstrating the effectiveness and generalization of Share-GRPO. 4.3 Ablation Study Table 2: Ablation study of Share-GRPO. MethodShared Policy Shared AdvantageMathVista Offline Online Global Local Qwen2.5-VL-7B (Baseline) 68.2 Qwen2.5-VL-7B + GRPO 72.8 Share-GRPO (Ours)" " 73.9 " " " 74.8 " " " " 75.4Ablation Study of Share-GRPO. As shown in Table 2, we conduct abla- tion studies to examine the individual contribution of each design in Share- GRPO, including shared policy opti- mization ( i.e., offline and online se- mantically consistent transformation) and shared advantage estimation ( i.e., global and local advantage estimation). Compared to the GRPO baseline, incorporating the informa- tion sharing among only offline question variants with global shared advantage estimation yields a performance boost of +1.1%. Further including the information sharing among online multimodal semantically consistent transformations results in exploring and sharing more diverse reasoning paths and a +0.9% performance improvement. Finally, enabling both global and local advantage estima- tion achieves the best result of 75.4% on MathVista, highlighting the effectiveness of hierarchical advantage computation. These results demonstrate that both policy sharing and advantage sharing contribute significantly to the final performance of Share-GRPO. 4.4 Discussion Complementarity Between Share-GRPO and Dynamic Sampling. We compare Share-GRPO with dynamic sampling [ 58] and further discuss their complementarity, as shown in Table 3. Dynamic sampling performs multiple rollouts to generate candidate responses, from which a subset with valid rewards is selected to update the policy. Distinct from it, Share-GRPO enhances the diversity of reasoning responses and shares them during policy optimization, yielding richer reward signals. 
On average, GRPO with dynamic sampling achieves a performance of 65.1%, while Share-GRPO yields a higher score of 66.8%, demonstrating the potential of leveraging diverse reasoning paths to enrich reward feedback and enhance policy learning. Moreover, combining Share-GRPO with dynamic sampling achieves a higher accuracy of 67.2% on average, highlighting the great complementarity of Share-GRPO. Table 3: The
complementarity between Share-GRPO and Dynamic Sampling. Method MathVista MMStar MMMU Avg. GRPO + Dynamic Sampling 73.6 65.7 56.1 65.1 Share-GRPO 75.4 67.0 58.1 66.8 Share-GRPO + Dynamic Sampling 75.9 66.7 59.0 67.2 Hyperparameter studies of Share GRPO. We study the impact of the number of question variant min Semantically Consistent Transformation (SCT) in Table 4, with the number of responses per question fixed at 6. When m= 2, Share-GRPO achieves an accuracy of 75.4% on MathVista, significantly outperforming the baseline GRPO. When mis increased to 4, the performance of the model trained with Share-GRPO further improves, demonstrating the benefit of scaling up the number of question variants. The Impact of the Number of Sampling N. In Table 5, we compare the performance of GRPO with different sampling numbers Nagainst Share-GRPO. We observe that with N= 6, the model trained 8 Question: Let 𝑓be twice differentiable function on the interval −1<𝑥<5with 𝑓(1)=0and 𝑓(2)=3.The graph of 𝑓′, the derivative of 𝑓, is shown above. The graph of 𝑓′crosses the x-axis at 𝑥=−0.5and 𝑥=4. Let ℎbe the function given by ℎ(𝑥)=𝑓(𝑥+1). which is the equation for the line tangent to the graph of ℎat 𝑥=3Choices:A.𝑦=5𝑥/12+7/4B.𝑦=5𝑥/12+5/4C.𝑦=7𝑥/12+7/4Correct Answer: (A) R1-Share-7B: <think> To find the equation of the tangent line to the graph of ℎ(𝑥)=𝑓(𝑥+1)at 𝑥=3, we need to follow these steps:1.Find ℎ(3):ℎ(3)=𝑓(3+1)=𝑓(4)=𝑓(2)=3So, the point of tangency is (3,3).2.Find ℎ"(3):To find the slope of the tangent line, we need to compute ℎ"(𝑥)using the chain rule:ℎ"(𝑥)=𝑓"(𝑥+1)⋅#$%&# At 𝑥=3:ℎ"3=𝑓"3+1⋅123+1=𝑓"2⋅14 From the graph, we see that 𝑓"(2)='(:ℎ"(3)=53⋅14=5123.Equation of the tangent line:The equation of the tangent line at (3,3)with slope '#$is:𝑦−3=512(𝑥−3)Simplifying this:𝑦−3=512𝑥−1512 𝑦=512𝑥−1512+3 Convert 3 to a fraction with a denominator of 12:𝑦=512𝑥−1512+3612 𝑦=512𝑥+2112 Simplify the fraction:𝑦=512𝑥+74 The correct answer is: \boxed{A} </think>\boxed{A}Figure 3: Qualitative Results of reasoning capability of R1-ShareVL on mathematical problems. Table 4: Hyperparameter study of Share- GRPO. We study the impact of the number of question variants in Share-GRPO. Method Question MathVista ShareGRPO 2 75.4 ShareGRPO 3 75.6 ShareGRPO 4 75.9Table 5: The study on the Number of Sam- pling N . We study the impact of the number of sampling N in Share-GRPO. Method Sampling MathVista GRPO 6 72.3 GRPO 12 72.8 GRPO 24 73.0 ShareGRPO (3+3) 74.7 ShareGRPO (6+6) 75.4 using GRPO achieves a score of 72.3% on MathVista. As the number of sampling increases, the performance improves to 72.8 at N= 12 . However, further increasing the sampling number Nto 24 yields only marginal gains of 0.2%, while introducing additional computational overhead. Therefore, increasing the number of sampling reaches a performance ceiling, making it an ineffective way to further improve reasoning reinforcement learning. Instead of simply increasing N, Share-GRPO enhances the diversity of reasoning paths and leverages the concept of information sharing to amplify reward signals and enhance training stability. By sharing responses and incorporating hierarchical advantage estimation, our R1-ShareVL 7B achieves a score of 75.4% with only 6 generated responses per question, surpassing the performance of GRPO even with 24 sampled responses. 4.5 Qualitative Results Fig. 3 illustrates that Share-GRPO effectively enhances the model’s reasoning ability on complex mathematical problems. In this example, the
model accurately interprets the question and arrives at the correct answer, showing strong performance in symbolic reasoning and function analysis. This highlights the capability of Share-GRPO to guide the model toward precise and coherent solutions in mathematically demanding tasks. 5 Conclusion In this paper, we propose Share-GRPO, a novel reinforcement learning framework for MLLMs, which introduces the concept of information sharing to effectively mitigate the challenges of sparse rewards and advantage vanishing. Share-GRPO expands the question space by generating semantically consistent variants, and encourages MLLMs to explore and share responses across a more diverse solution space. Furthermore, Share-GRPO estimates advantages hierarchically within and across question variants at both global and local levels to effectively guide optimization. We conduct extensive experiments, ablation studies and discussion, which demonstrate the superiority of our proposed methods on various reasoning benchmarks. 9 References [1]Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1. 5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599 , 2025. [2]Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [3]Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 , 2024. [4]Jingyi Zhang, Jiaxing Huang, Huanjin Yao, Shunyu Liu, Xikun Zhang, Shijian Lu, and Dacheng Tao. R1-vl: Learning to reason with multimodal large language models via step-wise group relative policy optimization. arXiv preprint arXiv:2503.12937 , 2025. [5]Fanqing Meng, Lingxiao Du, Zongkai Liu, Zhixiang Zhou, Quanfeng Lu, Daocheng Fu, Botian Shi, Wenhai Wang, Junjun He, Kaipeng Zhang, et al. Mm-eureka: Exploring visual aha moment with rule-based large-scale reinforcement learning. arXiv preprint arXiv:2503.07365 , 2025. [6]Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, and Wenhu Chen. Vl-rethinker: Incentivizing self-reflection of vision-language models with reinforcement learning. arXiv preprint arXiv:2504.08837 , 2025. [7]Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774 , 2023. [8]Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609 , 2023. [9]An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2. 5 technical report. arXiv preprint arXiv:2412.15115 , 2024. [10] Peter Tong, Ellis Brown, Penghao Wu, Sanghyun Woo, Adithya Jairam Vedagiri IYER, Sai Charitha Akula, Shusheng Yang, Jihan Yang, Manoj Middepogu, Ziteng Wang, et al. Cambrian-1: A fully open, vision-centric exploration of multimodal llms. Advances in Neural Information Processing Systems , 37:87310–87356, 2024. 
[11] Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, et al. Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24185–24198, 2024.
[12] Dongchen Lu, Yuyao Sun, Zilu Zhang, Leping Huang, Jianliang Zeng, Mao Shu, and Huo Cao. Internvl-x: Advancing and accelerating internvl series with efficient visual token compression. arXiv preprint arXiv:2503.21307, 2025.
[13] Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, Katie Millican, et al. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
[14] Yuan Yao, Tianyu Yu, Ao Zhang, Chongyi Wang, Junbo Cui, Hongji Zhu, Tianchi Cai, Haoyu Li, Weilin Zhao, Zhihui He, et al. Minicpm-v: A gpt-4v level mllm on your phone. arXiv preprint arXiv:2408.01800, 2024.
[15] Huanjin Yao, Wenhao Wu, Taojiannan Yang, YuXin Song, Mengxi Zhang, Haocheng Feng, Yifan Sun, Zhiheng Li, Wanli Ouyang, and Jingdong Wang. Dense connector for mllms. Advances in Neural Information Processing Systems, 37:33108–33140, 2024.
[16] Ziheng Wu, Zhenghao Chen, Ruipu Luo, Can Zhang, Yuan Gao, Zhentao He, Xian Wang, Haoran Lin, and Minghui Qiu. Valley2: Exploring multimodal models with scalable vision-language design. arXiv preprint arXiv:2501.05901, 2025.
[17] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. Advances in Neural Information Processing Systems, 36:34892–34916, 2023.
[18] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736, 2022.
[19] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 26296–26306, 2024.
[20] Xiang Lan, Feng Wu, Kai He, Qinghao Zhao, Shenda Hong, and Mengling Feng. Gem: Empowering mllm for grounded ecg understanding with time series and images. arXiv preprint arXiv:2503.06073, 2025.
[21] Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. Llava-next: Improved reasoning, ocr, and world knowledge, 2024.
[22] Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: Preliminary explorations with gpt-4v(ision). arXiv preprint arXiv:2309.17421, 9(1):1, 2023.
[23] Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923, 2023.
[24] Guowei Xu, Peng Jin, Li Hao, Yibing Song, Lichao Sun, and Li Yuan. Llava-cot: Let vision language models reason step-by-step, 2024. URL https://arxiv.org/abs/2411.10440.
[25] Huanjin Yao, Jiaxing Huang, Wenhao Wu, Jingyi Zhang, Yibo Wang, Shunyu Liu, Yingjie Wang, Yuxin Song, Haocheng Feng, Li Shen, et al. Mulberry: Empowering mllm with o1-like reasoning and reflection via collective monte carlo tree search. arXiv preprint arXiv:2412.18319, 2024.
[26] Haotian Luo, Li Shen, Haiying He, Yibo Wang, Shiwei Liu, Wei Li, Naiqiang Tan, Xiaochun Cao, and Dacheng Tao. O1-pruner: Length-harmonizing fine-tuning for o1-like reasoning pruning. arXiv preprint arXiv:2501.12570, 2025.
[27] Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Ren Lu, Thomas Mesnard, Johan Ferret, Colton Bishop, Ethan Hall, Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. 2023.
[28] Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, et al. Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13807–13816, 2024.
[29] Tianyu Yu, Haoye Zhang, Yuan Yao, Yunkai Dang, Da Chen, Xiaoman Lu, Ganqu Cui, Taiwen He, Zhiyuan Liu, Tat-Seng Chua, et al. Rlaif-v: Aligning mllms through open-source ai feedback for super gpt-4v trustworthiness. arXiv preprint arXiv:2405.17220, 2024.
[30] Wenxuan Huang, Bohan Jia, Zijie Zhai, Shaosheng Cao, Zheyu Ye, Fei Zhao, Yao Hu, and Shaohui Lin. Vision-r1: Incentivizing reasoning capability in multimodal large language models. arXiv preprint arXiv:2503.06749, 2025.
[31] Liang Chen, Lei Li, Haozhe Zhao, Yifan Song, and Vinci. R1-v: Reinforcing super generalization ability in vision-language models with less than $3. https://github.com/Deep-Agent/R1-V, 2025. Accessed: 2025-02-02.
[32] Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Minglan Lin, Pengwei Wang, Zhongyuan Wang, and Shanghang Zhang. Reason-rft: Reinforcement fine-tuning for visual reasoning. arXiv preprint arXiv:2503.20752, 2025.
[33] Yi Yang, Xiaoxuan He, Hongkun Pan, Xiyan Jiang, Yan Deng, Xingtao Yang, Haoyu Lu, Dacheng Yin, Fengyun Rao, Minfeng Zhu, et al. R1-onevision: Advancing generalized multimodal reasoning through cross-modal formalization. arXiv preprint arXiv:2503.10615, 2025.
[34] Hardy Chen, Haoqin Tu, Fali Wang, Hui Liu, Xianfeng Tang, Xinya Du, Yuyin Zhou, and Cihang Xie. Sft or rl? an early investigation into training r1-like reasoning large vision-language models. arXiv preprint arXiv:2504.11468, 2025.
[35] Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, and Kai-Wei Chang. Openvlthinker: An early exploration to complex vision-language reasoning via iterative self-improvement. arXiv preprint arXiv:2503.17352, 2025.
[36] Yi Peng, Xiaokun Wang, Yichen Wei, Jiangbo Pei, Weijie Qiu, Ai Jian, Yunzhuo Hao, Jiachun Pan, Tianyidan Xie, Li Ge, et al. Skywork r1v: pioneering multimodal reasoning with chain-of-thought. arXiv preprint arXiv:2504.05599, 2025.
[37] Yufei Zhan, Yousong Zhu, Shurong Zheng, Hongyin Zhao, Fan Yang, Ming Tang, and Jinqiao Wang. Vision-r1: Evolving human-free alignment in large vision-language models via vision-guided reinforcement learning. arXiv preprint arXiv:2503.18013, 2025.
[38] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in Neural Information Processing Systems, 32, 2019.
[39] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019.
[40] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597–1607. PMLR, 2020.
[41] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.
[42] Yee Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning. Advances in Neural Information Processing Systems, 30, 2017.
[43] Matteo Hessel, Hubert Soyer, Lasse Espeholt, Wojciech Czarnecki, Simon Schmitt, and Hado Van Hasselt. Multi-task deep reinforcement learning with popart. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3796–3803, 2019.
[44] Jakob Foerster, Ioannis Alexandros Assael, Nando De Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 29, 2016.
[45] Ryan Lowe, Yi I Wu, Aviv Tamar, Jean Harb, OpenAI Pieter Abbeel, and Igor Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in Neural Information Processing Systems, 30, 2017.
[46] Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, and Jan Peters. Sharing knowledge in multi-task deep reinforcement learning. arXiv preprint arXiv:2401.09561, 2024.
[47] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[48] Yichen Wei, Yi Peng, Xiaokun Wang, Weijie Qiu, Wei Shen, Tianyidan Xie, Jiangbo Pei, Jianhao Zhang, Yunzhuo Hao, Xuchen Song, et al. Skywork r1v2: Multimodal hybrid reinforcement learning for reasoning. arXiv preprint arXiv:2504.16656, 2025.
[49] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923, 2025.
[50] Zheng Yaowei, Lu Junting, Wang Shenzhi, Feng Zhangchi, Kuang Dongdong, and Xiong Yuwen. Easyr1: An efficient, scalable, multi-modality rl training framework. https://github.com/hiyouga/EasyR1, 2025.
[51] Haodong Duan, Junming Yang, Yuxuan Qiao, Xinyu Fang, Lin Chen, Yuan Liu, Xiaoyi Dong, Yuhang Zang, Pan Zhang, Jiaqi Wang, et al. Vlmevalkit: An open-source toolkit for evaluating large multi-modality models. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 11198–11201, 2024.
[52] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[53] Anthropic. Claude 3.5 sonnet, 2024.
[54] Ruohong Zhang, Bowen Zhang, Yanghao Li, Haotian Zhang, Zhiqing Sun, Zhe Gan, Yinfei Yang, Ruoming Pang, and Yiming Yang. Improve vision language model chain-of-thought reasoning. arXiv preprint arXiv:2410.16198, 2024.
[55] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, et al. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923, 2025.
[56] Qianchu Liu, Sheng Zhang, Guanghui Qin, Timothy Ossowski, Yu Gu, Ying Jin, Sid Kiblawi, Sam Preston, Mu Wei, Paul Vozila, et al. X-reasoner: Towards generalizable reasoning across modalities and domains. arXiv preprint arXiv:2505.03981, 2025.
[57] Xiyao Wang, Zhengyuan Yang, Chao Feng, Hongjin Lu, Linjie Li, Chung-Ching Lin, Kevin Lin, Furong Huang, and Lijuan Wang. Sota with less: Mcts-guided sample selection for data-efficient visual reasoning self-improvement. arXiv preprint arXiv:2504.07934, 2025.
[58] Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.
[59] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, and Jianfeng Gao. Mathvista: Evaluating mathematical reasoning of foundation models in visual contexts. arXiv preprint arXiv:2310.02255, 2023.
[60] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Zehui Chen, Haodong Duan, Jiaqi Wang, Yu Qiao, Dahua Lin, et al. Are we on the right way for evaluating large vision-language models? arXiv preprint arXiv:2403.20330, 2024.
[61] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. arXiv preprint arXiv:2311.16502, 2023.
[62] Renrui Zhang, Dongzhi Jiang, Yichi Zhang, Haokun Lin, Ziyu Guo, Pengshuo Qiu, Aojun Zhou, Pan Lu, Kai-Wei Chang, Yu Qiao, et al. Mathverse: Does your multi-modal llm truly see the diagrams in visual math problems? In European Conference on Computer Vision, pages 169–186. Springer, 2024.
[63] Ke Wang, Junting Pan, Weikang Shi, Zimu Lu, Houxing Ren, Aojun Zhou, Mingjie Zhan, and Hongsheng Li. Measuring multimodal mathematical reasoning with math-vision dataset. Advances in Neural Information Processing Systems, 37:95095–95169, 2024.
[64] Aniruddha Kembhavi, Mike Salvato, Eric Kolve, Minjoon Seo, Hannaneh Hajishirzi, and Ali Farhadi. A diagram is worth a dozen images. In Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part IV 14, pages 235–251. Springer, 2016.

A Benchmarks
We evaluate our models on the following benchmarks.
• MathVista [59] is used to evaluate the mathematical problem-solving ability of MLLMs, containing 6141 questions covering areas such as arithmetic, geometry, algebra, and statistics.
• MMStar [60] is an innovative multimodal assessment benchmark that includes 1500 carefully selected visual key samples, addressing issues of visual redundancy and data leakage in existing assessments.
• MMMU [61] is a large-scale interdisciplinary multimodal understanding and reasoning benchmark that collects 11.5K multimodal questions from university exams, quizzes, and textbooks.
• MathVerse [62] includes 2612 multimodal mathematics problems and has manually annotated 15672 test samples, comprising 3 main types of questions and 12 subcategories, such as plane geometry, solid geometry, and functions.
• MathVision [63] is a collection of 3,040 high-quality mathematics problems, all accompanied by visual contexts, sourced from real mathematics competitions.
• AI2D [64] is a dataset that contains over 5000 scientific charts, which can be used for tasks such as image classification and visual question answering.
SPaRC: A Spatial Pathfinding Reasoning Challenge
Lars Benedikt Kaesberg*, Jan Philip Wahle*, Terry Ruas, Bela Gipp
University of Göttingen, Germany
*{l.kaesberg, wahle}@uni-goettingen.de
Dataset: hf.co/datasets/lkaesberg/SPaRC  Code: github.com/lkaesberg/SPaRC  Webpage: sparc.gipplab.org
arXiv:2505.16686v1 [cs.AI] 22 May 2025

Abstract
Existing reasoning datasets saturate and fail to test abstract, multi-step problems, especially pathfinding and complex rule constraint satisfaction. We introduce SPaRC (Spatial Pathfinding Reasoning Challenge), a dataset of 1,000 2D grid pathfinding puzzles to evaluate spatial and symbolic reasoning, requiring step-by-step planning with arithmetic and geometric rules. Humans achieve near-perfect accuracy (98.0%; 94.5% on hard puzzles), while the best reasoning models, such as o4-mini, struggle (15.8%; 1.1% on hard puzzles). Models often generate invalid paths (>50% of puzzles for o4-mini), and reasoning tokens reveal they make errors in navigation and spatial logic. Unlike humans, who take longer on hard puzzles, models fail to scale test-time compute with difficulty. Allowing models to make multiple solution attempts improves accuracy, suggesting potential for better spatial reasoning with improved training and efficient test-time scaling methods. SPaRC can be used as a window into models' spatial reasoning limitations and drive research toward new methods that excel in abstract, multi-step problem-solving.

1 Introduction
Reasoning models made stark progress in solving complex mathematical (Hendrycks et al., 2021b), software-engineering (Jimenez et al., 2023; Quan et al., 2025), and knowledge tasks (Hendrycks et al., 2021a). With more capable models comes the question of how to measure progress in reasoning, and how they compare to humans. As reasoning benchmarks test specific tasks with priors (e.g., MMLU-Pro (Wang et al., 2024b), GPQA (Rein et al., 2023)), models started to achieve (super-)human scores, leading to rapid dataset saturation. Thus, datasets needing abstract reasoning with minimal priors have become increasingly important, as they are more robust to scaling training data and pattern matching. Notably, ARC-AGI (Chollet, 2019) and related works (Song et al., 2025; Wang et al., 2024a) challenge models with spatial few-shot grid problems. However, they often do not require a combination of step-by-step planning, pathfinding, and logic skills, abilities every human possesses (Chollet, 2019).

Figure 1: Example puzzles from SPaRC (panels show puzzles of difficulty 1, 2, and 5).

We propose SPaRC, a new dataset to overcome limitations of current datasets, primarily focusing on pathfinding and combinations of arithmetic and geometric rules, such as counting, segregation, and shape logic, by presenting multi-step constraint problems. Our proposed task consists of 2D grid puzzles through which a line must be drawn from start to end while fulfilling various rules, such as collecting dots along the way or separating colored elements (see Figure 1 for an example). These rules can be combined in various non-trivial ways and involve deep, abstract, rule-based reasoning within a constrained spatial pathfinding environment.

Figure 2: Accuracy (%) of o4-mini on existing benchmarks and on SPaRC, as well as on only hard puzzles from SPaRC with difficulty 5 (AIME 2024: 93.4%, GPQA Diamond: 81.4%, ARC-AGI: 41.8%, HLE: 14.28%, SPaRC: 15.8%, SPaRC level 5: 1.1%).
Solving these puzzles requires an understanding of the individual rules and their connections, and long-term planning to meet all rules simultaneously.
This often involves revising previous hypotheses, where a single wrong step can irrevocably lead to the wrong path. We provide 500 train and 500 test puzzles of different sizes and difficulty degrees from 1 (very easy) to 5 (very hard).
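The dataset is distributed through the Hugging Face Hub (see the links on the title page). The following is a minimal loading sketch, not documented usage: it assumes the default dataset configuration, and it inspects the schema at runtime rather than assuming any field names.

```python
from datasets import load_dataset

# Dataset ID from the title page; "train"/"test" splits follow the
# 500/500 design described above (split names are an assumption).
sparc = load_dataset("lkaesberg/SPaRC")
print(sparc["train"].num_rows, sparc["test"].num_rows)  # expected: 500 500

example = sparc["test"][0]
print(example.keys())  # inspect the actual fields rather than assuming them
```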
Experiments with three instruction-tuned models, four reasoning models, and six human annotators show puzzles are solved easily by humans at 98% accuracy (94.5% for difficulty-5 puzzles) but challenge the best reasoning model, o4-mini, at 15.8% accuracy (1.1% for difficulty-5 puzzles). Figure 2 compares the accuracy of the best reasoning model we tested (o4-mini) on existing reasoning benchmarks with our proposed SPaRC, showing that it poses a new challenge for models. Models often fail to generate valid paths, and reasoning tokens reveal issues with grid navigation, spatial logic, and careless mistakes that lead to irreversible errors. Humans take up to 13 times longer on harder puzzles. Instruction-tuned models increase test-time tokens by ~40%, and reasoning models only by ~5%, with higher difficulty. Multiple attempts per puzzle raise accuracy (e.g., 15.8% to 35.0% for o4-mini), indicating inefficient solution-finding and potential for improved spatial reasoning training. Ablations show prompt design (15.8% to 21.0%) and few-shot examples (12.6% to 15.8%) have modest effects, and multimodal prompting (i.e., puzzle screenshots) does not improve performance over text (12.6% vs. 5.6% for o4-mini). SPaRC provides a new challenge to evaluate spatial and symbolic reasoning in large language models (LLMs), addressing limitations of existing saturated benchmarks.

Key Contributions:
▶ We propose SPaRC, a new challenging benchmark of 1,000 examples to test spatial and symbolic reasoning on 2D pathfinding tasks. (§3)
▶ We conduct extensive manual and automated evaluation with six human annotators, three state-of-the-art instruction-tuned models (Qwen 2.5, GPT-4.1, Gemma 3), and four reasoning models (o4-mini, o3-mini, QwQ, R1) on SPaRC. (§4.1)
▶ We analyze why models fail to solve puzzles (e.g., rule cell crossing), causes for reasoning mistakes (e.g., logical fallacies), and upper bounds for reasoning when increasing test-time compute by using pass@k sampling. (§4.2)
▶ We perform various ablation studies on puzzle representation (e.g., prompt design, visual representation) and prompting (e.g., few-shot examples), underlining our results' robustness. (§4.3)

2 Related Work
Benchmarking language models has shifted from core NLP tasks like question answering (Rajpurkar et al., 2016) and paraphrasing (Dolan and Brockett, 2005; Wahle et al., 2024a) in GLUE (Wang et al., 2019) to more complex evaluations, as these tasks have saturated. Long-horizon reasoning datasets, including MATH (Hendrycks et al., 2021b), AIME (Art of Problem Solving, 2025), BBH (Suzgun et al., 2022), and MUSR (Sprague et al., 2023), challenge models on multi-step problem solving. However, these benchmarks rely on data priors, knowledge recall, or pattern matching, enabling reasoning models like DeepSeek's R1 to saturate them, showing a gap in evaluating spatial reasoning and complex planning.

Specifically related to our proposal are symbolic and spatial benchmarks that use novel task representations or ones underrepresented in LLM training data. Notably, ARC-AGI (Chollet, 2019) tests abstract pattern recognition and inductive reasoning from few-shot 2D grid examples, showing that even in simple scenarios, the most advanced reasoning models fail. However, ARC-AGI does not require step-by-step planning or following discrete rules. VisualPuzzles (Song et al., 2025) presents algorithmic, analogical, and spatial riddles, but every task is multiple choice, so the model does not construct individual solutions. SpatialEval (Wang et al., 2024a) covers navigation, relation, and counting on images, 2D grids, and text. However, SpatialEval mazes span only a few moves, and the counting or relation questions appear independently, not within one combined task. PPNL (Aghzal et al., 2023) tests spatial-temporal reasoning via 2D grid-based path planning. It focuses on obstacle avoidance within the grid and does not incorporate complex, interacting rules. Another related task is EnigmaEval (Wang et al., 2025), but it does not focus on pathfinding. Our dataset SPaRC addresses these limitations by requiring long-term, step-by-step path planning, where early errors in the reasoning chain can significantly impact later parts. SPaRC requires pathfinding, counting, segregation, and logic involving colors and shapes in a single task, and on different-sized puzzle grids with complex, interacting rules. Unlike other benchmarks, we also support problems with multiple correct solutions, allowing for testing different path-finding strategies.

3 Dataset
The primary goal of SPaRC is to test new pathfinding capabilities not represented in current benchmarks, specifically spatial navigation, symbolic understanding, constraint satisfaction, and multi-step planning, and also combinations in new ways, such as counting, segregation, and color or shape logic. The design of the dataset is inspired by the puzzle mechanics of the video game The Witness (Blow, 2016), adapted into a format suitable for LLM assessment.

3.1 Puzzle Rules
Each puzzle in SPaRC is a 2D grid of m×n rule cells, with (x, y) = (0, 0) being the top-left corner of the grid; x increases to the right and y downward. Rule cells are surrounded by edges that can be used to draw a path. There exists one start point on the edges (large circle) and one end point on the edges (extension of the edge). The goal of solving a puzzle is to move from the start point to the end point along the edges around the rule cells to fulfill all rule cell conditions. The path must be a single, continuous sequence of edges from the start to the end point, without crossing or overlapping itself at any edge segment. Central to each puzzle are the rule cells, which we describe together with what it means to fulfill the rule cell condition. Appendix F contains puzzle examples to illustrate the components of our dataset.

Item Collection (Dots): The solution path needs to pass through every dot.
Path Breaks (Gaps): The solution path cannot go through any edge segment containing a gap. Gaps act as local barriers.
Color Separation (Stones): The solution path must be drawn to separate stones of different colors. All stones located within any single enclosed region must be of the same color.
Pairing (Stars): Each star must share its region with exactly one other symbol of the same color. No unpaired stars are allowed.
Edge Count (Triangles): The solution path must touch the number of edges shown by the triangles in the cell; e.g., two triangles mean the path must touch exactly two edges of that cell.
Shape Fitting (Polyominoes): If a cell contains a polyomino (poly), the solution path must enclose a region that matches its exact shape and area. The region must not rotate or mirror the poly. Multiple polys can share a region if their shapes fit without overlapping.
Shape Subtraction (Ylop): A ylop must be enclosed in the same region as one or more polys. Its shape and area subtract from the total required by the polys. If a ylop cancels out a poly exactly, that pair imposes no constraint.

3.2 Dataset Creation
Generation. Our process starts with randomly creating an x-by-y grid, where x and y range from 2 to 6 (e.g., 3×5). Figure 10 in Appendix B provides an indexing example. We then randomly fill half of the grid with rule cells (the rule-cell-to-grid-cell percentage is termed rule density) and set a random start and end point. To solve puzzles automatically, we implement a generation-validation loop. First, we generate an initial puzzle and solve it using brute force by exhaustively testing all valid paths from start to end (a sketch of this search follows below).¹ If the initial generation fails to produce a solvable puzzle, we decrease the rule density and regenerate the puzzle. Conversely, if the solver finds over k distinct solutions (indicating the puzzle might be too unconstrained), rule density is increased and the puzzle is regenerated. We found 50 solutions to be a reasonable hyperparameter choice for k empirically by testing different generation setups.

¹ Brute force is necessary because many puzzles fall into NP or NP-complete complexity classes (Abel et al., 2018).

Table 1: Counts of puzzles in SPaRC for different difficulties and rules, based on the test set.
Puzzles with Rule Type: Gaps 313, Dots 292, Stones 355, Stars 210, Triangles 233, Polygons 305, Ylops 25
Puzzles with Difficulty Level: Level 1: 86, Level 2: 118, Level 3: 121, Level 4: 86, Level 5: 89

We generate the SPaRC dataset containing 500 training and 500 testing examples. The distributions of different rules in SPaRC are shown in Table 1. When sampling puzzles, we aim for an approximately equal distribution between rules. However, puzzles tend to have fewer stars (color pairing rule) and triangles (edge counting rule) than other rules. Observe that in Table 1 we only generated 25 puzzles containing ylops. This is for two reasons: they can only exist if polys are available, and they are the hardest rule, as judged by humans. For later tests on specific rules, we also created single-rule splits (more on this in Section 4.1).
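The exhaustive search referenced above can be sketched as follows. This is not the paper's implementation, only a self-contained illustration of brute-force path enumeration on the edge lattice; rule checking, which would filter these candidate paths down to actual solutions, is omitted.

```python
def enumerate_paths(width, height, start, end):
    """Enumerate all simple paths along grid edges from start to end.

    Vertices are edge intersections (x, y) with 0 <= x <= width and
    0 <= y <= height for a width-by-height rule-cell grid. Exhaustive
    search is exponential, which is viable only for the small grids
    (2x2 up to 6x6) used in SPaRC.
    """
    paths = []

    def dfs(node, visited, path):
        if node == end:
            paths.append(list(path))
            return
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] <= width and 0 <= nxt[1] <= height
                    and nxt not in visited):
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt, visited, path)
                path.pop()
                visited.remove(nxt)

    dfs(start, {start}, [start])
    return paths

# A 2x2 rule-cell grid has a 3x3 lattice of path vertices.
candidates = enumerate_paths(2, 2, start=(0, 0), end=(2, 2))
print(len(candidates))  # number of candidate paths before rule filtering
```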
Difficulty Estimation. To quantify puzzle complexity, we created a difficulty metric that weights individual spatial reasoning tasks, specifically the number of distinct rules, the total number of rule cells, the rule cell density, and an estimate of potential complex rule interactions. Each contributes via a weighted sum to a raw score, which we then statistically normalize onto a standardized 1 (easiest) to 5 (hardest) scale (see Appendix D for calculation specifics). As our later experiments with humans and reasoning models will demonstrate, this difficulty estimate is quite robust. The distribution of difficulties of SPaRC can be found in Table 1. We sample with an approximately uniform distribution between puzzles, ending up with slightly more level 3 (121) and level 2 puzzles (118) than level 1 (86), level 4 (86), and level 5 (89).

4 Experiments
We assess SPaRC through automated and manual studies. In the automated evaluation, we consider instruction-based models: Gemma 3 27B (Team et al., 2025), Qwen 2.5 72B (Research, 2024), and GPT-4.1 (OpenAI, 2025a); and reasoning models: QwQ 32B (Team, 2024), DeepSeek R1 Distill Llama 70B (DeepSeek-AI et al., 2025), o3-mini (OpenAI, 2025c), and o4-mini (OpenAI, 2025b). We measure model accuracy on solving our puzzles (Section 4.1), performance on specific rule cells and reasoning errors (Section 4.2), and conduct ablation studies regarding the stability of our findings (Section 4.3). For the manual inspection, we test human performance and time on the same puzzles (Section 4.4). We used six annotators (aged 22-27) with CS backgrounds.

Setup. All puzzles are presented to the LLMs using prompts with a human-annotated example solution (details about the example and solution can be found in Appendix H.4). Our textual representation is inspired by the ARC challenge (Chollet, 2019). Extraction occurs using a regex after a predefined sequence of "####", as stated in the prompt. By default, we provide a one-shot example with a human-annotated path, as it yielded the best results (details later in Section 4.3; prompt example in Appendix H.4). Details on models, hardware, and tokens processed are in Appendix A.
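As an illustration of this extraction step, a minimal sketch follows. The exact answer format is an assumption for illustration: here the path is assumed to be printed as (x,y) coordinate pairs after the final "####" marker, which is one plausible reading of the prompt described above.

```python
import re

def extract_path(model_output: str):
    """Return the coordinate list following the last '####' delimiter,
    or None if the expected format is missing. The '(x,y)' pair format
    is an assumption for illustration, not the paper's exact regex."""
    _, sep, tail = model_output.rpartition("####")
    if not sep:
        return None  # model omitted the delimiter: counts as a failure
    coords = re.findall(r"\((\d+)\s*,\s*(\d+)\)", tail)
    return [(int(x), int(y)) for x, y in coords] or None

print(extract_path("reasoning ... #### (0,0) (0,1) (1,1) (2,1)"))
# [(0, 0), (0, 1), (1, 1), (2, 1)]
```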
4.1 Main Results
We present key baseline evaluations across models and difficulty. Scaling test-time compute allows us to identify upper bounds of model capabilities.

Baselines. We want to understand how reasoning- and instruction-tuned LLMs solve spatial multi-step reasoning tasks compared to humans. We compute accuracy (% of solved puzzles) for these models. Human baseline results use majority votes from three annotators per puzzle (details on the human evaluation later in Section 4.4).

Figure 3: Accuracy (%) of human annotators against different LLMs, reasoning and instruction-tuned (Human 98.0%, o4-mini 15.8%, o3-mini 8.2%, QwQ 32B 5.8%, R1 70B 4.0%, GPT-4.1 1.6%, Gemma-3 27B 1.2%, Qwen 2.5 72B 0.4%). Higher is better.

Figure 3 shows accuracy for humans and LLMs. Humans solve puzzles nearly perfectly at 98.0% (98/100 puzzles solved). The top reasoning model, o4-mini, performs much worse at 15.8% (79/500 puzzles). GPT-4.1 is the best instruction-tuned model at 1.6% (8/500 puzzles). Reasoning models perform better overall (avg. 8.5%). Closed models outperform open ones: o4-mini (15.8%) and o3-mini (8.2%) versus R1 70B (4.0%) and QwQ (5.8%), with similar trends in instruction-tuned models. The results suggest these puzzles are very challenging for LLMs, while relatively easy for humans. We hypothesize errors arise from models' spatial understanding limitations, such as misunderstanding rules, logical fallacies, and misinterpreting grid representations (Huang and Chang, 2022; Turpin et al., 2023).

Difficulty Level. We decompose the results in Figure 3 by difficulty. We compare the best model (o4-mini) against human performance. Humans achieve 100% accuracy at difficulty level 1, while o4-mini reaches 47.7%, showing it solves nearly half of the simple puzzles. At level 2, o4-mini drops to 19.5%, but humans remain at 100%. For higher difficulties, with larger grids and complex rules, o4-mini's rate decreases further, reaching 1.2% at level 4 (solving 1 of 86 puzzles), compared to 94.4% for humans. Level 5 shows similar results to level 4 (with similar trends across all models). Results for all models are in Appendix I. Overall, LLMs have severe reasoning challenges as puzzle difficulty increases. A possible explanation could be that models conclude reasoning prematurely in complex puzzles by ignoring certain rules and running into dead ends. Specific rules or combinations of rules may also be particularly challenging.

Figure 4: Accuracy (%) at different difficulty levels (1-5) for o4-mini versus human annotators (o4-mini: 47.7%, 19.5%, 10.7%, 1.2%, 1.1%; humans: 100.0%, 100.0%, 100.0%, 94.4%, 94.5%). Higher is better.

Rule-Specific Analysis. We examine the accuracy of models on splits containing individual rules or specific rule combinations to analyze which rules the models succeed or fail on. Specifically, we create puzzles only containing gaps, dots, stones, stars, triangles, polys, or ylops. We also create multi-rule combination splits to investigate how models handle the interaction between a few distinct types of rules: stones x stars, gaps x dots x triangles, and dots x stars x polys. Because ylops can only exist in the presence of polys, this split contains puzzles with both ylops and polys. Each split contains 50 training and 50 test samples, and we also make them available in our release. Figure 5 shows accuracy for the primary dataset (top row) and the relative performance delta (∆) of specific splits (e.g., gap accuracy minus full-set accuracy; bottom rows).

The gaps split shows superior performance across all models, whereas dots hover near the average model performance on all puzzles. Dots and gaps tasks are similar yet differ in performance: gaps explicitly forbid using edges, providing immediate error feedback, whereas dots require edge use, with errors apparent only after path completion. Polys produce mixed results; stronger models (o4-mini, o3-mini) show minor performance differences compared to all puzzles, while smaller reasoning and instruction-tuned models markedly improve. Polys and ylops lead to substantial performance decreases, which are also the most challenging rules perceived by humans. Some weaker models (QwQ, Gemma) markedly outperform their average on polys (improvements of 13.2 and 12.8 points, respectively), suggesting smaller models might solve some puzzles more intuitively, while others tend to "overthink" problems, leading to higher success in simpler setups (more details later in Section 4.2). Performance differences may also result from fundamental path construction errors, logical mistakes, or model rule misinterpretations.
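The split construction itself is straightforward to picture; the sketch below is an illustration only, and the per-puzzle `rule_types` field it relies on is hypothetical, standing in for whatever rule-type metadata the generator records.

```python
def single_rule_split(puzzles, rules, size=50):
    """Select puzzles whose rule cells use exactly the given rule types.

    puzzles: iterable of dicts; each is assumed to expose a set-valued
    "rule_types" field (a hypothetical name for illustration).
    rules: e.g., {"gaps"} for a single-rule split, or
    {"stones", "stars"} for a multi-rule combination split.
    """
    matching = [p for p in puzzles if set(p["rule_types"]) == set(rules)]
    return matching[:size]
```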
4.2 Path Errors and Reasoning Mistakes
We analyze model-constructed paths and their reasoning tokens to shed light on why reasoning models fail to solve puzzles.

Path Errors. We analyze common errors in constructing a valid path (ignoring, for now, the rules required to solve the game). We assess five error types for all models: Incorrect Start/End (the line starts or ends at the wrong edge), Disconnected Line (the line is not continuous), Intersecting Line (the line crosses an edge multiple times), and Rule Cell Crossing (the line does not stay on edges but crosses rule cells). Paths with any such errors are deemed an Invalid Path. Examples for each error type can be found in Appendix J.

Figure 5: Performance of models on puzzles containing only specific rules. Columns represent individual reasoning- and instruction-tuned models. The Full Set row shows the accuracy (%) per model across all puzzles; the rows below show accuracy on specific rules minus accuracy on the full set (∆ Accuracy (%)).

Figure 6: Analysis of path errors (%) in generated solutions for different LLMs. Each corner shows a specific error, and the distance from the center indicates the % of generations with that error. Lower is better.
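These checks are purely mechanical, which is what makes the failure rates below striking. A minimal validator sketch follows; the path and grid encodings are assumptions for illustration (a path as a list of (x, y) lattice vertices), and rule satisfaction is deliberately out of scope.

```python
def path_errors(path, start, end, width, height, rule_cells=frozenset()):
    """Check a candidate path for the error types described above.

    The encoding is an assumption: the path is a list of (x, y) lattice
    vertices with 0 <= x <= width and 0 <= y <= height, and rule_cells
    holds coordinates the line must not step onto. Rule *satisfaction*
    (dots, stones, ...) is not checked here.
    """
    errors = set()
    if not path or path[0] != start or path[-1] != end:
        errors.add("Incorrect Start/End")
    # Disconnected Line: consecutive vertices must be unit steps.
    if any(abs(x1 - x2) + abs(y1 - y2) != 1
           for (x1, y1), (x2, y2) in zip(path, path[1:])):
        errors.add("Disconnected Line")
    # Intersecting Line: no vertex (hence no edge segment) may repeat.
    if len(set(path)) != len(path):
        errors.add("Intersecting Line")
    # Rule Cell Crossing: the line must stay on edges, not enter cells.
    if any(v in rule_cells or not (0 <= v[0] <= width and 0 <= v[1] <= height)
           for v in path):
        errors.add("Rule Cell Crossing")
    return errors  # an empty set means the path is structurally valid
```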
Figure 6 shows the percentage of path rule violations for four selected models (results for all models can be found in Table 7 in Appendix I). Smaller enclosed areas in the figure imply better adherence to path rules. The two reasoning models (o4-mini and R1 70B) have similar violation patterns, but o4-mini performs better overall. Notably, over 50% of puzzles fail because models do not construct valid paths. Instruction models (GPT-4.1, Gemma-3 27B) perform worse, showing distinct weaknesses: GPT-4.1 frequently produces disconnected lines, while Gemma-3 27B commonly crosses rule cells. Interestingly, Gemma-3 27B produces fewer disconnected lines than the larger reasoning model R1 70B. Reasoning models have higher accuracy despite similar basic path errors, suggesting successful path construction is only the first hurdle. Across models, the most common error is Rule Cell Crossing, indicating frequent violations by paths moving through rule cells rather than along edges. However, up to this point, our explanations of other model failures have been largely hypotheses, and the precise underlying causes remain unclear.

Causes for Reasoning Mistakes. To shed light on the "why" of reasoning model failures, we manually analyzed R1 70B outputs (as it openly provides reasoning tokens) on the puzzles containing only single rule types (e.g., only stones). We selected puzzles where models produced valid paths (without path errors) but failed to fulfill all rule cells. This resulted in 48 puzzle solutions for analysis. Models most commonly failed due to logical fallacies (36/48), grid/index system misinterpretation (26/48), and careless shortcutting of multiple reasoning steps (23/48). Interestingly, R1 often recognized mistakes or dead ends (25/48) before concluding, indicating limited reasoning but awareness of its constraints. Different splits revealed specific reasoning limitations. With dots, models typically recognized missed ones during path construction but failed to correct their paths accordingly (e.g., Figure 21 in Appendix K). With gaps, models frequently made careless, unvalidated multi-step moves, violating rules by crossing gaps (e.g., Figure 19 in Appendix K). We provide further examples with highlights of R1's reasoning tokens in Appendix K. Mistakes like unvalidated multi-step moves and grid misinterpretation highlight ongoing challenges in long-term spatial planning, as even minor shortcuts lead to significant rule violations. However, models' recognition of errors and dead ends points toward opportunities and gives space for future contributions to improve spatial reasoning, e.g., via iterative reasoning or sampling multiple parallel paths with strategies to find correct ones.

Upper Reasoning Bounds. To determine models' upper limits, we purposefully increase test-time compute by generating up to eight attempts per puzzle for each model (i.e., pass@1 to pass@8).

Figure 7: Accuracy (%) for generating k ∈ {1, 2, 4, 8} solutions and evaluating whether the correct path is in one of the k attempts (pass@k) for o4-mini across difficulty (1-5). Higher is better. Overall: 15.8% / 22.8% / 28.6% / 35.0% for pass@1/2/4/8. By level: L1 47.7/62.8/76.7/80.2; L2 19.5/28.0/37.3/49.2; L3 10.7/17.4/20.7/28.9; L4 1.2/4.7/5.8/11.6; L5 1.1/2.2/3.4/3.4.

Figure 7 shows accuracy rising from 15.8% (pass@1) to 35.0% (pass@8) for o4-mini. This improvement is expected as we scale computation. Importantly, this setting is not practical at test time, as we only verify whether the solution appears among the k generations. In practice, a decision mechanism like majority voting would be necessary (Kaesberg et al., 2025).

Figure 8: Accuracy (%) for zero-shot, one-shot, and two-shot examples provided to o4-mini across difficulty (1-5). Higher is better. Overall: 12.6% / 15.8% / 14.8% for zero/one/two-shot. By level: L1 46.5/47.7/60.5; L2 13.6/19.5/13.6; L3 3.3/10.7/5.0; L4 3.5/1.2/0.0; L5 0.0/1.1/0.0.
Still, additional attempts are not sufficient to solve complex puzzles. Success rates improve by 32.5 points for level 1 puzzles (easy), but only 2.3 points for levels 4 and 5 (difficult ones). This shows that our puzzles cannot be solved simply by increasing computation; the reasoning steps have to get more sophisticated and adjust to the difficulty level. Higher results for larger k give hope that future work can find better training methods to improve reasoning.
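The pass@k protocol used here reduces to a short sketch; `generate` and `is_correct` are placeholders for the actual model call and the full path-and-rules check, not the paper's code.

```python
def pass_at_k(puzzles, generate, is_correct, k=8):
    """Fraction of puzzles solved by at least one of k sampled attempts.

    generate(puzzle) -> one candidate solution (placeholder model call);
    is_correct(puzzle, solution) -> bool (placeholder full checker).
    Note this measures an upper bound: it only verifies that a correct
    path appears among the k attempts, without selecting one.
    """
    solved = 0
    for puzzle in puzzles:
        attempts = (generate(puzzle) for _ in range(k))
        if any(is_correct(puzzle, attempt) for attempt in attempts):
            solved += 1
    return solved / len(puzzles)
```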
4.3 Ablations
We investigate how changes to the prompting (few-shot examples, different prompts) and puzzle representation (as text and images) affect our results through various ablations.

Few-Shot Prompting. We investigate the effect of in-context learning by comparing zero-shot, one-shot, and two-shot configurations (see Appendix H.4 for few-shot examples). Previous experiments always defaulted to one-shot. Figure 8 shows that one-shot has the highest overall accuracy (15.8%), while zero-shot performs worst (12.6%) for o4-mini. At difficulty 1, two-shot outperforms one-shot, but this reverses at higher levels. Examples generally help model comprehension, but too many examples seem to have no benefit and sometimes negatively impact performance. Additional analysis in Appendix L.3 shows that zero-shot has fewer path violations than few-shot. Improved one-shot over zero-shot performance is expected, but two-shot's slightly lower performance than one-shot is surprising, as more examples should clarify rule interactions; however, given the small differences, stochastic variance is possible. Similar findings were reported by Ye et al. (2023), suggesting increased examples do not always help, possibly due to cognitive overload or excessive focus on example analysis instead of task solving.

Figure 9: Comparison of accuracy (%) for o4-mini using a textual representation versus a puzzle screenshot across difficulty (1-5). Higher is better. Text (zero-shot): 12.6% overall (46.5/13.6/3.3/3.5/0.0 by level); vision: 5.6% overall (8.1/10.2/7.4/0.0/0.0 by level).

Visual Representation. Another factor that might influence our results is the textual 2D representation of the puzzles. Therefore, we provide screenshots of the puzzle, similar to Figure 1, and adjust the prompt accordingly. We compare visual results to zero-shot textual results, as the visual prompt lacks an example solution. Details on this configuration are available in Appendix H.3. Figure 9 compares the accuracy of o4-mini using default textual prompts versus visual prompts across difficulty levels. The visual representation reduces overall performance from 12.6% to 5.6%. The gap between text and image prompts is larger at easier difficulty levels but diminishes at higher difficulty levels. Additional analysis in Appendix L.1 shows that a main cause for bad results on easy puzzles is invalid path construction. These results suggest current textual representations are easier for multi-modal reasoning models to understand. Likely, connecting textual descriptions to visual puzzle elements adds complexity compared to purely textual prompts. However, whether the current textual representation is also optimal remains uncertain.

Alternative Prompt. We test whether our results are affected by different formulations in our prompts, i.e., prompt engineering (White et al., 2023; Wahle et al., 2024b). Because paths previously failed due to violations of path rules, we adjusted the prompt to emphasize these rules more explicitly.
This adjustment improved o4-mini's accuracy from 15.8% to 21.0% and reduced path errors, with Rule Cell Crossing decreasing from 51.2% to 29.0% and Intersecting Line dropping from 31.2% to 21.2%. However, at higher difficulty (level 5), there was no improvement, with o4-mini still solving only 1 of 89 puzzles (more details in Figure 24 in Appendix L). Prompt engineering moderately increases performance at lower difficulty levels, but it does not have a marked impact at higher levels.

4.4 Human Evaluation
For a human baseline, we asked six male annotators aged 22-27, with a background in computer science and data science, to solve 100 i.i.d.-drawn puzzles from the dataset, divided into two subsets of 50 samples each. Each of the 100 puzzles is annotated three times, and a puzzle is marked as solved if the majority found a correct solution. Even though we did not test all 500 test samples of SPaRC, sampling i.i.d. and using two non-overlapping sets with three annotators each gives us a fair estimate of human performance. We recorded the accuracy, number of attempts, and solving time. Details on annotation instructions are in Appendix M.

Table 2: Accuracy, average and median human solve time (seconds), and accuracy and number of generated tokens for QwQ 32B and Qwen 2.5 72B over difficulty level (1-5).
Difficulty Level     1      2      3      4      5
Hum. Acc. (%)        100    100    100    94.4   94.5
Hum. Avg. (s)        10.7   18.3   26.7   60.7   131.5
Hum. Mdn. (s)        7.1    13.7   15.6   28.8   85.6
QwQ Acc. (%)         20.9   5.9    2.5    1.2    0.0
QwQ #Tokens          14433  14200  13983  14072  13114
Qwen 2.5 Acc. (%)    0.0    1.7    0.0    0.0    0.0
Qwen 2.5 #Tokens     790    888    953    1037   1161

Table 2 shows humans achieve near-perfect performance, with 100% accuracy at difficulty levels 1-3 and around 95% at levels 4 and 5. Average solve time increases exponentially with difficulty, from 10.7 seconds for difficulty 1 to 26.7 seconds for difficulty 3, then starkly increasing to 60.7 seconds for difficulty 4 and 131.5 seconds for difficulty 5. Median solve times are consistently lower than average times, indicating that a few very difficult puzzles significantly increase the average. Compared to humans, models show two relevant time-scaling aspects. First, previous pass@k experiments (Figure 7 in Section 4.2) showed that multiple attempts to solve puzzles improved performance on easy puzzles but did not increase performance on difficult ones. Second, analyzing the number of generated tokens (Table 2) shows that instruction-tuned models, such as Qwen 2.5, increase token counts with puzzle difficulty (from 790 to 1161), while reasoning models, such as QwQ, maintain relatively constant token counts across difficulties (14433 to 13114).
requiring spatial and symbolic reasoning. We evaluated puzzles with six human annotators, three instruction-tuned models (GPT-4.1, Gemma 3, Qwen 2.5), and four reasoning models (o4-mini, o3-mini, QwQ, R1). Humans achieved a 98% accu- racy. The best reasoning model, o4-mini, reached only 16%. Performance was drastically affected by puzzle difficulty, with models solving 48% at level 1, 20% at level 2, and just 1.1% at level 5. Humans consistently solved puzzles across levels, includ- ing 95% at level 5. Our error analysis revealed that path errors and reasoning mistakes stemmed from logical fallacies, grid misunderstandings, and performing too many reasoning steps at once. Gen- erating up to eight attempts per puzzle improved accuracy up to 30% for difficulty 1 puzzles and 2% for difficulty 5. Humans needed up to 13 times more time to solve hard puzzles than easy ones, and instruction-tuned models scaled token us- age with difficulty by ∼40%. Reasoning models showed only a ∼5%increase for harder difficulties. Ablation studies on visual puzzle representation, prompting, and few-shot examples show only mild variations and support the robustness of our results. Empirically, SPaRC reveals critical limitations in current reasoning models regarding spatial reason- ing, symbolic reasoning, multi-step planning, and constraint satisfaction. Existing methods, including enhanced prompting and increased computational sampling, offer only partial improvements. Funda- mental advances in model reasoning capabilities are still needed to reach human-level results.Limitations Our evaluation depends on a fixed delimiter (“####”) and a regex that collects the following coordinate list. When a model omits the de- limiter, writes several delimiter lines, or inserts natural -language text between coordinates, extrac- tion can fail, producing false negatives. These events are rare in practice, and we stress the re- quired format in every prompt, but complete robust- ness is unattainable when testing many different models. OpenAI models (o4-mini, o3-mini) return only final coordinates with a small explanation, but redact intermediate reasoning tokens. Conse- quently, detailed failure analysis is restricted to open models like R1 70B. Intermediate reasoning can differ from final answers in models of any scale, as previously documented by Turpin et al. (2023); Chen et al. (2025), thus potentially limiting gener- alization from trace-based analyses. The dataset covers single-rule puzzles and a lim- ited set of two- and three-rule combinations but does not exhaustively represent all possible inter- actions among the seven rule types. Future re- leases could introduce underrepresented combina- tions (e.g., stars ×triangles ×polys ×ylops ) to probe generalization more comprehensively. How- ever, as models fail on most easy tasks already and current splits reveal clear error patterns and sup- port comparative ranking of the different rule types, we leave this to future work when models become more capable. The poly set in Figure 5 in Section 4.1 shows improvements for weaker but not stronger models. The poly rule sometimes fills the entire grid with poly shapes, necessitating a path along the grid’s edge. This condition impacts only the poly subset, explaining performance spikes. Smaller models find this shortcut more frequently, likely because simpler solutions emerge when overwhelmed by many complex poly shapes. Acknowledgements This work was
This work was partially supported by the Lower Saxony Ministry of Science and Culture and the VW Foundation. Many thanks to Andreas Stephan, Tianyu Yang, Zeinab Taghavi, and Annika Schulte-Hürmann for their thoughtful discussions and feedback.

References
Zachary Abel, Jeffrey Bosboom, Michael Coulombe, Erik D. Demaine, Linus Hamilton, Adam Hesterberg, Justin Kopinsky, Jayson Lynch, Mikhail Rudoy, and Clemens Thielen. 2018. Who witnesses the witness? Finding witnesses in the witness is hard and sometimes impossible.
Mohamed Aghzal, Erion Plaku, and Ziyu Yao. 2023. Can large language models be good path planners? A benchmark and investigation on spatial-temporal reasoning. ArXiv preprint, abs/2310.03249.
Art of Problem Solving. 2025. AIME problems and solutions. Accessed: 2025-05-07.
Jonathan Blow. 2016. The Witness. [Online; accessed 15-May-2025].
Yanda Chen, Joe Benton, Ansh Radhakrishnan, Jonathan Uesato, Carson Denison, John Schulman, Arushi Somani, Peter Hase, Misha Wagner, Fabien Roger, Vlad Mikulik, Samuel R. Bowman, Jan Leike, Jared Kaplan, and Ethan Perez. 2025. Reasoning models don't always say what they think.
François Chollet. 2019. On the measure of intelligence. ArXiv preprint, abs/1911.01547.
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, and 181 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning.
William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the MATH dataset.
Jie Huang and Kevin Chen-Chuan Chang. 2022. Towards reasoning in large language models: A survey.
Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, and Karthik Narasimhan. 2023. Swe-bench: Can language models resolve real-world github issues?
Lars Kaesberg, Terry Ruas, Jan Philip Wahle, and Bela Gipp. 2024. CiteAssist: A system for automated preprint citation and BibTeX generation. In Proceedings of the Fourth Workshop on Scholarly Document Processing (SDP 2024), pages 105–119, Bangkok, Thailand. Association for Computational Linguistics.
Lars Benedikt Kaesberg, Jonas Becker, Jan Philip Wahle, Terry Ruas, and Bela Gipp. 2025. Voting or consensus? Decision-making in multi-agent debate.
OpenAI. 2025a. Introducing GPT-4.1 in the API. Accessed: 2025-05-15.
OpenAI. 2025b. Introducing OpenAI o3 and o4-mini. Accessed: 2025-05-15.
OpenAI. 2025c. OpenAI o3-mini: Pushing the frontier of cost-effective reasoning. Accessed: 2025-05-15.
Shanghaoran Quan, Jiaxi Yang, Bowen Yu, Bo Zheng, Dayiheng Liu, An Yang, Xuancheng Ren, Bofei Gao, Yibo Miao, Yunlong Feng, Zekun Wang, Jian Yang, Zeyu Cui, Yang Fan, Yichang Zhang, Binyuan Hui, and Junyang Lin. 2025. Codeelo: Benchmarking competition-level code generation of llms with human-comparable elo ratings.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. 2023. GPQA: A graduate-level Google-proof Q&A benchmark.
Alibaba Research. 2024. Qwen2 technical report.
Yueqi Song, Tianyue Ou, Yibo Kong, Zecheng Li, Graham Neubig, and Xiang Yue. 2025. Visualpuzzles: Decoupling multimodal reasoning evaluation from domain knowledge.
Zayne Sprague, Xi Ye, Kaj Bostrom, Swarat Chaudhuri, and Greg Durrett. 2023. MuSR: Testing the limits of chain-of-thought with multistep soft reasoning.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them.
Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-Bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, and 197 others. 2025. Gemma 3 technical report.
Qwen Team. 2024. QwQ: Reflect deeply on the boundaries of the unknown.
Miles Turpin, Julian Michael, Ethan Perez, and Samuel R. Bowman. 2023. Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting.
Jan Philip Wahle, Terry Ruas, Saif M. Mohammad, Norman Meuschke, and Bela Gipp. 2023. AI usage cards: Responsibly reporting AI-generated content. Preprint, arXiv:2303.03886.
Jan Philip Wahle, Terry Ruas, Yang Xu, and Bela Gipp. 2024a. Paraphrase types elicit prompt engineering capabilities. ArXiv preprint, abs/2406.19898.
Jan Philip Wahle, Terry Ruas, Yang Xu, and Bela Gipp. 2024b. Paraphrase types elicit prompt engineering capabilities. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11004–11033, Miami, Florida, USA. Association for Computational Linguistics.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Clinton J. Wang, Dean Lee, Cristina Menghini, Johannes Mols, Jack Doughty, Adam Khoja, Jayson Lynch, Sean Hendryx, Summer Yue, and Dan Hendrycks. 2025. Enigmaeval: A benchmark of long multimodal reasoning challenges.
Jiayu Wang, Yifei Ming, Zhenmei Shi, Vibhav Vineet, Xin Wang, Yixuan Li, and Neel Joshi. 2024a. Is a picture worth a thousand words? Delving into spatial reasoning for vision language models.
Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen. 2024b. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark (published at NeurIPS 2024 Track Datasets and Benchmarks).
Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C Schmidt. 2023. A prompt pattern catalog to enhance prompt engineering with chatgpt. ArXiv preprint, abs/2302.11382.
Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. A comprehensive capability analysis of gpt-3 and gpt-3.5 series models. ArXiv preprint, abs/2303.10420.

Appendix

A Models & Hardware

This section details the large language models (LLMs) used in our experiments, the hardware on which they were run, and the approximate number of tokens processed for each model.

For open models we used Gemma 3 27B (Team et al., 2025), QwQ 32B (Team, 2024), Qwen 2.5 72B (Research, 2024), and DeepSeek R1 70B (DeepSeek-AI et al., 2025). For proprietary models we used GPT-4.1 (OpenAI, 2025a), o3-mini (OpenAI, 2025c), and o4-mini (OpenAI, 2025b). For both OpenAI reasoning models, the default medium reasoning effort was used. Table 3 shows the details regarding model size, tokens processed, and hardware used.

Model Name | Size | Tokens Processed | Hardware
Open Models
Gemma 3 | 27B | 875,711 | 4x Nvidia A100
QwQ | 32B | 13,863,364 | 4x Nvidia A100
Qwen 2.5 | 72B | 955,167 | 8x Nvidia A100
DeepSeek R1 | 70B | 9,136,467 | 8x Nvidia A100
Proprietary Models
GPT-4.1 | N/A | 5,057,588 | OpenAI API
o3-mini | N/A | 19,776,881 | OpenAI API
o4-mini | N/A | 59,192,466 | OpenAI API

Table 3: Overview of models, hardware, and token counts. Token counts are approximate.

For all ablations and the main study, o4-mini was analyzed on 6,500 puzzles overall. For a comparable 1,000 puzzles, this would equate to approximately 9,865,411 tokens. Both OpenAI reasoning models were used with the medium reasoning effort.

B Grid Indexing

Figure 10 shows a puzzle grid with all its coordinates according to the prompts in Appendix H.

Figure 10: Puzzle grid from Figure 1 with all grid cells annotated with their coordinates.

C Licenses and Code Acknowledgments

The source code developed and used in this work is provided under the BSD 3-Clause License. This licensing choice is required due to dependencies on code from the following repositories, which are partially distributed under the BSD 3-Clause License:

• jbzdarkid/jbzdarkid.github.io
• NewSoupVi/The-Witness-Randomizer-for-Archipelago

We gratefully acknowledge the authors of these repositories for making their implementations publicly available.

The datasets used in this study are provided under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license.

D Difficulty Metric Calculation

This section provides the details for calculating the difficulty metric used to rate SPaRC puzzles in this paper. The metric aims to capture multiple aspects of complexity. The calculation involves determining individual component scores, combining them via a weighted sum, and normalizing the result. The score function is described in Appendix D.1 and its components in Appendix D.2.

D.1 Combination and Normalization

The individual component scores (Appendix D.2) are combined using a weighted sum to produce a raw difficulty score ($S_{\mathrm{raw}}$). The specific weights reflect the empirically determined relative importance of each component:

$$S_{\mathrm{raw}} = w_{\mathrm{mech}} S_{\mathrm{mech}} + w_{\mathrm{interact}} S_{\mathrm{interact}} + w_{\mathrm{grid}} S_{\mathrm{grid}} + w_{\mathrm{density}} S_{\mathrm{density}} + w_{\mathrm{count}} S_{\mathrm{count}}$$

where the weights used are $w_{\mathrm{mech}} = 1.2$, $w_{\mathrm{interact}} = 1.2$, $w_{\mathrm{grid}} = 2.5$, $w_{\mathrm{density}} = 1.0$, and $w_{\mathrm{count}} = 1.2$. Notably, grid size ($S_{\mathrm{grid}}$) is weighted most heavily.
Finally, to produce a standardized and interpretable difficulty score (typically ranging from 0 to 5), the raw score ($S_{\mathrm{raw}}$) is normalized. This is achieved by:

1. Calculating the Z-score of $S_{\mathrm{raw}}$ relative to a pre-determined normal distribution, characterized by a mean ($\mu = 12.06$) and standard deviation ($\sigma = 5.27$). These parameters were derived empirically from a large dataset of puzzle scores.

$$Z = \frac{S_{\mathrm{raw}} - \mu}{\sigma}$$

2. Converting the Z-score to a value between 0 and 1 using the standard normal cumulative distribution function (CDF), often denoted as $\Phi(Z)$:

$$\mathrm{CDF\_value} = \Phi(Z)$$

3. Linearly scaling this CDF value to the target range [0, 5]:

$$\mathrm{Scaled\_Score} = \mathrm{CDF\_value} \times 5$$

4. Clamping the result to ensure the final difficulty score strictly falls within the [0, 5] bounds:

$$\mathrm{Final\_Score} = \max(0, \min(5, \mathrm{Scaled\_Score}))$$

This normalization process ensures that scores are comparable across different puzzles and yields a distribution more amenable to interpretation as a rating.

D.2 Component Scores

Five distinct aspects of the puzzle contribute to the overall difficulty score:

• Mechanics Score ($S_{\mathrm{mech}}$): This score reflects the cognitive load associated with understanding different rules. It is directly proportional to the number of unique rule types present in the puzzle ($N_{\mathrm{mech}}$).

• Interaction Score ($S_{\mathrm{interact}}$): This score quantifies complexity arising from the interplay between different mechanics. It is calculated only when multiple rule types ($N_{\mathrm{mech}} > 1$) are present. It is proportional to both the number of potentially interacting mechanics (approximated as $N_{\mathrm{mech}} - 1$) and the rule density ($\rho_{\mathrm{rules}}$), where rule density is the total number of rule instances ($N_{\mathrm{rules}}$) divided by the grid area ($A = \mathrm{width} \times \mathrm{height}$).

• Grid Score ($S_{\mathrm{grid}}$): This component reflects the complexity associated with the size of the search space. It increases proportionally with the grid area ($A$). Larger grids generally require more path exploration.

• Density Score ($S_{\mathrm{density}}$): This score measures constraint concentration. It is directly derived from the rule density ($\rho_{\mathrm{rules}} = N_{\mathrm{rules}} / A$). Higher density can make satisfying all constraints simultaneously more challenging.

• Rule Count Score ($S_{\mathrm{count}}$): Independent of density, this score considers the absolute number of constraints. It is proportional to the total number of rule instances ($N_{\mathrm{rules}}$) on the grid. A puzzle with many rules can be complex even if they are spread over a large grid.
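To make the computation above concrete, the following is a minimal Python sketch of the scoring pipeline, using the weights, $\mu$, and $\sigma$ reported in Appendix D.1; the function name and the example component values are illustrative and not taken from the SPaRC codebase.

```python
from math import erf, sqrt

# Weights from Appendix D.1 (empirically determined).
WEIGHTS = {"mech": 1.2, "interact": 1.2, "grid": 2.5, "density": 1.0, "count": 1.2}
MU, SIGMA = 12.06, 5.27  # parameters of the reference score distribution

def difficulty_score(components: dict) -> float:
    """Combine precomputed component scores into a [0, 5] difficulty rating."""
    # Weighted sum -> raw score.
    s_raw = sum(WEIGHTS[name] * components[name] for name in WEIGHTS)
    # Z-score against the empirical distribution.
    z = (s_raw - MU) / SIGMA
    # Standard normal CDF via the error function: Phi(z) = (1 + erf(z / sqrt(2))) / 2.
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    # Scale to [0, 5] and clamp.
    return max(0.0, min(5.0, cdf * 5.0))

# Example with made-up component values for a mid-complexity puzzle.
print(difficulty_score({"mech": 2, "interact": 1.5, "grid": 3.0, "density": 0.8, "count": 4}))
```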
E Tokens by Puzzle Difficulty

Table 4 shows the average tokens produced by the different models, decomposed by puzzle difficulty.

Model | Level 1 | Level 2 | Level 3 | Level 4 | Level 5
Reasoning
QwQ 32B | 14433.3 | 14200.6 | 13983.1 | 14072.8 | 13114.1
R1 70B | 7646.5 | 9119.8 | 9374.6 | 10134.4 | 9989.6
Instruction
Qwen 2.5 72B | 790.6 | 888.7 | 953.1 | 1037.7 | 1161.2
Gemma-3 27B | 802.8 | 874.6 | 910.0 | 941.2 | 1033.3

Table 4: Average tokens per puzzle by difficulty level.

F Rule Visualizations

Figures 11 to 17 provide visual context for the different puzzle rule types discussed in our evaluation (Section 4); this section presents examples of each core subtype. For each rule, we show the unsolved puzzle grid (a) alongside a valid solution path (b).

(a) Unsolved Dots puzzle. (b) Solved Dots puzzle.
Figure 11: Example of the Dots rule. The solution path must pass through all dots present on its segments.

(a) Unsolved Gaps puzzle. (b) Solved Gaps puzzle.
Figure 12: Example of the Gaps rule. The solution path cannot cross specific marked edges on the grid.

(a) Unsolved Stones puzzle. (b) Solved Stones puzzle.
Figure 13: Example of the Stones rule. The solution path must separate grid cells containing different colored stones into distinct regions.

(a) Unsolved Stars puzzle. (b) Solved Stars puzzle.
Figure 14: Example of the Stars rule. Each region with a star must contain exactly one other rule of the same color.

(a) Unsolved Triangles puzzle. (b) Solved Triangles puzzle.
Figure 15: Example of the Triangles rule. The solution path must touch the number of grid edges equal to the number of triangles in the adjacent cell.

(a) Unsolved Polys puzzle. (b) Solved Polys puzzle.
Figure 16: Example of the Polys rule (Polyominoes). The solution path must outline a region that perfectly contains the depicted poly shape. Multiple polys in one region can be combined.

(a) Unsolved Polys & Inverse Polys puzzle. (b) Solved Polys & Inverse Polys puzzle.
Figure 17: Example of the Polys & Ylops (Inverse Polys) rule combination. The solution path must outline a region satisfying both polyomino shape inclusion and subtraction constraints.

G Additional Dataset Statistics

Table 5 provides the rule distributions of the full set of SPaRC and all of its splits.

Statistics | Full Set | Gaps | Dots | Stones | Stars | Tri | Polys | P-Y | St-S | G-D-T | D-S-P
Train Set Size | 500 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50
Test Set Size | 500 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50 | 50
Count per Difficulty Level
Puzzles (Level 1) | 86 | 34 | 29 | 0 | 0 | 13 | 0 | 3 | 0 | 21 | 7
Puzzles (Level 2) | 118 | 6 | 9 | 5 | 29 | 17 | 24 | 4 | 12 | 13 | 9
Puzzles (Level 3) | 121 | 7 | 7 | 13 | 15 | 12 | 22 | 9 | 13 | 6 | 7
Puzzles (Level 4) | 86 | 3 | 3 | 18 | 5 | 4 | 4 | 18 | 12 | 4 | 9
Puzzles (Level 5) | 89 | 0 | 2 | 14 | 1 | 4 | 0 | 16 | 13 | 6 | 18
Count per Rule Type
Puzzles with Gaps | 313 | 50 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 40 | 0
Puzzles with Dots | 292 | 0 | 50 | 0 | 0 | 0 | 0 | 0 | 0 | 47 | 46
Puzzles with Stones | 355 | 0 | 0 | 50 | 0 | 0 | 0 | 0 | 49 | 0 | 0
Puzzles with Stars | 210 | 0 | 0 | 0 | 50 | 0 | 0 | 0 | 32 | 0 | 39
Puzzles with Triangles | 233 | 0 | 0 | 0 | 0 | 50 | 0 | 0 | 0 | 37 | 0
Puzzles with Polygons | 305 | 0 | 0 | 0 | 0 | 0 | 50 | 50 | 0 | 0 | 43
Puzzles with Ylops | 25 | 0 | 0 | 0 | 0 | 0 | 0 | 43 | 0 | 0 | 0

Table 5: Statistics for all splits of SPaRC. Difficulty and rule statistics are based only on the test set, as only these puzzles are used for evaluation.

H Prompting

Listings 1 to 5 in Appendices H.1 to H.4 provide the prompts and few-shot examples used for the experiments in Section 4.

H.1 Default Prompt
You are an expert spatial reasoning AI specializing in solving puzzles from the game 'The Witness'. Your task is to solve the following puzzle by finding a valid line from the Start Node to the End Node.

GRID DEFINITION:
- The puzzle involves a grid of {grid_size['width']}x{grid_size['height']} cells.
- COORDINATE SYSTEM: Nodes are indexed (x, y). Node (0,0) is the top-left node. x increases to the right, y increases downward.
- Line: The solution line travels along grid edges, connecting adjacent nodes horizontally or vertically. The line cannot visit the same node twice.
- RULE PLACEMENT: Rule symbols (squares, stars, polyshapes, negative polyshapes, triangles) are located at cells with all odd coordinates. The line goes AROUND cells containing rules, forming boundaries.

SOLVING RULES:
1. Draw a continuous line from the START NODE to the END NODE by connecting adjacent nodes (horizontally or vertically) without visiting the same node twice.
2. The line can only be placed on (+) and (.) cells. These cells have at least one even coordinate. The line can NEVER be placed on a rule cell (all odd coordinates).
3. The line acts as a boundary, potentially dividing the grid cells into one or more distinct regions.
4. All rules associated with symbols on the grid must be satisfied:
   - Gaps ('G'): The line CANNOT traverse a cell marked by a Gap.
   - Dots ('.'): The line MUST pass through a cell marked by a Dot.
   - Squares ('o-X'): All squares within a single region created by the line must be the same color. Different colored squares MUST be separated into different regions by the line.
   - Stars ('*-X'): Each star must be paired with EXACTLY one other element of the same color in a region. Other colors are ignored.
   - Triangles ('A-X (1)', 'B-X (2)', 'C-X (3)', 'D-X (4)'): The line must touch EXACTLY the number of edges specified by the triangle count (edges are top, right, bottom, left of the cell).
   - Polyshapes ('P-X-Y'): The region containing this symbol must be shaped EXACTLY like the defined polyshape Y. The shape must fit entirely within the region's boundaries. If multiple positive polyshapes are in one region, the region's shape must accommodate their combined, non-overlapping forms (like Tetris pieces).
   - Negative Polyshapes ('Y-X-Y'): The negative polyshape can only be placed on top of already placed normal polyshapes. The negative polyshapes must fit on the grid, but can allow overlap between normal polyshapes or placement of polyshapes that extend beyond the area defined by the line. If the negative polyshapes exactly cancel the normal polyshapes, there is no restriction on the grid shape anymore. A negative polyshape only counts as valid if it is used.

START POSITION: {start_pos}
END POSITION: {end_pos}

GRID NOTATION:
- 'S': Start point
- 'E': End point
- '+': Cell on which the line can be drawn
- 'N': Empty rule cell
- 'G': Gap (cannot be crossed)
- '.': Dot, line must cross this cell
- 'o-X': Stone of color X
- '*-X': Star of color X
- 'A-X': Triangle with count 1
- 'B-X': Triangle with count 2
- 'C-X': Triangle with count 3
- 'D-X': Triangle with count 4
- 'P-X-Y': Positive polyshape of color X and shape ID Y
- 'Y-X-Y': Negative polyshape (ylop) of color X and shape ID Y

COLOR CODES: R=red, B=blue, G=green, Y=yellow, W=white, O=orange, P=purple, K=black

{example_section}

PUZZLE GRID:
{grid_str}

POLYSHAPE DEFINITIONS: Defines the shapes referenced by P-X-Y and Y-X-Y symbols in the grid. In the 2D array, 1 indicates a cell occupied by the shape, 0 indicates an empty cell.
{polyshapes_str}

Please solve this puzzle. First, explain your reasoning step-by-step, including key deductions and constraint checks made along the way. Then, provide the final solution as a sequence of node coordinates in (x, y) format (don't skip any intermediate nodes), starting with the start node and ending with the end node, after this string: "####". Example coordinate list: [(0,0), (1,0), (2,0), (2,1), ...]

Listing 1: The LLM prompt used for generating the results discussed in Section 4.1.
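Since the prompt asks models to return a raw coordinate list, a small checker for the structural path constraints stated above (matching start/end, unit steps only, no revisited nodes, no rule cells) is a natural companion; the sketch below is a minimal illustration under the grid conventions from the prompt, not the evaluation code used in the paper, and the violation labels mirror the categories reported later in Table 7.

```python
def validate_path(path, start, end):
    """Check the structural constraints from the prompt on a candidate path.

    path, start, end are (x, y) tuples; rule cells have two odd coordinates.
    Returns a list of violated constraints (empty if structurally valid).
    """
    violations = []
    if not path or path[0] != start or path[-1] != end:
        violations.append("incorrect start/end")
    if len(set(path)) != len(path):
        violations.append("self-intersecting (node revisited)")
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        if abs(x1 - x2) + abs(y1 - y2) != 1:  # only unit horizontal/vertical steps
            violations.append("disconnected line")
            break
    if any(x % 2 == 1 and y % 2 == 1 for x, y in path):  # all-odd coordinates = rule cell
        violations.append("rule cell crossing")
    return violations

# Example with the coordinate-list format requested after "####".
print(validate_path([(0, 0), (1, 0), (2, 0), (2, 1)], (0, 0), (2, 1)))  # -> []
```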
H.2 Alternative Prompt

## Objective
You are a specialized AI proficient in spatial reasoning and solving puzzles from the game 'The Witness'. Your goal is to find a valid path (a continuous line) from the specified Start Node to the End Node on the provided grid, adhering to all puzzle rules.

## Core Concepts & Grid Basics
* **Grid Dimensions:** The puzzle grid has {grid_size['width']} columns and {grid_size['height']} rows.
* **Coordinate System:** Nodes are identified by `(x, y)` coordinates. `(0,0)` is the top-left node. `x` increases to the right, `y` increases downwards.
* **Path:** The solution is a single, continuous line connecting adjacent nodes either horizontally or vertically.
* **No Revisits:** The path **CANNOT** visit the same node more than once.
* **Valid Path Cells:** The path travels along the grid lines (edges between nodes). It can only occupy positions marked `+` or `.` in the grid layout (these correspond to positions with at least one even coordinate).
* **Rule Cells:** Cells containing rule symbols (squares, stars, etc.) have coordinates where both `x` and `y` are odd. The path goes *around* these rule cells, never *on* them.
* **Regions:** The drawn path divides the grid cells into one or more distinct enclosed areas (regions). Many rules apply based on the contents of these regions.

## Puzzle Input Data
* **Start Node:** {start_pos}
* **End Node:** {end_pos}
* **Grid Layout:**
```
{grid_str}
```
* **Polyshape Definitions (if applicable):**
  * Shapes are defined by 2D arrays where '1' indicates an occupied cell and '0' indicates an empty cell.
```
{polyshapes_str}
```

## Symbol Legend (Grid Notation)
* `S`: **Start Node** (Path begins here)
* `E`: **End Node** (Path ends here)
* `+`: Valid cell for the path to occupy
* `N`: Empty rule cell (no rule)
* `G`: **Gap** (Path **CANNOT** cross this cell)
* `.`: **Dot** (Path **MUST** pass through this cell)
* `o-X`: **Square** of color X
* `*-X`: **Star** of color X
* `A-X`: **Triangle** (touch 1 edge)
* `B-X`: **Triangle** (touch 2 edges)
* `C-X`: **Triangle** (touch 3 edges)
* `D-X`: **Triangle** (touch 4 edges)
* `P-X-Y`: **Polyshape** (positive) of color X and shape ID Y
* `Y-X-Y`: **Negative Polyshape** (ylop) of color X and shape ID Y

**Color Codes:** R=Red, B=Blue, G=Green, Y=Yellow, W=White, O=Orange, P=Purple, K=Black

## Detailed Solving Rules
The drawn path must satisfy **ALL** applicable constraints:

1. **Path Constraints:**
   * Path **MUST** start at `S` and end at `E`.
   * Path connects adjacent nodes (horizontal/vertical moves only).
   * Nodes **CANNOT** be revisited.
   * Path **MUST** pass through all Dot (`.`) cells.
   * Path **CANNOT** pass through any Gap (`G`) cells.

2. **Region-Based Rules** (Apply to areas enclosed by the path):
   * **Squares (`o-X`):** All squares within a single region **MUST** be the same color. Squares of different colors **MUST** be separated into different regions by the path.
   * **Stars (`*-X`):** Within a single region, each star symbol **MUST** be paired with exactly **ONE** other element (star or square) *of the same color*. Other colors within the region are irrelevant to this specific star's rule.
   * **Polyshapes (`P-X-Y`):** The region containing this symbol **MUST** be able to contain the specified shape (defined in Polyshape Definitions). The shape must fit entirely within the region's boundaries. If multiple positive polyshapes are in one region, the region must accommodate their combined, non-overlapping forms. Rotation of polyshapes is generally allowed unless context implies otherwise.
   * **Negative Polyshapes (`Y-X-Y`):** These "subtract" shape requirements, typically within the same region as corresponding positive polyshapes. A negative polyshape cancels out a positive polyshape of the exact same shape and color within that region. If all positive shapes are canceled, the region has no shape constraint. A negative shape is only considered 'used' if it cancels a positive one. Negative shapes can sometimes rationalize apparent overlaps or boundary violations of positive shapes if interpreted as cancellations.

3. **Path-Based Rules (Edge Touching):**
   * **Triangles (`A-X`, `B-X`, `C-X`, `D-X`):** The path **MUST** touch a specific number of edges of the cell containing the triangle symbol.
     * `A-X` (1): Path touches **EXACTLY 1** edge of the triangle's cell.
     * `B-X` (2): Path touches **EXACTLY 2** edges of the triangle's cell.
     * `C-X` (3): Path touches **EXACTLY 3** edges of the triangle's cell.
     * `D-X` (4): Path touches **EXACTLY 4** edges (fully surrounds) the triangle's cell.

{example_section}

## Task & Output Format
1. **Solve the Puzzle:** Determine the valid path from the Start Node to the End Node that satisfies all rules.
2. **Explain Reasoning:** Provide a step-by-step explanation of your thought process. Detail key deductions, how constraints were applied, and any backtracking or choices made.
3. **Provide Solution Path:** After the reasoning, output the exact marker string `####` followed immediately by the solution path as a list of node coordinates `(x, y)`. Include all intermediate nodes from start to end.

**Example Solution Path Format:**
#### [(0, 0), (1, 0), (2, 0), (2, 1), ...]

Listing 2: The LLM prompt used for generating the results discussed in the prompt ablation in Section 4.3.
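As a concrete reading of the triangle rule above, the following sketch counts how many edges of a rule cell a path touches, using the coordinate conventions from the prompts (rule cells at odd/odd coordinates, the path listing every intermediate cell); the function name is ours, and the example is taken from the few-shot puzzle in Appendix H.4.

```python
def touched_edges(path, cell):
    """Count the edges of a rule cell touched by the path.

    `cell` is an (odd, odd) rule-cell coordinate; its four edges are the
    orthogonally adjacent path cells. The path must list every intermediate
    cell, as the prompts require.
    """
    cx, cy = cell
    edges = {(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)}
    return len(edges & set(path))

# The few-shot example in Appendix H.4 has a 3-count triangle ('C-R') at (1,1).
path = [(0, 2), (0, 1), (0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(touched_edges(path, (1, 1)))  # -> 3, satisfying the 'C' (3-edge) rule
```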
H.3 Vision Prompt

You are an expert spatial reasoning AI specializing in solving puzzles from the game 'The Witness'. Your task is to solve the puzzle in the image by finding a valid line from the Start Node to the End Node.

The image shows a Witness puzzle grid of size {grid_size['width']*2}x{grid_size['height']*2}.

In this puzzle:
- The solution is a continuous line from the start circle to the end marker
- The line travels along grid edges, connecting adjacent nodes horizontally or vertically
- The line cannot visit the same node twice
- The line must satisfy all constraints represented by the symbols on the grid
- The line cannot be placed on rule cells
- The line can only travel 1 cell per step (no diagonal moves; provide each step as a separate coordinate)

COORDINATE SYSTEM:
- Nodes are indexed (x, y) where (0,0) is the top-left node
- x increases to the right, y increases downward
- The grid cells have rule symbols located at cells with all odd coordinates
- The line goes AROUND cells containing rules, forming boundaries
- Both line and rule cells are on the same grid; therefore, each intersection has a distance of 2 to the next intersection.

SOLVING RULES:
1. Draw a continuous line from the START NODE (big circle on the line) to the END NODE (rounded end) without visiting the same node twice.
2. The line can only be placed on valid path cells.
3. The line acts as a boundary, potentially dividing the grid cells into one or more distinct regions.
4. All rules associated with symbols on the grid must be satisfied:
   - Dots: The line MUST pass through each dot.
   - Colored squares: All squares within a single region created by the line must be the same color. Different colored squares MUST be separated into different regions by the line.
   - Colored stars: Each star must be paired with EXACTLY one other element of the same color in a region. Other colors are ignored.
   - Triangles: The line must touch EXACTLY the number of edges specified by the number of triangles in that cell (edges are top, right, bottom, left of the cell).
   - Tetris-like polyomino shapes: The region containing this symbol must be shaped EXACTLY like the defined polyshape.
   - Negative polyshapes: These cancel out regular polyshapes if they overlap.

Text description of the puzzle:
{puzzle_data.get("text_visualization", "")}

Analyze the puzzle image carefully and determine the solution path. First, explain your reasoning step-by-step, including key deductions and constraint checks made along the way. Then, provide the final solution as a sequence of node coordinates in (x, y) format, starting with the start node and ending with the end node, after this string: "####". DON'T SKIP ANY intermediate nodes (the distance between each node must be 1). Example coordinate list: [(0,0), (1,0), (2,0), (2,1), ...]

Listing 3: The LLM prompt used for generating the results discussed in the vision ablation in Section 4.3.
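All prompts ask for the answer after a literal "####" marker, so extracting the coordinate list is a simple parsing step; the following is a minimal sketch of one way to do it (the regular expression and function name are ours, not taken from the paper's code).

```python
import re

def parse_solution(response: str):
    """Extract the (x, y) coordinate list following the '####' marker."""
    _, marker, tail = response.rpartition("####")
    if not marker:
        return None  # the model never emitted the marker
    pairs = re.findall(r"\(\s*(\d+)\s*,\s*(\d+)\s*\)", tail)
    return [(int(x), int(y)) for x, y in pairs] or None

reply = "The path hugs the left edge... #### [(0,2),(0,1),(0,0),(1,0)]"
print(parse_solution(reply))  # -> [(0, 2), (0, 1), (0, 0), (1, 0)]
```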
H.4 Few-Shot Example

EXAMPLE PUZZLE GRID:
["+",".","+","+","+","E","+"]
["+","C-R","+","o-K","+","o-K","+"]
["S","+","+","+","+","+","+"]
["+","P-G-112","+","*-G","+","P-B-624","+"]
["+","+","+","+","+","+","+"]
["+","*-G","+","*-G","+","o-K","+"]
["+","+","+",".","+","+","+"]

EXAMPLE POLYSHAPE DEFINITIONS:
Shape 112:
[0,1,0,0]
[0,1,0,0]
[0,1,0,0]
[0,0,0,0]
Shape 624:
[0,1,0,0]
[0,1,1,0]
[0,1,0,0]
[0,0,0,0]

EXAMPLE SOLUTION:
We start at (0,2) and draw a line to (2,0) to reach the dot at (1,0) and surround the 3-count triangle. We then draw a line to (2,2); here we go down to touch the third side of the triangle cell and therefore validate the 3-count triangle. We continue down to (2,6) to validate the polyshape 112 and also the green star with the green polyshape. After this we draw a line to (4,6) to start validating the polyshape 624 by surrounding it. Therefore we have to draw a line to (6,4) over (4,4), which creates a region for the stone at (5,5), which validates the stone. We continue up to (6,2) for the polyshape 624 and then go to (4,2) and after this to (4,0) to finally validate the polyshape 624. This also validates the two green stars at (3,3) and (3,5) with each other and the black stone at (3,1), because it is the only stone in its region. This line also creates a region for the black stone at (5,1), because it is the only stone in its region. Now we can draw a line to (5,0) to reach the end node.
#### (0,2),(0,1),(0,0),(1,0),(2,0),(2,1),(2,2),(2,3),(2,4),(2,5),(2,6),(3,6),(4,6),(4,5),(4,4),(5,4),(6,4),(6,3),(6,2),(5,2),(4,2),(4,1),(4,0),(5,0)

Listing 4: The first example used for generating the results discussed in the few-shot ablation in Section 4.3.

SECOND EXAMPLE PUZZLE GRID:
["+","E","+","+","+","+","+","+","+"]
["+","N","+","N","+","o-B","+","N","S"]
["+","+","+","+","+","+","+","+","+"]
["+","P-W-8992","G","Y-W-18","+","P-W-48","+","P-W-48","+"]
["+","+","+","G","+","+","+","+","+"]

SECOND EXAMPLE POLYSHAPE DEFINITIONS:
Shape 18:
[0,1,0,0]
[1,0,0,0]
[0,0,0,0]
[0,0,0,0]
Shape 48:
[0,1,0,0]
[0,1,0,0]
[0,0,0,0]
[0,0,0,0]
Shape 8992:
[0,0,1,0]
[0,1,1,1]
[0,0,0,0]
[0,0,0,0]

SECOND EXAMPLE SOLUTION:
We start at (8,1) and draw a line to (8,2). Then we draw a straight line to (4,2). From here we go up to (4,0). This creates one region with only a blue stone at (5,1), which makes it valid. The other region contains numerous polyshapes and ylops, but the region already has a valid shape. The P-W-8992 gets placed on the bottom left and combined with the Y-W-18 to form a 2x1 region. The other part of the region can be formed exactly by the two P-W-48 polyshapes. Now we can draw a line to (1,0) to reach the end node.
#### (8,1),(8,2),(7,2),(6,2),(5,2),(4,2),(4,1),(4,0),(3,0),(2,0),(1,0)

Listing 5: The second example used for generating the results discussed in the few-shot ablation in Section 4.3.

I Full Tabular Main Results

Tables 6 to 8 provide the detailed and complete results for the experiments in Section 4.1.

I.1 Difficulty per Level

Model | All | Level 1 | Level 2 | Level 3 | Level 4 | Level 5
Reasoning
o4-mini | 15.8% | 47.7% | 19.5% | 10.7% | 1.2% | 1.1%
o3-mini | 8.2% | 29.1% | 10.2% | 2.5% | 1.2% | 0.0%
QwQ 32B | 5.8% | 20.9% | 5.9% | 2.5% | 1.2% | 0.0%
R1 70B | 4.0% | 17.4% | 2.5% | 1.7% | 0.0% | 0.0%
Instruction
GPT-4.1 | 1.6% | 7.0% | 0.8% | 0.8% | 0.0% | 0.0%
Gemma-3 27B | 1.2% | 3.5% | 0.8% | 0.8% | 0.0% | 1.1%
Qwen 2.5 72B | 0.4% | 0.0% | 1.7% | 0.0% | 0.0% | 0.0%

Table 6: Accuracy (%) for SPaRC puzzles achieved by various LLMs, categorized as Reasoning or Instruction models. The table displays the overall accuracy (All) and the breakdown by puzzle Difficulty Level (1–5) for each model. Performance generally decreases sharply as the difficulty level increases. The highest overall performance is achieved by o4-mini (15.8%). Values are shown in percent (%).
I.2 Path Metrics

Model | Incorrect Start/End | Disconnected Line | Intersecting Line | Rule Cell Crossing | Invalid Path
Reasoning
o4-mini | 3.8% | 27.6% | 31.2% | 51.2% | 59.2%
o3-mini | 3.0% | 13.2% | 8.0% | 56.2% | 63.2%
QwQ 32B | 1.6% | 26.2% | 30.8% | 70.0% | 76.4%
R1 70B | 10.2% | 52.4% | 35.8% | 57.6% | 82.2%
Instruction
GPT-4.1 | 53.8% | 87.0% | 51.0% | 55.0% | 93.6%
Gemma-3 27B | 40.8% | 37.6% | 42.0% | 84.6% | 88.0%
Qwen 2.5 72B | 8.0% | 41.0% | 20.2% | 59.0% | 90.6%

Table 7: Percentage of generated solutions with path violations for SPaRC puzzles across different LLMs. Models are grouped into Reasoning and Instruction categories. Columns show the rate (%) for specific violation types.

I.3 Rule Specific Analysis

Model | Full Set | Gaps | Dots | Stones | Stars | Tri | Polys | St-S | P-Y | G-D-T | D-S-P
Reasoning
o4-mini | 15.8% | 84.0% | 22.0% | 16.0% | 34.0% | 14.0% | 16.0% | 20.0% | 4.0% | 18.0% | 8.0%
o3-mini | 8.2% | 48.0% | 10.0% | 6.0% | 8.0% | 4.0% | 2.0% | 6.0% | 2.0% | 8.0% | 10.0%
QwQ 32B | 5.8% | 52.0% | 6.0% | 8.0% | 28.0% | 2.0% | 20.0% | 8.0% | 2.0% | 0.0% | 6.0%
R1 70B | 4.0% | 32.0% | 4.0% | 2.0% | 2.0% | 4.0% | 8.0% | 6.0% | 0.0% | 4.0% | 2.0%
Instruction
GPT-4.1 | 1.6% | 10.0% | 0.0% | 2.0% | 4.0% | 0.0% | 4.0% | 6.0% | 0.0% | 0.0% | 2.0%
Gemma-3 27B | 1.2% | 6.0% | 0.0% | 0.0% | 2.0% | 0.0% | 14.0% | 2.0% | 2.0% | 0.0% | 0.0%
Qwen 2.5 72B | 0.4% | 2.0% | 0.0% | 0.0% | 0.0% | 0.0% | 8.0% | 0.0% | 0.0% | 0.0% | 2.0%

Table 8: Accuracy (%) for various LLMs on SPaRC puzzles, broken down by puzzle split type. Models are categorized as Reasoning or Instruction. Columns display the overall accuracy (Full Set) and the accuracy (%) on splits featuring specific single rules (Gaps, Dots, Stones, Stars, Tri, Polys) or rule combinations (St-S: Stones & Stars, P-Y: Polys & Ylops, G-D-T: Gaps & Dots & Triangles, D-S-P: Dots & Stars & Polys). Values are shown in percent (%).

J Details on Path Errors

Figure 18 shows examples of the SPaRC puzzle structure and the fundamental path-violation types discussed in Section 4.

(a) Empty puzzle grid. (b) Solved puzzle (correct). (c) Incorrect start/end (incorrect). (d) Disconnected line (incorrect). (e) Self-intersecting path (incorrect). (f) Rule-cell crossing (incorrect).
Figure 18: (a)–(b): An empty puzzle and its solution. (c)–(f): The four kinds of path-generation errors.

K Details on Reasoning Mistakes

Figures 19 to 22 provide four examples of common reasoning mistakes for DeepSeek R1 70B and highlight the steps that lead to each mistake.
Puzzle: 80a59619e323acba. Model: DeepSeek R1 Distill Llama 70B.
Puzzle Grid | Model's Path | Correct Path
Reasoning Excerpts:
... Alright, so I've got this Witness puzzle to solve. Let me try to figure it out step by step. ...
From (8,3) -> (8,2) -> (8,1). From there, perhaps moving left to (7,1), then (6,1), etc., but I need to avoid 'G's and 'N's. ...
Wait, but from (8,3) to (8,2) to (8,1) to (8,0) ... Then from (8,0) to (7,0) to (6,0) ... From (6,0), we can go left to (5,0) or down to (6,1). Let's go left to (5,0). (6,0) -> (5,0). From (5,0), we can go left to (4,0) or down to (5,1). Let's go left to (4,0). (5,0) -> (4,0). ...
So: ... (8,0) -> (7,0), (7,0) -> (6,0), (6,0) -> (5,0), (5,0) -> (4,0), (4,0) -> (3,0) ...
... But let me double-check if I missed any 'G's along the way. For example, in row 0, column 5 is 'G', but the path doesn't go near there. Similarly, row 1 has 'G's at 2 and 5, but the path goes through row 1, columns 0 and 1, which are '+', so that's okay. ...
Yes, this path should work.

Figure 19: Analysis of puzzle 80a59619e323acba with an incorrect solution attempt by DeepSeek R1 Llama 70B Distill. The model tends to commit to a sequence of reasoning steps without thoroughly validating each one, leading to careless mistakes.

Puzzle: cf36a0865a1ed242. Model: DeepSeek R1 70B.
Puzzle Grid | Model's Path | Correct Path
Reasoning Excerpts:
... Since the line starts at (1,0) and ends at (1,4), it's a vertical line in the middle. So maybe the line goes straight down the middle, but that would split the grid into two halves. But then the blue stars are on the top-left and bottom-right, and the red stars are on the top-right and bottom-left. So if the line goes straight down the middle, each blue star would be in a separate region, which might not satisfy the rule because each needs to be paired with exactly one other. Similarly for the red stars. ...
But how can the line enclose both blue stars in one region and both red stars in another, given the line starts and ends in the middle? ...
Wait, perhaps the line can loop around the blue stars. Let me try to imagine a path that starts at (1,0), goes around the top-left blue star, then around the bottom-right blue star, and then proceeds to the end. ...
But the line can't cross itself, so it's challenging. Alternatively, maybe the line goes from (1,0) down to (1,4), but that's a straight line, which might not satisfy the star rules. ...
Wait, maybe I need to create a loop that goes around the top-left and bottom-right blue stars. Let's try: (1,0) → (0,0) → (0,1) → (0,2) → (1,2) → (2,2) → (2,3) → (3,3) is blocked. So we can't. So from (2,3), go down to (2,4). Then left to (1,4). ...
Wait, maybe the line can go from (1,0) to (0,0), then down to (0,1), then right to (1,1) is blocked, so go down to (0,2), then right to (1,2), then down to (1,3) is blocked, so go right to (2,2), then down to (2,3), then right to (3,3) is blocked, so go down to (2,4), then left to (1,4). ...
But this doesn't enclose the blue stars. I'm stuck. ...
I think I'm stuck and need to conclude that I can't find a solution right now.

Figure 20: Analysis of puzzle cf36a0865a1ed242 from the stars split with an incorrect solution by DeepSeek R1 70B. The model fails to realize that the grid can form one large region with two stars of each color, leading to an incorrect path. It often tries to loop around stars and runs into grid cells, which is impossible. The model notices it got stuck and provides its latest attempt as a valid solution.
Puzzle: a49a784ae7d91339. Model: DeepSeek R1 Llama 70B Distill.
Puzzle Grid | Model's Path | Correct Path
Reasoning Excerpts:
... Wait, that seems like a 5x5 grid of nodes, but the actual cells are 2x2 because each cell is four nodes. Hmm, maybe I'm misunderstanding. ...
Wait, I'm getting confused. Let me clarify. The grid is 2x2 cells, each cell being a square. So the nodes are the intersections, making a 3x3 grid of nodes. So the nodes go from (0,0) to (2,2). The start is at (0,2), which is the bottom-left node, and the end is at (2,0), the top-right node. ...
Start at (0,2). From there, go up to (0,1), then right to (1,1) — but (1,1) is a rule cell, so we can't go there. So instead, go right to (1,2), which is a '.' that must be passed through. So from (0,2) to (1,2). ...
Wait, but the line must pass through (2,4). So perhaps from (1,2), go down to (2,2), then right to (2,3), then down to (2,4), which is a '.'. ...
So path so far: (0,2) → (1,2) → (2,2) → (2,3) → (2,4). ...
From (2,4), we need to get to (4,4), which is another '.'. So go right to (3,4), then to (4,4). ...
Now, from (4,4), we need to get to (4,0), another '.'. So go down to (4,3), (4,2), (4,1), (4,0). ...
But wait, (4,0) is a '.' and the end is at (2,0). So from (4,0), go left to (3,0), then to (2,0). But (2,0) is the end point, so that's acceptable. ...
But does this path cover all the '.'s? Let's see: (1,2): yes; (2,4): yes; (4,4): yes; (4,0): yes.
I think this path satisfies all the constraints.

Figure 21: Analysis of puzzle a49a784ae7d91339 from the dots split with an incorrect solution by DeepSeek R1 Llama 70B Distill. The model initially misinterprets the grid size as 5x5 but converges to the correct dimensions after some reasoning steps. The model tries to draw paths on rule cells but corrects itself quickly. Finally, the model mixes up the x and y coordinates, thinking it had drawn the correct path while actually drawing an inversely indexed path.

Puzzle: d9e6e97a86d583d9. Model: DeepSeek R1 70B.
Puzzle Grid | Model's Path | Correct Path
Reasoning Excerpts:
... Okay, so I'm trying to solve this puzzle from The Witness. It's a bit tricky, but I'll take it step by step. Let me first understand the problem. ...
Starting at (4,0), which is the bottom-right corner. The end is at (0,0), top-left. So the line needs to move up and left. ...
From (4,2), move left to (3,2). From (3,2), moving down to (3,3) is blocked by a rule cell, so instead move left to (2,2). ...
This path would create a region on the left that includes (1,1) and (3,1), and a region on the right that includes (1,3) and (3,3). ...
Yes, this should satisfy the rules because each region has only one color of stones.

Figure 22: Analysis of puzzle d9e6e97a86d583d9 from the stones split with an incorrect solution by DeepSeek R1 70B. The model misinterprets the coordinate system, assuming (4,0) is the bottom-right corner, which is incorrect, as (4,4) is the bottom-right. The model also repeatedly attempts to draw a line over rule cells. This leads to an incorrect path that fails to satisfy the puzzle's rules.

L Details on Ablations

Figures 23 to 25 provide more details for the ablation experiments in Section 4.3, considering vision models, alternative prompts, and few-shot examples.

L.1 Vision Mode

Figure 23: Comparison of the standard o4-mini Zero-Shot (blue) and its vision-enabled counterpart o4-mini Vision (orange) on SPaRC puzzles. Left Panel: Bar chart showing the Error Rate (%) for different types of path violations across all generated solutions. o4-mini Vision generally exhibits higher rates of these structural errors. Right Panel: Line chart displaying the Accuracy (%) against puzzle Difficulty Level (1–5). The standard o4-mini Zero-Shot achieves a significantly higher overall accuracy (12.6%, blue dotted line) compared to o4-mini Vision (5.6%, orange dotted line), outperforming it at nearly all difficulty levels.
L.2 Alternative Prompt

Figure 24: Performance comparison of o4-mini using its standard prompt (blue) versus an alternative prompt (orange) on SPaRC puzzles. Left Panel: Bar chart showing the Error Rate (%) for different types of path violations across all generated solutions. The alternative prompt generally reduces the frequency of these structural errors. Right Panel: Line chart displaying the Accuracy (%) against puzzle Difficulty Level (1–5). The alternative prompt results in higher accuracy across all difficulties, improving the overall success rate from 15.8% (standard, blue dotted line) to 21.0% (alternative, orange dotted line).

L.3 Few-Shot

Figure 25: Impact of few-shot prompting on o4-mini's performance and error profile for SPaRC puzzles. Compares zero-shot (blue), one-shot (orange), and two-shot (green) prompting strategies. Left Panel: Bar chart showing the Error Rate (%) for different types of fundamental path violations across all generated solutions. Few significant differences emerge in the error profiles across prompting strategies. Right Panel: Line chart displaying the Accuracy (%) against puzzle Difficulty Level (1–5). While one-shot prompting achieves the highest overall success rate (15.8%, orange dotted line) compared to zero-shot (12.6%, blue dotted line) and two-shot (14.8%, green dotted line), all strategies show a sharp decline in performance as puzzle difficulty increases.

Figure 26: Visual explanation of how each rule looks on the puzzle grid for the annotators of the dataset.

M Details on Human Annotation

M.1 Annotators

The annotators are two Ph.D. students and four research assistants (two bachelor's students and two master's students) in Computer Science and Data Science. They are all male and between 22 and 27 years old. As part of their research jobs, they receive at least the minimum wage in Germany.

M.2 Annotation Instructions

Introduction: This guide provides the rules and instructions for annotating SPaRC puzzles. We want to compare whether there are patterns or similarities that make the puzzles difficult for humans or LLMs to solve. Therefore, we have created 6 test sets of 50 puzzles each. These sets contain puzzles with all possible combinations of rules and grid sizes.

Task: Annotate all the samples in the dataset in as little time as possible per puzzle.
Each puzzle is solvable, but if you can’t think of a solution after a reasonable amount of time (5-10 minutes), you can
click Show Solution or Skip Puzzle to continue. Once you have completed all 50 puzzles, please e-mail the annotated file back to us. By sending the file back to us, you agree that we can publish your annotations anonymously. This includes solve time, required attempts, and solution path.

Rules: The line must connect Start with End via a continuous path without using the same cell twice. It must also follow all rules defined by the puzzle. A visual explanation of the rules can be seen in Figure 26.
• Gaps: The line CANNOT traverse a cell marked by a Gap.
• Dots: The line MUST pass through a cell marked by a Dot.
• Stones: All stones within a single region created by the line must be the same color. Different colored stones MUST be separated into different regions by the line.
• Stars: Each star must be paired with EXACTLY one other element of the same color in a region. Other colors are ignored.
• Triangles: The line must touch EXACTLY the number of edges specified by the triangle count (edges are top, right, bottom, left of the cell).
• Polyshapes (Poly): The region containing this symbol must be shaped EXACTLY like the defined polyshape. The shape must fit entirely within the region's boundaries. If multiple positive polyshapes are in one region, the region's shape must accommodate their combined, non-overlapping forms (like Tetris pieces).
• Negative Polyshapes (Ylop): The negative polyshape can only be placed on top of already placed normal polyshapes. The negative polyshapes must fit on the grid, but can allow overlap between normal polyshapes or placement of polyshapes that extend beyond the area defined by the line. If the negative polyshapes exactly cancel the normal polyshapes, there is no restriction on the grid shape anymore. A negative polyshape only counts as valid if it is used.

Example Dataset: You can use the following dataset to experiment and get familiar with the puzzles and all rules: Link redacted for anonymity.

Important Hints:
• The annotation state gets saved even when closing the window but, to be safe, always download the current annotated dataset when you stop annotating.
• If you reload the page, don't overwrite the existing data.

N Acknowledgment of AI Usage

AI Usage Card based on Wahle et al. (2023).

Project Details: Project Name: SPaRC: A Spatial Pathfinding Reasoning Challenge. Domain: Paper. Key Application: Dataset. Contact: Lars Benedikt Kaesberg, l.kaesberg@uni-goettingen.de, University of Göttingen.

Models: ChatGPT (4o, 4.5, o3), Gemini (2.5 Pro), Claude (3.7 Sonnet).

Literature Review: Finding literature (ChatGPT, Gemini); finding examples from known literature or adding literature for existing statements; comparing literature.

Writing: Generating new text based on instructions; assisting in improving own content or paraphrasing related work (ChatGPT, Gemini); putting other works in perspective.

Coding: Generating new code based on descriptions or existing code (ChatGPT, Gemini, Claude); refactoring and optimizing existing code (ChatGPT, Gemini, Claude); comparing aspects of existing code.

Ethics: Why did we use AI for this project? Efficiency/speed; expertise access. What steps are we taking to mitigate errors of AI? –. What steps are we
Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence

Gouki Minegishi, Hiroki Furuta, Shohei Taniguchi, Yusuke Iwasawa, Yutaka Matsuo (The University of Tokyo)

*Equal contribution. Correspondence to: Gouki Minegishi <minegishi@weblab.t.u-tokyo.ac.jp>. Proceedings of the 42nd International Conference on Machine Learning, Vancouver, Canada. PMLR 267, 2025. Copyright 2025 by the author(s).

Abstract

Transformer-based language models exhibit In-Context Learning (ICL), where predictions are made adaptively based on context. While prior work links induction heads to ICL through a sudden jump in accuracy, this can only account for ICL when the answer is included within the context. However, an important property of practical ICL in large language models is the ability to meta-learn how to solve tasks from context, rather than just copying answers from context; how such an ability is obtained during training is largely unexplored. In this paper, we experimentally clarify how such meta-learning ability is acquired by analyzing the dynamics of the model's circuits during training. Specifically, we extend the copy task from previous research into an In-Context Meta Learning setting, where models must infer a task from examples to answer queries. Interestingly, in this setting, we find that there are multiple phases in the process of acquiring such abilities, and that a unique circuit emerges in each phase, contrasting with the single-phase change in induction heads. The emergence of such circuits can be related to several phenomena known in large language models, and our analysis leads to a deeper understanding of the source of the transformer's ICL ability.

1. Introduction

Transformer-based language models (Vaswani et al., 2017) show an intriguing ability to perform In-Context Learning (ICL) (Brown et al., 2020; Xie et al., 2021; Garg et al., 2022; Dong et al., 2024). ICL is the ability to predict the response to a query based on context without any additional weight updates. A widely adopted application of ICL is few-shot learning, in which only a small number of examples in the context guide the model's response to a new query. Due to this unique capability, ICL has gained a lot of attention in the research community, and several approaches, such as Bayesian inference (Xie et al., 2021) and meta-gradient descent (Von Oswald et al., 2023), have been proposed to uncover its underlying mechanisms.

One of the popular approaches to understanding ICL is mechanistic interpretability: reverse-engineering the computations performed by models (Elhage et al., 2021). A key focus within this framework is the study of circuits, subgraphs with distinct functionality that serve as fundamental building blocks of neural network behavior (Wang et al., 2022; Conmy et al., 2023a). Notably, Olsson et al. (2022) uncovered induction heads, a specific circuit mechanism that plays a crucial role in enabling ICL. Induction heads recognize the repeating pattern [A][B] . . . [A] within the context and predict [B] as the next token through a match-and-copy operation (Figure 1-(a)). The existence of induction heads has been further investigated under more complex tasks, such as performing semantic matching (Ren et al., 2024), serving as subcomponents of circuits for natural language tasks within
LLMs (Wang et al., 2022; Merullo et al., 2024), and engaging in intricate interactions with multi-head attention (Singh et al., 2024).

However, the copy mechanism described by the induction head explains only a fraction of few-shot ICL. Consider, for instance, the following ICL scenario in a Country-to-Capital task, based on Hendel et al. (2023):

France → Paris, Spain → Madrid (examples), Japan (query) → ? (prediction)

It is well known that ICL can enhance performance in this scenario; however, this improvement cannot be explained merely by retrieving similar examples through induction heads. A straightforward way to explain this ability is to assume that the model infers the task from the examples and then uses this inferred task to make predictions. For example, Hendel et al. (2023) and Todd et al. (2024) demonstrate that tasks are internally represented as vectors (i.e., task vectors) within the LLM. This task inference ability is recognized as a form of meta-learning (Min et al., 2022a). However, it remains unclear exactly what kind of circuit implements this meta-learning or how the circuit is acquired.

In this study, our goal is to elucidate how such meta-learning capability is acquired. To that end, we extend the copy task from previous research (Reddy, 2023) to a problem setting that requires task inference, which we call the In-Context Meta-Learning (ICML) setting. We then train a simplified transformer in this extended setting and analyze changes in its internal circuits during the training process. In this setting, as shown in Figure 1-(a), there exists a set of multiple tasks, and the answers differ from task to task, so the model needs to infer the task from the examples to answer the query.

Interestingly, we observe learning dynamics in this setting that differ significantly from the case of simple copying tasks. First, we find that the model undergoes multiple learning phases while acquiring meta-learning capabilities, unlike the single phase typically observed in copying tasks. More specifically, we find that in the first phase, a bigram-type circuit emerges that focuses solely on the query, ignoring the context and relying only on the model's weights. In the second phase, a circuit emerges that pays attention only to the labels in the context. Finally, a circuit emerges that chunks each example pair into a single token.

We introduce novel metrics to measure these three circuits and show that the abrupt changes in these metrics align closely with the sudden jumps in accuracy. Notably, the label-focused circuit that emerges in the second phase suggests that, while acquiring meta-learning capabilities, the model may initially learn to identify tasks by examining only the set of labels, without considering the correspondence between classes and labels. The existence of the label-focused circuit also corresponds to the phenomenon reported in previous studies (Min et al., 2022b) that LLMs maintain high ICL performance even under random label assignments, which is one explanation for the unique nature of LLMs. We also examine the case of a
multi-head model, which is a more practical setting; sudden jumps in accuracy become less apparent, and different heads can still specialize in parallel — for instance, one head may converge on one circuit while another converges on a different one. Although this parallel specialization leads to smoother accuracy improvements, our circuit-level metrics uncover hidden circuit emergence, revealing that even though learning phases remain invisible in the accuracy curve, the underlying circuits still change abruptly. This observation suggests that even when clear phase changes are not observed on the loss curve, as in the case of LLM training, abrupt changes can still occur in the circuits, which helps bridge the gap between toy experiments in the study of mechanistic interpretability and practical scenarios.

2. Related Works

2.1. In-Context Learning

Brown et al. (2020) demonstrated with GPT-3 the remarkable ability of LLMs to perform a wide range of tasks using only a few examples provided in the input prompt. Few-shot ICL is the ability of LLMs to solve new tasks by examining a sequence of (input, label) pairs that share a common concept within the context. Rather than updating their internal parameters, these models rely solely on the contextual examples to deduce the task's rules. In general, the ability to learn from few-shot examples is associated with meta-learning (Wang et al., 2020; Hospedales et al., 2021), and the success of ICL demonstrates the strong ability of LLMs to meta-learn. In effective ICL, the model infers the underlying task from the examples provided and refines its predictions based on the inferred task. Although this meta-learning-based ability is widely used, the underlying mechanisms enabling LLMs to perform these tasks remain poorly understood, and some puzzling results have been observed. For example, Min et al. (2022b) demonstrated that accuracy improves even when the labels in the examples are randomized. Additionally, Chan et al. (2022) demonstrated that data distributional properties significantly influence ICL performance.

To understand ICL, various approaches have been proposed. For example, Von Oswald et al. (2023) and Dai et al. (2023) demonstrated that transformers can solve linear regression problems within the context by leveraging meta-gradients. Based on this, analytical methods have been applied to study the ability of transformers to handle a range of tasks, including discrete functions (Bhattamishra et al., 2023), nonlinear functions (Kim & Suzuki, 2024), and classification problems (von Oswald et al., 2023).

2.2. Mechanistic Interpretability

One promising approach to understanding ICL is mechanistic interpretability (MI), which seeks to uncover the internal mechanisms of models (Olah et al., 2020; Elhage et al., 2021). A key focus of MI is the study of circuits, which are subgraphs with distinct functionality that serve as fundamental building blocks of neural network behavior (Wang et al., 2022; Conmy et al., 2023b; Merullo et al., 2024).

One such circuit studied in the context of ICL is the induction head (Olsson et al., 2022). Induction heads form a two-layer structure; the latter layer is commonly called the induction head, and the earlier layer is referred to as the previous token head. The previous token head attends to and copies the preceding token into the current token; when few-shot examples are present in the context, it chunks each (x, ℓ) pair into a single token. Induction heads then perform a match-and-copy operation, matching a query derived from the current token with a key derived from the previous token head's output. For more details on the induction head, see Appendix A.

Figure 1. (a) Task Structure: Previous studies focused on a copying-task setup, where the query's answer remains unchanged by context, allowing the model to either memorize pairs or match and copy from context. In contrast, this work explores a more practical scenario where (x, ℓ) pairs vary by task, requiring the model to infer the task from examples and predict the query's answer. (b) Network Structure: we mainly use two layers of attention followed by a token-wise MLP layer. The task is consistent within the context.
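To make the match-and-copy operation concrete at the level of token sequences (rather than actual attention weights), here is a toy sketch of the behavior: given a repeated pattern [A][B] . . . [A], it predicts the token that followed the previous occurrence of the current token. This illustration is ours, not code from the paper.

```python
def match_and_copy(tokens):
    """Toy induction-head behavior: find the previous occurrence of the
    last token (the query) and 'copy' the token that followed it."""
    query = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):  # scan earlier positions, most recent first
        if tokens[i] == query and i + 1 < len(tokens) - 1:
            return tokens[i + 1]
    return None  # no earlier occurrence to copy from

print(match_and_copy(["A", "B", "C", "D", "A"]))  # -> "B"
```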
Further research has shown that induction heads can perform soft matching (Crosbie & Shutova, 2024), emerge naturally in multi-head attention settings (Singh et al., 2024), and are present in LLMs (Cho et al., 2024). Despite these advancements, such studies have primarily focused on tasks where the context explicitly includes the label to be copied, such as direct copying tasks. Therefore, induction heads alone cannot fully explain the meta-learning capabilities in more practical scenarios.

3. Experimental Setup

3.1. In-Context Meta Learning

To analyze the meta-learning capabilities of ICL, building on prior work (Chan et al., 2022; Reddy, 2023), we design a simple experimental setting named In-Context Meta-Learning (ICML), illustrated in Figure 1-(a). Unlike previous setups, where copying labels or memorizing (x, ℓ) pairs was sufficient to predict the answer, our setting requires the model to meta-learn the underlying task (τ) from the (x, ℓ) context pairs. The network is trained to predict the label of a target $x_q$ given an alternating sequence of N items and N labels:

$$\underbrace{x_1, \ell^{\tau}_1, x_2, \ell^{\tau}_2, \ldots, x_N, \ell^{\tau}_N}_{\text{examples}},\ \underbrace{x_q}_{\text{query}},\ \underbrace{?}_{\text{prediction}}$$

Here, τ represents the task, where each task defines a unique (x, ℓ) mapping with labels ℓ randomly assigned to items x. The total number of tasks is denoted by T, and the context presented to the model always corresponds to a single task. Since the query $x_q$ may not appear among the in-context examples, the network needs to infer the task τ from the context instead of simply copying a label.

Following Reddy (2023), we represent each item x and label ℓ in a (P+D)-dimensional space. Of these dimensions, P is dedicated to positional information via a one-hot encoding (with P = 65 across all experiments), while D captures the content. To encourage translation-invariant operations, each input sequence is randomly placed within a window of size (2N+1) spanning the range [0, P−1]. Each class k is associated with a D-dimensional mean vector $\mu_k$, whose entries are drawn independently from $\mathcal{N}(0, 1/D)$. For an item $x_i$ assigned to class k, we add noise η (sampled from the same distribution) scaled by ϵ, giving

$$x_i = \frac{\mu_k + \epsilon\,\eta}{\sqrt{1+\epsilon^2}},$$

where ϵ governs within-class variation and the denominator ensures $\|x_i\| \approx 1$. Finally, each class is linked to one of L labels, with L ≤ K. To control the proportion of queries that can be solved by copying from the context, the same item as the query is included in the context with probability $p_B$. We use T = 3, K = 64, L = 32, N = 4, D = 63, ϵ = 0.1, and $p_B$ = 0 unless otherwise specified. In our ICML setup, we can reproduce the standard match-and-copy induction-head mechanism from Reddy (2023) by setting T = 1 and $p_B$ = 1; for detailed results, see Appendix A.

3.2. Network Structure

Following prior research (Reddy, 2023), we use a two-layer attention-only transformer, shown in Figure 1-(b), where each layer µ comprises m heads (indexed by h) and a causal mask ensures position i attends only to positions j ≤ i. A two-layer MLP classifier then produces the label probabilities. For the complete set of equations and hyperparameter details, see Appendix B. In this architecture, each head h in layer µ computes attention weights $\{p^{(\mu,h)}_{ij}\}$, quantifying how strongly position i (query) attends to position j (key). These outputs are aggregated across heads and passed to the MLP, which makes the final label predictions. The classifier is a two-layer MLP with ReLU activations, followed by a softmax layer producing probabilities over L labels. We train this network to classify the query item $x_q$ into one of the L labels using cross-entropy loss. Both the query/key dimension and the MLP hidden-layer dimension are set to 128. We use a batch size of 128 and optimize with vanilla stochastic gradient descent at a learning rate of 0.01.
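A minimal sketch of the ICML data-generating process from Section 3.1, using the defaults above (T = 3, K = 64, L = 32, N = 4, D = 63, ϵ = 0.1). The P-dimensional positional one-hot and the $p_B$ copy option are omitted for brevity, and representing labels by random content vectors is our simplification; function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, L, N, D, EPS = 3, 64, 32, 4, 63, 0.1

# Class means and label content vectors, entries ~ N(0, 1/D).
mu = rng.normal(0.0, 1.0 / np.sqrt(D), size=(K, D))
label_emb = rng.normal(0.0, 1.0 / np.sqrt(D), size=(L, D))
# Each task tau randomly assigns one of L labels to each of K classes.
task_labels = rng.integers(0, L, size=(T, K))

def sample_item(k):
    """x = (mu_k + eps * eta) / sqrt(1 + eps^2), so that ||x|| stays near 1."""
    eta = rng.normal(0.0, 1.0 / np.sqrt(D), size=D)
    return (mu[k] + EPS * eta) / np.sqrt(1.0 + EPS**2)

def sample_sequence(tau):
    """Build x_1, l_1, ..., x_N, l_N, x_q and return it with the query's target label."""
    classes = rng.integers(0, K, size=N + 1)  # N in-context classes plus the query class
    seq = []
    for k in classes[:-1]:
        seq.append(sample_item(k))
        seq.append(label_emb[task_labels[tau, k]])
    seq.append(sample_item(classes[-1]))  # query item x_q
    return np.stack(seq), int(task_labels[tau, classes[-1]])

tokens, target = sample_sequence(tau=0)
print(tokens.shape, target)  # (9, 63) for N = 4, plus the label index to predict
```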
Figure 2. (left) Changes in accuracy and loss across three distinct phases during training, with lighter-shaded curves indicating different random seeds. Each phase is highlighted with a different background color: Phase 1 (yellow), Phase 2 (orange), and Phase 3 (red). (right) Visualization of the attention maps (circuits) corresponding to each phase, with characteristic attention patterns indicated by red arrows and their circuits displayed above. Specific attention types, such as Bigram, Label Attention, and Chunk Example, emerge at different phases, reflecting the model's adaptation to the task. Quantitative results for these attention maps are provided in Figure 4.

Figure 3. Accuracy (blue) and ∆Accuracy (green) as functions of the training step. Here, ∆Accuracy = Acc(t + ∆t) − Acc(t) with ∆t = 100. Vertical dashed lines indicate where ∆Accuracy exceeds 0.025, marking the transition points between the three observed phases (Phase 1, Phase 2, Phase 3).

4. Abrupt Learning and Emergent Circuits

4.1. Three-Phase Dynamics and Circuit Overview

We conducted experiments under the ICML setting with three tasks (i.e., T = 3). As shown on the left side of Figure 2, the results reveal three distinct phases of accuracy change, each accompanied by a corresponding drop in loss. The observed dynamics are as follows: the first accuracy plateau occurs at around 30–40%, the second at approximately 75%, and the final phase reaches 100%. To clearly delineate these three phases, we define the following metric:

$$\Delta\text{Accuracy} = \text{Accuracy}(t + \Delta t) - \text{Accuracy}(t),$$

where t denotes the optimization step and we set ∆t = 100. In Figure 3, we plot this quantity along with the model's accuracy, marking vertical lines at steps where ∆Accuracy > 0.025. These lines serve as boundaries between the three observed phases. Based on this threshold, we partition the model's behavior into Phase 1, Phase 2, and Phase 3 throughout the remainder of this paper.

On the right side of Figure 2, we visualize the attention maps from the two layers of the model during each phase. The attention patterns emerging during the learning process can be categorized into the following three types:

1. Bigram: Strong attention from the query token (x_q) to itself.
2. Label Attention: Strong attention focused on the label tokens of the (x, ℓ) pairs within the context.
3. Chunk Example: Attention that aggregates each (x, ℓ) token pair in the context into a single token, similar to the induction head's previous token head.

As visualized on the right side of Figure 2, the combinations of these attention types differ between the first and second layers across the three phases:

Phase 1 (Non-Context Circuit; NCC): Both layers use bigram attention, ignoring the context and relying solely on the model's weights. At this stage, the model predicts the label based only on the query, limiting accuracy to around 1/T; accordingly, accuracy stagnates at around 30–40%.

Phase 2 (Semi-Context Circuit; SCC): The first layer exhibits label attention, while the second layer focuses on the query token (bigram attention). The model not only leverages weight memory but also attends to the label tokens in the context (i.e., half of the context) to infer possible answers, improving accuracy to around 75%.

Phase 3 (Full-Context Circuit; FCC): The first layer aggregates each (x, ℓ) pair into a single token (chunk example), while the second layer focuses on these aggregated tokens (label attention) to predict the label, thereby using the entire context. Through this abstraction of the pairwise relationship (i.e., task inference), the model can produce correct answers for the query. Once the model learns this circuit, it achieves 100% accuracy. The relationship between each circuit and its corresponding attention pattern is summarized in Table 1.

Table 1. Summary of circuits, accuracy, and layer-wise attention.
Circuit | Accuracy (T = 3) | Layer 1 | Layer 2
NCC | 30–40% | Bigram | Bigram
SCC | ≈75% | Label Attention | Bigram
FCC | 100% | Chunk Example | Label Attention

Table 2. Formulas of the three attention metrics.
Metric | Formula
Bigram | $p^{\mu,h}_{2N+1,\,2N+1}$
Label Attention | $\sum_{k=1}^{N} p^{\mu,h}_{2N+1,\,2k}$
Chunk Example | $\frac{1}{N}\sum_{k=1}^{N} p^{\mu,h}_{2k,\,2k-1}$

Figure 4. Evolution of the three attention metrics (Bigram, Label Attention, and Chunk Example) across optimization steps for the first (green) and second (red) layers. The shaded regions represent the three learning phases: Phase 1 (yellow), Phase 2 (orange), and Phase 3 (red), defined by ∆Accuracy (Figure 3). Each metric shifts cleanly at the phase boundaries, demonstrating a close correspondence between accuracy improvements and circuit-level transformations.
4.2. Quantifying Circuit Emergence

To quantitatively measure these circuits, we propose three metrics based on the attention maps of each layer. Let p^(µ,h)_{i,j} represent the attention from token j to token i in the h-th head of the µ-th layer, and let the context length be 2N+1 (in this case, N = 4). We define three primary attention-based metrics, with precise formulas provided in Table 2. Here, we briefly describe what each metric represents: (1) the Bigram metric captures the attention from the query token to itself; (2) the Label Attention metric measures the total attention from the query token to the label tokens within the context; (3) the Chunk Example metric assesses the attention from x to ℓ within each (x, ℓ) pair.

The plots in Figure 4 illustrate how these metrics evolve in the first and second layers across the three phases. For the Bigram metric, both the first and second layers show high values at the moment of the initial jump in accuracy, marking the formation of the NCC. Then, at the beginning of Phase 2, the Bigram metric in the first layer decreases significantly while that in the second layer remains high, and Label Attention in the first layer rises; together these changes lead to the formation of the SCC. At the start of Phase 3, the Chunk Example metric in the first layer increases, and the Label Attention metric in the second layer also becomes high, resulting in the formation of the FCC. Importantly, these metric transitions align closely with the corresponding jumps in model accuracy, supporting the view that the metrics provide a valid and quantitative perspective on the circuit changes observed during the three phases, as depicted on the right side of Figure 2.
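As a concrete reading of Table 2, the sketch below computes the three metrics from a single head's attention map; the function name and the 0-based index bookkeeping are illustrative.

```python
# Circuit metrics for one head's attention map p of shape (2N+1, 2N+1), where p[i, j]
# is how strongly position i (query) attends to position j (key). Formulas follow Table 2,
# which is 1-based; here (0-based) items sit at 2k-2, labels at 2k-1, and the query at 2N.
import numpy as np

def circuit_metrics(p, N):
    q = 2 * N                                                    # query position
    bigram = p[q, q]                                             # query attends to itself
    label_attention = sum(p[q, 2 * k - 1] for k in range(1, N + 1))
    chunk_example = np.mean([p[2 * k - 1, 2 * k - 2] for k in range(1, N + 1)])
    return bigram, label_attention, chunk_example
```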
4.3. Deeper Look at the Semi-Context Circuit

How SCC Drives Accuracy. In Phase 2, the model forms the SCC, using label information from the context in addition to the query. We provide a theoretical analysis of why this leads to improved accuracy and empirically validate our theory through controlled experiments. To clarify the SCC's behavior, we tested the following simplified conditions:

1. The number of classes (K) equals the number of labels (L), with no duplication.
2. The input context (including the query) contains no duplicate classes.
3. The number of tasks (T) is set to 2, and there are no common (x, ℓ) pairs shared across tasks.
4. To specifically focus on the SCC, a mask is applied during training so that only the circuits associated with the SCC remain (details are provided in Appendix C).

In Phase 1, since there are two tasks, the model has a 50% chance of predicting correctly by random guessing. In other words, the model's prediction reduces to a binary choice for each input query (x_q). Once label information becomes usable, the binary choice can potentially be narrowed further. This occurs when one of the labels corresponding to the two options is present in the context. In this scenario, the label in the context is definitively not the correct answer for the query, as per the defined conditions. Thus, the answer becomes uniquely determinable, increasing accuracy. Following the derivation in Appendix D, the probability of one of the labels appearing in the context is

$$p = 1 - \frac{\binom{K-2}{4}}{\binom{K-1}{4}}.$$

Therefore, the theoretical accuracy achievable with the SCC can be expressed as:

$$\text{Theoretical Accuracy} = p \cdot 1 + (1 - p) \cdot 0.5.$$

Figure 5. Comparison of theoretical accuracy (dashed lines) and model accuracy for different class counts (K). The close alignment between theoretical predictions and experimental results confirms the validity of the theoretical analysis.

Figure 5 shows the theoretical accuracy alongside the accuracy achieved by a model trained with only the Phase 2 attention circuits remaining. The class/label counts were varied as K = {8, 16, 32}. The near-perfect agreement between the theoretical and empirical results confirms both the validity of our derivation and the role of the SCC in boosting accuracy.
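The theoretical value is easy to evaluate directly; a minimal sketch (standard library only) for the class counts used in Figure 5 is shown below. For the N = 4 context used here, the binomial ratio simplifies to (K−5)/(K−1), so p = 4/(K−1).

```python
# Theoretical SCC accuracy: p = 1 - C(K-2, 4) / C(K-1, 4), accuracy = p + (1 - p) / 2.
from math import comb

for K in (8, 16, 32):
    p = 1 - comb(K - 2, 4) / comb(K - 1, 4)      # equivalently 4 / (K - 1)
    acc = p * 1.0 + (1 - p) * 0.5
    print(f"K={K}: p={p:.3f}, theoretical SCC accuracy={acc:.3f}")
```

As K grows, p shrinks and the theoretical SCC accuracy approaches the 50% Phase 1 baseline, consistent with the behavior discussed in Section 4.4.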
Random-Label Robustness of SCC. We focus on the tendency of the SCC to make predictions based solely on the labels and the query. We hypothesize that this circuit explains the puzzling phenomenon that ICL performance improves even when random labels are used, as noted in Min et al. (2022b). Min et al. (2022b) demonstrated that replacing labels randomly within examples results in only a marginal performance drop, suggesting that ICL does not rely heavily on the (x, ℓ) pairs. To investigate this phenomenon, we define an out-of-distribution (OOD) evaluation in which the labels in each context pair are randomly permuted. Specifically, we consider:

$$\underbrace{x_1, \ell^{\tau}_{\pi(1)}, x_2, \ell^{\tau}_{\pi(2)}, \ldots, x_N, \ell^{\tau}_{\pi(N)}}_{\text{examples}},\ \underbrace{x_q}_{\text{query}},\ \underbrace{?}_{\text{prediction}}$$

Here, π is a random permutation on {1, 2, . . . , N}, meaning that ℓ^τ_{π(i)} replaces the original label ℓ^τ_i. By measuring the model's accuracy under these shuffled labels, we obtain the Random-Label Accuracy (RLA). In Figure 6, we compare the RLA with the training accuracy. Similar to the rise observed in Phase 2, when the SCC is acquired, the RLA also increases. This suggests that the reason for the improved performance with random labels, as seen in Min et al. (2022b), is the existence of circuits similar to the SCC within LLMs.

Figure 6. Comparison of training accuracy and random-label accuracy (RLA). The plot demonstrates the rise in both metrics, with the RLA following a trend similar to the Phase 2 emergence. This indicates that the SCC acquired in Phase 2 contributes to improved accuracy even with shuffled labels.

4.4. Effects of Data Properties on Circuit Emergence

Previous studies have indicated that certain properties of the training data, such as burstiness, can influence the emergence of ICL (Chan et al., 2022) and of induction heads (Reddy, 2023). In this work, we explore how these data properties affect the development of circuits in our ICML setting, with the aim of advancing our understanding of the multi-phase emergence of these circuits.

As mentioned in Section 3, the variables capturing the characteristics of the data include the number of tasks T, the number of classes K, and the noise magnitude ϵ. In addition, following Chan et al. (2022), we adopt rank-frequency distributions over both classes and tasks, f(k) ∼ k^{−α} and f(τ) ∼ τ^{−β}, which follow a power-law form commonly known as Zipf's law (Zipf, 1949) (see Appendix E for details). The default values are T = 3, K = 64, ϵ = 0.1, α = 0, and β = 0. The results of varying these parameters are shown in Figure 7. For results obtained by varying p_B, see Appendix F.

In Figure 7-(a), we present the results of varying the number of tasks T. As T increases, Phase 1 accuracy decreases (approximately proportional to 1/T). When T = 1, the setup aligns with previous studies (see Figure 1), and the model's accuracy increases in a single phase rather than undergoing multiple phases. Conversely, for T ≥ 2, the model consistently exhibits three distinct phases. This indicates that the multi-phase phenomenon is robust to the number of tasks, and that introducing additional tasks in the ICL setting can provide new empirical insights.

Figure 7. The relationship between learning-phase dynamics and data-distribution properties, explored by varying key parameters: (a) the number of tasks (T), (b) the number of classes (K), (c) the within-class variation (ϵ), (d) the class-rank frequency exponent (α, class sampling ∝ k^{−α}), and (e) the task-rank frequency exponent (β, task sampling ∝ τ^{−β}). Default values are T = 3, K = 64, ϵ = 0.1, α = 0, and β = 0. The plots show how these variations influence accuracy and the emergence of learning phases.

In Figure 7-(b), when K is small (e.g., K = 32), the model tends to skip Phase 1 and transition directly to Phase 2. In contrast, when K is large (e.g., K = 128, 256), the model skips Phase 2 and jumps directly from Phase 1 to Phase 3. This can be explained by the theoretical values derived in Section 4.3: increasing the number of classes brings the accuracy in Phase 2 closer to that in Phase 1, effectively making Phase 2 unobservable for large K. In Figure 7-(c), increasing ϵ (the within-class variation) leads to skipping Phase 2. Moreover, when ϵ is 1, Phase 1 is also skipped.
Following the results of Chan et al. (2022), higher values of ϵmake it more difficult for the model to memorize the (x, ℓ)pairs in its weights, and thus it shifts its focus toward leveraging the context. The observation that NCC is skipped entirely when ϵ= 1 aligns with this trend. Although SCC is a circuit that uses the context, it inherits the nature of NCC, causing it to be skipped as ϵ increases. In Figure 7-(d), we see that increasing αlikewise tends to skip Phase 1 or Phase 2. The heightened sampling bias makes it more challenging to memorize pairs in the weights, so the model more readily exploits context-based information. As a result, the NCC or SCC does not emerge. In summary, the results suggest that when the model finds it difficult to memorize (x, ℓ)pairs (larger ϵorα) neither NCC nor SCC emerges. In Figure 7-(e), we examine how varying the task sampling biasβaffects both the average accuracy across tasks and the accuracy of each individual task. While changing βleads to only minor differences in the overall average trend, the accuracy on a per-task basis varies considerably with β. In particular, when βis high (e.g., β= 1), the model tends to memorize the most frequent task (i.e., τ= 0) first, causing the remaining tasks to skip NCC and progress directly to forming FCC. Additional results for larger values of Tandvarying context length ( N) are provided in Appendix I and Appendix J, respectively. 5. Multi-Head Enhances Circuit Discovery 5.1. Parallel Circuit Exploration To investigate a more practical scenario, we extend our anal- ysis to multi-head attention. Figure 8 compares the accuracy changes for models with two heads and one head. In the left panel of Figure 8, we observe that learning phases become less pronounced when using multi-head attention. A closer examination of the attention maps for each head (as shown in the right panel of Figure 8) reveals that different heads specialize in distinct functions. Specifically, one head learns circuits resembling NCC, while another head becomes FCC. This parallel specialization provides a smoother trajectory of accuracy improvement, in contrast to the multi-learning phase observed in single-head models. These findings suggest that multi-head attention allows for parallel exploration of circuits, improving the efficiency of circuit discovery. As a result, the multiple phase char- acteristic of single-head models are absent in multi-head configurations. This behavior aligns with observations in LLMs, where multi-head attention enables different heads to serve distinct functions, leading to smoother accuracy im- provements, as seen in Figure 8. Results for a larger number of heads are provided in the Appendix G. 5.2. Hidden Circuit Emergence In Figure 8, we observe multiple attention heads lead to smoothing the accuracy improvement. To gain deeper in- sights into this phenomenon, we analyze how the internal circuits evolve by using the circuit metrics summarized in Table 2. In Figure 9 (left), we present the circuit met- 7 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence Head #2Head #1 1-la yer 1-la yer2-la yer2-la yerBigramBigram ChunkExampleLabel Attention Figure 8. Comparison of accuracy
dynamics between single-head (blue) and multi-head (orange) attention models (left). The multi- head model exhibits smoother accuracy improvements, without the distinct learning phases observed in the single-head model. On the right, the attention maps for the two heads in the multi-head model are visualized. Head 1 specializes in NCC, while Head 2 adopts circuits resembling FCC. These findings indicate that multi-head attention allows parallel circuit discovery, enhancing the efficiency of the learning process. rics for Bigram and Label Attention in Head 2. Notably, around the 30,000th training step, the Bigram metric ex- hibits a pronounced increase in the second layer, whereas the Label Attention metric is notably larger in the first layer. The right panel displays the corresponding attention maps, which clearly demonstrate an SCC-like pattern, illustrating how the model’s attention shifts between bigram-driven and label-focused mechanisms. The attention maps on the Fig- ure 9 (right) correspond to the model’s behavior at 30,000 training steps, as indicated by the vertical dashed line. A complete set of metrics is provided in Appendix H. These results suggest that, even though we do not observe abrupt learning in accuracy under the multi-head config- uration, a hidden circuit emerge within the model’s inter- nal mechanisms. This hidden phenomenon implies that, in more practical scenarios (such as large-scale language model where the loss typically decreases in a smooth fash- ion), the model’s internal circuits may still undergo signifi- cant emergent shifts. 6. Discussion We introduced controlled experimental called In-Context Meta-Learning (ICML), designed to move beyond simple copy tasks by requiring task inference. We then investigate how a 2-layer, attention-only transformer acquires ICL abili- ties, inspired by induction head research (Olsson et al., 2022; Reddy, 2023). Although our model is much smaller than those used in large-scale interpretability research (Wang et al., 2022; Merullo et al., 2024; Templeton et al., 2024; Gao et al., 2024), this controlled design revealed novel in- sights, including multi-learning phases that illuminate how the model’s internal circuits evolve. Moreover, the observed random-label robustness (Section 4.3) and multi-head be- haviors, where the loss decreases smoothly (Section 5), both align with findings in LLMs. These results connect small- Head #2 Layer2 Head #2 Layer1Figure 9. Circuit metrics (left) and attention maps (right) for Bi- gram (Head 2) and Label Attention (Head 2) in multi head setting. The left plots depict the progression of Accuracy (blue), Layer 1 Metrics (green), and Layer 2 Metrics (red) over training steps. The attention maps on the right correspond to the model’s behavior at 30,000 training steps, as indicated by the vertical dashed line. scale experiments to practical LLMs, clarifying ICL mecha- nisms. Additional related work is presented in Appendix K. Relationship to Prior Internal-Circuit Research Pre- vious investigations taking an internal-circuit approach to ICL have largely focused on induction heads, which em- ploy a match-and-copy mechanism (Ren et al., 2024; Cho et al., 2024). In contrast, by adopting a more practical meta- learning perspective, our study reveals multi-phase circuits that initially memorize examples and then evolve to infer the underlying task, which differs from the single-learning phase commonly observed
in induction heads. While both induction heads and our Full-Context Circuits (FCC) chunk contextual (x, ℓ) pairs into a single token in the first layer, the second layer diverges: induction heads retrieve only a label, whereas the FCC further aggregates (Chunk Example → Label Attention in Table 1). This shared mechanism in the first layer implies that even a simple copy task contributes to meta-learning-like ICL capabilities. In addition, consistent with earlier findings (Chan et al., 2022; Singh et al., 2023; Reddy, 2023), these results highlight the key role of dataset characteristics in circuit formation and ICL performance.

Implication for LLMs. Our analysis links circuits to the established concept of task vectors (Hendel et al., 2023; Todd et al., 2024). A task vector represents the abstracted representation a model forms from examples, and although such vectors have been recognized, the internal circuit-based mechanisms that produce them remain poorly understood. Our findings offer a step toward elucidating these mechanisms. In addition, we examine multi-head attention. Prior work (Singh et al., 2024) has identified redundancy in induction heads under multi-head architectures. Our findings indicate that, rather than mere redundancy, multiple distinct circuits emerge in parallel in the multi-head setting, resulting in smoother performance gains. This observation bridges the discontinuous concept of circuits with the continuous performance improvements seen in LLMs.

Figure 10. Layer-wise analysis of Bigram, Label Attention, and Chunk Example metrics in a pretrained LLM (GPT2-XL). We observe that chunk example scores peak in earlier layers while label attention scores are higher in middle or later layers, consistent with the final circuit (FCC) behavior in our 2-layer attention-only model, where the first layer emphasizes chunk example and the second layer specializes in label attention.

To test whether the circuits we observe in our controlled toy setting also appear in real-world pretrained models, we conduct an additional analysis using a standard sentiment classification task. Specifically, we use the SST2 dataset (https://huggingface.co/datasets/stanfordnlp/sst2) from the GLUE benchmark, consisting of 872 sentiment-labeled samples, together with the pretrained GPT2-XL model (https://huggingface.co/openai-community/gpt2-xl), a 48-layer decoder-only transformer. Each prompt contains two labeled examples followed by a query example without its label, in a 2-shot setup. The prompt format is as follows:

Review: {text}\nSentiment: {label}
Review: {text}\nSentiment: {label}
Review: {text}\nSentiment:

An actual example of such a prompt is provided in Appendix N. As the model is fully trained, we cannot observe circuit formation over training; instead, we analyze attention patterns across layers in response to the fixed prompts. We define three attention-based metrics, using raw attention probabilities p(i, j) from token j to token i, averaged over all heads in each layer:

1. Bigram Attention: $p(\text{query}, \text{query})$. Measures the self-attention of the final token (the query).
2. Label Attention: $\frac{1}{K}\sum_{k=1}^{K} p(\text{query}, \text{label}_k)$. Measures how much the query attends to each of the K = 2 label tokens.
3. Chunk Example Attention: $\frac{1}{K}\sum_{k=1}^{K} \frac{1}{|\text{text}_k|}\sum_{i} p(\text{label}_k, \text{text}_{k,i})$, where |text_k| denotes the number of tokens in review k. Measures how strongly each label token attends to the corresponding review tokens.
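A minimal sketch of this analysis with the Hugging Face transformers library is shown below. The example reviews and the bookkeeping for label-token positions are illustrative (the actual prompts use SST2 samples; see Appendix N), and the chunk-example metric additionally requires the token spans of each review text, which are omitted here for brevity.

```python
# Sketch: layer-wise Bigram and Label Attention metrics for a 2-shot prompt with GPT2-XL.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("openai-community/gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("openai-community/gpt2-xl", output_attentions=True)
model.eval()

ids, label_pos = [], []
for text, label in [("a gorgeous, witty film", "positive"),
                    ("a dull, lifeless exercise", "negative")]:
    ids += tok(f"Review: {text}\nSentiment:").input_ids
    label_pos.append(len(ids))                     # index of this example's label token
    ids += tok(f" {label}\n").input_ids
ids += tok("Review: an utterly charming story\nSentiment:").input_ids
query = len(ids) - 1                               # the final token serves as the query

with torch.no_grad():
    attentions = model(input_ids=torch.tensor([ids])).attentions   # 48 x (1, heads, seq, seq)

for layer, att in enumerate(attentions):
    p = att[0].mean(dim=0)                         # average attention over heads
    bigram = p[query, query].item()
    label_attn = sum(p[query, j].item() for j in label_pos) / len(label_pos)
    print(f"layer {layer:2d}  bigram={bigram:.3f}  label_attention={label_attn:.3f}")
```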
Figure 10 shows that the Chunk Example metric is higher in early layers, while Label Attention dominates later layers, mirroring our two-layer model's progression from chunk example to label focus. This pattern aligns with our earlier findings in small Transformers, suggesting these circuits generalize to LLMs.

We further investigate circuit behaviors in more standard Transformer architectures (see Appendix L), under next-token prediction objectives (see Appendix M), and in models deeper than two layers (see Appendix O). In all these cases, we observe consistent structural patterns, supporting the robustness and generality of our circuit-based interpretation.

7. Conclusion

We introduced In-Context Meta-Learning (ICML), a controlled setting for analyzing how attention-only transformers acquire in-context learning abilities. Unlike simpler induction-head settings limited to match-and-copy circuits, our approach allowed us to explore how internal circuits function in a more practical task-inference context. Our analysis revealed a multi-phase learning process, where early layers bind example pairs (chunk example) and later layers abstract task-relevant patterns (label attention). These circuits proved robust to random labels and benefited from multi-head attention, resulting in smoother learning dynamics. We further showed similar circuit patterns emerging in pretrained models, such as GPT2-XL, on real-world natural language tasks, suggesting our findings generalize beyond toy settings. While our work is still far from fully capturing the complexity of real LLM behaviors, connecting controlled experiments in mechanistic interpretability to realistic use-cases of LLMs is becoming increasingly important. Such efforts will help advance interpretability research and play a crucial role in the development of safer AI systems.

Impact Statement

This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here.

Acknowledgements

HF was supported by JSPS KAKENHI Grant Number JP22J21582.

References

Bhattamishra, S., Patel, A., Blunsom, P., and Kanade, V. Understanding in-context learning in transformers and llms by learning to learn discrete functions. arXiv preprint arXiv:2310.03016, 2023.

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.

Chan, S. C. Y., Santoro, A., Lampinen, A. K., Wang, J. X., Singh, A., Richemond, P. H., McClelland, J., and Hill, F. Data distributional properties drive emergent in-context learning in transformers. arXiv preprint arXiv:2205.05055, 2022.

Cho, H., Kato, M., Sakai, Y., and Inoue, N. Revisiting in-context learning inference circuit in large language models. arXiv preprint arXiv:2410.04468, 2024.

Conmy, A., Mavor-Parker, A. N., Lynch, A., Heimersheim, S., and Garriga-Alonso, A.
Towards automated circuit discovery for mechanistic interpretability. arXiv preprint arXiv:2304.14997 , 2023a. Conmy,
A., Mavor-Parker, A. N., Lynch, A., Heimersheim, S., and Garriga-Alonso, A. Towards automated circuit discovery for mechanistic interpretability, 2023b. URL https://arxiv.org/abs/2304.14997 . Crosbie, J. and Shutova, E. Induction heads as an essential mechanism for pattern matching in in-context learning. arXiv preprint arXiv:2407.07011 , 2024. Dai, D., Sun, Y ., Dong, L., Hao, Y ., Ma, S., Sui, Z., and Wei, F. Why can GPT learn in-context? language models secretly perform gradient descent as meta-optimizers. In Findings of the Association for Computational Linguis- tics: ACL 2023 , pp. 4005–4019, 2023.D’Angelo, F., Croce, F., and Flammarion, N. Selective induction heads: How transformers select causal struc- tures in context. In The Thirteenth International Confer- ence on Learning Representations , 2025. URL https: //openreview.net/forum?id=bnJgzAQjWf . Dong, Q., Li, L., Dai, D., Zheng, C., Ma, J., Li, R., Xia, H., Xu, J., Wu, Z., Chang, B., Sun, X., Li, L., and Sui, Z. A survey on in-context learning. arXiv preprint arXiv:2301.00234 , 2024. Edelman, E., Tsilivis, N., Edelman, B. L., Malach, E., and Goel, S. The evolution of statistical induction heads: In-context learning markov chains. In Globerson, A., Mackey, L., Belgrave, D., Fan, A., Paquet, U., Tomczak, J., and Zhang, C. (eds.), Advances in Neural Informa- tion Processing Systems , volume 37, pp. 64273–64311. Curran Associates, Inc., 2024. Elhage, N., Nanda, N., Olsson, C., Henighan, T., Joseph, N., Mann, B., Askell, A., Bai, Y ., Chen, A., Conerly, T., DasSarma, N., Drain, D., Ganguli, D., Hatfield- Dodds, Z., Hernandez, D., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., and Olah, C. A math- ematical framework for transformer circuits. Trans- former Circuits Thread , 2021. https://transformer- circuits.pub/2021/framework/index.html. Furuta, H., Minegishi, G., Iwasawa, Y ., and Matsuo, Y . Towards empirical interpretation of internal cir- cuits and properties in grokked transformers on mod- ular polynomials. Transactions on Machine Learn- ing Research , 2024. ISSN 2835-8856. URL https: //openreview.net/forum?id=MzSf70uXJO . Gao, L., la Tour, T. D., Tillman, H., Goh, G., Troll, R., Radford, A., Sutskever, I., Leike, J., and Wu, J. Scaling and evaluating sparse autoencoders, 2024. URL https: //arxiv.org/abs/2406.04093 . Garg, S., Tsipras, D., Liang, P. S., and Valiant, G. What can transformers learn in-context? a case study of simple function classes. In Advances in Neural Information Processing Systems , volume 35, pp. 30583–30598, 2022. He, T., Doshi, D., Das, A., and Gromov, A. Learning to grok: Emergence of in-context learning and skill composition in modular arithmetic tasks. In The Thirty-eighth Annual Conference on Neural Information Processing Systems , 2024. URL https://openreview.net/forum? id=aVh9KRZdRk . Hendel, R., Geva, M., and Globerson, A. In-context learn- ing creates task vectors. In The 2023 Conference on Empirical Methods in Natural Language Processing , 2023. URL https://openreview.net/forum? id=QYvFUlF19n . 10 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence Hoogland, J., Wang, G., Farrugia-Roberts, M., Carroll, L., Wei, S., and Murfet, D. Loss landscape degener- acy drives stagewise development in transformers, 2025. URL https://arxiv.org/abs/2402.02364 . Hospedales, T., Antoniou, A., Micaelli, P., and Storkey, A. Meta-learning in neural networks: A survey. IEEE transactions on
pattern analysis and machine intelligence , 44(9):5149–5169, 2021. Kim, J. and Suzuki, T. Transformers learn nonlinear fea- tures in context: Nonconvex mean-field dynamics on the attention landscape. arXiv preprint arXiv:2402.01258 , 2024. Merullo, J., Eickhoff, C., and Pavlick, E. Circuit component reuse across tasks in transformer language models. arXiv preprint arXiv:2310.08744 , 2024. Min, S., Lewis, M., Zettlemoyer, L., and Hajishirzi, H. MetaICL: Learning to learn in context. In Carpuat, M., de Marneffe, M.-C., and Meza Ruiz, I. V . (eds.), Pro- ceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pp. 2791–2809, Seat- tle, United States, July 2022a. Association for Compu- tational Linguistics. doi: 10.18653/v1/2022.naacl-main. 201. URL https://aclanthology.org/2022. naacl-main.201/ . Min, S., Lyu, X., Holtzman, A., Artetxe, M., Lewis, M., Hajishirzi, H., and Zettlemoyer, L. Rethinking the role of demonstrations: What makes in-context learning work? arXiv preprint arXiv:2202.12837 , 2022b. Minegishi, G., Iwasawa, Y ., and Matsuo, Y . Bridging lottery ticket and grokking: Understanding grokking from inner structure of networks. Transactions on Machine Learning Research , 2025. ISSN 2835-8856. URL https:// openreview.net/forum?id=eQeYyup1tm . Nanda, N., Chan, L., Lieberum, T., Smith, J., and Stein- hardt, J. Progress measures for grokking via mechanistic interpretability. arXiv preprint arXiv:2301.05217 , 2023. Olah, C., Cammarata, N., Schubert, L., Goh, G., Petrov, M., and Carter, S. Zoom in: An introduction to circuits. Distill , 2020. URL https://distill.pub/2020/ circuits/zoom-in . Olsson, C., Elhage, N., Nanda, N., Joseph, N., DasSarma, N., Henighan, T., Mann, B., Askell, A., Bai, Y ., Chen, A., Conerly, T., Drain, D., Ganguli, D., Hatfield-Dodds, Z., Hernandez, D., Johnston, S., Jones, A., Kernion, J., Lovitt, L., Ndousse, K., Amodei, D., Brown, T., Clark, J., Kaplan, J., McCandlish, S., and Olah, C. In- context learning and induction heads. arXiv preprint arXiv:2209.11895 , 2022.Park, C. F., Lubana, E. S., and Tanaka, H. Competition dynamics shape algorithmic phases of in-context learning. InThe Thirteenth International Conference on Learning Representations , 2025. URL https://openreview. net/forum?id=XgH1wfHSX8 . Raventos, A., Paul, M., Chen, F., and Ganguli, S. Pretrain- ing task diversity and the emergence of non-bayesian in-context learning for regression. In Thirty-seventh Conference on Neural Information Processing Systems , 2023. URL https://openreview.net/forum? id=BtAz4a5xDg . Reddy, G. The mechanistic basis of data dependence and abrupt learning in an in-context classification task. arXiv preprint arXiv:2312.03002 , 2023. Ren, J., Guo, Q., Yan, H., Liu, D., Zhang, Q., Qiu, X., and Lin, D. Identifying semantic induction heads to understand in-context learning. arXiv preprint arXiv:2402.13055 , 2024. Singh, A. K., Chan, S. C. Y ., Moskovitz, T., Grant, E., Saxe, A. M., and Hill, F. The transient nature of emer- gent in-context learning in transformers. arXiv preprint arXiv:2311.08360 , 2023. Singh, A. K., Moskovitz, T., Hill, F., Chan, S. C., and Saxe, A. M. What needs to go right for an induction head? a mechanistic study of in-context learning cir- cuits and their formation. In Forty-first International Conference on Machine Learning , 2024. URL https: //openreview.net/forum?id=O8rrXl71D5 . 
Templeton, A., Conerly, T., Marcus, J., Lindsey, J., Bricken, T., Chen, B., Pearce, A., Citro, C., Ameisen,
E., Jones, A., Cunningham, H., Turner, N. L., McDougall, C., MacDiarmid, M., Freeman, C. D., Sumers, T. R., Rees, E., Batson, J., Jermyn, A., Carter, S., Olah, C., and Henighan, T. Scaling monosemanticity: Ex- tracting interpretable features from claude 3 sonnet. Transformer Circuits Thread , 2024. URL https: //transformer-circuits.pub/2024/ scaling-monosemanticity/index.html . Todd, E., Li, M., Sharma, A. S., Mueller, A., Wallace, B. C., and Bau, D. Function vectors in large language models. InThe Twelfth International Conference on Learning Representations , 2024. URL https://openreview. net/forum?id=AwyxtyMwaG . Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. Attention is all you need. In Guyon, I., Luxburg, U. V ., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems , volume 30. Curran Associates, Inc., 11 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence 2017. URL https://proceedings.neurips. cc/paper_files/paper/2017/file/ 3f5ee243547dee91fbd053c1c4a845aa-Paper. pdf. V on Oswald, J., Niklasson, E., Randazzo, E., Sacramento, J. a., Mordvintsev, A., Zhmoginov, A., and Vladymyrov, M. Transformers learn in-context by gradient descent. InProceedings of the 40th International Conference on Machine Learning , ICML’23. JMLR.org, 2023. von Oswald, J., Niklasson, E., Schlegel, M., Kobayashi, S., Zucchet, N., Scherrer, N., Miller, N., Sandler, M., y Arcas, B. A., Vladymyrov, M., Pascanu, R., and Sacra- mento, J. Uncovering mesa-optimization algorithms in transformers. arXiv preprint arXiv:2309.05858 , 2023. Wang, K., Variengien, A., Conmy, A., Shlegeris, B., and Steinhardt, J. Interpretability in the wild: a circuit for indirect object identification in gpt-2 small. arXiv preprint arXiv:2211.00593 , 2022. Wang, Y ., Yao, Q., Kwok, J. T., and Ni, L. M. Generalizing from a few examples: A survey on few-shot learning. ACM computing surveys (csur) , 53(3):1–34, 2020. Xie, S. M., Raghunathan, A., Liang, P., and Ma, T. An explanation of in-context learning as implicit bayesian inference. arXiv preprint arXiv:2111.02080 , 2021. Zipf, G. K. Human Behavior and the Principle of Least Effort . Addison-Wesley, Cambridge, MA, 1949. 12 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence A. Induction Head Figure 11 illustrates an induction circuit consisting of a previous token head in Layer 1 and an induction head in Layer 2. After Layer 1, the side-by-side xandℓtokens are chunked into a single token. In Layer 2, two operations occur: matching ofxvia queries and keys (in purple) and copying ofℓ(in red). 𝑥!"ℓ!"ℓ#"𝑥#"? …𝑥!"𝑥#"ℓ!"𝑥#"ℓ#"𝑥#"KKQVVℓ#"MLPmatchcopy…… Figure 11. The circuit consists of a previous token head in Layer 1 and an induction head in Layer 2. After Layer1, the side-by-side xand ℓtokens are chunked into a single token. In Layer 2, we highlight two operations: matching ofxvai queries and keys (in purple), and copying ofℓ(in red). When a sample is drawn with probability pB, the burstiness parameter Bintroduced by Chan et al. (2022); Reddy (2023) becomes relevant, determining how many times items from the query class appear in an input sequence (where Nis a multiple of B). In our ICML setup, we specifically examine the case with T= 1,pB= 1, andB= 1, as shown in Figure 12. We observe that
the first attention layer encodes each (x, ℓ)pair into a single token, while the second layer strongly attends to one of these pairs, effectively implementing the match-and-copy mechanism characteristic of an induction head. Notably, our setting thus subsumes the standard induction head experiments proposed in Reddy (2023). Layer2 Layer1 Figure 12. (left) The emergence of induction heads is observed as single-learning phase. (right) The attention maps on the right illustrate the circuit mechanism, where Layer 1 groups (x, ℓ)pairs into single-token representations, and Layer 2 then copies this label. 13 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence B. Model Details B.1. Network Architecture Our model features two layers of multi-head attention with a causal mask, followed by a two-layer MLP classifier. Each attention layer µ∈ {1,2}hasmheads, labeled by h. Let (u1, . . . , u n)be the input sequence (subject to a causal mask ensuring ican only attend to j≤i). The outputs of the first layer are {vi}; those of the second layer are {wi}. Attention Computation. Within layer µ, head hcomputes attention weights p(µ,h) ij =exp K(h) µuj⊤ Q(h) µui P k≤iexp K(h) µuk⊤ Q(h) µui, (1) where Q(h) µandK(h) µare the learnable query and key matrices for head hin layer µ. Next, each head outputs a weighted sum of the value -transformed inputs: Head(µ,h) i =X j≤ip(µ,h) ij V(h) µuj , (2) where V(h) µis the corresponding value matrix. Multi-Head Aggregation. The outputs of all mheads in layer µare concatenated and projected by a trainable matrix WOµ, yielding vi=ui+W1 Oh Head(1,1) i;. . .; Head(1,m) ii , (3) wi=vi+W2 Oh Head(2,1) i;. . .; Head(2,m) ii . (4) Here, [. . .]indicates concatenation over the head outputs, and each WOµis a learnable linear projection. Classifier. The two-layer MLP receives the final attention outputs {wi}(e.g., specifically wn, ifnindexes the query token). A hidden layer with ReLU activation is followed by a softmax that produces label probabilities. B.2. Training Details Table 3. Training and Model Configuration Hyperparameter Value Loss Function Cross-entropy Optimizer Vanilla SGD Learning Rate 0.01 Batch Size 128 Dimension of query/key/value 128 MLP Hidden Layer Dimension 128 Causal Mask Restrict sums to j≤i 14 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence C. Controlled Circuit Pruning Experiments To validate the relationship between the identified circuits and model performance, we conducted controlled pruning experiments. In these experiments, all components except the circuits corresponding to a specific phase were pruned at initialization, isolating the contribution of each circuit. For comparison, we also trained a fully trainable model, referred to as the full model , which could attend to all identified attention patterns. As shown in Figure 13, networks trained with only the circuits from a particular phase plateaued at accuracies corresponding to that phase. This result provides strong evidence that the circuits identified in each phase are directly responsible for the observed performance. Interestingly, when the Phase 3 circuit was provided from the beginning (pink curve in Figure 13), the model achieved 100% accuracy in single step. In contrast, the full model exhibited a more gradual
improvement, sequentially discovering and leveraging the circuits corresponding to each phase. This highlights the dynamic nature of the full model’s training process, where it incrementally constructs and refines the required circuits during training. 103 104 105 106 Optimization Steps0.00.20.40.60.81.0Accuracy Phase 1 Circuit Phase 2 Circuit Phase 3 Circuit Full Model Figure 13. Controlled pruning experiments to validate the relationship between identified circuits and model performance. Networks trained with only the circuits from a specific phase plateaued at accuracies corresponding to that phase (yellow: Phase 1, orange: Phase 2, pink: Phase 3). This demonstrates that the identified circuits are directly responsible for the observed performance in each phase. D. Derivation of the Theoretical Accuracy In the main text, we define p= 1−K−2 4 K−1 4, (5) and use it to obtain the “Theoretical Accuracy” as Theoretical Accuracy =p·1 + 1−p ·0.5. (6) This appendix provides a more detailed derivation of these formulas, along with the underlying conditions. Task Conditions. 1. The number of classes ( K) equals the number of labels ( L), with no duplication. 2. The input context (including the query) contains no duplicate classes. 3. Only two tasks are considered ( T= 2). 4. There are no common (x, ℓ)pairs shared between different tasks. 5. To focus on SCC, a mask is applied to circuits associated with SCC during training. 15 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence Excluding Both L1andL2.We are interested in the probability that the context does notcontain L1orL2. • The total number of ways to choose 4 distinct classes from the K−1classes (excluding the query’s class) isK−1 4 . •To exclude both L1andL2, we must choose all 4 classes from the remaining K−2classes, leading toK−2 4 possible ways to form the context with neither L1norL2present. Hence, the probability that neither L1norL2is in the context is K−2 4 K−1 4. Probability pand the Accuracy Calculation. We denote by pthe probability that at least one ofL1orL2appears in the context: p= 1−K−2 4 K−1 4. Under the task rules, if at least one of these two labels appears in the context, it cannot be the label for the query, so the other one must be correct. This yields 100% accuracy in that scenario. Conversely, if neither L1norL2is found in the context (probability 1−p), the model is forced to guess between two equally likely options, resulting in 50% accuracy. Therefore, Theoretical Accuracy =p·1 + (1 −p)·0.5, as stated in the main text. 16 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence E. Rank-Frequency In natural language processing and other real-world domains, both data instances and task distributions often follow a power-law structure, commonly referred to as Zipf’s law (Zipf, 1949). This law states that the frequency of an item or task is inversely proportional to its rank, meaning that a small number of elements occur frequently, while the majority appear rarely. Formally, this is expressed as: f(k)∝k−α, (7) where kdenotes the rank of an item, and αcontrols the degree of skewness. Figure 14 illustrates how increasing αleads to a more imbalanced distribution, with a steep
drop in frequency beyond the highest-ranked elements. 0 20 40 60 80 100 Rank k0.00.20.40.60.81.0f(k)Rank-Frequency Distributions for Different α Values (f(k)=k−α) =0 =0.25 =0.5 =0.75 =1.0 =1.25 Figure 14. Rank-frequency distributions for different values of the power-law exponent α, following the Zipfian distribution f(k) =k−α. Asαincreases, the distribution becomes more skewed, with a few high-frequency items dominating while the majority appear infrequently. In our setting, not only data but also task sampling follows a similar Zipfian distribution: f(τ)∼τ−β, (8) where βdetermines the skewness of the task distribution. 17 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence F. Effects of Birstiness on Circuit Emergence When a sample is drawn with probability pB, the burstiness parameter Bintroduced by Chan et al. (2022); Reddy (2023) becomes relevant, determining how many times items from the query class appear in an input sequence (where Nis a multiple of B). Figure 15 examines the impact of burstiness Band probability pB. The left panel shows accuracy curves for different values of Bat a fixed pB= 0.25. AsBincreases, Phase 1, where NCC memorizes pairs through weight updates — tends to be skipped. The right panel presents accuracy curves for different values of pBwhile keeping B= 1 fixed. As pBincreases, the model’s accuracy improves more smoothly, and distinct learning phases become less pronounced. These results align with previous studies showing that increased burstiness tends to shift the model away from weight-based solutions and toward context-dependent reasoning (Chan et al., 2022; Reddy, 2023). 103 104 105 Steps0.250.500.751.00 AccuracyB=1 B=2 (a)Birstiness (B) 103 104 105 Steps0.250.500.751.00 AccuracypB=0 pB=0.25 pB=0.5 pB=0.75 pB=1 (b)pB Figure 15. (Left) Accuracy curves for different values of Bat a fixed pB= 0.25. Increasing Btends to skip Phase 1, where NCC memorizes pairs through weights. (Right) Accuracy curves for different values of pBwithB= 1. AspBincreases, the learning process becomes smoother, reducing the occurrence of distinct learning phases. G. Multi-Heads Experiments Figure 16 (Left) shows accuracy curves over training steps for different numbers of attention heads (1, 2, 4, 8, and 16). Models with multiple heads exhibit a smooth increase in accuracy, whereas the single-head configuration undergoes multi-learning phases, where accuracy improves in distinct jumps rather than gradually. Figure 16 (Right) visualizes attention patterns in a 4-head attention model across two layers. The four heads naturally divide into two functional roles: two heads focus on NCC, while the other two heads focus on FCC. layer1layer24 head Attention Figure 16. (Left) Accuracy curves over training steps for different numbers of attention heads (1, 2, 4, 8, and 16). Models with multiple heads exhibit a smooth increase in accuracy, whereas the single-head configuration shows multi-learning phases. (Right) Visualization of attention patterns in a 4-head attention model, separated by layer. Two heads focus on NCC, while the other two focus on FCC. Red squares highlight key attention positions indicative of each role. 18 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence H. Circuit Metrics in Multi-head Attention Figure 17 presents circuit metrics for each attention head, analyzed by layer in a two-head attention model. Head 1 consistently maintains high bigram
values across both Layer 1 and Layer 2. This indicates that it primarily performs token-level copying operations, forming an NCC. In contrast, Head 2 exhibits a different pattern. As training progresses, the chunk example metric increases in Layer 1, while the label attention metric becomes dominant in Layer 2, forming an FCC. These findings reinforce the idea that multi-head attention facilitates specialization, allowing different heads to develop distinct computational circuits that enhance the model’s meta-learning capabilities. 103 104 1050.250.500.751.00 Accuracy 103 104 1050.250.500.751.00 Accuracy 103 104 1050.250.500.751.00 Accuracy 103 104 1050.250.500.751.00 Accuracy 103 104 1050.250.500.751.00 Accuracy 103 104 1050.250.500.751.00 Accuracy0.20.40.60.81.0 BigramBigram Head1 0.00.20.4 Label AttentionLabel Attention Head1 0.200.250.30 Chunk ExampleChunk Example Head1 0.000.250.500.75 BigramBigram Head2 0.20.40.60.8 Label AttentionLabel Attention Head2 0.2500.2750.3000.3250.350 Chunk ExampleChunk Example Head2 Accuracy Layer1 Metrics Layer2 Metrics Figure 17. Circuit metrics for each attention head, analyzed by layer in two heads attentions. Head 1 maintains high bigram values across both Layer 1 and Layer 2, indicating the formation of an NCC. In contrast, Head 2 exhibits increasing chunk example values in Layer 1 and high label attention values in Layer 2, suggesting the formation of an FCC. 19 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence I. Impact of Increasing Task Numbers ( T) Figure 18 shows accuracy (left) and loss (right) for models trained with increasing numbers of tasks ( T= 3,6,9,12,15,18). Even with higher T, models exhibit sudden accuracy jumps. As T increases, initial accuracy decreases, and it takes longer for models to achieve sharp accuracy improvements, highlighting the challenges of training under more realistic conditions. 103 104 105 Optimization Steps0.000.250.500.751.00 AccuracyAccuracy T=3 T=6 T=9 T=12 T=15 T=18 103 104 105 Optimization Steps0123 LossLoss T=3 T=6 T=9 T=12 T=15 T=18 Figure 18. Left: Accuracy, Right: Loss, for increasing numbers of tasks ( T) set to 3, 6, 9, 12, 15, and 18, bringing the setup closer to real-world conditions. Even with higher T, the model still exhibits sudden jump in accuracy. As Tincreases, the accuracy in the first phase (around 1/T) decreases, and it takes longer to reach the next sharp jump in accuracy. J. Impact of Increasing Context Length ( N) Figure 19-(a) presents accuracy curves when increasing the number of few-shot examples ( N), resulting in a longer total context. Multiple learning phases are visible for contexts up to N= 8. For N≥16, the model quickly achieves perfect accuracy, indicating easier learning with more context. Figure 19-(b) shows attention maps at N= 16 (context length=33) with clear chunk-example attention patterns in layer 1 and label-attention patterns in layer 2, consistent with behaviors observed at lower contexts. 103 104 105 Optimization Steps0.000.250.500.751.00 AccuracyAccuracy /glyph1197=2 /glyph1197=4 /glyph1197=8 /glyph1197=16 /glyph1197=32 /glyph1197=64 (a) Accuracy layer 1layer 2 (b) Attention Maps Figure 19. (a)Accuracy curves when increasing the number of few-shot examples ( N) in the context, making the total context length 2N+ 1. Up to about N= 8, multiple learning phases are visible.For N≥16, the model exhibits only a single learning phase before reaching 100 % accuracy. This behavior suggests that having a larger context makes it
much easier for the model to learn a circuit that leverages the context, eliminating the need for intermediate phases. (b)Attention maps of a two-layer, attention-only Transformer with N= 16 (context length = 33) at 100% accuracy. The first layer (left) shows a chunk example attention pattern, whereas the second layer (right) focuses on label attention. These observations are consistent with the circuits seen at N= 4. 20 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence K. Extended Related Work Our study relates closely to multiple strands of literature on in-context learning (ICL) and multi-phase emergence. Here, we elaborate on these connections and highlight distinct aspects of our approach. In-Context Learning Literature Beyond Copy Tasks Previous studies on in-context learning frequently employ linear regression frameworks (V on Oswald et al., 2023; Raventos et al., 2023), offering analytically tractable models for theoretical analysis. While these approaches use identical tasks across context examples, they typically optimize for mean squared error (MSE), diverging from the conditions encountered in real-world in-context scenarios. In contrast, our experimental design closely mirrors practical applications of LLMs, emphasizing realistic task settings and diverse few-shot contexts. Several works (D’Angelo et al., 2025; Edelman et al., 2024; Park et al., 2025) have examined in-context learning in the context of Markov chain tasks, highlighting meta-learning phenomena. Although the meta-learning dynamics studied are related to ours, these Markov chain settings differ significantly from the typical few-shot example-pair format that characterizes standard LLM usage. Our research specifically targets these conventional scenarios, aiming for greater ecological validity in interpreting LLM behavior. Research (He et al., 2024) exploring in-context learning on modular arithmetic tasks (Nanda et al., 2023; Furuta et al., 2024; Minegishi et al., 2025) investigates phenomena like out-of-distribution generalization and the role of attention mechanisms at convergence. While extending beyond simpler tasks, such studies typically do not focus on tracking the dynamic acquisition of internal circuits throughout training. Our work uniquely captures these acquisition dynamics, directly linking them to observed behaviors like random-label accuracy emergence and multi-head smoothing. Multi-Phase Emergence Literature Prior investigations into multi-phase emergence (Edelman et al., 2024) have shown transformers acquiring functional capabilities via discrete phase transitions. This finding aligns closely with our observations. However, whereas previous studies utilized Markov chain prediction tasks—progressing through uniform, unigram, and bigram phases—our experiments examine a more practically relevant progression: from Bigram to Semi-Context, and eventually Full-Context circuits. This advancement better reflects complexities observed in realistic few-shot learning scenarios. Several developmental interpretability studies (Hoogland et al., 2025) also highlight multi-phase transitions, primarily by analyzing geometric properties of loss landscapes, such as the Local Learning Coefficient (LLC). Our analysis diverges by explicitly characterizing the mechanistic circuits underlying these transitions. 
An exciting avenue for future research could involve bridging these LLC-based geometric perspectives with our mechanistic circuit analyses to enrich interpretability methodologies. 21 Beyond Induction Heads: In-Context Meta Learning Induces Multi-Phase Circuit Emergence L. Experiment on Standard Transformer Figure 20 shows accuracy and attention maps for a standard transformer (with attention and MLP layers) trained on the same task as the simpler 2-layer attention-only model from the