latency. Moreover, as the number and diversity of available LLMs continue to grow, it becomes critical to route queries to the most suitable model, effectively balancing performance with operational costs.

2.2 LLM Routing

LLM routing seeks to identify the most suitable language model for a given query, and various strategies have been proposed. Early methods include LLM-Blender (Jiang et al., 2023), which employs an ensemble framework querying multiple LLMs to select the optimal response, and AutoMix (Aggarwal et al., 2024), which utilizes a smaller model for self-verification before potentially escalating to a larger model. While these can improve performance, their reliance on multiple queries inherently increases latency. Other strategies, such as HybridLLM (Ding et al., 2024) and RouteLLM (Ong et al., 2025), focus on training a binary classifier to choose between a human-defined strong and weak model. However, these methods' efficacy is highly contingent on the subjective definition of model strength, and they can be computationally expensive when applied to a large pool of LLMs. More recent research has shifted towards multi-LLM routing. RouterDC (Chen et al., 2024), C2MAB-V (Dai et al., 2024), and Prompt-to-Leaderboard (Frick et al., 2025) train a parametric router to route queries. Concurrently, ModelSpider (Zhang et al., 2023) and EmbedLLM (Zhuang et al., 2025) encode LLMs into learnable representations to facilitate routing. Despite these advancements, a significant limitation is the need to retrain the entire routing mechanism when new models are introduced. Addressing this, Model-SAT (Zhang et al., 2025) aimed to resolve the retraining weakness through human-defined, model-independent capability decompositions. However, its reliance on predefined capability sets undermines adaptability to new capability dimensions.
3 Methodology

In this section, we introduce InferenceDynamics, which involves: (i) identifying the knowledge and capabilities required for a given query, (ii) quantifying the knowledge and capabilities of LLMs, and (iii) routing queries to LLMs based on their scores.

3.1 Problem Setup

Let $\mathcal{M}_T = \{M_1, M_2, \ldots, M_t\}$ denote a set of LLMs, and let $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{n}$ be a dataset where $x_i$ represents a query and $y_i$ its corresponding ground truth. For an unseen query $x \in \mathcal{Q}$, where $x \notin \mathcal{D}$, LLM routing is formalized as a function $R: \mathcal{Q} \to \mathcal{M}_T$. This function maps the query $x$ to the model $M_{\text{best}} \in \mathcal{M}_T$ that is considered most suitable, based on a joint assessment of both cost and performance. Our objective is to develop a routing algorithm with the dataset $\mathcal{D}$ that effectively generalizes to OOD queries.

3.2 Knowledge and Capability Generation

It is widely acknowledged that no single LLM demonstrates universal proficiency across the full spectrum of query types. Previous research (Wang et al., 2024c; Li et al., 2024c) substantiates that distinct queries necessitate specific underlying capabilities and domain-specific knowledge. Accordingly, assessing an LLM's aptitude for a given query necessitates identifying the requisite capabilities and knowledge pertinent to that query. Let $\mathcal{C}$ denote the set of defined LLM capabilities and $\mathcal{K}$ represent the world knowledge space. For a given query $x$, we utilize an auxiliary LLM $M \notin \mathcal{M}_T$ to predict
https://arxiv.org/abs/2505.16303v1
two sets:

$\mathcal{C}_x = \{c_1, c_2, \cdots \mid c_i \in \mathcal{C}\}$: this set comprises the capabilities deemed necessary to address query $x$, ranked in descending order of importance.

$\mathcal{K}_x = \{k_1, k_2, \cdots \mid k_i \in \mathcal{K}\}$: this set encompasses the knowledge areas considered essential for resolving query $x$, also ranked in descending order of importance.

Following Minaee et al. (2024), we categorize capabilities into reasoning, comprehension, instruction following, agentic, knowledge retrieval, coding, and multilingual. With regard to the knowledge dimension ($\mathcal{K}_x$), we impose no predefined constraints, to fully accommodate its inherent diversity.

3.3 Scoring

To quantify the proficiency of a model $M_t$ with respect to specific capabilities and knowledge, we utilize the accessible set $\mathcal{D}$. The performance score $s_i^t$ of model $M_t$ for a given query-response pair $(x_i, y_i) \in \mathcal{D}$ is determined by averaging over $K$ independent trials:

$$s_i^t = \frac{1}{K} \sum_{k=1}^{K} \mathrm{eval}\big(M_t(x_i)_k,\, y_i\big)$$

where $M_t(x_i)_k$ is the model's $k$-th generated response to the input query $x_i$, and $\mathrm{eval}(\cdot,\cdot)$ represents the query-specific evaluation metric employed to compare the model's response against the ground truth $y_i$. To incorporate the trade-off between performance and computational expenditure, we record the average computational cost $c_i^t$ incurred by model $M_t$ when processing query $x_i$.

Subsequent to the identification of the knowledge and capability sets and the computation of scores for all queries in $\mathcal{D}$, we define a refined score for model $M_t$. This score, $S_\beta^\alpha(M_t, x_i, e)$, quantifies the model's effectiveness for a specific element $e$ (which can be a knowledge item $k \in \mathcal{K}_{x_i}$ or a capability $c \in \mathcal{C}_{x_i}$) associated with query $x_i$.
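As an illustrative sketch of the per-query performance score $s_i^t$ (function and variable names are ours, not the paper's implementation), averaging a query-specific metric over $K$ sampled generations might look like:

```python
from typing import Callable

def performance_score(
    model: Callable[[str], str],           # hypothetical: returns one sampled response
    query: str,
    ground_truth: str,
    eval_fn: Callable[[str, str], float],  # query-specific metric, e.g. exact match
    k_trials: int = 3,
) -> float:
    """Average eval_fn over K independent generations (the s_i^t definition)."""
    return sum(eval_fn(model(query), ground_truth) for _ in range(k_trials)) / k_trials

# toy usage with a deterministic stub model and an exact-match metric
exact_match = lambda pred, gold: float(pred.strip() == gold.strip())
stub_model = lambda q: "42"
score = performance_score(stub_model, "What is 6*7?", "42", exact_match)  # 1.0
```

With a stochastic model, $K > 1$ trials smooth out sampling variance in the score; the average cost $c_i^t$ would be recorded alongside in the same loop.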
Illustrating with a knowledge element $k$, this score is formulated as:

$$S_\beta^\alpha(M_t, x_i, k) = \sum_{j=1}^{|\mathcal{K}_{x_i}|} \left(s_i^t - \beta c_i^t\right) \mathbb{1}(k = k_j)\, \frac{\alpha^{j-1}}{\sum_{m=1}^{|\mathcal{K}_{x_i}|} \alpha^{m-1}}$$

[Figure 2: LLM performances across 20 datasets in RouteMix. Dataset labels including "PlanBench" indicate subsets of the PlanBench benchmark. For detailed metric information, refer to Appx. §A.]

In this formulation, the hyperparameter $\alpha$ serves to attenuate the influence of less critical knowledge elements, based on their rank $j$. The hyperparameter $\beta$ acts as a coefficient penalizing higher computational costs. The denominator, $\sum_{m=1}^{|\mathcal{K}_{x_i}|} \alpha^{m-1}$, functions as a normalization factor, ensuring that each query contributes equitably to the knowledge score, regardless of the number of knowledge elements it encompasses.

Building upon these per-query, per-element scores, the aggregate score of model $M_t$ for a specific knowledge element $k$ across the entire indexing dataset $\mathcal{D}$ is computed as:

$$S_\beta^\alpha(M_t, \mathcal{D}, k) = \frac{1}{|\mathcal{D}_k|} \sum_{i=1}^{N} S_\beta^\alpha(M_t, x_i, k)$$

where $\mathcal{D}_k = \{(x_i, y_i) \mid k \in \mathcal{K}_{x_i}\}$ denotes the subset of query-response pairs in which knowledge $k$ is present in the knowledge set. A similar methodology is employed for the computation of aggregate capability scores.

3.4 Routing at Inference

For an unseen query $x$ with its knowledge and capability sets, we compute the knowledge score $KS$ and capability score $CS$ for each candidate model $M_t$ to guide routing. The knowledge score is given by:

$$KS_\alpha(M_t, x) = \sum_{i=1}^{|\mathcal{K}_x|} S_\beta^\alpha(M_t, \mathcal{D}, k_i)\, \frac{\alpha^{i-1}}{\sum_{m=1}^{|\mathcal{K}_x|} \alpha^{m-1}} \quad (1)$$

The capability score, $CS_\alpha(M_t, x)$,
is computed analogously. Normalization across both knowledge and capability score calculations ensures that these two distinct types of scores are on a comparable scale, facilitating a balanced routing decision. The final routing decision is determined by the following algorithm:

$$R_{\mathcal{M}_T}(x) = \arg\max_{M_t \in \mathcal{M}_T} \left( \gamma\, KS_\alpha(M_t, x) + \delta\, CS_\alpha(M_t, x) \right) \quad (2)$$

which aims to identify the model with the highest weighted average of the knowledge and capability scores. A key advantage of this framework is its adaptability. New LLMs are efficiently integrated by evaluating them on $\mathcal{D}$ to quantify their knowledge and capability scores, which are then used in routing. Similarly, when queries introduce novel knowledge, the LLMs' scores for this new knowledge can be computed and integrated, refining subsequent routing decisions.

4 Experiment

4.1 Dataset

In this section, we introduce our comprehensive dataset, RouteMix, which consists of an Index Set and an Evaluation Set.

4.1.1 Index Set

The term 'Index Set' designates the dataset utilized during the development of our routing algorithm. Given that our methodology is parameter-free, this nomenclature serves to differentiate it from datasets conventionally used in training-dependent methods. The 'Index Set' is thus employed primarily for characterizing and indexing the capabilities and knowledge of LLMs. To construct a sufficiently diverse 'Index Set' for robust LLM profiling, we have curated 20 distinct datasets. These datasets span a wide array of domains and are instrumental in quantifying the specific knowledge and capabilities of each model. Comprehensive details regarding the statistics, data processing methodologies, and evaluation metrics for each dataset are presented in Appx. §A.
Method                     MMLU-Pro  GPQA   BigGenBench  LiveBench  Avg.

Single Large Language Model
Gemini-1.5-Pro             82.83     75.76  80.92        53.79      73.33
GPT-4o                     79.71     74.24  85.36        49.62      72.23
Grok-2                     80.14     76.26  83.66        53.26      73.33
Qwen2.5-Max                75.86     71.21  82.48        52.77      70.58
GLM-4-Plus                 79.06     75.76  83.27        47.32      71.35
Nova-Pro                   77.49     70.20  83.01        44.38      68.77
Llama-3.3-70B-Instruct     76.27     69.70  78.17        50.67      68.70
Qwen-2.5-72B-Instruct      75.41     73.23  82.61        49.83      70.27
Random                     78.26     72.22  82.61        48.83      70.48

Routing Algorithm (Ours)
Routing by Knowledge       80.99     78.28  82.61        53.17      73.76
Routing by Capability      80.09     76.26  84.18        53.65      73.55
InferenceDynamics          80.85     77.78  84.31        55.57      74.55

Table 1: LLM routing results across four benchmarks. The metrics we used are introduced in §4.2. The best performances are bold-faced, while the second-best performances are underlined. "Routing by Knowledge" denotes routing decisions made solely based on the knowledge score, whereas "Routing by Capability" refers to routing based only on the capability score. "Mixed Routing" indicates a simultaneous consideration of both scores during the routing process.

4.1.2 Evaluation Set

We incorporate four benchmarks that comprehensively evaluate LLMs as the evaluation set of RouteMix: (i) MMLU-Pro (Wang et al., 2024b) spans 14 diverse domains and includes approximately 12,000 instances. (ii) GPQA (Rein et al., 2023) consists of graduate-level multiple-choice questions in subdomains of physics, chemistry, and biology; for our evaluation, we utilize the Diamond subset. (iii) BigGenBench (Kim et al., 2024) comprises 77 distinct tasks evaluating core abilities of LLMs, with a total of 765 human-written instances. (iv) LiveBench (White et al., 2025) is a real-time
updated benchmark with 18 tasks across 6 categories, including math, reasoning, coding, data analysis, language, and instruction following. In the evaluation, we utilize the snapshot released on 2024-11-25.

4.2 Experiment Setup

For the candidate models, we select eight high-performing LLMs: Gemini-1.5-Pro (Reid et al., 2024), GPT-4o (Hurst et al., 2024), Grok-2, Qwen2.5-Max (Yang et al., 2024), GLM-4-Plus (Zeng et al., 2024), Nova-Pro (Intelligence, 2024), Llama-3.3-70B-Instruct (AI@Meta, 2024), and Qwen-2.5-72B-Instruct (Yang et al., 2024). To ensure a fair comparison when testing these models, all parameters and the input prompt are kept consistent across evaluations. To derive the Knowledge and Capability attributes, we employ GPT-4o-mini to generate these characteristics. Since knowledge may include semantically similar phrases, we utilize MiniLM-L6 (Wang et al., 2020) to consolidate Knowledge entries with a cosine similarity score greater than 0.6. Additionally, Knowledge entries with a frequency lower than 10 are filtered out and designated as 'Other' knowledge. When the system encounters a query containing previously unseen knowledge elements, these are also classified as 'Other' knowledge. By default, for unconstrained routing, the parameters α and β are set to 0.5 and 0, respectively. The weights for the Knowledge and Capability scores are both set to 1.0 by default. In terms of evaluation, the exact match score is employed for both the MMLU-Pro and GPQA datasets. For BigGenBench, we follow the methodology proposed by Sprague et al. (2025), using GPT-4o-mini as a language-model-based judge. Instances receiving a score greater than 4 are classified as correct. For LiveBench, we adhere to the original evaluation script, and the metric is the average score across the six categories.

4.3 Capability and Knowledge Quantification

The performance of the candidate models on the Index Set is presented in Fig. 2.
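The knowledge-consolidation step of §4.2, merging semantically similar knowledge entries above the 0.6 cosine-similarity threshold and bucketing rare entries as 'Other', can be sketched as follows. The greedy merge order, the toy embeddings, and all names are illustrative assumptions; the paper uses MiniLM-L6 embeddings, for which any sentence encoder can be substituted here:

```python
import numpy as np

SIM_THRESHOLD = 0.6   # cosine-similarity cutoff from §4.2
MIN_FREQ = 10         # entries rarer than this become 'Other'

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def consolidate(entries, embed, counts):
    """Greedily map each knowledge label to a canonical label.

    entries: knowledge strings; embed: str -> vector (e.g. a MiniLM-L6
    encoder); counts: occurrence count of each canonical entry in the
    index set. Returns a dict entry -> canonical label (or 'Other').
    """
    canonical, mapping = [], {}
    for e in entries:
        match = next((c for c in canonical
                      if cosine(embed(e), embed(c)) > SIM_THRESHOLD), None)
        if match is None:
            canonical.append(e)
            match = e
        # low-frequency knowledge is bucketed as 'Other'
        mapping[e] = match if counts.get(match, 0) >= MIN_FREQ else "Other"
    return mapping

# toy 2-d "embeddings" standing in for MiniLM vectors
vecs = {"algebra": np.array([1.0, 0.1]),
        "linear algebra": np.array([0.9, 0.2]),
        "maritime law": np.array([0.0, 1.0])}
counts = {"algebra": 25, "maritime law": 3}
m = consolidate(list(vecs), vecs.__getitem__, counts)
# "linear algebra" merges into "algebra"; rare "maritime law" becomes "Other"
```

Unseen knowledge elements encountered at inference time would fall through the frequency check and likewise map to 'Other', matching the behavior described above.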
Generally, these models do not exhibit substantial performance distinctions when evaluated across the entire Index Set. However, their relative strengths become apparent on specific subsets, where different models tend to outperform one another. This observation suggests that the model pool consists of LLMs with broadly comparable overall abilities, yet with varying specializations.

Subsequent to the computation of average performance scores, the top four models are selected for more detailed analysis. Their respective capability and knowledge scores are visualized in Fig. 1. For clarity and simplification in this visualization, we focus on the eight most frequently occurring knowledge elements and capabilities within the Index Set. The fact that the highest-scoring model changes with the specific knowledge or capability further substantiates the premise: LLMs, even those exhibiting similar aggregate performance levels, possess distinct areas of specialized expertise.

4.4 Optimal Routing

The optimal routing results, presented in Tab. 1, highlight the clear superiority of our proposed routing strategies. Among these, our Mixed Routing strategy, which combines both Knowledge and Capability scores, achieves the highest average performance, outperforming the best single model, Gemini-1.5-Pro, by a margin of 1.28. This strategy secures top results on LiveBench and ranks second on GPQA and BigGenBench, demonstrating the effectiveness and versatility of our comprehensive routing algorithm. Additionally, the Routing by Knowledge
and Routing by Capability approaches also deliver strong results, consistently surpassing the best single model and significantly outperforming random routing on average. Notably, Routing by Knowledge excels in knowledge-intensive tasks, achieving the best score on GPQA and the second-best on MMLU-Pro. This underscores its ability to effectively direct queries requiring accurate factual recall and nuanced domain understanding. Similarly, Routing by Capability performs exceptionally well on capability-driven benchmarks, particularly on BigGenBench, highlighting the importance of leveraging a model's inherent strengths in complex reasoning and generation tasks. Both approaches play an integral role in the success of the Mixed Routing system.

These findings also emphasize that no single LLM universally dominates across all tasks. Models like Gemini-1.5-Pro and GPT-4o exhibit varying strengths, further validating the necessity and advantages of intelligent LLM routing systems.

4.5 Routing with Constraints

To investigate the system's performance under varying cost constraints, we systematically adjusted the β parameter, maintaining all other experimental configurations as previously defined.

[Figure 3: Performance Ratio (%) and Cost Ratio (%) variation on GPQA and LiveBench. The "Best Single Model" refers to the most performant LLM for each task.]

The evaluation employed two distinct metrics. The first metric, termed Performance Ratio, quantifies the efficacy of the Mixed Routing strategy. This is calculated as the ratio of the performance achieved by Mixed Routing to that of the best-performing single candidate LLM on the respective benchmark. The second metric, Cost Ratio, assesses the economic efficiency of the routing algorithm.
It is defined as the total cost incurred by the routing process (encompassing both knowledge generation and capability assessment costs) relative to the operational cost of the best-performing single LLM.

The empirical results of this sensitivity analysis are depicted in Fig. 3. In scenarios without stringent price constraints (i.e., β = 0), our routing system demonstrates superior performance compared to the best single model, while operating at approximately 80% of the latter's budget. As the β parameter is incrementally increased, thereby prioritizing cost reduction, the operational cost of the routing algorithm decreases significantly. Concurrently, the system maintains a competitive performance level relative to the best single model. Notably, at a β value of 15, our routing algorithm achieves performance nearly equivalent to the best single model but utilizes only approximately half the associated cost.

[Figure 4: Distribution of knowledge domains across 24 datasets in RouteMix. The In-Domain (ID) subset is utilized for quantifying Knowledge and Capability, while the Out-of-Domain (OOD) subset is employed for evaluating the routing algorithm. Dataset labels including "LiveBench" indicate subsets of the LiveBench benchmark, and labels including "NaturalPlan" similarly denote subsets of the NaturalPlan benchmark. The algorithm to compute the normalized proportion is included in Appx. §B.]

An interesting observation is the differential sensitivity of benchmarks
to changes in β. Specifically, the performance and cost metrics for LiveBench, a text generation benchmark, exhibit more pronounced variations in response to adjustments in β compared to those observed for GPQA, a question-answering benchmark. This suggests that text generation tasks are more sensitive to the price penalty imposed by β than QA tasks.

5 Analysis

5.1 Model Selection

The distribution of model selections under various conditions is illustrated in Fig. 5. Consistent with findings in previous works (Chen et al., 2024; Frick et al., 2025), cost-efficient models are infrequently selected in optimal routing scenarios; instead, the strategy predominantly converges towards higher-performing models. For comprehensive benchmarks such as BigGenBench, our approach primarily routes queries to expensive yet high-performing models like GPT-4o and Grok-2, reflecting a tendency to leverage top-tier capabilities for broad-ranging tasks. Conversely, for task sets demanding highly specialized capabilities, the routing algorithm typically assigns queries directly to the most proficient model. For instance, within the coding subset of LiveBench, 91% of queries are routed to Qwen-Max, which demonstrates the strongest coding capabilities. This model's leading performance in coding is further corroborated by its results on BigCodeBench and its specific Coding capability score, as detailed in Fig. 1 and Fig. 2, respectively. These observations collectively indicate that our routing algorithm effectively directs queries to the most suitable models based on specific task demands.

In the context of cost-constrained routing, an increasing cost penalty prompts the router to progressively shift its selections from expensive, top-performing models towards more affordable, albeit less powerful, alternatives.
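To make the selection rule behind these distributions concrete, here is a minimal sketch of the weighted-argmax routing of §3.4 (Eq. (2)) combined with the rank-decayed aggregation of per-element scores (Eq. (1)). All function names, the data layout, and the toy scores are illustrative assumptions, not the paper's implementation:

```python
def rank_weighted(scores, alpha=0.5):
    """Eq. (1)-style aggregation: scores are ordered by element importance;
    rank j gets weight alpha**(j-1), normalized over all ranks."""
    weights = [alpha ** j for j in range(len(scores))]
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def route(query_elems, element_scores, gamma=1.0, delta=1.0, alpha=0.5):
    """Pick argmax_M of gamma*KS + delta*CS, per Eq. (2).

    query_elems: {'knowledge': [...ranked...], 'capability': [...ranked...]}
    element_scores: model -> element -> aggregate score S(M, D, e)
    (unseen elements default to 0.0, standing in for 'Other').
    """
    def score(model):
        ks = rank_weighted([element_scores[model].get(k, 0.0)
                            for k in query_elems["knowledge"]], alpha)
        cs = rank_weighted([element_scores[model].get(c, 0.0)
                            for c in query_elems["capability"]], alpha)
        return gamma * ks + delta * cs
    return max(element_scores, key=score)

# toy example: model_a is stronger on the query's knowledge and capability
elem_scores = {
    "model_a": {"calculus": 0.9, "reasoning": 0.6},
    "model_b": {"calculus": 0.5, "reasoning": 0.8},
}
q = {"knowledge": ["calculus"], "capability": ["reasoning"]}
chosen = route(q, elem_scores)  # "model_a"
```

With α = 0.5 (the paper's default), a query's second-ranked element contributes half the weight of the first, which matches the attenuation behavior described in §3.3.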
[Figure 5: Comparative distribution of router-selected models. Lighter colors signify a higher selection ratio for a given model. The left panel details model selection across evaluation benchmarks using the Optimal Mixed Routing strategy. The right panel illustrates the impact of an increasing cost penalty coefficient (β) on the model selection distribution.]

[Figure 6: Routing Performance (%) in Dynamic LLM Pools.]

5.2 Knowledge Distribution

As shown in Fig. 4,
the distribution of generated knowledge highlights the RouteMix benchmark's comprehensive span of knowledge domains, ranging from highly specific academic areas to practical applications. On datasets with broad knowledge requirements, such as MMLU-Pro, the generated knowledge exhibits a relatively balanced distribution. For benchmarks targeting one or two specific domains, like MATH-500, the model typically generates more fine-grained knowledge components related to the core domain. This facilitates a more nuanced quantification of the model's domain-specific knowledge.

5.3 Dynamic Routing

In this section, we investigate the scalability of our framework with respect to dynamic LLM pools. The corresponding results are presented in Fig. 6. The x-axis in this figure represents the progressive addition of specific new models to the LLM candidate pool. Initially, the pool consists solely of Llama-3.3-70B; subsequently, one new model is added to the candidate pool at each increment along the x-axis. Notably, our routing algorithm consistently maintains a top-2 performance ranking and surpasses the best single model across the five evaluated candidate pool configurations. This outcome demonstrates the robust scalability of our framework when new models are introduced, crucially without the need for any additional training.

6 Conclusions

This paper introduces InferenceDynamics, a scalable and adaptable LLM routing framework that quantifies model capabilities and domain-specific knowledge to match queries with the most suitable LLMs. Evaluated on the new comprehensive RouteMix benchmark, InferenceDynamics demonstrated superior performance, outperforming the best single LLM by 1.28 on average and achieving comparable results at approximately half the cost under budget constraints.
Key contributions include the RouteMix dataset for evaluating generalization and the InferenceDynamics algorithm, which generalizes to unseen queries and effectively routes them within dynamic model pools without retraining. Our work enables more efficient and tailored utilization of the diverse LLM ecosystem.

Limitations

Despite the promising results and the robust design of InferenceDynamics, several limitations warrant discussion and offer avenues for future research:

Niche Suitability for Highly Constrained Environments. InferenceDynamics is engineered for scalability and adaptability, demonstrating its strengths when dealing with a large, diverse, and evolving pool of LLMs, or when new capability and knowledge domains are frequently encountered. However, in scenarios characterized by a very limited and static set of LLMs and a narrowly defined, unchanging task scope, a dedicated learning-based routing approach (e.g., a fine-tuned classifier) might be more appropriate or yield marginally superior, hyper-specialized performance. Our framework prioritizes generalizability and efficient adaptation to dynamic conditions, which is a different niche than hyper-optimization for small, fixed-scope problems.

Benchmark-Driven Evaluation vs. Real-World Application Complexity. The current evaluation of InferenceDynamics relies on the comprehensive RouteMix dataset, which is composed of various established benchmarks. While these benchmarks cover a wide array of tasks and domains, they may not fully capture the intricacies and dynamic nature of real-world application systems. For instance, the utility and performance of InferenceDynamics in more complex, interactive systems like multi-agent environments, where task allocation might depend on evolving collaborative states, have not been explicitly tested. Exploring the
deployment and effectiveness of InferenceDynamics in such real-application scenarios remains an important direction for future work.

Addressing these limitations will be crucial for broadening the applicability and enhancing the robustness of InferenceDynamics and similar LLM routing frameworks.

Ethics Statement

Our study utilizes publicly available datasets and accesses Large Language Models (LLMs) through their respective APIs. The ethical considerations pertaining to this research are as follows:

Datasets: This research exclusively employs publicly available datasets, strictly for academic research purposes. We affirm that no personally identifiable information or private data was involved in our study.

LLM APIs: Our application of LLMs via APIs rigorously conforms to the policies set forth by the API providers. This includes adherence to fair use guidelines and respect for intellectual property rights.

Transparency: In line with standard academic research practices, we provide detailed descriptions of our methodology and the prompts utilized in our experiments. Furthermore, the source code for this research will be made publicly available upon the acceptance of this paper.

References

Pranjal Aggarwal, Aman Madaan, Ankit Anand, Srividya Pranavi Potharaju, Swaroop Mishra, Pei Zhou, Aditya Gupta, Dheeraj Rajagopal, Karthik Kappaganthu, Yiming Yang, Shyam Upadhyay, Manaal Faruqui, and Mausam. 2024. AutoMix: Automatically mixing language models. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024.

AI@Meta. 2024. Llama 3 model card.

Isabel Cachola, Kyle Lo, Arman Cohan, and Daniel S. Weld. 2020. TLDR: Extreme summarization of scientific documents. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 4766–4777.
Association for Computational Linguistics.

Lingjiao Chen, Matei Zaharia, and James Zou. 2023. FrugalGPT: How to use large language models while reducing cost and improving performance. CoRR, abs/2305.05176.

Shuhao Chen, Weisen Jiang, Baijiong Lin, James T. Kwok, and Yu Zhang. 2024. RouterDC: Query-based router by dual contrastive learning for assembling large language models. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024.

Zhijun Chen, Jingzheng Li, Pengpeng Chen, Zhuoran Li, Kai Sun, Yuankai Luo, Qianren Mao, Dingqi Yang, Hailong Sun, and Philip S. Yu. 2025. Harnessing multiple large language models: A survey on LLM ensemble. CoRR, abs/2502.18036.

Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Kenneth Huang, Bryan R. Routledge, and William Yang Wang. 2021. FinQA: A dataset of numerical reasoning over financial data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November 2021, pages 3697–3711. Association for Computational Linguistics.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 Reasoning Challenge. arXiv:1803.05457v1.

Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers
as soft reasoners over language. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3882–3890. ijcai.org.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168.

Xiangxiang Dai, Jin Li, Xutong Liu, Anqi Yu, and John C. S. Lui. 2024. Cost-effective online multi-LLM selection with versatile reward models. CoRR, abs/2405.16587.

Dujian Ding, Ankur Mallick, Chi Wang, Robert Sim, Subhabrata Mukherjee, Victor Rühle, Laks V. S. Lakshmanan, and Ahmed Hassan Awadallah. 2024. Hybrid LLM: Cost-efficient and quality-aware query routing. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net.

Evan Frick, Connor Chen, Joseph Tennyson, Tianle Li, Wei-Lin Chiang, Anastasios N. Angelopoulos, and Ion Stoica. 2025. Prompt-to-leaderboard. CoRR, abs/2502.14855.

Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The FLORES-101 evaluation benchmark for low-resource and multilingual machine translation. Trans. Assoc. Comput. Linguistics, 10:522–538.

Neel Guha, Julian Nyarko, Daniel E. Ho, Christopher Ré, Adam Chilton, K. Aditya, Alex Chohlas-Wood, Austin Peters, Brandon Waldon, Daniel N. Rockmore, Diego Zambrano, Dmitry Talisman, Enam Hoque, Faiz Surani, Frank Fagan, Galit Sarfaty, Gregory M. Dickinson, Haggai Porat, Jason Hegland, Jessica Wu, Joe Nudell, Joel Niklaus, John J. Nay, Jonathan H. Choi, Kevin Tobia, Margaret Hagan, Megan Ma, Michael A.
Livermore, Nikon Rasumov-Rahe, Nils Holzenberger, Noam Kolt, Peter Henderson, Sean Rehaag, Sharad Goel, Shang Gao, Spencer Williams, Sunny Gandhi, Tom Zur, Varun Iyer, and Zehua Li. 2023. LegalBench: A collaboratively built benchmark for measuring legal reasoning in large language models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. CoRR, abs/2009.03300.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.

Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Madry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, Alex Paino, Alex Renzin, Alex Tachard Passos, Alexander Kirillov, Alexi Christakis, Alexis Conneau, Ali Kamali, Allan Jabri, Allison Moyer, Allison Tam, Amadou Crookes, Amin Tootoonchian, Ananya Kumar, Andrea Vallone, Andrej Karpathy, Andrew Braunstein, Andrew Cann, Andrew Codispoti, Andrew Galu, Andrew Kondrich, Andrew Tulloch, Andrey Mishchenko, Angela Baek, Angela Jiang, Antoine Pelisse, Antonia Woodford, Anuj Gosalia, Arka Dhar, Ashley Pantuliano, Avi Nayak, Avital Oliver, Barret Zoph, Behrooz Ghorbani, Ben
https://arxiv.org/abs/2505.16303v1
Leimberger, Ben Rossen, Ben Sokolowsky, Ben Wang, Benjamin Zweig, Beth Hoover, Blake Samic, Bob McGrew, Bobby Spero, Bogo Giertler, Bowen Cheng, Brad Lightcap, Brandon Walkin, Brendan Quinn, Brian Guarraci, Brian Hsu, Bright Kellogg, Brydon Eastman, Camillo Lugaresi, Carroll L. Wainwright, Cary Bassin, Cary Hudson, Casey Chu, Chad Nelson, Chak Li, Chan Jun Shern, Channing Conger, Charlotte Barette, Chelsea Voss, Chen Ding, Cheng Lu, Chong Zhang, Chris Beaumont, Chris Hallacy, Chris Koch, Christian Gibson, Christina Kim, Christine Choi, Christine McLeavey, Christopher Hesse, Claudia Fischer, Clemens Winter, Coley Czarnecki, Colin Jarvis, Colin Wei, Constantin Koumouzelis, and Dane Sherburn. 2024. GPT-4o system card. CoRR, abs/2410.21276.

Amazon Artificial General Intelligence. 2024. The Amazon Nova family of models: Technical report and model card. Amazon Technical Reports.

Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. 2023. LLM-Blender: Ensembling large language models with pairwise ranking and generative fusion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 14165–14178. Association for Computational Linguistics.

Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. CoRR, abs/2009.13081.

Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W. Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 2567–2577. Association for Computational Linguistics.
Seungone Kim, Juyoung Suk, Ji Yong Cho, Shayne Longpre, Chaeeun Kim, Dongkeun Yoon, Guijin Son, Yejin Choi, Sheikh Shafayat, Jinheon Baek, Sue Hyun Park, Hyeonbin Hwang, Jinkyung Jo, Hyowon Cho, Haebin Shin, Seongyun Lee, Hanseok Oh, Noah Lee, Namgyu Ho, Se June Joo, Miyoung Ko, Yoonjoo Lee, Hyungjoo Chae, Jamin Shin, Joel Jang, Seonghyeon Ye, Bill Yuchen Lin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, and Minjoon Seo. 2024. The BiGGen Bench: A principled benchmark for fine-grained evaluation of language models with language models. CoRR, abs/2406.05761.

Steven Kolawole, Don Kurian Dennis, Ameet Talwalkar, and Virginia Smith. 2024. Revisiting cascaded ensembles for efficient inference. CoRR, abs/2407.02348.

Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 785–794. Association for Computational Linguistics.

Jiawei Li, Yizhe Yang, Yu Bai, Xiaofeng Zhou, Yinghao Li, Huashan Sun, Yuhang Liu, Xingpeng Si, Yuhao Ye, Yixiao Wu, Yiguan Lin, Bin Xu, Ren Bowen, Chong Feng, Yang Gao, and Heyan Huang. 2024a. Fundamental capabilities of large language models and their applications in domain scenarios: A survey. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 11116–11141. Association
for Computational Linguistics.

Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, and Deheng Ye. 2024b. More agents is all you need. CoRR, abs/2402.05120.

Moxin Li, Yong Zhao, Yang Deng, Wenxuan Zhang, Shuaiyi Li, Wenya Xie, See-Kiong Ng, and Tat-Seng Chua. 2024c. Knowledge boundary of large language models: A survey. CoRR, abs/2412.12472.

Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. 2024. Encouraging divergent thinking in large language models through multi-agent debate. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 17889–17904. Association for Computational Linguistics.

Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.

Andrea Matarazzo and Riccardo Torlone. 2025. A survey on large language models with some insights on their capabilities and limitations. CoRR, abs/2501.04040.

Shervin Minaee, Tomás Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. 2024. Large language models: A survey. CoRR, abs/2402.06196.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1797–1807. Association for Computational Linguistics.

Isaac Ong, Amjad Almahairi, Vincent Wu, Wei-Lin Chiang, Tianhao Wu, Joseph E. Gonzalez, M.
Waleed Kadous, and Ion Stoica. 2025. RouteLLM: Learning to route LLMs from preference data. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy P. Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, Ioannis Antonoglou, Rohan Anil, Sebastian Borgeaud, Andrew M. Dai, Katie Millican, Ethan Dyer, Mia Glaese, Thibault Sottiaux, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, James Molloy, Jilin Chen, Michael Isard, Paul Barham, Tom Hennigan, Ross McIlroy, Melvin Johnson, Johan Schalkwyk, Eli Collins, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Clemens Meyer, Gregory Thornton, Zhen Yang, Henryk Michalewski, Zaheer Abbas, Nathan Schucher, Ankesh Anand, Richard Ives, James Keeling, Karel Lenc, Salem Haykal, Siamak Shakeri, Pranav Shyam, Aakanksha Chowdhery, Roman Ring, Stephen Spencer, Eren Sezener, et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. CoRR, abs/2403.05530.

David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. 2023. GPQA: A graduate-level Google-proof Q&A benchmark. CoRR, abs/2311.12022.

Zayne Rea Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, and
Greg Durrett. 2025. To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2023. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 13003–13051. Association for Computational Linguistics.

Karthik Valmeekam, Matthew Marquez, Alberto Olmo Hernandez, Sarath Sreedharan, and Subbarao Kambhampati. 2023. PlanBench: An extensible benchmark for evaluating large language models on planning and reasoning about change. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Qineng Wang, Zihao Wang, Ying Su, Hanghang Tong, and Yangqiu Song. 2024a. Rethinking the bounds of LLM reasoning: Are multi-agent discussions the key? In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024, pages 6106–6131. Association for Computational Linguistics.

Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.

Yubo Wang, Xueguang Ma, Ge Zhang, Yuansheng Ni, Abhranil Chandra, Shiguang Guo, Weiming Ren, Aaran Arulraj, Xuan He, Ziyan Jiang, Tianle Li, Max Ku, Kai Wang, Alex Zhuang, Rongqi Fan, Xiang Yue, and Wenhu Chen.
2024b. MMLU-Pro: A more robust and challenging multi-task language understanding benchmark. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024.

Zhihu Wang, Shiwan Zhao, Yu Wang, Heyuan Huang, Jiaxin Shi, Sitao Xie, Zhixing Wang, Yubo Zhang, Hongyan Li, and Junchi Yan. 2024c. Re-TASK: Revisiting LLM tasks from capability, skill, and knowledge perspectives. CoRR, abs/2408.06904.

Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Benjamin Feuer, Siddhartha Jain, Ravid Shwartz-Ziv, Neel Jain, Khalid Saifullah, Sreemanti Dey, Shubh-Agrawal, Sandeep Singh Sandha, Siddartha V. Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, and Micah Goldblum. 2025. LiveBench: A challenging, contamination-limited LLM benchmark. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Yufan Wu, Yinghui He, Yilin Jia, Rada Mihalcea, Yulong Chen, and Naihao Deng. 2023. Hi-ToM: A benchmark for evaluating higher-order theory of mind reasoning in large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 10691–10706. Association for Computational Linguistics.

Frank F. Xu, Yufan Song, Boxuan Li, Yuxuan Tang, Kritanjali Jain, Mengxue Bao, Zora Zhiruo Wang, Xuhui Zhou, Zhitong Guo, Murong Cao, Mingyang Yang, Hao Yang Lu, Amaad Martin,
Zhe Su, Leander Maben, Raj Mehta, Wayne Chi, Lawrence Keunho Jang, Yiqing Xie, Shuyan Zhou, and Graham Neubig. 2024. TheAgentCompany: Benchmarking LLM agents on consequential real world tasks. CoRR, abs/2412.14161.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2024. Qwen2.5 technical report. CoRR, abs/2412.15115.

Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, Hao Yu, Hongning Wang, Jiadai Sun, Jiajie Zhang, Jiale Cheng, Jiayi Gui, Jie Tang, Jing Zhang, Juanzi Li, Lei Zhao, Lindong Wu, Lucen Zhong, Mingdao Liu, Minlie Huang, Peng Zhang, Qinkai Zheng, Rui Lu, Shuaiqi Duan, Shudan Zhang, Shulin Cao, Shuxun Yang, Weng Lam Tam, Wenyi Zhao, Xiao Liu, Xiao Xia, Xiaohan Zhang, Xiaotao Gu, Xin Lv, Xinghan Liu, Xinyi Liu, Xinyue Yang, Xixuan Song, Xunkai Zhang, Yifan An, Yifan Xu, Yilin Niu, Yuantao Yang, Yueyan Li, Yushi Bai, Yuxiao Dong, Zehan Qi, Zhaoyu Wang, Zhen Yang, Zhengxiao Du, Zhenyu Hou, and Zihan Wang. 2024. ChatGLM: A family of large language models from GLM-130B to GLM-4 All Tools. CoRR, abs/2406.12793.

Kai Zhang, Liqian Peng, Congchao Wang, Alec Go, and Xiaozhong Liu. 2024. LLM cascade with multi-objective optimal consideration. CoRR, abs/2410.08014.

Yi-Kai Zhang, Ting-Ji Huang, Yao-Xiang Ding, De-Chuan Zhan, and Han-Jia Ye. 2023. Model spider: Learning to rank pre-trained models efficiently.
In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023.

Yi-Kai Zhang, De-Chuan Zhan, and Han-Jia Ye. 2025. Capability instruction tuning: A new paradigm for dynamic LLM routing. Preprint, arXiv:2502.17282.

Huaixiu Steven Zheng, Swaroop Mishra, Hugh Zhang, Xinyun Chen, Minmin Chen, Azade Nova, Le Hou, Heng-Tze Cheng, Quoc V. Le, Ed H. Chi, and Denny Zhou. 2024. NATURAL PLAN: Benchmarking LLMs on natural language planning. CoRR, abs/2406.04520.

Richard Zhuang, Tianhao Wu, Zhaojin Wen, Andrew Li, Jiantao Jiao, and Kannan Ramchandran. 2025. EmbedLLM: Learning compact representations of large language models. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net.

Terry Yue Zhuo, Minh Chien Vu, Jenny Chim, Han Hu, Wenhao Yu, Ratnadira Widyasari, Imam Nur Bani Yusuf, Haolan Zhan, Junda He, Indraneil Paul, et al. 2024. BigCodeBench: Benchmarking code generation with diverse function calls and complex instructions. arXiv preprint arXiv:2406.15877.

A Benchmark Overview Table

Table 2: Overview of Benchmarks, Data Processing, Prompts, and Metrics

Benchmark Name | Data Processing Manner | Prompt Type | Metric Used
ARC (Clark et al., 2018) | Sample 500 instances according to the proportion of ARC-Easy and ARC-Challenge. | Zero-shot DA | Accuracy
BigBench-Hard (Suzgun et al., 2023) | Sample 40 instances from each category except web_of_lies, to avoid collision with LiveBench. Formulate Yes/No and QA questions into MCQA; keep free-response questions unchanged. | Zero-shot CoT | Exact Match (EM)
BigCodeBench (Zhuo et al., 2024) | We directly use the BigCodeBench-Hard subset, with 148 instances. | DA for code completion | Pass@1
FinQA (Chen et al., 2021) | Sample 500 instances from the dataset. | CoT from Sprague et al. (2025) | Exact Match (EM)
Flores200 (Goyal et al., 2022) | We incorporate the top 10 most commonly used languages except English, and sample 100 instances for each language. | Translation prompt | ChrF++ (Goyal et al., 2022)
GSM8K (Cobbe et al., 2021) | Sample 500 instances from the dataset. | CoT from Sprague et al. (2025) | Exact Match (EM)
HiToM (Wu et al., 2023) | Sample 500 instances under CoT settings. | CoT from official repo | Accuracy
LegalBench (Guha et al., 2023) | Sample 4 instances from each category except the short-answer tasks, resulting in 616 instances. | Few-shot DA | Accuracy
MATH (Hendrycks et al., 2021) | We use the MATH-500 subset. | CoT from Sprague et al. (2025) | Exact Match (EM)
MedQA (Jin et al., 2020) | Sample 500 instances from the dataset. | DA | Accuracy
MMLU (Hendrycks et al., 2020) | We sample instances according to the proportion of the different categories, ensuring each category has at least 10 instances, resulting in 1262 instances. | DA | Accuracy
MMMLU (Hendrycks et al., 2020) | We sample 100 instances for each language except English, resulting in 1400 instances. | DA | Accuracy
NaturalPlan (Zheng et al., 2024) | Sample 200 instances from each subset: scheduling, calendar meeting, and trip planning. | DA | Accuracy
PlanBench (Valmeekam et al., 2023) | Use the Plan Generation subset in BlocksWorld. | DA | Accuracy
PubMedQA (Jin et al., 2019) | Sample 500 instances from the original dataset. | DA | Accuracy
RACE (Lai et al., 2017) | Sample 500 instances from the original dataset. | DA | Accuracy
RuleTaker (Clark et al., 2020) | Sample 500 instances from the original dataset. | DA | Accuracy
ScienceQA (Lu et al., 2022) | Sample 500 instances that have no associated image. | DA | Accuracy
SciTLDR (Cachola et al., 2020) | Directly use the test set. | Summarization prompt | ROUGE-L
XSum (Narayan et al., 2018) | Sample 500 instances from the dataset. | Summarization prompt | ROUGE-L

Specifically, when quantifying the capability and knowledge of LLMs for translation and summarization tasks, we establish a performance threshold: an output is considered correct if its evaluation score on the relevant metric exceeds this threshold.

B Knowledge Domain Distribution

The dataset's knowledge domain distribution is determined by a weighted-rank approach. For each domain D \in \mathcal{D} (where \mathcal{D} is the set of all unique domains), its frequency at each rank r (denoted F_{D,r}, for r = 1, \ldots, N) is multiplied by a corresponding rank weight W_r (typically W_r = 1/r). These products are summed to yield a weighted score S_D:

S_D = \sum_{r=1}^{N} (F_{D,r} \times W_r)

The final distribution percentage P_D for each domain is then its S_D normalized by the sum of all domain weighted scores (S_{\text{total}} = \sum_{D' \in \mathcal{D}} S_{D'}), expressed as a percentage:

P_D = \frac{S_D}{\sum_{D' \in \mathcal{D}} S_{D'}} \times 100\%

This method ensures higher-ranked domain occurrences contribute more significantly, with all P_D summing to 100%.

C BigGenBench Evaluation
Following Sprague et al. (2025), we employ GPT-4o-mini as an LLM-as-a-Judge to evaluate BigGenBench; instances with a score larger than 4 are considered correct. The specific prompt is shown below:

Prompt for evaluating BigGenBench

Task Description: An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)"
4. Please do not generate any other opening, closing, and explanations.
The instruction to evaluation: example question
Response to evaluate: example solution
Reference Answer (Score 5): reference score
Score Rubrics:
Criteria: criteria
Description of a Score 1 response: score1 description
Description of a Score 2 response: score2 description
Description of a Score 3 response: score3 description
Description of a Score 4 response: score4 description
Description of a Score 5 response: score5 description
Feedback:
Remember, you must strictly evaluate the response based on the given score rubric, and finish your output in the format of "(...) [RESULT] <score>", where <score> is a number between 1 and 5.

D Prompt of Knowledge and Capability Generation

The specific prompt for knowledge and capability generation is shown below:

Prompt for knowledge and capability generation

The capabilities of Language Models include the following:
- Reasoning: Ability to logically analyze information, draw conclusions, and make inferences.
- Comprehension (Applicable to queries involving long passage comprehension): Understanding and interpreting the meaning, context, and nuances of extended or complex long-context text, such as lengthy documents, multi-paragraph inputs, or intricate narratives.
- Instruction Following (Applicable to queries involving several constraints): Accurately adhering to explicit user-provided guidelines, constraints, or formatting requirements specified within the query.
- Agentic: Capacity related to agent-like behavior, such as actively formulating plans, strategically deciding steps, and autonomously identifying solutions or actions to achieve specific goals or complex tasks.
- Knowledge Retrieval: Accessing and presenting accurate factual information from pre-existing knowledge.
- Coding: Generating, interpreting, or debugging computer programs and scripts.
- In-context Learning: Learning from examples or context provided within the current interaction without additional training.
- Multilingual (Must rank it in top3 when queries involving languages other than English): Understanding, generating, or translating content accurately across multiple languages.

Given the Query below:
1. Identify and list the *LLM Capabilities* from the definitions above that are directly and significantly required to effectively address the query.
2. Identify and list the general *Knowledge Domains* (e.g., categories, subject areas) most pertinent to solving the problem presented in the query.
List the selected Capabilities first, ranked from most important to least important. Then, list the identified Knowledge Domains, also ranked from most important to least important. *Do not provide any justification or
explanation* for your selections or rankings.

Example:
Query: "Solve the following financial problem efficiently and clearly. Output the final answer as: \boxed{answer}. Where [answer] is just the final number or expression that solves the problem. Keep the answer to five decimal places if it is a number, and do not use percentages; keep the decimal format. Problem: what is the net change in net revenue during 2016 for Entergy Mississippi, Inc.? the 2015 net revenue of amount (in millions) is 696.3; the 2016 net revenue of amount (in millions) is 705.4; Entergy Mississippi, Inc."
Capabilities: Reasoning, Knowledge retrieval
Knowledge: 1. Financial 2. Math 3. Data Analysis
...
Query: input prompt
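Downstream, the router has to turn the auxiliary LLM's free-text reply into ranked lists before scoring candidate models. A minimal parsing sketch for the response format shown in the example above; the function name and the sample reply are illustrative, not artifacts from the paper:

```python
import re

def parse_routing_profile(text: str):
    """Parse a reply to the knowledge/capability prompt into two ranked lists.

    Expects the format from the paper's example, e.g.:
        Capabilities: Reasoning, Knowledge Retrieval
        Knowledge: 1. Financial 2. Math 3. Data Analysis
    Returns (capabilities, knowledge_domains), both in rank order.
    """
    # Capabilities are comma-separated on a single line.
    cap_match = re.search(r"Capabilities:\s*(.+)", text)
    capabilities = [c.strip() for c in cap_match.group(1).split(",")] if cap_match else []

    # Knowledge domains are introduced by "1.", "2.", ... rank markers.
    know_match = re.search(r"Knowledge:\s*(.+)", text, re.DOTALL)
    domains = []
    if know_match:
        domains = [d.strip() for d in re.split(r"\d+\.\s*", know_match.group(1)) if d.strip()]
    return capabilities, domains

reply = "Capabilities: Reasoning, Knowledge Retrieval\nKnowledge: 1. Financial 2. Math 3. Data Analysis"
caps, doms = parse_routing_profile(reply)
# caps == ["Reasoning", "Knowledge Retrieval"]; doms == ["Financial", "Math", "Data Analysis"]
```

Because the prompt forbids justifications, this two-line format is all the router needs to recover; anything stricter (e.g., JSON output) would require changing the prompt itself.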
PMPO: Probabilistic Metric Prompt Optimization for Small and Large Language Models

Chenzhuo Zhao1* Ziqian Liu2* Xingda Wang1 Junting Lu1 Chaoyi Ruan3†
1Peking University 2Unaffiliated 3National University of Singapore
{cyzcz, nev_settle, aidan.lew.37}@stu.pku.edu.cn liuziqian25@gmail.com ruancy@comp.nus.edu.sg
*Contributed equally. †Corresponding author.
arXiv:2505.16307v1 [cs.CL] 22 May 2025

Abstract

Prompt optimization offers a practical and broadly applicable alternative to fine-tuning for improving large language model (LLM) performance. However, existing methods often rely on costly output generation, self-critiquing abilities, or human-annotated preferences, which limit their scalability, especially for smaller or non-instruction-tuned models. We introduce PMPO (Probabilistic Metric Prompt Optimization), a unified framework that refines prompts using token-level cross-entropy loss as a direct, lightweight evaluation signal. PMPO identifies low-quality prompt segments by masking and measuring their impact on loss, then rewrites and selects improved variants by minimizing loss over positive and negative examples. Unlike prior methods, it requires no output sampling or human evaluation during optimization, relying only on forward passes and log-likelihoods. PMPO supports both supervised and preference-based tasks through a closely aligned loss-based evaluation strategy. Experiments show that PMPO consistently outperforms prior methods across model sizes and tasks: it achieves the highest average accuracy on BBH, performs strongly on GSM8K and AQUA-RAT, and improves AlpacaEval 2.0 win rates by over 19 points. These results highlight PMPO's effectiveness, efficiency, and broad applicability.

1 Introduction

Prompt design has emerged as a critical factor in steering large language models (LLMs) toward reliable performance across diverse tasks. As fine-tuning becomes increasingly costly or restricted, automatic prompt optimization has become a practical alternative to improve model behavior without modifying parameters (Agarwal et al., 2024; Wang et al., 2023). Recent methods tackle this by iteratively refining prompts through model feedback (Xiang et al., 2025), human preferences (Cheng et al., 2023), or reward-based search (Zheng et al., 2024). While effective, these approaches often face three main challenges: (1) high cost due to output generation and evaluation loops, (2) reliance on large models capable of introspection or multi-step reasoning, and (3) lack of generality across task types and model sizes.

Figure 1: Overview of PMPO. Top: Iterative prompt refinement via loss-based evaluation. Bottom: Comparative strengths over prior methods in terms of evaluation strategy, efficiency, model support, and task generality.

Some methods, such as PromptWizard (Agarwal et al., 2024) and PromptAgent (Wang et al., 2023), rely on model-internal critique and iterative analysis, achieving strong results on complex tasks but requiring significant computation and model cooperation. Others, like BPO (Cheng et al., 2023) and PAS (Zheng et al., 2024), improve efficiency via offline learning of prompt-rewriting models, but require substantial labeled data or preference annotations, and their applicability to new tasks or small models is limited. SPO (Xiang et al., 2025) avoids ground truth by using the model's own judgment to compare outputs, but this self-evaluation is less reliable for smaller LMs.

A central limitation across these approaches is how prompt quality is evaluated. Most techniques treat prompt evaluation as a generative task, where outputs are generated and then scored externally, making the
process computationally expensive and less reliable. External scoring often lacks consistency or sufficient granularity, especially when relying on models to assess complex outputs. Moreover, methods tend to specialize: some focus on supervised tasks with explicit labels, while others target alignment tasks defined by preferences or style. Few offer a unified, efficient mechanism applicable to both.

In this work, we introduce PMPO (Probabilistic Metric Prompt Optimization), a novel prompt optimization framework that directly uses the model's cross-entropy loss as the evaluation signal. PMPO frames prompt optimization as a classic loss minimization problem: given a prompt and input, it computes the likelihood of the desired or preferred output without requiring the model to generate or rank outputs explicitly. This evaluation is efficient, fully automated, and compatible with models of varying scales, making PMPO applicable to both large and small language models in diverse deployment settings. Moreover, PMPO supports both accuracy-based tasks (e.g., classification, QA) and preference-based tasks (e.g., summarization, instruction following) under a unified loss-based evaluation strategy: it minimizes cross-entropy loss for tasks with labeled outputs and maximizes the likelihood of preferred responses over less preferred ones when comparative preference information is available.

Figure 1 presents an overview of the PMPO refinement loop and summarizes its advantages over existing approaches. Compared to prior methods, PMPO offers the following key advantages:

• Loss-based evaluation. Unlike approaches that depend on output generation or human judgments, PMPO evaluates prompts using the model's intrinsic loss signal, enabling fast and scalable optimization.
• General applicability. PMPO unifies preference-based and supervised settings by treating prompt optimization as maximizing either output likelihood or reward, depending on the task signal available.
• Support for small models. PMPO requires only forward passes and likelihoods, not introspection or reasoning, making it usable for smaller LMs that cannot critique their outputs or support complex prompting.
• High sample efficiency. Since loss evaluation is batchable and cheap, PMPO can explore more candidate prompts under fixed budget constraints, enabling extensive search with minimal overhead.

2 Related Works

2.1 Prompt Engineering

Early prompt engineering methods rely on manually crafted instructions and few-shot exemplars to guide language model behavior (Brown et al., 2020; Deng et al., 2023; Zheng et al., 2023c; Wang et al., 2024b). Chain-of-Thought (CoT) prompting (Wei et al., 2022) enhances this paradigm by encouraging step-by-step reasoning, leading to substantial gains on complex tasks. Other structured prompting methods, such as Step-Back prompting (Zheng et al., 2023a), introduce abstraction or meta-reasoning steps to improve logical inference. Similarly, Rephrase-and-Respond (Deng et al., 2024) enhances model comprehension by prompting it to rephrase questions before answering. These techniques improve performance by enriching intermediate reasoning or input interpretation, but they are based on fixed templates and do not perform automated prompt generation or optimization. In contrast, another line of work explores automatic prompt construction. Methods like APE (Shen et al., 2023) treat prompt design as a search problem, using language models to generate and rank instruction variants based on task performance. While these approaches reduce manual
effort, they often rely on large models for generation and evaluation, limiting scalability in low-resource or small-model settings. Overall, this body of work underscores the importance of prompt formulation and motivates the development of more systematic and efficient optimization frameworks.

2.2 Prompt Optimization

Recent work on automated prompt optimization can be broadly categorized into generation-based and introspection-based methods (Yan et al., 2024; Wang et al., 2024a; Guo et al.; Zhou et al., 2022). OPRO (Yang et al., 2023) treats the optimization process as a black-box loop, where an LLM proposes new prompts based on previous attempts and their performance (e.g., accuracy). EvoPrompt (Tong et al., 2025) and PromptBreeder (Fernando et al., 2023) further adopt evolutionary strategies—maintaining populations of prompts that evolve through mutation and selection, with PromptBreeder uniquely co-evolving its mutation strategies. While these approaches often achieve strong performance, they rely on repeated output generation and scoring, which makes them resource-intensive and task-specific. Moreover, their effectiveness depends heavily on large LLMs serving as optimizers, limiting applicability to smaller models or open-ended preference-based tasks.

Figure 2: Overview of the PMPO framework. Step 1: Mask Analysis estimates the impact of each prompt part by measuring loss changes when masked, identifying positive, negative, or neutral effects. Step 2: Evaluation and Generation selects hard cases and generates refined candidates, scored with the same loss-based metric. The best-performing prompt is retained. Step 3: Iterative Refinement repeats this process until stop.

Another direction leverages the model's own introspective signals to refine prompts. PromptWizard (Agarwal et al., 2024) uses model-internal critiques to iteratively improve prompts and in-context examples.
TextGuard (Pei et al., 2023) adopts a unit-test framework, generating adversarial test cases to probe and correct prompt weaknesses. Though effective in structured tasks with clear correctness criteria, these methods often assume explicit feedback or reference outputs, and require natural-language generation for both diagnosis and revision. Self-supervised approaches like SPO (Xiang et al., 2025) and TextGuard (Pei et al., 2023) reduce reliance on labels by comparing outputs or generating improvement suggestions, yet still depend on costly output sampling or heuristic reward models.

3 PMPO Framework

We introduce PMPO, a unified framework designed to iteratively refine prompts using fine-grained cross-entropy evaluation and adaptive rewriting strategies (see Figure 2). PMPO systematically analyzes prompts by applying a model-adaptive masking technique to quantify token-level impacts (Step 1, detailed in Section 3.2), and selectively rewrites the lowest-performing segments using language-model-generated variants (Step 2, described in Sections 3.3 and 3.4). Through an iterative cycle of prompt generation, evaluation, and refinement (Step 3, corresponding to the overall loop in Section 3.1), PMPO automatically enhances prompt effectiveness without human intervention, adapting flexibly across different model architectures and tasks.

Problem Definition. Given a task T, our framework initializes with a base instruction P and leverages a problem description paired with a dataset D = \{(x_i, y_i, r_i)\}_{i=1}^{N}, where each example consists of an input x_i, an output y_i, and an associated label or preference score r_i indicating the quality or desirability
https://arxiv.org/abs/2505.16307v1
of y_i for x_i. The language model M generates outputs with probabilities p_M(y | x, P) conditioned on input x and prompt P. Our objective is to derive an optimal prompt P* that maximizes the expected weighted log-probability, that is, E_{(x,y,r)~D}[ r · log p_M(y | x, P) ], where r can represent either a binary label or a scalar preference score. This unified formulation enables the framework to enhance the model's target-aligned generation capability by amplifying preferences over desired outputs.

3.1 Iterative Framework for PMPO

We propose an iterative algorithm that progressively refines prompt quality based on evaluation metrics and targeted modifications. The full procedure is outlined in Algorithm 1.

Algorithm 1: An Overview of PMPO
Require: Dataset D, initial prompt P_0, language model M, max iterations T, top-k samples k, variants per sample n
Ensure: Optimized prompt P*
 1: P* <- P_0
 2: for iteration <- 1 to T do
 3:     Compute metric L(P*, M) on D
 4:     Select top-k samples {(x_j, y_j)}
 5:     Initialize variant set V <- {P*}
 6:     for each (x_j, y_j) do
 7:         Analyze failure and token importance using M
 8:         Generate n variants {P'_{j,1}, ..., P'_{j,n}}
 9:         V <- V ∪ {P'_{j,1}, ..., P'_{j,n}}
10:     end for
11:     Evaluate all prompts in V and select the best, P_new
12:     if L(P*, M) > L(P_new, M) then
13:         P* <- P_new
14:     end if
15: end for
16: return P*

In each iteration, the PMPO framework begins with a detailed, mask-guided analysis using the model M to identify specific segments within the current prompt that affect performance. It then computes the cross-entropy or preference-based loss across the entire dataset and selects the most challenging examples, defined as those with the highest losses, as priority targets for improvement. Based on the insights from the mask-guided analysis, the framework generates multiple refined prompt variants, each designed to address the detected weaknesses.
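The control flow of Algorithm 1 can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: `batch_loss`, `per_example_loss`, and `generate_variants` are hypothetical callables standing in for the model-dependent steps (loss evaluation and LLM-based rewriting).

```python
# Minimal sketch of the PMPO outer loop (Algorithm 1). Lower batch_loss
# is better. The model-dependent pieces are injected as callables so the
# loop itself stays model-agnostic.

def pmpo_optimize(dataset, p0, batch_loss, per_example_loss,
                  generate_variants, T=5, k=3, n=4):
    """Iteratively refine prompt p0 over at most T rounds."""
    best = p0
    for _ in range(T):
        # Select the top-k hardest examples (highest per-example loss).
        hard = sorted(dataset,
                      key=lambda ex: per_example_loss(best, ex),
                      reverse=True)[:k]
        # Pool the current prompt with n rewrites per hard example.
        variants = [best]
        for ex in hard:
            variants.extend(generate_variants(best, ex, n))
        # Keep the candidate with the lowest batch-level loss.
        candidate = min(variants, key=batch_loss)
        if batch_loss(candidate) < batch_loss(best):
            best = candidate
    return best
```

With stub losses and a stub rewriter, the loop keeps a variant only when it strictly lowers the batch loss, mirroring the comparison on line 12 of Algorithm 1.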
These candidates are re-evaluated using batch-level loss performance, and the variant demonstrating the greatest improvement is retained for the next iteration. This refinement process continues until the maximum number of iterations is reached.

3.2 Mask-Guided Importance Evaluation

Given a prompt P, we decompose it into a set of semantic units {s_1, s_2, ..., s_m}, where each unit represents a relatively independent functional component and may span one or multiple tokens. To identify influential components, we leverage the language model itself to segment P into masked structures, i.e., the model autonomously inserts <MASK> boundaries without human supervision. For each semantic unit s_j, we create a perturbed prompt variant by masking the unit as follows:

P_{-j} = \{ s_1, s_2, \ldots, s_{j-1}, \texttt{<MASK>}, s_{j+1}, \ldots, s_m \}    (1)

We then compute the change in batch-level cross-entropy loss when s_j is masked:

\Delta L_j = L_{\text{batch}}(P_{-j}, M) - L_{\text{batch}}(P, M)    (2)

Here, L_batch(·, M) denotes the average cross-entropy loss over the dataset under model M. A positive ΔL_j indicates that s_j contributes positively to task performance (its removal increases loss), whereas a negative value suggests a detrimental or redundant effect. Values of ΔL_j close to zero imply negligible impact on model behavior.

3.3 Prompt Evaluation via Loss-based Metrics

To quantitatively assess the effectiveness of prompts, we utilize loss-based metrics derived from the model's internal probability estimates. Given a model M,
a prompt P, input x, and expected output y, we define the token-level cross-entropy loss as:

L_{\text{CE}}(x, y, P, M) = -\sum_{i=1}^{|y|} \log P_M(y_i \mid y_{<i}, x, P)    (3)

where y_i is the i-th token of the output sequence and y_{<i} denotes all preceding tokens. The batch-level loss over dataset D is computed as:

L_{\text{batch}}(P, M) = \frac{1}{n} \sum_{i=1}^{n} L_{\text{CE}}(x_i, y_i, P, M)    (4)

Unlike binary accuracy metrics, L_batch captures token-wise generation probabilities, provides a continuous evaluation space, and reflects variations in model confidence induced by different prompts.

In scenarios where candidate outputs are associated with preference signals, we additionally incorporate a pairwise preference loss inspired by preference optimization techniques. Given a preferred output y+ and a less preferred alternative y-, we define:

L_{\text{pref}}(x, y^{+}, y^{-}, P, M) = -\log \sigma\!\big( \beta \cdot ( s_M(x, y^{+}, P) - s_M(x, y^{-}, P) ) \big)    (5)

where s_M(x, y, P) = log P_M(y | x, P) is the model-assigned log-probability, σ is the sigmoid function, and β is a scaling factor.

3.4 Prompt Variant Generation

To generate new prompt candidates, PMPO employs a model-in-the-loop rewriting mechanism that leverages the language model itself to revise instructions. Rather than using predefined rules or templates, a rewriting prompt is constructed for each selected hard example (x, y), identified by high preference-based cross-entropy loss under the current prompt P. This rewriting prompt includes five key elements: the task description T to maintain generality, the current prompt P as the base for revision, the hard example (x, y) to expose weaknesses, editing instructions focused on improving clarity, specificity, and structural quality, and a token-level mask analysis that highlights segments in P contributing most to the loss. The model is guided to first diagnose flaws in P and then apply a multi-step refinement strategy.
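Looking back at Sections 3.2 and 3.3, the quantities in Eqs. (1)-(5) reduce to a few lines of code once per-token log-probabilities are available. In the sketch below, `token_logprobs(x, y, prompt)` is a hypothetical callable standing in for the model's output log-probabilities (log P_M(y_i | y_{<i}, x, P)); everything else follows the equations directly.

```python
import math

# Sketch of PMPO's loss-based metrics (Eqs. 1-5), assuming a hypothetical
# token_logprobs(x, y, prompt) that returns one log-probability per token
# of y. No specific model library is implied.

def ce_loss(x, y, prompt, token_logprobs):
    """Token-level cross-entropy (Eq. 3): negated sum of log-probs."""
    return -sum(token_logprobs(x, y, prompt))

def batch_loss(dataset, prompt, token_logprobs):
    """Average cross-entropy over the dataset (Eq. 4)."""
    return sum(ce_loss(x, y, prompt, token_logprobs)
               for x, y in dataset) / len(dataset)

def mask_importance(units, dataset, token_logprobs):
    """Delta L_j per semantic unit (Eqs. 1-2): mask unit j, re-score."""
    base = batch_loss(dataset, " ".join(units), token_logprobs)
    deltas = []
    for j in range(len(units)):
        masked = units[:j] + ["<MASK>"] + units[j + 1:]   # P_{-j}
        deltas.append(batch_loss(dataset, " ".join(masked),
                                 token_logprobs) - base)
    return deltas

def pref_loss(x, y_pos, y_neg, prompt, token_logprobs, beta=1.0):
    """Pairwise preference loss (Eq. 5): -log sigmoid(beta * margin)."""
    margin = beta * (sum(token_logprobs(x, y_pos, prompt))
                     - sum(token_logprobs(x, y_neg, prompt)))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

A positive entry returned by `mask_importance` marks a unit whose removal raises the loss (helpful), while a negative entry marks a redundant or harmful unit, matching the interpretation of ΔL_j above.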
These steps include rephrasing rigid wording, refining task constraints, removing redundancy, simplifying overly complex instructions, improving logical flow, expanding underspecified parts, merging overlapping rules, and enhancing overall language quality. These edits are applied in an adaptive and integrated manner based on the specific issues observed, preventing overfitting to individual cases and encouraging broadly effective improvements. Each hard example (x_j, y_j) yields a set of revised prompts P'_{j,1}, ..., P'_{j,n} generated via temperature-controlled top-p sampling. These variants are pooled and evaluated using batch-level cross-entropy, and the best-performing candidate, measured by loss or accuracy, is selected as the updated prompt for the next iteration.

3.5 Efficiency of PMPO

A core advantage of PMPO lies in its computational and architectural efficiency during both evaluation and candidate prompt generation. Unlike prior methods that rely on full output generation followed by correctness judgments, whether via external classifiers, preference models, or second-pass LLMs, PMPO directly leverages the model's intrinsic token-level likelihoods. It computes the cross-entropy loss over the target outputs via a single forward pass, completely bypassing the need for output decoding.

This enables rapid scoring of multiple prompt candidates at scale, even under limited computational resources. Additionally, PMPO's prompt rewriting process is tightly streamlined: for each high-loss example, the model performs one-shot analysis and generates revised prompt variants without iterative reasoning, multi-step prompting, or introspective feedback. As summarized in Table 1, PMPO is unique in combining
token-level evaluation with multi-candidate scoring, offering a favorable efficiency profile over generation-heavy methods such as OPRO or PromptWizard.

Method         Evaluation Level  Candidate Count
SPO            Sequence-Level    Single
PromptWizard   Sequence-Level    Multiple
EvoPrompt      Sequence-Level    Multiple
OPRO           Sequence-Level    Single
PMPO           Token-Level       Multiple

Table 1: Comparison of efficiency.

4 Experiment

4.1 Experiment Settings

Dataset. We evaluate PMPO on a diverse set of benchmarks covering mathematical reasoning, logical inference, and open-ended instruction following. For math problem solving, we use GSM8K and AQUA-RAT, which require models to perform multi-step numerical reasoning. To assess logical reasoning capabilities, we adopt the BBH benchmark, designed to challenge models with complex

Table 2: Average test accuracy in the 1-shot setting across multiple tasks for different prompting methods, evaluated on Qwen2.5-14B-Instruct. Bold values indicate the best-performing method for each task.
Task Name                            AO     CoT    RaR    StepBack  OPRO   EvoPrompt  PromptWizard  Ours
boolean_expressions                  0.756  0.920  0.952  0.936     0.972  0.952      0.976         0.984
causal_judgement                     0.674  0.631  0.695  0.658     0.636  0.647      0.599         0.695
date_understanding                   0.684  0.740  0.708  0.752     0.800  0.772      0.636         0.784
disambiguation_qa                    0.656  0.776  0.716  0.640     0.848  0.760      0.892         0.736
dyck_languages                       0.096  0.240  0.236  0.228     0.392  0.308      0.220         0.256
formal_fallacies                     0.704  0.800  0.784  0.808     0.856  0.792      0.816         0.816
geometric_shapes                     0.440  0.616  0.576  0.684     0.580  0.620      0.508         0.676
hyperbaton                           0.632  0.704  0.768  0.848     0.740  0.756      0.752         0.896
logical_deduction                    0.692  0.856  0.844  0.847     0.767  0.864      0.845         0.864
movie_recommendation                 0.564  0.636  0.624  0.640     0.636  0.596      0.676         0.684
multistep_arithmetic_two             0.052  0.968  0.972  0.956     0.988  0.948      0.976         0.988
navigate                             0.660  0.908  0.856  0.924     0.896  0.944      0.848         0.960
object_counting                      0.508  0.812  0.772  0.756     0.688  0.876      0.832         0.884
penguins_in_a_table                  0.753  0.945  0.932  0.952     0.932  0.959      0.726         0.952
reasoning_about_colored_objects      0.708  0.892  0.768  0.880     0.876  0.872      0.912         0.888
ruin_names                           0.632  0.660  0.556  0.716     0.788  0.680      0.692         0.840
salient_translation_error_detection  0.600  0.572  0.604  0.644     0.624  0.604      0.504         0.600
snarks                               0.831  0.809  0.837  0.848     0.843  0.882      0.787         0.826
sports_understanding                 0.752  0.660  0.680  0.804     0.828  0.812      0.544         0.836
temporal_sequences                   0.832  0.908  0.864  0.900     0.964  0.916      0.900         0.944
tracking_shuffled_objects            0.599  0.900  0.852  0.847     0.860  0.871      0.839         0.880
web_of_lies                          0.536  0.900  0.972  0.920     0.820  0.900      0.716         0.976
word_sorting                         0.276  0.444  0.624  0.600     0.388  0.608      0.544         0.580
Best performing tasks                0      1      2      2         5      3          2             11
Average Accuracy                     0.593  0.752  0.747  0.773     0.770  0.780      0.728         0.806

Method         GSM8K  AQUA-RAT
AO             0.871  0.760
APE            0.939  0.827
CoT            0.907  0.843
RaR            0.932  0.843
Step-back      0.925  0.811
OPRO           0.936  0.819
PromptBreeder  0.917  0.831
PromptWizard   0.882  0.799
TextGuard      0.939  0.807
Ours           0.940  0.846

Table 3: Accuracy on math reasoning datasets GSM8K and AQUA-RAT using different prompt
optimization methods (0-shot), using Qwen2.5-14B as the optimization model.

inference tasks under minimal guidance. For evaluating general instruction-following and open-ended task performance, we use AlpacaEval 2.0, which includes a broad range of user instructions and uses GPT-4 Turbo as an automatic evaluator to compare model responses against reference answers.

Baseline. We compare PMPO against two categories of prompting methods across benchmark datasets. The first category includes conventional manually designed prompting strategies: CoT, RaR, and StepBack, which enhance model reasoning or answer formulation through structural or logical heuristics. The second category comprises recent automated prompt
optimization approaches: OPRO, EvoPrompt, PromptWizard, TextGuard, and PromptBreeder, which leverage language models to search, mutate, or iteratively improve prompts without human intervention.

Implementation Details. We conduct experiments using a mix of open-source and proprietary language models, including Qwen2.5 (0.5B, 14B, 32B), LLaMA3.1 (8B), and DeepSeek-R1-Distill-Qwen (1.5B). For each dataset, we randomly select 20% of the examples for training (capped at 50) and evaluate on the remaining set. The preference scaling factor β is fixed at 1 across all experiments. In each optimization round, we choose the top-k = 3 most challenging samples and generate 4 prompt variants per sample. Optimization runs for up to 20 iterations. All experiments are conducted on a single NVIDIA H800 GPU, with each full optimization taking approximately 20 minutes.

4.2 Experimental Results and Analysis

Reasoning Tasks. As shown in Table 2, our method consistently outperforms existing prompting approaches across a wide range of reasoning and understanding tasks, evaluated on Qwen2.5-14B-Instruct under a 1-shot setting for fair comparison. PMPO achieves the highest average accuracy of 80.6%, surpassing strong baselines such as EvoPrompt (78.0%), OPRO (77.1%), and PromptWizard (72.8%), and ranks first on 11 out of 23 tasks, significantly more than any other method. Unlike generation-heavy strategies like OPRO and EvoPrompt, which use black-box search or population-based mutation, PMPO's single-pass, loss-based evaluation enables more efficient optimization and stronger performance, especially on tasks that require multi-step or spatial reasoning. While PromptWizard is competitive on some logic tasks, its self-critique-based rewrites often create verbose, rigid prompts that limit generalization.
In contrast, PMPO produces lightweight and structurally adaptive prompts that more effectively align with model behavior, yielding robust performance across both symbolic and naturalistic reasoning.

Math Tasks. As shown in Table 3, our method achieves the best performance on two widely used math reasoning benchmarks, GSM8K and AQUA-RAT, with accuracies of 94.0% and 84.6% respectively, outperforming all baselines, including APE, CoT (90.7% on GSM8K, 84.3% on AQUA-RAT), and PromptBreeder. Notably, both datasets provide complete solution steps rather than just final answers, which we incorporate as positive targets during training. This enables PMPO to iteratively refine prompts using token-level confidence (cross-entropy loss), effectively guiding the model to generate detailed, accurate multi-step reasoning. In contrast to methods like Chain-of-Thought that rely on manually designed heuristics, PMPO's approach not only enhances final answer accuracy but also significantly improves the quality and fidelity of intermediate steps, encouraging the model to emulate human-like problem-solving behaviors such as decomposition, variable definition, and numeric justification, thereby demonstrating stronger alignment with complex mathematical reasoning tasks while maintaining generalizability.

Open-ended Dataset. To evaluate PMPO on open-ended instruction-following tasks, we conduct experiments on the AlpacaEval 2.0 benchmark. Using GPT-4 Turbo as the evaluator, we compare model-generated responses with reference outputs across five instruction categories. As shown in Figure 3, PMPO-optimized prompts raise the average win rate of Qwen2.5-14B from 31.81% to 51.52%, a substantial improvement. Notably, our method boosts performance across all instruction subsets, including difficult ones like helpful (from 17.83% to 47.29%) and oasst (from 34.57% to 55.61%).
Figure 3: Win rate comparisons on AlpacaEval 2.0. Left: original Qwen2.5-14B win rates across instruction sources. Right: PMPO-optimized results using the same model. Bottom: average win rates across models.

These results show that PMPO enables mid-sized models like Qwen2.5-14B to produce competitive outputs, surpassing larger models such as LLaMA3.1-70B (39.1%) and GPT-4 Turbo (46.1%), and nearly matching GPT-4 Omni (51.3%). This highlights PMPO's effectiveness in enhancing instruction alignment without model fine-tuning or explicit preference labels.

4.3 Open-Source Cross-Model Generalization Analysis

To assess adaptability across model scales and architectures, we conduct cross-model evaluation by applying prompts optimized on one model to others (Table 4, BBH:Navigate). Models of various sizes, including small, medium, and large, are evaluated in a zero-shot manner by using optimized prompts for inference.

Results reveal a notable instruction-following capacity gap: prompts optimized on large models, though effective on similarly large or medium models, often degrade when transferred to smaller ones, which may struggle with complex or verbose instructions. Furthermore, prompts consistently perform best on the originating model, indicating that prompt effectiveness is closely linked to the internal reasoning and instruction-following mechanisms of each model.

Target \ Prompt Source  Qwen2.5-0.5B  DeepSeek-1.5B  LLaMA-3.1-8B  Qwen2.5-14B  Qwen2.5-32B
Qwen2.5-0.5B            0.580         0.568          0.464         0.500        0.580
DeepSeek-1.5B           0.612         0.772          0.700         0.640        0.584
LLaMA-3.1-8B            0.708         0.792          0.800         0.852        0.860
Qwen2.5-14B             0.912         0.948          0.896         0.960        0.956
Qwen2.5-32B             0.972         0.948          0.952         0.972        0.980

Table 4: Cross-model accuracy on Navigate. Each row corresponds to a target model and each column to the prompt source model. Diagonal entries represent same-model optimization. Models are ordered from smallest to largest.
Figure 4: Cross-model performance on BBH:Navigate using prompts optimized on Qwen2.5-32B.

4.4 Transferability to Proprietary Models

While PMPO is fully applicable to open-source models that support token-level likelihood access, applying it to proprietary systems introduces additional constraints. These models typically provide log-probabilities only for generated tokens, without supporting full-sequence evaluation through a single forward pass. As a result, it is not straightforward to compute cross-entropy loss over an entire dataset. One potential workaround is to construct step-wise input sequences by incrementally appending target tokens and querying the model at each step to obtain token-level probabilities. However, this procedure can lead to substantial token consumption and high latency. Therefore, we do not recommend applying the full PMPO optimization process directly on these models in practice.

Despite the limited access to loss signals on proprietary systems, prompts optimized via PMPO on open-source models still demonstrate strong cross-model transferability. When applied to GPT-3.5 Turbo 0613, Claude 3.5 Haiku 20241022, and GPT-4o, despite being optimized on Qwen2.5-32B, these prompts consistently improve performance on the BBH:Navigate benchmark (Figure 4). This suggests that loss-guided tuning on open-source models can produce prompts that generalize well, even without direct tuning on closed models.

Setting                  TIM  BCA  PrefLoss  Acc. (%)
Full                     ✓    ✓    ✓         80.63
w/o TIM                  ✗    ✓    ✓         79.05
w/o TIM, BCA             ✗    ✗    ✓         77.96
w/o TIM, BCA, PrefLoss   ✗    ✗    ✗         76.74

Table 5: Ablation results on BBH dataset using Qwen2.5-14B. Removing individual modules leads to progressive performance degradation.
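The step-wise scoring workaround for proprietary APIs described in Section 4.4 (incrementally appending target tokens and reading back a per-step probability) can be sketched as follows. `next_token_logprob` is a hypothetical wrapper around an API call, not any specific vendor's endpoint; the call counter makes the cost concern explicit: one round-trip per target token.

```python
# Sketch of step-wise sequence scoring for APIs that expose only
# generated-token log-probabilities: score each target token by querying
# with all preceding tokens appended to the prompt context.

def stepwise_sequence_logprob(prompt_tokens, target_tokens,
                              next_token_logprob):
    """Approximate log P(y | prompt) as a sum of per-step queries.

    Returns (total_logprob, api_calls); api_calls == len(target_tokens),
    which is why this is impractical for large-scale optimization.
    """
    total, calls = 0.0, 0
    context = list(prompt_tokens)  # tokens seen so far
    for tok in target_tokens:
        total += next_token_logprob(context, tok)  # one API round-trip
        context.append(tok)
        calls += 1
    return total, calls
```

Scoring a dataset this way multiplies the per-example cost by the output length, which is the latency and token-consumption concern raised above.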
4.5 Ablation Study

To assess the contribution of individual components within PMPO, we conduct a cumulative ablation study on the BBH benchmark using Qwen2.5-14B for both optimization and evaluation. Specifically, we disable three key modules: Token Importance Masking (TIM), Bad Case Analysis (BCA), and Preference Loss (PrefLoss), while keeping the iterative refinement process intact. As shown in Table 5, removing TIM, which localizes low-performing prompt segments based on token-level loss impact, reduces accuracy from 80.63% to 79.05%. Further disabling BCA, which prioritizes examples with high loss for targeted prompt improvement, results in a drop to 77.96%. When all three components are removed, accuracy declines to 76.74%. These results demonstrate the cumulative importance of model-informed prompt refinement, where fine-grained evaluation and targeted rewriting substantially enhance final performance.

5 Conclusion

We present PMPO, a unified and efficient framework for prompt optimization that relies on loss-based evaluation and iterative rewriting. Instead of relying on output generation or human feedback, PMPO uses token-level likelihoods to identify and refine underperforming prompt segments, enabling scalable and efficient optimization. Experimental results on reasoning, mathematical, and instruction-following benchmarks show that PMPO consistently outperforms existing methods in both accuracy and efficiency. Its lightweight design, compatibility with smaller models, and minimal need for manual supervision make it well-suited for both academic research and practical deployment.

Limitations

Despite the promising results, our study has several limitations. While PMPO demonstrates strong efficiency and effectiveness in optimizing prompts across a range of open-source models, its application to proprietary, closed-source language models remains limited.
Most commercial APIs (e.g., OpenAI, Anthropic) do not expose full log-likelihoods, which restricts the direct use of PMPO's loss-based evaluation due to privacy constraints and concerns around model behavior leakage. Although approximate likelihoods can be estimated via autoregressive token-by-token querying, this significantly increases latency and token usage, making it impractical for large-scale optimization. Nonetheless, some API-based frameworks, such as vLLM, do provide access to token-level log-probabilities (e.g., via prompt_logprobs), allowing PMPO to be applied in those settings. We view this as a positive direction, and hope that more commercial providers will consider offering similar transparency to facilitate research on prompt optimization and model alignment.

Additionally, in extremely low-resource scenarios (e.g., using only one training example), PMPO may exhibit reduced robustness. Since the optimization directly minimizes model loss on a limited number of instances, it is prone to overfitting in such cases. If no additional data is introduced, the resulting prompt variants may align too closely with the few observed examples, leading to reduced generalization. While this setup is inherently challenging for most learning algorithms, it highlights a fundamental limitation of data-scarce prompt optimization.

References

Eshaan Agarwal, Joykirat Singh, Vivek Dani, Raghav Magazine, Tanuja Ganu, and Akshay Nambi. 2024. PromptWizard: Task-aware prompt optimization framework. arXiv preprint arXiv:2405.18369.

Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan,
Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, and 12 others. 2020. Language models are few-shot learners. Preprint, arXiv:2005.14165.

Jiale Cheng, Xiao Liu, Kehan Zheng, Pei Ke, Hongning Wang, Yuxiao Dong, Jie Tang, and Minlie Huang. 2023. Black-box prompt optimization: Aligning large language models without model training. arXiv preprint arXiv:2311.04155.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and 1 others. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Yihe Deng, Weitong Zhang, Zixiang Chen, and Quanquan Gu. 2023. Rephrase and respond: Let large language models ask better questions for themselves. arXiv preprint arXiv:2311.04205.

Yihe Deng, Weitong Zhang, Zixiang Chen, and Quanquan Gu. 2024. Rephrase and respond: Let large language models ask better questions for themselves. Preprint, arXiv:2311.04205.

Yann Dubois, Balázs Galambosi, Percy Liang, and Tatsunori B. Hashimoto. 2025. Length-controlled AlpacaEval: A simple way to debias automatic evaluators. Preprint, arXiv:2404.04475.

Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. 2023. PromptBreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797.

Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers.

Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. arXiv preprint arXiv:1705.04146.

Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, and Dawn Song. 2023. TextGuard: Provable defense against backdoor attacks on text classification. Preprint, arXiv:2311.11225.
Yunhang Shen, Chaoyou Fu, Peixian Chen, Mengdan Zhang, Ke Li, Xing Sun, Yunsheng Wu, Shaohui Lin, and Rongrong Ji. 2023. Aligning and prompting everything all at once for universal visual perception. Preprint, arXiv:2312.02153.

Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, and 1 others. 2022. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.

Zeliang Tong, Zhuojun Ding, and Wei Wei. 2025. EvoPrompt: Evolving prompts for enhanced zero-shot named entity recognition with large language models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 5136–5153.

Wenyi Wang, Hisham A Alyahya, Dylan R Ashley, Oleg Serikov, Dmitrii Khizbullin, Francesco Faccio, and Jürgen Schmidhuber. 2024a. How to correctly do semantic backpropagation on language-based agentic systems. arXiv preprint arXiv:2412.03624.

Xinyuan Wang, Chenxi Li, Zhen Wang, Fan Bai, Haotian Luo, Jiayou Zhang, Nebojsa Jojic, Eric P Xing, and Zhiting Hu. 2023. PromptAgent: Strategic planning with language models enables expert-level prompt optimization. arXiv preprint arXiv:2310.16427.

Xu Wang, Cheng Li, Yi Chang, Jindong Wang, and Yuan Wu. 2024b. NegativePrompt: Leveraging psychology for large language models enhancement via negative emotional stimuli. arXiv preprint arXiv:2405.02814.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, and 1 others. 2022. Chain-of-thought prompting elicits reasoning in large language models.
Advances in Neural Information Processing Systems, 35:24824–24837.

Jinyu Xiang, Jiayi Zhang, Zhaoyang Yu, Fengwei Teng, Jinhao Tu, Xinbing Liang, Sirui Hong, Chenglin Wu, and Yuyu Luo. 2025. Self-supervised prompt optimization. arXiv preprint arXiv:2502.06855.

Cilin Yan, Jingyun Wang, Lin Zhang, Ruihui Zhao, Xiaopu Wu, Kai Xiong, Qingsong Liu, Guoliang Kang, and Yangyang Kang. 2024. Efficient and accurate prompt optimization: The benefit of memory in exemplar-guided reflection. arXiv preprint arXiv:2411.07446.

Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. arXiv preprint arXiv:2309.03409.

Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H Chi, Quoc V Le, and Denny Zhou. 2023a. Take a step back: Evoking reasoning via abstraction in large language models. arXiv preprint arXiv:2310.06117.

Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H Chi, Quoc V Le, and Denny Zhou. 2023b. Take a step back: Evoking reasoning via abstraction in large language models. arXiv preprint arXiv:2310.06117.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, and 1 others. 2023c. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. Advances in Neural Information Processing Systems, 36:46595–46623.

Miao Zheng, Hao Liang, Fan Yang, Haoze Sun, Tianpeng Li, Lingchu Xiong, Yan Zhang, Youzhen Wu, Kun Li, Yanjun Shen, and 1 others. 2024. PAS: Data-efficient plug-and-play prompt augmentation system. arXiv preprint arXiv:2407.06027.

Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large language models are human-level prompt engineers. In The Eleventh International Conference on Learning Representations.
A Appendix

A.1 Detailed Prompts of PMPO

In this section, we present the meta prompts used by PMPO for prompt optimization, evaluation, and segment-level masking. These prompts enable the framework to analyze, rewrite, and assess instructions with minimal human intervention. We include variants adapted for large and small models, as well as the masking strategy used for loss attribution.

Prompt Optimization (Large Model). We first show the optimization prompt used with capable instruction-following LMs:

Optimization Prompt for Large Models

You are an expert prompt engineer tasked with dynamically improving prompts to generate more effective, diverse solutions. When analyzing a prompt, first diagnose its core weaknesses, then apply multiple strategic modifications as needed.

Given the current prompt and task description, your objective is to produce a significantly improved version that will better solve the intended task.

Your optimization must center on the task description: {task_description}

CRITICAL WARNING - MAINTAIN TASK SCOPE:
1. The task_description defines the FULL SCOPE of what your prompt must address
2. Examples are provided ONLY to understand the FORMAT, not to narrow the task
3. Your prompt MUST maintain the original breadth of the task_description
4. NEVER specialize the prompt to only handle specific examples you've seen

Example Input: {user_input}
Expected Answer: {true_answer}

These examples are provided ONLY for pattern analysis. Do NOT directly incorporate these exact examples into your prompt or design your prompt
specifically for these examples. Instead:
- Extract the underlying patterns and reasoning these examples demonstrate
- Understand the general skills or knowledge being tested
- Focus your prompt improvements on the task_description and solving the general problem

First, analyze the prompt for:
- Gaps in instruction clarity or specificity
- Unnecessary constraints limiting creative problem-solving
- Missing guidance that would help solve the general task type
- Overly rigid structure that hinders diverse approaches
- Places where more natural, professional language would improve understanding
- Redundancies or contradictions causing confusion

Then, apply a strategic combination of these techniques (using multiple approaches rather than just one): (1) ENHANCE STRUCTURE, (2) ADD RULES OR PRINCIPLES, (3) REMOVE UNNECESSARY ELEMENTS, (4) REPHRASE FOR CLARITY, (5) SIMPLIFY COMPLEXITY, (6) EXPAND WITH DETAILS, (7) CONSOLIDATE SIMILAR RULES, (8) PROFESSIONAL REFRAMING, (9) DIVERSIFY APPROACH

IMPORTANT: Your response must focus on creating a prompt that will produce substantively better results on the general task, not just on the specific examples provided.

Additionally, here is an analysis of the current prompt, segmented by mask (for reference in your optimization): {mask}

Current prompt: {current_prompt}

For the final prompt, please wrap it with <prompt></prompt>.

Prompt Optimization (Small Model). To accommodate smaller LMs with limited instruction-following capacity, we use a simplified variant:

Optimization Prompt for Small Models

Task_description: {task_description}
Example Input: {user_input}
Expected Answer: {true_answer}
Mask Analysis: {mask}
Current Prompt: {current_prompt}

Based on the following examples where the current cross-entropy is relatively high, please analyze the reasons and modify the prompt to improve performance.
Rather than directly quoting the examples, focus on deeply analyzing the underlying patterns and issues that contribute to high cross-entropy. Prioritize identifying the root causes of performance problems and make labeled prompt modifications that address these specific issues. Concentrate on what changes will most effectively improve task outcomes rather than structural coherence or theoretical correctness.

Your response must focus on creating a prompt that will produce substantively better results on the general task, not just on the specific examples provided.

Please only wrap the optimized final prompt with <prompt></prompt> tags.

Prompt Evaluation. To evaluate candidate outputs, we use a lightweight semantic comparison prompt that tolerates format variation:

Evaluation Prompt

You are an expert evaluator determining if an answer matches the ground truth. Consider equivalent formats like 'A', '(A)', 'A.', etc. as correct. Focus on the meaning rather than exact string matching. For questions where the answer format is important, verify that the model answers in the correct format. For example, in Dyck language problems, if the question asks what follows '([{}' and the ground truth is '])', but the model answers with '([{}])', this should be considered correct as it includes the proper closing brackets.

Mask Generation. For prompt segmentation, we generate masked variants to localize ineffective segments based on their contribution to loss:

Prompt for Mask Generation

Given the following prompt, analyze it to identify up to 5 relatively independent units (segments) that are not tightly connected to their surrounding content. Such units can include individual methods, rules,
https://arxiv.org/abs/2505.16307v1
or examples. For each unit, consider whether masking (removing) it would leave the surrounding prompt logically coherent and understandable.

Your task:
1. Carefully read the current prompt: {prompt}
2. If possible, segment the prompt into up to 5 independent units. Only select units whose removal would not disrupt the overall flow or meaning of the prompt.
3. For each selected unit, wrap it in <mask></mask> tags.
4. Wrap the entire prompt in <prompt></prompt> tags.
5. If you find the prompt does not contain any truly independent units suitable for masking, simply output the prompt wrapped in <prompt></prompt> without any <mask></mask> tags.

Formatting requirements:
- Do not exceed 5 masked segments in total.
- Do not create overlapping or nested masks.
- Each masked unit should represent a coherent, removable segment (such as a method, rule, or example), not a random phrase.
- Maintain all other prompt content unchanged.

Example input:
current prompt: {prompt}

Example output with masking:
<prompt>
... Some instructions ...
<mask>This is an example rule to be masked.</mask>
... More instructions ...
</prompt>

Example output with no masking:
<prompt>
... Full original prompt (no <mask> tags) ...
</prompt>

This prompt is used during the mask-guided analysis phase in PMPO to identify local prompt segments whose removal yields significant loss changes. These segments are considered candidates for targeted rewriting in subsequent iterations.

A.2 Experiment Details

A.2.1 Tasks and Data Details

We evaluate PMPO across multiple datasets covering symbolic reasoning, math problem-solving, and instruction-following. Table A6 summarizes dataset sizes and splits used in our experiments.

Table A6: Dataset sizes and data splits used for training and evaluation.
Dataset Name      Test Size   Train (max)
BBH*              6,511       1,304
GSM8K             1,319       50
AQUA-RAT          254         50
AlpacaEval 2.0    805         50

BBH* The BIG-Bench Hard (BBH) benchmark (Suzgun et al., 2022) comprises 23 challenging tasks selected from the BIG-Bench suite, focusing on areas where language models previously underperformed compared to average human raters. Tasks include up to 250 test examples each, totaling 6,511 samples. We utilize the full BBH dataset for evaluation and randomly sample 50 examples for training.

GSM8K GSM8K (Cobbe et al., 2021) is a math word problem benchmark requiring multi-step numerical reasoning. We use the standard test split (1,319 samples) for evaluation. For training, we randomly sample up to 50 examples from the training set.

AQUA-RAT AQUA-RAT (Ling et al., 2017) contains multiple-choice math questions requiring algebraic reasoning and textual comprehension. Following prior work, we use the full test set (254 questions) and randomly sample 50 training examples for optimization.

AlpacaEval 2.0 AlpacaEval 2.0 (Dubois et al., 2025) is a benchmark for evaluating the instruction-following ability of language models, using GPT-4 Turbo as the judge. The dataset contains 805 diverse prompts spanning tasks such as open-ended generation, roleplay, summarization, and reasoning. We use the full evaluation set for testing. For prompt optimization training, we use preference-labeled pairs (selected vs. rejected responses) from the public repository reciprocate/alpaca-eval, which offers high-quality alignment signals derived from automatic LLM feedback. From this set, we sample 50 training pairs with clear preference margins. These examples
allow PMPO to optimize prompts for alignment-style objectives using cross-entropy on preferred vs. dispreferred completions.

Note: * BBH refers to the complete set of 23 tasks in the BIG-Bench Hard benchmark.

• Boolean Expressions: Evaluate the truth value of a random Boolean expression consisting of Boolean constants (True, False) and basic Boolean operators (and, or, and not).
• Causal Judgment: Given a short story (involving moral, intentional, or counterfactual analysis), determine how a typical person would answer a causal question about the story.
• Date Understanding: Given a small set of sentences about a particular date, answer the provided question (e.g., "The concert was scheduled to be on 06/01/1943, but was delayed by one day to today. What is the date yesterday in MM/DD/YYYY?").
• Disambiguation QA: Given a sentence with an "ambiguous" pronoun, either determine whether the sentence is inherently ambiguous or state the antecedent of the pronoun.
• Dyck Languages: Predict the sequence of the closing parentheses of a Dyck-4 word without its last few closing parentheses.
• Formal Fallacies Syllogisms Negation: Determine whether an argument, presented informally, can be logically deduced from a provided set of statements.
• Geometric Shapes: Given a full SVG path element containing multiple commands, determine the geometric shape that would be generated.
• Hyperbaton (Adjective Ordering): Given two English-language sentences, determine the one with the correct adjective order.
• Logical Deduction: Deduce the order of a sequence of objects based on clues about their spatial relationships and placements.
• Movie Recommendation: Given a list of movies a user might have watched and liked, recommend a new, relevant movie from a list of candidates.
• Multi-Step Arithmetic: Solve multi-step equations involving basic arithmetic operations.
• Navigate: Given a series of navigation steps, determine whether the agent would return to the starting point.
• Object Counting: Given a list of items and quantities, determine the count of a certain object class (e.g., fruits).
• Penguins in a Table: Given a table of penguins (and sometimes new information), answer a question about the penguins' attributes.
• Reasoning about Colored Objects: Answer a simple question about the color of an object based on a given context.
• Ruin Names: Given an artist, band, or movie name, identify a one-character edit that changes the meaning humorously.
• Salient Translation Error Detection: Given a German sentence and its English translation, determine the type of translation error present.
• Snarks: Given two nearly-identical sentences, determine which one is sarcastic.
• Sports Understanding: Determine whether a fictitious sports-related sentence is plausible.
• Temporal Sequences: Given a person's daily schedule, determine when they could perform another activity.
• Tracking Shuffled Objects: Given initial object positions and a series of swaps, determine final positions.
• Web of Lies: Evaluate the truth value of a Boolean function expressed as a word problem.
• Word Sorting: Given a list of words, sort them lexicographically.

A.2.2 Configuration

We compare our method against two categories of baseline methods: manually designed prompting strategies and automated prompt optimization frameworks. The former apply structural heuristics to improve reasoning, while the latter leverage LLMs or search algorithms to generate and refine prompts automatically.

Manually Designed Prompting Strategies

Chain-of-Thought (CoT) (Wei et al., 2022). CoT prompting
enhances reasoning by encouraging the model to generate intermediate steps before the final answer. It improves performance in arithmetic and multi-step tasks by including phrases like "Let's think step by step" to elicit structured reasoning chains.

Step-Back (Zheng et al., 2023b). Step-Back prompting first guides the model to abstract high-level concepts before applying them to the specific task. This abstraction step enables more principled reasoning, particularly in STEM tasks, by helping the model organize relevant knowledge before execution.

Rephrase-and-Respond (Deng et al., 2024). Rephrase-and-Respond (RaR) improves answer quality by prompting the model to reformulate the input question before solving it. This internal clarification reduces ambiguity and enhances robustness, especially in under-specified or complex queries.

Automated Prompt Optimization Approaches

OPRO (Yang et al., 2023). OPRO treats prompt design as a black-box optimization problem. An LLM iteratively proposes and evaluates new prompts based on previous ones and their accuracy on training examples. Although effective, it depends on repeated generation and evaluation over full datasets.

EvoPrompt (Tong et al., 2025). EvoPrompt combines evolutionary algorithms with LLMs to maintain and evolve a population of prompts through mutation and selection. New prompts are generated via LLM-guided variation, and top-performing variants are retained across generations.

PromptWizard (Agarwal et al., 2024). PromptWizard uses a critique-and-synthesis loop where one LLM analyzes prompt weaknesses and another generates refinements. This self-reflective mechanism incrementally improves prompts based on failure analysis, with high sample efficiency and task specialization.

TextGrad (Pei et al., 2023). TextGrad performs optimization via pseudo-gradients derived from LLM-generated feedback.
It applies gradient-descent-like updates to prompts, reverting bad updates using validation feedback, and supports optimization without requiring explicit supervision labels.

PromptBreeder (Fernando et al., 2023). PromptBreeder evolves both task prompts and mutation strategies in parallel. It uses co-evolution to generate increasingly effective prompts without human intervention, achieving state-of-the-art performance on complex reasoning benchmarks.

SPO (Xiang et al., 2025). SPO avoids human-labeled data by comparing outputs from multiple prompts and selecting better-performing variants through self-judgment. This self-supervised loop incrementally refines prompts based on model preference rather than external metrics.

PMPO (Probabilistic Metric Prompt Optimization). Our method introduces a unified loss-based prompt optimization framework. It evaluates prompt quality using token-level cross-entropy or preference loss, requiring only forward passes without output sampling. In each iteration, PMPO identifies low-utility prompt segments via model-adaptive masking, generates variants based on hard examples, and selects improved candidates via likelihood minimization. Unlike black-box or introspection-based methods, PMPO supports both supervised and preference tasks under a consistent evaluation scheme and demonstrates scalability to smaller models with limited data.

A.2.3 Baseline Prompt

In this section, we present the baseline prompts used for comparison across all methods. For fairness, all prompt optimization approaches that require an initial prompt use the same CoT Prompt as the starting point.

AO Prompt
Please output directly and only output the final answer.

COT Prompt
Let's think step by step.

Step-back Prompt
Please first think about the principles involved in solving this task which could be helpful. And then provide a solution step by step for this question.

RaR Prompt
Please rephrase the question in
a way that is easier to understand, minimizing ambiguity and considering edge cases. And then provide a solution step by step for the question.

A.3 Case Study

Figure A5: Cross-entropy loss per iteration during prompt optimization on AQUA-RAT.

To better understand how PMPO progressively improves prompt quality during optimization, we conduct a case study on the AQUA-RAT dataset. Starting with the base prompt "Let's think step by step", we observe how PMPO rewrites the prompt over 20 iterations by identifying high-loss examples and refining the prompt to better capture general mathematical reasoning patterns. Table A5 shows selected steps in this evolution along with the corresponding cross-entropy losses. As observed, the prompts gradually become more structured and specific in mathematical reasoning and format expectations. Figure A5 shows the loss curve, highlighting the performance gains across the 20 iterations. Notably, early iterations achieve rapid improvements, while later refinements focus on polishing structure and clarity.

Iter 1, Loss = 1.974
Let's think step by step

Iter 2, Loss = 1.926
Given a mathematical problem involving factors and divisibility, your task is to determine which of the provided options correctly identifies all numbers that the given positive whole number must be divisible by. Follow these steps to solve the problem effectively:
1. Identify the prime factors of the given number based on the information provided.
2. Determine the least common multiple (LCM) of these prime factors, which represents the smallest number that the given number must be divisible by.
3. Evaluate each option against the LCM to check if it meets the divisibility criteria.
4. Select the correct option(s) that accurately reflect the divisibility rules derived from the prime factors.
Ensure your solution clearly explains your reasoning process and highlights why certain options are correct or incorrect based on the divisibility properties of the given number.
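The loss reported at each iteration is the mean token-level cross-entropy of the ground-truth answers under the candidate prompt, and a rewritten prompt is kept only if it lowers this loss. A minimal sketch of that scoring and accept-or-retain step (the `score_fn` interface is a hypothetical stand-in for forward passes through the evaluation model):

```python
def mean_cross_entropy(token_logprobs):
    # Average negative log-likelihood of the target tokens,
    # mirroring the per-iteration losses reported in this case study.
    return -sum(token_logprobs) / len(token_logprobs)

def select_prompt(current, candidates, score_fn):
    # Keep the current prompt unless some candidate strictly lowers
    # the loss; otherwise it carries over to the next iteration.
    best, best_loss = current, score_fn(current)
    for cand in candidates:
        loss = score_fn(cand)
        if loss < best_loss:
            best, best_loss = cand, loss
    return best, best_loss
```

With the Iter 1 and Iter 2 losses above, `select_prompt` would replace the base prompt with the Iter 2 rewrite, since 1.926 < 1.974.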
Iter 4, Loss = 1.913
Given a mathematical problem involving geometric shapes or numerical properties, your task is to solve the problem and identify the correct answer among the provided options. Here are the steps to follow for an effective solution:
1. Carefully read and comprehend the problem statement, identifying the key elements and requirements.
2. Apply relevant mathematical formulas, principles, or theorems to derive a solution. This may involve calculations, logical deductions, or both.
3. Evaluate each provided option in light of your solution to determine which one accurately reflects the correct answer.
4. Clearly explain your reasoning process and highlight why certain options are correct or incorrect based on the problem's specific requirements.
Maintain precision in your calculations and ensure that your solution aligns with the core mathematical concepts involved. Use clear and concise language to articulate your thought process and conclusions.

Iter 6, Loss = 1.894
To effectively solve the given math problem and select the correct answer option, follow these steps:
1. Carefully read and understand the problem statement to identify all relevant conditions and relationships among the numbers involved.
2. Apply mathematical principles and formulas related to the conditions identified to derive necessary calculations or conclusions.
3. Analyze each answer option in light of
the derived conclusions and evaluate whether it satisfies the conditions of the problem.
4. Clearly explain the rationale behind selecting the correct answer option, detailing how the option meets the specified conditions.
Ensure your response includes a clear explanation of the reasoning process used to arrive at the correct answer, demonstrating a thorough understanding of the problem's underlying mathematical concepts.

Iter 8, Loss = 1.871
Given a mathematical problem involving numerical properties, your task is to solve the problem and identify the correct answer among the provided options. Follow these steps for an effective solution:
1. Thoroughly read and understand the problem statement, identifying key elements such as given factors, divisibility rules, or other numerical properties.
2. Apply appropriate mathematical principles, theorems, or logical deductions to derive a solution. This may include factorization, divisibility tests, or other relevant calculations.
3. Evaluate each provided option against your derived solution to determine the correct answer.
4. Clearly explain your reasoning process, highlighting why certain options are correct or incorrect based on the problem's specific requirements.
Ensure your solution is precise and aligned with the core mathematical concepts involved. Use clear, concise language to articulate your thought process and conclusions.
By following these guidelines, you will be able to effectively solve the problem and select the correct answer from the given options.

Iter 10, Loss = 1.871
Given a mathematical problem involving geometric or numerical properties, your task is to accurately solve the problem and identify the correct answer from the provided options. To achieve this, follow these steps:
1. Carefully read the problem statement to identify all given elements and required calculations, such as dimensions, formulas, or numerical relationships.
2. Apply relevant mathematical principles, formulas, or logical reasoning to derive a solution. This could involve using geometric formulas, algebraic equations, or numerical analysis.
3. Compare your solution with each provided option to pinpoint the correct answer.
4. Provide a clear explanation of your reasoning, detailing how you arrived at your solution and why certain options are valid or invalid.
Your solution should be precise, demonstrating a thorough understanding of the underlying mathematical concepts. Use clear and concise language to articulate your thought process and justify your conclusions.
By adhering to these guidelines, you will be able to effectively solve the problem and confidently select the correct answer from the given options.

Iter 12, Loss = 1.860
Given a mathematical problem involving numerical properties or relationships, your task is to accurately solve the problem and identify the correct answer from the provided options. Follow these steps to ensure a comprehensive and accurate solution:
1. **Comprehend the Problem**: Read the problem statement carefully to identify all given elements and the required calculations, including any numerical relationships, formulas, or conditions.
2. **Apply Mathematical Principles**: Utilize appropriate mathematical concepts, formulas, or logical reasoning to solve the problem. This may include applying algebraic equations, numerical analysis, or other relevant mathematical tools.
3. **Verify the Solution**: Compare your derived solution with each provided option to determine
the correct answer.
4. **Explain Your Reasoning**: Provide a clear and detailed explanation of your thought process and the steps taken to arrive at your solution. Justify why certain options are valid or invalid based on your calculations and reasoning.
Your response should demonstrate a deep understanding of the mathematical concepts involved and use clear, concise language to explain your approach and conclusions.
By following these guidelines, you will be able to effectively solve the problem and confidently choose the correct answer from the given options.

Iter 14, Loss = 1.854
Given a mathematical problem involving divisibility rules or factor relationships, follow these steps to find the correct answer:
1. **Identify the Core Elements**: Clearly define the key numbers and conditions provided in the problem statement.
2. **Apply Mathematical Principles**: Use divisibility rules and factorization techniques to deduce the required conditions.
3. **Evaluate Each Option**: Carefully examine each given option against the derived conditions to determine which ones satisfy the problem's criteria.
4. **Select the Correct Answer**: Based on your evaluation, choose the most accurate set of options that fulfill the problem's requirements.
Ensure your solution is logically sound and aligns with fundamental mathematical concepts. Avoid making assumptions not supported by the given information.
Your response should include a clear explanation of your reasoning process and the final selected answer.

Iter 15, Loss = 1.850
Given a mathematical problem, such as finding the curved surface area of a geometric shape, follow these steps to accurately solve the problem and select the correct answer from the provided options:
1. **Understand the Problem**: Carefully read and comprehend the given problem, identifying all relevant numerical values and geometric properties mentioned.
2.
**Recall Relevant Formulas**: Identify and recall the appropriate mathematical formulas or principles necessary to solve the problem.
3. **Calculate the Solution**: Apply the identified formulas to the given data, performing all necessary calculations step-by-step.
4. **Verify Each Option**: Compare your calculated result with each provided answer option to determine which one matches your solution.
5. **Choose the Correct Answer**: Select the option that correctly represents the solution to the problem based on your calculations.
Ensure your solution process is logically consistent and grounded in fundamental mathematical principles. Avoid making unsupported assumptions and verify each step of your calculation.
Your response should include a detailed rationale explaining your reasoning process, leading to the selection of the correct answer.

Iter 17, Loss = 1.846
Given a mathematical problem, such as calculating the curved surface area of a geometric shape, follow these steps to accurately solve the problem and select the correct answer from the provided options:
1. **Analyze the Problem Statement**: Carefully read the problem statement to identify all relevant numerical values and geometric properties involved.
2. **Identify Applicable Formulas**: Recall and list the relevant mathematical formulas or principles that are pertinent to solving the problem.
3. **Perform Calculations**: Apply the
identified formulas to the given data, ensuring each step of the calculation is accurate and logically sound.
4. **Evaluate Answer Options**: Compare your calculated result against each provided answer option to determine the correct match.
5. **Select the Correct Answer**: Choose the answer option that precisely corresponds to your calculated solution.
In your response, provide a detailed rationale that outlines your reasoning process, clearly demonstrating how you arrived at your solution. Ensure that your approach adheres to fundamental mathematical principles and avoids any unsupported assumptions. Verify each step of your calculation for accuracy and consistency.

Iter 19, Loss = 1.796
Given a mathematical problem involving geometric shapes, such as finding the curved surface area, follow these guidelines to accurately solve the problem and select the correct answer from the provided options:
1. **Understand the Problem**: Thoroughly comprehend the given problem statement to identify all pertinent numerical values and geometric characteristics.
2. **Recall Relevant Formulas**: Identify and list the applicable mathematical formulas or principles needed to solve the problem.
3. **Execute Calculations**: Apply the recalled formulas to the given data, ensuring each calculation step is precise and logically consistent.
4. **Compare with Options**: Match your calculated result against each provided answer option to pinpoint the correct choice.
5. **Confirm Selection**: Select the answer option that aligns exactly with your calculated solution.
Provide a detailed rationale explaining your reasoning process, highlighting how you applied fundamental mathematical principles to arrive at your solution.
Ensure your approach is free from unsupported assumptions and verify each calculation step for accuracy and consistency.

Even with the use of mask-based analysis and model-in-the-loop rewriting for supervision, PMPO does not guarantee improvement in every single iteration. In cases where no generated candidate outperforms the current prompt, the original prompt is retained as the starting point for the next round of optimization. The degree of improvement across steps may vary due to factors like sampling randomness, batch composition, and the inherent difficulty of refining an already strong prompt. Unlike model fine-tuning, prompt generation lacks a stable optimization trajectory, making step-to-step changes inherently more variable.

A.4 Prompts Optimized by Our Method

In this section, we present the optimized prompts produced by our method during the main experiments. All prompts were optimized using Qwen2.5-14B as the optimization model, with the original model also serving as the evaluation and execution model. For brevity, we show representative examples for each benchmark dataset.

BBH (Big-Bench Hard) For PMPO training on BBH, we treat the correct answer option as the selected output, and the remaining distractor options as rejected, enabling preference-based supervision using the model's likelihoods.

BBH Prompt (boolean_expressions)
Your task is to accurately evaluate the truth value of a given Boolean expression. This expression may include Boolean constants (True, False) and fundamental Boolean operators (and, or, and not). Adhere strictly to standard Boolean logic throughout this process.

### Key Considerations:
- **Parentheses Handling**: Nested conditions within parentheses require careful management. Start by evaluating the innermost parentheses and proceed
outward.
- **Operator Precedence**: Observe the correct order of operations: "not" has the highest precedence, followed by "and", and then "or". This ensures proper simplification of the expression.
- **Logical Reasoning**: Clearly articulate the step-by-step logical reasoning applied during the evaluation process. Maintain clarity without sacrificing depth.

### Input Format:
A string containing a Boolean expression, such as "not not True and True and not True".

BBH Prompt (causal_judgement)
Given a short story involving moral, intentional, or counterfactual analysis, your task is to determine how a typical person would respond to a causal question about the story. Your response should provide a clear chain of thought that leads to a reasoned conclusion.

Begin by thoroughly reading the story and the causal question. Identify the key elements and the specific causal relationship highlighted in the question. Analyze the cause-and-effect dynamics from a typical person's perspective. Formulate your answer by clearly stating whether a typical person would consider the proposed causal link valid, supported by a logical rationale.

Guidelines for responding:
1. Carefully read and understand the story and the causal question.
2. Identify the critical causal factors within the story.
3. Evaluate the causal relationship from the viewpoint of a typical person.
4. Provide a logical and well-supported explanation for your answer.
5. Ensure your response comprehensively addresses the task without being overly specific to any particular example.

BBH Prompt (date_understanding)
To effectively solve date-related questions based on given narratives, follow this structured yet flexible approach:

### Objective:
Determine the correct date from a brief narrative containing specific details about dates and events. Provide the final answer in the "MM/DD/YYYY" format, accompanied by a detailed chain-of-thought explanation.
### Detailed Instructions:

#### Step 1: Comprehend the Narrative
Read the provided sentences carefully to identify all relevant dates and events. Pay special attention to any changes in dates due to scheduling adjustments or other factors.

#### Step 2: Pinpoint Key Dates
Identify all explicit dates mentioned in the narrative. Also, consider any shifts in these dates caused by external conditions.

#### Step 3: Evaluate Required Calculations
Determine whether the question necessitates adding or subtracting days, weeks, months, or years from the key dates. Remember to factor in calendar nuances, including the varying lengths of months and the impact of leap years.

#### Step 4: Execute Calculations
Perform the required calculations accurately, taking into account complexities such as leap years and differences in month lengths. Ensure your calculations are precise and logical.

#### Step 5: Deliver the Result
Present the final answer in the "MM/DD/YYYY" format. Provide a clear rationale explaining each step of your reasoning process and any assumptions made.

BBH Prompt (disambiguation_qa)
Given a sentence that contains a potentially ambiguous pronoun, your goal is to determine whether the pronoun clearly refers to a specific noun within the sentence or if its meaning is uncertain due to insufficient context. This process involves identifying the most likely referent of the pronoun or recognizing its ambiguity, providing a thorough rationale for your conclusion.
To accomplish this task effectively, follow these steps:
1. **Identify the Pronoun**: Locate the pronoun in the sentence.
2. **Analyze Context**: Carefully examine the sentence for any clues that might reveal the pronoun's antecedent. Consider the logical relationship between the pronoun and all potential antecedents.
3. **Decide on Clarity or Ambiguity**: Based on your analysis, determine if the pronoun's reference is clear or ambiguous.
4. **Justify Your Conclusion**: Provide a clear and well-supported explanation for your decision, referencing pertinent details from the sentence.

Your response should conform to one of the following formats:
- For sentences deemed ambiguous: "The sentence is ambiguous because [detailed reasoning based on context]."
- For sentences where the pronoun's reference is clear: "The pronoun refers to [specific noun] because [detailed rationale based on context]."

BBH Prompt (dyck_languages)
Your task is to predict and complete the sequence of closing parentheses to form a valid Dyck-4 word. A Dyck-4 word requires a balanced arrangement of four types of parentheses: round (), square [], curly {}, and angle <>. Each opening bracket must be correctly matched with its corresponding closing bracket, ensuring proper nesting and balance throughout the sequence.

Follow these guidelines to solve the problem effectively:
1. **Identify Unmatched Brackets**: Start by identifying all opening brackets in the input sequence that do not yet have corresponding closing brackets. This step is essential for determining the types of closing brackets required next.
2. **Prioritize Innermost Structures**: Always give priority to closing the innermost unmatched brackets first, respecting the hierarchical matching rules of the different types of brackets.
3. **Ensure Balance**: Maintain the balance between the counts of each type of opening and closing brackets.
This ensures that each closing bracket is placed appropriately and maintains the overall balance of the sequence.
4. **Continuous Validation**: After adding each closing bracket, continuously validate that the sequence remains balanced and properly nested.
5. **Explain Chain-of-Thought**: Include a detailed Chain-of-Thought section that explains your reasoning at each step, clearly demonstrating how you ensured compliance with the Dyck-4 word criteria.

Your response should include:
- The completed sequence of closing parentheses.
- A detailed Chain-of-Thought explanation outlining the decision-making process.

BBH Prompt (formal_fallacies)
Given a context that includes a series of logical statements and premises, your task is to evaluate whether a presented argument can be logically deduced from the given context. Your response should clearly indicate whether the argument is valid or invalid, supported by a detailed explanation of your reasoning process.

To approach this task effectively, follow these steps:
1. Carefully analyze each premise to understand the relationships and conditions established.
2. Compare the argument's conclusion with the premises. Assess whether the conclusion logically follows from the premises provided.
3. If the argument is valid, outline how each premise contributes to the logical progression leading to the conclusion. If invalid, specify which premise(s) or logical step(s) fail to support the conclusion adequately.

Guidelines for your response:
- Clearly state your answer as either
https://arxiv.org/abs/2505.16307v1
"valid" or "invalid".
- Provide a concise but thorough explanation of your reasoning process.
- Use clear and precise language to articulate your thoughts.

BBH Prompt (geometric_shapes)

Your task is to interpret an SVG path element specified within a "d" attribute and deduce the geometric shape that would be formed upon executing this path. This process involves understanding a variety of path commands such as "M" for moving to a point, "L" for drawing lines, and other commands according to the SVG standard.

To achieve this, follow these steps:
1. **Command Analysis**: Carefully analyze the "d" attribute to identify various commands and their corresponding parameters. Recognize how each command contributes to forming the overall path.
2. **Path Sequence Understanding**: Grasp the sequence and relationships between commands and points. Note how movements and connections between points influence the final shape's structure.
3. **Geometric Shape Identification**: Based on the parsed commands and parameters, deduce the geometric shape represented by the path. Consider both simple and complex shapes that could emerge from the given path commands.
4. **Shape Matching**: Choose the most suitable geometric shape from the provided options. Ensure your selection accurately reflects the interpreted path data without simplifying or misinterpreting it.

Remember, the path may describe intricate shapes beyond basic polygons. Your analysis should accurately capture the complexity of the path data to determine the precise geometric form.

BBH Prompt (hyperbaton)

Your task is to evaluate two given English sentences and determine which one correctly follows the standard adjective order in English. To accomplish this, adhere to the following guidelines:
1. **Adjective Order Principle**: In English, adjectives typically follow a specific order: Opinion, Size, Age, Shape, Color, Origin, Material, Purpose.
2. **Analyze Adjectives**: Carefully scrutinize each sentence to identify and note the adjectives and their positions relative to the nouns they modify. Pay special attention to their sequence.
3. **Compare Sequences**: Utilize the established order to assess and compare the sequences of adjectives in both sentences. Highlight any discrepancies from the correct order.
4. **Select the Correct Sentence**: Choose the sentence that accurately adheres to the English adjective order rules.

**Response Format**: Indicate your chosen sentence using a single letter (A or B).

BBH Prompt (logical_deduction_five_objects)

Given a description of several objects arranged in a specific order based on their spatial relationships, determine the correct sequence of these objects.

**Task Description**: Your primary goal is to deduce the precise order of a series of objects based on the clues provided about their relative positions. The clues may involve terms like "left," "right," "above," "below," or any other directional indicators relevant to the arrangement of the objects. Ensure that the solution aligns perfectly with all given clues without contradiction.

**Guidelines**:
1. Carefully read through the entire description to fully understand the spatial relationships between the objects.
2. Use a systematic approach to map out the positions of each object according to the clues.
3. Consider multiple perspectives if necessary to verify the accuracy of your deduction.
4. If the sequence is ambiguous after applying the clues, re-examine the descriptions for additional hints or rephrase the clues for clarity.
5. Provide a clear and concise answer detailing the order of the objects from left to right or top to bottom, depending on the orientation described.

**Example Format**: Your answer should clearly state the order of the objects as determined by the clues.

BBH Prompt (logical_deduction_seven_objects)

Your task is to accurately determine the sequential order of a series of objects based on the provided clues about their spatial relationships and placements. This requires meticulous analysis and logical deduction to arrange the objects in their specified sequence.

### Guidelines for Solving:
1. **Comprehend Clues**: Thoroughly read and understand each statement to grasp the positional relationships among the objects.
2. **Direct Position Identification**: Identify clues that specify the exact position of an object (e.g., "Object X is the leftmost").
3. **Relative Position Interpretation**: Analyze clues indicating the relative positions of objects (e.g., "Object Y is three spots before Object Z").
4. **Elimination Process**: Use the process of elimination to determine the correct positions for objects once some placements are known.
5. **Solution Verification**: Ensure that your final sequence aligns with all given clues without any conflicts.

### Problem-Solving Strategy:
- Start by identifying any direct placement clues that definitively place objects.
- Apply relative positioning clues to establish relationships between objects.
- Utilize elimination techniques to fill in the remaining positions.
- Confirm your sequence to make sure it satisfies all provided clues.

### Logical Reasoning Framework:
Develop a systematic approach to analyze and resolve the clues:
1. **Initial Setup**: List all objects and note any direct placements from the clues.
2. **Relationship Mapping**: Outline the relationships between objects as described by the clues.
3. **Reasoning Deduction**: Combine the mapped relationships with elimination techniques to deduce the order.
4. **Validation Check**: Review your sequence to ensure it adheres to all clue requirements.

### Response Structure:
Provide your answer as an ordered list of objects, accompanied by a detailed explanation of your reasoning process.

BBH Prompt (logical_deduction_three_objects)

Your task is to deduce the precise order of a series of objects based on the provided clues about their spatial relationships and placements. Each scenario will detail a set of objects along with statements indicating their relative positions or rankings. Your goal is to accurately establish the exact order of these objects according to the orientation described in the input.

**Instructions:**
1. Thoroughly examine the input paragraph that describes the objects and their relationships.
2. Utilize the given clues to logically deduce the correct sequence of the objects.
3. Present the objects in their correct order, starting with the object at the highest rank or position as per the clues.
4. If the input lacks sufficient information to determine a unique order, clearly state that the order cannot be definitively established based on the provided clues.

**Guidelines:**
- Refrain from making assumptions that go beyond what is explicitly stated in the input.
- Ensure
your response is concise and focuses exclusively on identifying and listing the correct order of objects, without additional commentary or explanations unless required by the task.
- Scenarios may involve various types of objects and different numbers of objects, but the core principle remains consistent: use the provided clues to determine the correct order.

BBH Prompt (movie_recommendation)

Given a list of movies a user has enjoyed, your task is to recommend a new, relevant movie from a set of four potential choices. Follow the steps below to ensure a thoughtful and accurate recommendation:
1. **Understand User Preferences:** Carefully examine the given list of movies to identify common themes, genres, and styles that the user appreciates. This analysis will form the basis of the user's taste profile.
2. **Evaluate Movie Choices:** Compare each of the four movie options against the user's taste profile. Pay attention to genre, theme, director, actors, and overall style. Aim to find a movie that closely aligns with the user's preferences, avoiding any significant deviations.
3. **Choose the Optimal Match:** Select the movie that best matches the user's preferences, demonstrating a deep understanding of both the user's tastes and the distinct characteristics of each movie option.

**Instructions for Response:**
- Provide only the letter corresponding to the recommended movie from the four options.

**Guiding Principles:**
- Highlight thematic, genre, and stylistic consistency with the user's preferences.
- Avoid suggesting movies that diverge markedly from the user's established tastes.
- Exhibit a refined understanding of the user's preferences and the unique qualities of each movie option.

**Goal:** Increase user satisfaction by recommending a movie that closely reflects their established preferences.
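The matching step the prompt asks the LLM to perform implicitly can be made concrete as a simple tag-overlap score. This is only an illustrative sketch: the genre tags, candidate letters, and `recommend` helper below are hypothetical, and the actual task relies on the model's own judgment rather than explicit features.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two tag sets (0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(profile: set, options: dict) -> str:
    """Return the option letter whose tags best overlap the taste profile."""
    return max(options, key=lambda letter: jaccard(profile, options[letter]))

# Hypothetical taste profile derived from the user's liked movies,
# and four candidate options labeled A-D as in the task format.
profile = {"sci-fi", "thriller", "dystopia"}
options = {
    "A": {"romance", "comedy"},
    "B": {"sci-fi", "thriller", "heist"},
    "C": {"documentary"},
    "D": {"fantasy", "adventure"},
}
print(recommend(profile, options))  # → B
```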
BBH Prompt (multistep_arithmetic_two)

Your task is to accurately solve a complex multi-step arithmetic problem that involves a variety of operations, such as addition, subtraction, multiplication, and potentially division. It's important to pay close attention to parentheses, as they determine the order of operations according to standard mathematical principles.

To tackle this problem effectively and precisely, follow these steps:
1. Start by evaluating all expressions within parentheses, beginning with the deepest ones. This ensures that you resolve the operations inside the parentheses before proceeding to other parts of the equation.
2. After resolving all parentheses, move on to multiplication and division operations from left to right. This step adheres to the standard order of operations.
3. Lastly, perform addition and subtraction operations from left to right until you reach the final result. Carefully execute each operation to prevent errors.
4. Clearly present your final answer, supported by a detailed, step-by-step explanation of your calculations to ensure transparency and accuracy.

When crafting your response, make sure to:
- Break down the problem systematically, addressing each component sequentially and methodically.
- Provide a precise and professionally articulated explanation, with each calculation step clearly documented and logically explained.

BBH Prompt (navigate)

Given a set of navigation instructions for an agent, your task is to determine if the agent ends up back at its initial starting
point. Follow these structured guidelines to methodically analyze the navigation instructions:

**Guidelines for Analyzing Navigation Instructions:**
1. **Initial Setup**: Position the agent at the Origin (Point 0) and orient it Facing Forward.
2. **Command Handling**:
   - **Movement Commands**: Adjust the agent's position based on its orientation (Forward, Backward).
   - **Orientation Commands**: Alter the agent's direction (Turn around, Turn left, Turn right) without changing its position.
3. **Sequential Execution**:
   - Process each command one by one.
   - Update the agent's position and orientation after executing each command.
4. **Final Position Verification**:
   - After all commands have been executed, check if the agent's position matches the initial Origin.
   - Determine if the agent has returned to the starting point.
5. **Response Format**:
   - Give chain of thought.
   - Provide a clear "Yes" or "No" answer.
   - Include a detailed explanation of your reasoning process.

BBH Prompt (object_counting)

Your task is to calculate the total quantity of items belonging to a specific category from a provided list of possessions and their quantities. This involves identifying the target category, categorizing each item, and summing up the quantities of those that fit the criteria.

Follow these steps to complete the task:
1. **Understand the Target Category**: Carefully read the question to understand which category of items needs to be counted (e.g., fruits, vegetables, musical instruments).
2. **Review the List of Possessions**: Go through the list of items and their quantities provided in the input.
3. **Categorize Each Item**: For every item listed, decide whether it falls under the specified category.
4. **Sum Up Quantities**: Add together the quantities of all items that belong to the target category.
5. **Present the Result**: Clearly state the total count of items in the specified category.
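The counting procedure the prompt describes amounts to a category filter plus a sum. A minimal sketch, assuming the category membership set and the example possessions are hypothetical stand-ins for what the model infers from the question:

```python
# Hypothetical category lexicon; in the task the model decides membership itself.
FRUITS = {"apple", "banana", "orange", "plum"}

def count_category(possessions: dict, category: set) -> int:
    """Sum the quantities of items that belong to the target category."""
    return sum(qty for item, qty in possessions.items() if item in category)

# "I have three apples, a plum, two guitars, and a banana."
possessions = {"apple": 3, "plum": 1, "guitar": 2, "banana": 1}
print(count_category(possessions, FRUITS))  # → 5
```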
BBH Prompt (penguins_in_a_table)

Given a unique table containing detailed information about penguins (and sometimes additional data), your task is to accurately answer questions about the attributes of the penguins listed in the table. Ensure that your response strictly relies on the provided data without incorporating any external information.

Each table includes essential columns such as "name," "age," "height (cm)," and "weight (kg)," potentially supplemented with other relevant details. Each row corresponds to a distinct penguin, highlighting their individual characteristics.

To approach this task effectively, follow these steps:
1. Thoroughly examine the provided table(s) to understand the structure and content.
2. Clearly identify the question posed about the penguins' attributes.
3. Utilize the data from the table to deduce the correct answer.
4. If additional information is provided, integrate it seamlessly into your analysis.
5. Deliver your answer precisely and succinctly, ensuring it accurately addresses the question.

BBH Prompt (reasoning_about_colored_objects)

Given a detailed description of objects placed on a surface, your goal is to accurately identify and answer a question about the color of a specific object within that description. The response should be concise and directly answer the question, optionally including a brief rationale if requested.

To accomplish this task efficiently and effectively, follow these steps:
1. **Understand the Description**: Carefully read through
the provided description to familiarize yourself with each object and its corresponding color.
2. **Pinpoint the Object of Interest**: Identify which object's color is being queried in the question.
3. **Retrieve the Color Information**: Refer back to your initial notes to locate the color associated with the object of interest.
4. **Provide a Clear Response**: Directly state the color of the object in a straightforward manner.
5. **Offer Optional Explanation**: If asked, briefly explain your reasoning process.

BBH Prompt (ruin_names)

Given an artist, band, or movie name, your goal is to craft a single-character edit that transforms the meaning into something humorous. Follow these guidelines to ensure your response is both creative and effective:
1. **Input Understanding**: Carefully read the provided name, whether it's an artist, band, or movie title.
2. **Creative Transformation**: Modify the name by changing exactly one character. The change should significantly alter the meaning in a comedic manner.
3. **Humor Explanation**: For each edit, provide a short explanation highlighting why the alteration is funny. Consider creating a play on words, introducing an absurd situation, or crafting a pun.
4. **Avoid Repetition**: Ensure that each edit is unique and adds a fresh perspective to the name.

**Execution Strategy**:
- **Character Impact Analysis**: Evaluate how changing each character affects the overall meaning and potential for humor.
- **Phonetic & Literal Changes**: Consider both sound-based and literal alterations that could create a humorous effect.
- **Cultural References**: Think about popular culture, idioms, and common phrases that could be humorously altered with a single character change.

**Guidance for Humorous Edits**:
- Strive for humor that appeals to a wide audience while also offering depth for those who enjoy sophisticated wordplay.
- Ensure the edits are clear and understandable, avoiding ambiguity that might detract from the humor.

BBH Prompt (salient_translation_error_detection)

Given a German sentence and its English translation, your task is to carefully analyze the translation for errors and categorize them into one of the following six types:
- **Named Entities**: Any change to names, places, or other identifiable entities.
- **Numerical Values**: Errors involving incorrect changes to numbers, dates, or units.
- **Modifiers or Adjectives**: Improper modifications to descriptive terms associated with nouns.
- **Negation or Antonyms**: Issues related to the introduction or removal of negations, or the incorrect transformation of comparatives into their opposites.
- **Facts**: Trivial factual inaccuracies that do not fit into the aforementioned categories.
- **Dropped Content**: Omission of a key component of the sentence in the translation.

To complete this task effectively, follow these structured steps:
1. Carefully read and comprehend both the German source sentence and its English translation.
2. Perform a detailed comparison between the two sentences to identify any discrepancies.
3. Categorize these discrepancies according to the predefined error types.
4. Select the most appropriate category for the identified error(s).

BBH Prompt (snarks)

To effectively distinguish between two nearly identical sentences and identify the one that uses sarcasm, follow the enhanced guidelines below:

### Objective
Your goal is to recognize
which of the two given sentences is intended to convey sarcasm. Sarcasm is characterized by language that expresses a meaning opposite to its literal sense, often used to mock or criticize.

### Analytical Approach
1. **Analyze Word Choice and Sentence Structure**: Carefully examine the words and structure of each sentence for subtle cues that hint at sarcasm.
2. **Imagine a Real-Life Context**: Visualize a plausible scenario in which the sentences might be said to better understand the speaker's intent.
3. **Tone Evaluation**: Assess whether the tone of each sentence suggests sincerity or mockery/criticism.
4. **Examine Literal vs. Implied Meaning**: Look for significant discrepancies between the literal interpretation and the implied message, which often signals sarcasm.
5. **Identify Signs of Irony**: Pay attention to any exaggerated elements or unexpected twists that indicate irony or mockery.

### Response Format
Provide a clear indication of which sentence is sarcastic and support your conclusion with a rationale focusing on ironic or mocking elements.

BBH Prompt (sports_understanding)

Given a fictitious sentence related to sports, evaluate its plausibility. Your evaluation should follow these steps:
1. **Comprehend the Sentence**: Thoroughly read and understand the sentence, taking note of the sport or activity it might relate to, the key actors involved, and any specific details about the action or conditions described.
2. **Analyze Key Components**: Identify the central elements of the sentence, including the athlete's name, the action performed, and any particular specifications like body part used or environmental factors that may influence the action.
3. **Assess Logical Possibility**: Determine if the described scenario fits within the logical and practical boundaries of sports.
Consider the typical rules, physical constraints, and standard practices relevant to the sport implied by the sentence.
4. **Justify Your Conclusion**: Clearly explain your judgment regarding the plausibility of the sentence. Use specific references from the sentence to support your reasoning, detailing how these elements align with or deviate from established norms in sports.
5. **Detail the Reasoning Process**: Provide a detailed Chain-of-Thought that outlines each step of your reasoning process. This should include initial observations, intermediate deductions, and the final justification for your conclusion.

BBH Prompt (temporal_sequences)

Given a detailed account of a person's daily schedule, including various activities and their corresponding times, your objective is to identify potential periods of availability throughout the day when the person could have engaged in an additional activity. Follow these steps to achieve this:
1. **Comprehend the Schedule**: Carefully review the provided sequence of events and note down the start and end times for each activity. Ensure you capture all pertinent details about the day's beginning and end, along with any specific operational hours or restrictions related to the locations involved.
2. **Detect Intervals of Availability**: Examine the timeline for intervals where no activity is specified. These gaps suggest potential windows of availability for engaging in additional activities.
3. **Apply Contextual Constraints**: Take into account additional contextual factors such as daily routines, operational hours of places (like parks or shops), and
other known limitations or rules (such as curfews).
4. **Define Availability Periods**: Clearly outline the identified periods of availability in your response, making sure they align with the provided schedule and any supplementary context.
5. **Use Concise Language**: Respond using clear, professional language focused exclusively on the periods of availability without adding extraneous details.
6. **Systematic Examination**: Employ a thorough, step-by-step approach to ensure all potential gaps in the schedule are analyzed accurately.

BBH Prompt (tracking_shuffled_objects_five_objects)

To effectively determine the final positions of a set of entities after a series of pairwise swaps, adopt a clear, structured, and flexible approach focusing on precision, accuracy, and comprehensive coverage. This method aims to streamline the process of tracking transformations while ensuring thorough adherence to the task's requirements.

### Enhanced Problem-Solving Framework

#### Step 1: Define Initial States
1. Identify and label each entity uniquely (e.g., using names, labels).
2. Document the initial position or state of each entity clearly to establish a baseline for tracking changes.

#### Step 2: Execute Swaps Systematically
1. For each swap in the sequence:
   - Identify the two entities involved.
   - Update their positions or states according to the swap.
   - Record the swap chronologically to ensure traceability and validation.

#### Step 3: Confirm Final States
1. After applying all swaps:
   - Verify the final positions or states of all entities.
   - Cross-reference the final configuration against the recorded swaps to ensure accuracy and completeness.

### Core Principles
- **Sequential Order**: Apply swaps in the order provided to maintain the integrity of the transformation sequence.
- **Clear Documentation**: Use consistent methods for documenting position or state updates to reduce errors and enhance traceability.
- **Thorough Validation**: Conduct rigorous checks on the final positions or states to ensure they accurately reflect the intended outcomes based on the swaps executed.

### Practical Application Examples
This framework can be applied to various situations, such as:
- Students exchanging books.
- Dancers swapping partners.
- Entities undergoing pairwise exchanges in any context.

The fundamental requirement remains constant: transitioning from an initial setup to a final configuration through a series of swaps.

#### Detailed Task Execution
1. **Define Initial Setup**: Clearly outline the starting positions or states of all entities.
2. **Implement Sequential Swaps**: Apply each swap systematically, updating positions or states as specified.
3. **Validate Final Configuration**: Thoroughly verify the final positions or states to ensure they correctly reflect the sequence of swaps executed.

BBH Prompt (tracking_shuffled_objects_seven_objects)

Your task involves determining the final positions of a set of entities after undergoing a series of pairwise swaps. To effectively tackle this task, adhere to the following guidelines designed to ensure accuracy and efficiency:
1. **Initial Position Identification**: Clearly define and list each entity alongside its initial position. Employ a structured format such as a table or list to facilitate easy identification and ensure a clear understanding of the starting configuration.
2. **Detailed Recording of Swaps**: Closely document each swap operation, specifying which entities are involved and the order in
which these swaps take place. This meticulous documentation is essential for accurate tracking and proper application of each swap.
3. **Step-by-Step Swap Execution**: Execute each swap operation strictly in the recorded sequence. Update the positions of the entities after every swap, maintaining an organized and clear record to ensure precision and adherence to the swap order.
4. **Final Position Compilation**: Upon completing all swap operations, compile and present the final positions of all entities in a clear and accessible format.

To enhance your approach, consider the following strategies:
- Utilize a structured format like a table or list to keep detailed records of positions at each stage, thereby increasing clarity and minimizing errors.
- Stress the significance of meticulous documentation and sequential execution of swap operations to maintain accuracy and consistency.
- Implement a flexible yet robust method capable of efficiently managing diverse swap sequences and varying quantities of entities.

Key points to remember:
- Maintain precision and thoroughness when dealing with multiple pieces of information concurrently.
- Ensure your methodology is adaptable to various scenarios involving differing numbers of entities and swap sequences.
- Avoid dependence on specific examples; instead, cultivate a versatile approach applicable across different contexts.

BBH Prompt (tracking_shuffled_objects_three_objects)

Given a set of objects and a sequence of pairwise swaps, the goal is to determine the final positions of each object after all swaps have been executed. The solution should be generalizable, capable of handling any number of objects and any sequence of swaps, without being constrained to specific examples.

### Objective
Determine the final position of each object following a series of pairwise swaps.

### Input Format
- The initial positions of objects (e.g., "Object A starts at position X").
- A list of swaps (e.g., "Swap Object A with Object B").

### Output Format
- The final positions of all objects after executing all swaps.

### Guidelines for Solving
1. **Initialization**: Begin by defining the starting positions of all objects.
2. **Execution of Swaps**: Apply each swap in the sequence systematically.
   - Identify the pair of objects involved in the swap.
   - Temporarily store the positions of these objects.
   - Perform the swap by exchanging their positions.
3. **Verification**: Confirm the final positions of the objects after processing all swaps.
4. **Presentation**: Clearly state the final positions of all objects.

### Chain-of-Thought Guidance
To solve this problem efficiently and accurately:
1. **Initialization Step**: List out the initial positions of all objects.
2. **Swap Processing Step**: For every swap in the sequence:
   - Note the objects involved in the swap.
   - Store their current positions temporarily.
   - Exchange their positions.
3. **Verification Step**: After completing all swaps, review the final positions of the objects to ensure they align with the sequence of swaps.
4. **Conclusion Step**: Provide a clear statement of the final positions for all objects.

BBH Prompt (web_of_lies)

To accurately evaluate the truth value of a Boolean function presented as a natural-language word problem, adhere to the following detailed and systematic approach:
1. **Understanding the Core Principle**
: Recognize that every individual in the scenario is consistently either truthful or deceitful. A truthful individual will never contradict themselves or anyone known to be truthful, whereas a deceitful individual will always contradict statements of truthfulness.
2. **Identifying Assertions**: Closely examine each individual's claim regarding the honesty or deceitfulness of others. These claims serve as the foundational elements for your logical deduction process.
3. **Progressive Deduction**: Begin with the first confirmed truth or lie and progressively deduce the honesty status of each individual. For each new statement, assess its alignment with the established truths or falsehoods from previous statements.
4. **Maintaining a Chain of Thought Record**: Throughout the analysis, keep a detailed log of your logical progression. Note any contradictions or confirmations encountered during the examination of each statement.
5. **Concluding Evaluation**: After compiling all deductions, determine the truth value of the final statement in question. Ensure that your conclusion aligns with all preceding deductions.

**Key Guidelines for Logical Reasoning**:
- A truthful person's statement accurately reflects reality.
- A liar's statement misrepresents reality.
- Consistency across all statements is essential for establishing the honesty of each individual.

**Step-by-Step Detailed Process**:
- Start with the initial statement and evaluate its truth value.
- For each subsequent statement, carefully examine how it aligns with the established truths or falsehoods from earlier statements.
- Maintain a thorough record of any logical inconsistencies or confirmations throughout the analysis.
- Ensure that your final evaluation is consistent with the entire sequence of deductions made.

**Expected Output Format**: Provide a comprehensive explanation of your chain of thought leading to your conclusion.
Each step should be logically coherent and clearly articulated.

BBH Prompt (word_sorting)

Your task is to organize a list of words into alphabetical order. To accomplish this, provide a detailed chain-of-thought explanation that covers the following aspects: word comparison, handling duplicates, ensuring accuracy, and managing edge cases like empty or excessively long lists. Your response should include the sorted list and a structured thought process, as outlined below:

**Input**: Specify the list of words to be sorted.

**Thought Process**:
- **Initial Setup**: Outline any preliminary actions or configurations needed before sorting begins.
- **Comparison Strategy**: Elaborate on how individual words are compared to determine their order in the list.
- **Dealing with Duplicates**: Explain your approach to manage and handle repeated words within the list.
- **Final Check**: Describe the steps taken to verify the correctness of the sorted list.
- **Edge Cases Consideration**: Address special scenarios such as empty lists or lists containing a very high number of entries.

**Sorted Output**: Display the final list arranged alphabetically.

AQUA-RAT

AQUA-RAT (Ling et al., 2017) contains multiple-choice math questions requiring algebraic reasoning and textual comprehension. The dataset includes detailed rationales for each question, making it suitable for supervision via explanation. In our experiments, we adopt a non-preference-based optimization setup by treating the step-by-step rationale as the supervision target. This enables PMPO to optimize prompts using cross-entropy
loss over full reasoning sequences. Additionally, since AQUA-RAT is a multiple-choice task, we also support a preference-based training variant, where the correct answer choice is treated as the selected output and the incorrect options as rejected, allowing PMPO to operate under both supervised and preference paradigms.

AQUA-RAT Prompt

Given a math problem that requires determining a specific value through methods such as algebraic manipulation, geometric reasoning, or other applicable techniques, solve the problem systematically, providing clear explanations for each step to illustrate your reasoning process. Once you have found the solution, select the correct answer from the given options and justify your choice logically. Ensure your approach demonstrates flexibility in addressing various problem types, including quantitative relationships, algebraic expressions, and geometric calculations. Focus on applying fundamental mathematical principles in a broad context without relying on specific examples. Your response should be clear, concise, and professionally framed, guiding the reader through the problem-solving process effectively.

GSM8K

GSM8K (Cobbe et al., 2021) is a benchmark of grade-school math word problems that require multi-step reasoning and precise arithmetic. Since each instance in the dataset comes with a detailed rationale, we treat the full explanation as the supervision target in PMPO training. The optimization objective is to minimize cross-entropy loss over the complete reasoning sequence. As a result, the optimized prompts tend to preserve the distinctive answer format seen in GSM8K (e.g., `<<calculation=answer>>`) to better align with the ground truth and reduce token-level loss. This shows that PMPO can adapt prompt style and structure based on dataset-specific signal during training.
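The objective described here, token-level cross-entropy over the full rationale, can be written out directly. This is a generic sketch of the quantity being minimized, not the authors' implementation; in practice `token_logprobs` would come from the evaluated model conditioned on a candidate prompt, and the values below are hypothetical.

```python
import math

def sequence_cross_entropy(token_logprobs: list) -> float:
    """Mean negative log-likelihood of the target (rationale) tokens.

    token_logprobs[i] is log p(y_i | prompt, question, y_<i) under the model;
    a lower value means the prompt makes the reference rationale more likely.
    """
    return -sum(token_logprobs) / len(token_logprobs)

# Hypothetical per-token log-probabilities for a short reference rationale.
logprobs = [math.log(0.9), math.log(0.8), math.log(0.95)]
loss = sequence_cross_entropy(logprobs)
print(round(loss, 4))
```

A prompt that better matches the dataset's answer format (e.g., the `<<calculation=answer>>` convention) lowers this loss by making the reference tokens more predictable.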
GSM8K Prompt

To solve any math word problem precisely, follow this structured approach, focusing on clear understanding, logical steps, and accurate calculations:

1. Understand the Problem & Identify the Goal
   - Briefly restate what the problem is asking (e.g., total amount, difference, remainder, cost).
   - Determine the main operation(s) involved (e.g., addition, subtraction, multiplication, division, unit conversion).
2. Extract Key Information
   - List all given numbers, units, and relevant conditions from the problem.
3. Plan the Solution
   - Define what you need to find.
   - Outline a clear step-by-step approach to solve it.
4. Step-by-Step Reasoning and Calculation
   - For each step:
     1. Explain what you are calculating and why.
     2. Perform the calculation, using the format `<<calculation=answer>>`, including units if needed.
     3. If not the final step, explain how this leads to the next one.
5. Conclude and Verify
   - Confirm that the result answers the original question.
   - Check consistency in logic, units, and values.
6. Final Answer
   - Provide only the final result, starting with `####`, and include units if applicable.

AlpacaEval 2.0

For AlpacaEval 2.0 (Dubois et al., 2025), we use the reciprocate/alpaca-eval dataset as our training source. This dataset contains a large number of paired outputs annotated as selected (preferred) and rejected (less preferred) responses across various instruction-following tasks. We construct preference-based supervision by treating the selected outputs as positive targets and the rejected ones as negatives. PMPO is then trained to minimize loss on preferred examples while discouraging prompts that
increase likelihood on rejected ones. This setup enables effective instruction tuning without requiring explicit gold answers.

AlpacaEval Prompt

To generate an informative, helpful, and accurate response to a user's query, follow these steps carefully:

1. **Comprehend the Query**: Read the user's question or request thoroughly to grasp its essence and identify any key terms or concepts that require attention.
2. **Research Thoroughly**: Utilize your extensive knowledge and research abilities to gather precise and current information that aligns with the user's query.
3. **Structure the Response**: Organize the content logically, prioritizing clarity and conciseness to ensure that the most relevant information is easily accessible to the user.
4. **Enhance with Context**: Provide additional context that can enrich the user's understanding and appreciation of the topic, making the response more valuable and engaging.
5. **Maintain Accuracy**: Double-check all factual information and data included in the response to uphold the highest standards of accuracy and reliability.
6. **Communicate Clearly**: Write in a straightforward and professional tone, using terminology that is appropriate and understandable given the context of the query.
7. **Recommend Further Resources**: Where applicable, suggest additional sources or materials that the user can consult for deeper insights or further exploration of the topic.

**Guiding Principles**:
- Always aim to fully address the user's request, providing a comprehensive and well-rounded response.
- Prioritize user-centric thinking, focusing on the relevance and utility of the information provided.
- Employ a systematic approach to problem-solving, breaking down complex queries into manageable parts and addressing them methodically.
- Encourage engagement and curiosity by offering insightful and thought-provoking content.
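The selected/rejected contrast described above can be sketched as a simple scoring rule over candidate prompts. This is a minimal illustration, not PMPO's actual loss: `preference_score` is a hypothetical helper operating on precomputed per-token log-probabilities of the selected and rejected responses under a given candidate prompt.

```python
def preference_score(chosen_logprobs, rejected_logprobs):
    """Score a candidate prompt: reward likelihood on the selected response
    and penalize likelihood on the rejected one (both averaged per token)."""
    chosen = sum(chosen_logprobs) / len(chosen_logprobs)
    rejected = sum(rejected_logprobs) / len(rejected_logprobs)
    return chosen - rejected

# Toy per-token log-probs for two candidate prompts on one preference pair.
prompt_a = preference_score([-0.2, -0.4, -0.1], [-0.3, -0.5, -0.2])
prompt_b = preference_score([-1.0, -1.2, -0.9], [-0.2, -0.3, -0.1])
print(prompt_a > prompt_b)  # → True
```

A prompt that raises the likelihood of preferred outputs while lowering it on rejected ones scores higher, which is the behavior the training objective encourages.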
A.4.1 Cross-Model Prompt Analysis

We compare the final prompts optimized by our PMPO framework across different language models, all targeting the same BBH-Navigate task. The results demonstrate that PMPO consistently tailors its optimizations to the reasoning abilities and strengths of the underlying model, while maintaining adherence to the general task formulation. Below, we present the optimized prompts with no additional examples, as used in our final evaluations.

Qwen2.5-0.5B

To further enhance navigation accuracy, consider implementing additional strategies such as:

1. **Path Reconstruction**: Implement algorithms to reconstruct the user's actual route from start to finish, taking into account potential obstacles, turns, and other navigational factors.
2. **Feedback Loop**: Incorporate real-time feedback loops to continuously adjust the user's course based on their current location and movement status.
3. **Error Correction**: Introduce mechanisms to automatically correct errors in the user's movements, ensuring smooth transitions between different areas of the map.
4. **Dynamic Path Generation**: Develop dynamic paths that adapt to changing conditions, such as adjusting routes based on weather conditions or sudden changes in terrain.
5. **Customizable Paths**: Allow users to specify custom paths or waypoints, which can help them avoid dead ends or reach previously unvisited locations more efficiently.
6. **Real-Time Updates**: Implement real-time updates to the navigation system, allowing users to see how their journey is progressing and make informed decisions about their
next moves.

These enhancements not only improve navigation accuracy but also provide a more intuitive and personalized experience for users.

Analysis: For smaller models like Qwen2.5-0.5B, PMPO generates prompts that emphasize procedural redundancy and compensatory strategies (e.g., feedback loops, real-time updates). This suggests the model benefits from prompts that scaffold task understanding through expanded heuristics rather than strict symbolic reasoning.

LLaMA3.1-8B

Given a sequence of instructions, determine whether a user returns to the origin after following the instructions. The instructions can include taking steps in a specific direction and turning in a specific direction.

- Use explicit directions (e.g., "north", "south")
- Keep all step sizes uniform (1 unit)
- Define the origin clearly as (0, 0)
- Use structured input like "Take X steps [direction]" or "Turn Y degrees [left/right]"

The user starts at (0, 0) facing north. Analyze whether they return to origin after completing all steps.

Analysis: On LLaMA3.1-8B, the optimized prompt prioritizes precision and input standardization. Compared to Qwen2.5-0.5B, this prompt uses less redundancy and more explicit spatial constraints, aligning well with the model's stronger logical parsing ability.

Qwen2.5-14B

Given a set of navigation instructions for an agent, your task is to determine if the agent ends up back at its initial starting point. Follow these structured guidelines to methodically analyze the navigation instructions:

**Guidelines for Analyzing Navigation Instructions:**
1. **Initial Setup**: Position the agent at the Origin (Point 0) and orient it Facing Forward.
2. **Command Handling**:
   - **Movement Commands**: Adjust the agent's position based on its orientation (Forward, Backward).
   - **Orientation Commands**: Alter the agent's direction (Turn around, Turn left, Turn right) without changing its position.
3. **Sequential Execution**:
   - Process each command one by one.
   - Update the agent's position and orientation after executing each command.
4. **Final Position Verification**:
   - After all commands have been executed, check if the agent's position matches the initial Origin.
   - Determine if the agent has returned to the starting point.
5. **Response Format**:
   - Give chain of thought.
   - Provide a clear "Yes" or "No" answer.
   - Include a detailed explanation of your reasoning process.

**Systematic Reasoning Approach**:
- Start with the agent at the Origin and Facing Forward.
- For every movement command, move the agent according to its current orientation.
- For every orientation command, change the agent's direction without modifying its position.
- At the end, assess if the final position corresponds to the Origin.
- Explain your thought process thoroughly to justify the conclusion.

Analysis: Qwen2.5-14B benefits from richer instruction structures. PMPO's optimized prompt includes explicit reasoning breakdowns and distinguishes between movement and orientation changes, showcasing the model's ability to follow hierarchical and procedural plans.

Qwen2.5-32B

To effectively determine if a user returns to their origin after following a set of directional instructions, follow this structured approach:

1. **Comprehend the Directions**: Thoroughly review the sequence of movements. Each command includes a direction (forward, backward, left, right) and a step count.
2. **Set Up the Coordinate Plane**: Define a coordinate plane with the origin at (0,0). This plane
will help track the user's precise location throughout the journey.
3. **Update Coordinates According to Movements**:
   - Increase the X-coordinate for forward moves by the step count.
   - Decrease the X-coordinate for backward moves by the step count.
   - Decrease the Y-coordinate for leftward moves by the step count.
   - Increase the Y-coordinate for rightward moves by the step count.
4. **Assess the End Position**: After processing all commands, verify the end coordinates. A return to the origin is confirmed when the user's final location is (0,0).
5. **Conclude with Clarity**: Based on the final coordinates, clearly communicate whether the user has reached the starting point again.

This systematic procedure guarantees a precise and clear determination of the user's final position relative to the initial point, addressing the navigation challenge comprehensively.

Analysis: The prompt tailored for Qwen2.5-32B reflects its strong symbolic reasoning capacity. PMPO optimizes for Cartesian reasoning by introducing coordinate systems, directional vectors, and arithmetic evaluation, enhancing spatial comprehension without overcomplicating the structure.

DeepSeek-R1-Distill-Qwen-1.5B

To check if someone returns to their origin point, add up all movements: count right steps as positive numbers and left steps as negative numbers. If the sum equals zero, they've returned to the starting point. If not, they've ended elsewhere.

Example: three steps right (+3) and three steps left (-3) gives 0, meaning they returned to origin. Four steps right (+4) and three steps left (-3) gives +1, meaning they didn't.

Analysis: For distilled or small-scale models like DeepSeek-R1, PMPO simplifies the logic into linear additive operations. This aligns with the model's limited abstraction depth, ensuring the task is achievable with minimal symbolic overhead.
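The return-to-origin check that these prompts describe can be executed mechanically by tracking position and heading. The sketch below follows the coordinate-tracking procedure from the Qwen2.5-32B prompt; the command vocabulary ("take N steps", "turn left/right/around") is our assumption about the BBH-Navigate format, not an exact reproduction of the benchmark's phrasing.

```python
def returns_to_origin(commands):
    """Track position on a grid, starting at (0, 0) facing north (0, 1)."""
    x, y = 0, 0
    dx, dy = 0, 1  # unit heading vector
    for cmd in commands:
        if cmd == "turn left":
            dx, dy = -dy, dx   # rotate heading 90 degrees counterclockwise
        elif cmd == "turn right":
            dx, dy = dy, -dx   # rotate heading 90 degrees clockwise
        elif cmd == "turn around":
            dx, dy = -dx, -dy  # reverse heading
        elif cmd.startswith("take"):
            steps = int(cmd.split()[1])
            x, y = x + steps * dx, y + steps * dy
    return (x, y) == (0, 0)

print(returns_to_origin(["take 3 steps", "turn around", "take 3 steps"]))  # → True
print(returns_to_origin(["take 4 steps", "turn around", "take 3 steps"]))  # → False
```

The DeepSeek-R1 prompt's signed-sum rule is the one-dimensional special case of this procedure: with no turns other than "turn around", only one coordinate ever changes.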
arXiv:2505.16312v1 [cs.AI] 22 May 2025

EquivPruner: Boosting Efficiency and Quality in LLM-Based Search via Action Pruning

Jiawei Liu¹,²*, Qisi Chen¹, Jianshu Zhang², Quan Liu², Defu Lian¹†
¹University of Science and Technology of China, ²iFLYTEK Research
{ljw1222,chisch}@mail.ustc.edu.cn, liandefu@ustc.edu.cn

Abstract

Large Language Models (LLMs) excel at complex reasoning through search algorithms, yet current strategies often suffer from massive token consumption due to redundant exploration of semantically equivalent steps. Existing semantic similarity methods struggle to accurately identify such equivalence in domain-specific contexts like mathematical reasoning. To address this, we propose EquivPruner, a simple yet effective approach that identifies and prunes semantically equivalent actions during LLM reasoning search. We also introduce MathEquiv, the first dataset we created for mathematical statement equivalence, which enables the training of a lightweight equivalence detector. Extensive experiments across various models and tasks demonstrate that EquivPruner significantly reduces token consumption, improving searching efficiency and often bolstering reasoning accuracy. For instance, when applied to Qwen2.5-Math-7B-Instruct on GSM8K, EquivPruner reduced token consumption by 48.1% while also improving accuracy. Our code is available at https://github.com/Lolo1222/EquivPruner.

1 Introduction

Large Language Models (LLMs) are increasingly demonstrating remarkable capabilities, yet their performance scaling during pretraining faces potential constraints due to data limitations (Lightman et al., 2023). Consequently, enhancing LLM capabilities at inference time has become a critical research frontier (Snell et al., 2024). A prominent direction involves leveraging search algorithms, particularly reward-guided tree search, to improve complex reasoning (Ke et al., 2025).
These methods typically expand the search space by generating multiple reasoning steps (e.g., via chain-of-thought prompting (Wei et al., 2022)) and employ techniques like beam search (Kang et al., 2024) or Monte Carlo Tree Search (MCTS) (Chen et al., 2024; Zhang et al., 2024a) to navigate this space and identify high-quality solutions.

However, current search strategies exhibit significant inefficiencies (Damani et al., 2024). A common practice involves sampling multiple candidate reasoning steps and exploring them, often allocating computational resources uniformly across these candidates (Yao et al., 2023; Long, 2023; Besta et al., 2024). This approach overlooks the potential semantic equivalence among textually distinct candidates. Treating semantically identical reasoning steps as unique branches leads to redundant exploration of the search space, incurring substantial computational overhead through unnecessary token generation and processing. Moreover, for search algorithms that incorporate preference learning based on intermediate steps (e.g., Xie et al., 2024; Jiang et al., 2024), learning preferences from pairs of equivalent steps may provide noisy or conflicting signals, hindering the learning of effective reasoning policies. This challenge is particularly acute in domains like mathematical reasoning, where numerous textual formulations can represent the same underlying logical operation or state. Addressing this redundancy via standard Semantic Textual Similarity (STS) techniques (Majumder et al., 2016) proves inadequate, as illustrated in Figure 1. Existing embedding models, such as SBERT (Reimers and Gurevych, 2019), predominantly trained on general text, often fail to capture the nuanced structural and logical equivalence specific to mathematical statements. Even domain-specific models like MathBERT (Peng et al., 2021), which enhance mathematical text representation, along with other embedding models such as MATH-Similarity (Steinfeldt and Mihaljević, 2024), lack optimization for identifying functional equivalence between mathematical sentences. This limitation is further exacerbated by the lack of specialized benchmark datasets designed for mathematical statement equivalence. Although large-scale generative models can achieve satisfactory performance in few-shot scenarios for such judgment tasks, their substantially higher computational complexity results in significantly slower inference speeds compared to embedding models (Brown et al., 2020). The consequent latency renders them impractical for high-throughput applications requiring real-time processing.

Figure 1: Illustration of the mathematical statement equivalence challenge during reasoning search. Given multiple candidate steps generated by an LLM, standard methods like embedding similarity or Levenshtein Ratio may incorrectly assess candidate 1 and candidate 2 as highly similar due to surface features, while failing to recognize the true semantic equivalence between candidate 2 and candidate 3, which represent the identical logical operation.

To overcome these limitations, we introduce EquivPruner, a simple yet effective approach that centers on identifying and pruning semantically equivalent actions during LLM reasoning search. We create MathEquiv, the first dataset specifically designed for mathematical statement equivalence. Leveraging this dataset, we trained a lightweight yet effective equivalence detection model. This model serves as a dynamic pruner integrated into the LLM's search process. When the LLM generates multiple candidate reasoning steps at a given expansion point, the pruner identifies sets of semantically equivalent candidates among these siblings.

*Work done during an internship at iFLYTEK Research.
†Corresponding Author.
For each set of equivalent steps, it retains only a single representative node for further exploration, effectively pruning the redundant branches and significantly reducing the search space.

While the proposed pruning framework is potentially generalizable, this paper focuses on its validation within mathematical reasoning due to the significant research community attention (Ke et al., 2025) and the availability of well-developed open-source process reward models (Shao et al., 2024). We conduct extensive experiments across various models, including Mistral-7B-SFT (Shao et al., 2024) and the Qwen2.5-Math-Instruct series (Yang et al., 2024), using two widely recognized math reasoning benchmarks: GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021). Our proposed EquivPruner demonstrates compelling improvements across these settings. For instance, when applied to Qwen2.5-Math-7B-Instruct on GSM8K, where the model already achieves a very high baseline accuracy of 96.44%, EquivPruner not only cuts token consumption by a substantial 48.1% but also further boosts accuracy to 96.59%. This demonstrates EquivPruner's ability to significantly enhance searching efficiency.

Our main contributions are:

- To the best of our knowledge, this work is the first to identify and address the problem of action equivalence in LLM-based reasoning search.
- We introduce EquivPruner, a simple yet effective approach that centers on identifying and pruning semantically equivalent actions during LLM reasoning search.
- We release MathEquiv, the first benchmark dataset specifically designed for mathematical statement equivalence. It serves as a versatile resource applicable to a variety of mathematical tasks and scenarios.
- Extensive experiments demonstrate the effectiveness of EquivPruner. When applied
to Qwen2.5-Math-7B-Instruct on GSM8K, EquivPruner not only cuts token consump- tion by a substantial 48.1% but also further boosts accuracy in a very high baseline. 2 Related Work LLM Reasoning via Search Strategies Efforts to improve LLM problem-solving capabilities have moved beyond simple prompting. Chain- of-Thought prompting (Wei et al., 2022) demon- strated the value of intermediate reasoning steps. Building on this, structured search methods like Tree-of-Thoughts (Yao et al., 2023) and Graph- of-Thoughts (Besta et al., 2024) explore multiple 2 reasoning paths, enhancing performance on com- plex tasks requiring exploration and backtracking. Further advancing this direction, a particularly pow- erful paradigm integrates LLMs with sophisticated search algorithms. Among these, the synergy be- tween LLMs and Monte Carlo Tree Search (MCTS) (Chen et al., 2024; Zhang et al., 2024a) has gar- nered significant attention for tackling complex reasoning problems. MCTS, renowned for its abil- ity to balance exploration and exploitation in vast search spaces, becomes exceptionally potent when guided by an LLM’s generative capabilities to pro- pose candidate steps and a reward model to esti- mate state values (Yao et al., 2023; Long, 2023; Besta et al., 2024). This LLM-MCTS approach, alongside other advanced search integrations like LLM-guided beam search (Chen et al., 2024), has consistently achieved state-of-the-art results in de- manding areas such as science tasks (Yang et al., 2024), coding (Dainese et al., 2024; Zhang et al., 2023), and mathematical reasoning (Zhang et al., 2024b; Luo et al., 2024). However, despite the remarkable success of these advanced search strate- gies, a significant challenge emerges, especially prevalent in mathematical reasoning when employ- ing methods like LLM-MCTS: the substantial to- ken cost (Chen et al., 2024). 
While LLM-MCTS explores many branches effectively, it often wastes resources evaluating syntactically distinct but semantically equivalent states. This redundancy unnecessarily expands the search space, consuming tokens without yielding novel solutions, thus limiting efficiency and scalability.

Mathematical Equivalence Detection. Effective detection of mathematical statement equivalence is crucial for efficient LLM-based search tree pruning, yet current methodologies exhibit significant shortcomings. For instance, rudimentary sequence comparison metrics like Levenshtein similarity (Yujian and Bo, 2007) are fundamentally ill-suited, as they fail to capture the deep semantic and hierarchical structures inherent in mathematical language, leading to unreliable equivalence assessments. Standard Semantic Textual Similarity models, such as SBERT (Reimers and Gurevych, 2019), trained predominantly on general language corpora, are designed to capture semantic relatedness rather than strict mathematical equivalence. Even domain-specific models like MathBERT (Peng et al., 2021), which enhance mathematical text representation, along with other embedding models such as MATH-Similarity (Steinfeldt and Mihaljević, 2024), lack optimization for identifying functional equivalence between mathematical sentences. Their capacity to accurately recognize semantically equivalent mathematical sentences is thereby constrained, as illustrated by the examples in Figure 1. While LLMs like GPT-4o (Hurst et al., 2024) have the ability to recognize mathematical equivalence, their complex architectures introduce significant latency. This high time overhead renders them impractical for real-time pruning scenarios. Consequently, there is an urgent need to enable efficient pruning in LLM-based search.
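The mismatch between surface similarity and mathematical equivalence can be seen with a toy example. Here `difflib.SequenceMatcher` from the Python standard library serves as a rough stand-in for a Levenshtein-style ratio (an assumption for illustration; it is not the metric used in the paper, and the example steps are ours):

```python
from difflib import SequenceMatcher

def surface_similarity(a: str, b: str) -> float:
    # Character-level similarity: compares surface form, not meaning.
    return SequenceMatcher(None, a, b).ratio()

step_a = "Add 3 to both sides: x - 3 + 3 = 7 + 3, so x = 10."
step_b = "Add 3 to both sides: x - 3 + 3 = 7 + 3, so x = 70."   # different result
step_c = "Adding three to each side of x - 3 = 7 yields x = 10."  # same operation

print(surface_similarity(step_a, step_b))  # near 1.0 despite the different result
print(surface_similarity(step_a, step_c))  # much lower despite equivalence
```

A one-character change that alters the mathematics barely moves the score, while a paraphrase of the identical operation drops it sharply, which is exactly the failure mode Figure 1 illustrates.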
3 Methodology

3.1 Defining Semantic Equivalence in Mathematics

Simply equating statements based on identical outcomes can be superficial and misleading, as it may overlook critical differences in conceptual articulation, structural formulation, symbolic interpretation, and methodological pathways. To address this, we propose a definition of semantic equivalence specifically attuned to these multifaceted aspects. Accordingly, in our framework, two mathematical statements are considered semantically equivalent if and only if they rigorously satisfy the following criteria:

- Conceptual Consistency: The statements must articulate identical mathematical concepts, definitions, or propositions without ambiguity.
- Structural Equivalence: Their logical formulations, encompassing assumptions, derivations, and conclusions, must be fully aligned.
- Notational Precision: All variables, symbols, and mathematical expressions must be employed consistently, maintaining identical meanings across the statements.
- Methodological Congruence: Semantic equivalence necessitates an alignment in the underlying methodology and reasoning. Statements yielding the same final result via disparate approaches are not considered fully equivalent.

Our approach to semantic equivalence thus mandates a comprehensive assessment. It scrutinizes the congruence of conceptual foundations, logical structures, notational usage, and methodological approaches. Two mathematical statements are judged completely equivalent only when they demonstrate indivisible identity across all these critical facets.

Figure 2: The EquivPruner framework. Top: Training the lightweight equivalence pruner from labeled step-level sentence pairs. Bottom: Applying the trained lightweight pruner during tree-search-based LLM inference to remove semantically equivalent candidates generated by the LLM.
3.2 The MathEquiv Dataset

Recognizing the absence of dedicated datasets for mathematical statement equivalence, we constructed and released MathEquiv to bridge this gap. The MathEquiv dataset was curated by initially employing a Step-level Beam Search algorithm (Chen et al., 2024) to gather action candidates. These candidates were subsequently formulated into step-level sentence pairs.

For the task of equivalence scoring, we implemented a five-tiered classification system. This granular approach was adopted to enhance the stability of the GPT model's outputs, as preliminary experiments with binary classification (equivalent/non-equivalent) revealed inconsistencies in judgments. The five-tiered system yielded significantly more consistent and reliable assessments:

- Level 4 (Exactly Equivalent): The statements are mathematically interchangeable in all respects, exhibiting identical meaning and form.
- Level 3 (Likely Equivalent): Minor syntactic differences may be present, but the core mathematical content and logic align.
- Level 2 (Indeterminable): Insufficient information is available to make a definitive judgment regarding equivalence.
- Level 1 (Unlikely Equivalent): While some partial agreement may exist, critical discrepancies in logic, definition, or mathematical structure are observed.
- Level 0 (Not Equivalent): The statements are fundamentally distinct in their mathematical meaning, derivation, or resultant outcomes.

The MathEquiv dataset was labeled via an iterative refinement process. Initially, GPT-4o labeled a data subset, followed by human expert review. For discrepancies, the human-adjudicated label and its rationale were incorporated into GPT-4o's prompt as few-shot examples. This cycle was repeated until model outputs for a randomly sampled subset consistently aligned with human consensus.
Subsequently, the collection of few-shot examples was pruned to a minimal, representative set sufficient to maintain this model-human alignment. This
iterative calibration process yielded the MathEquiv dataset, characterized by high-quality labels and an accurate assessment of semantic equivalence. The final prompt is detailed in Figure 4. The MathEquiv dataset is available at https://huggingface.co/datasets/Jiawei1222/MathEquiv.

3.3 Lightweight Pruner for Tree Search

To facilitate dynamic, real-time pruning within our tree search algorithm, we developed and trained a dedicated Lightweight Pruner. The data collection process for training this pruner and its integration into the broader tree-search-based LLM inference pipeline are illustrated in Figure 2.

3.3.1 Data Complexity in Pruner Training

The MathEquiv dataset, suitable for assessing overall statement equivalence, presents specific challenges for training the Lightweight Pruner. The dataset's step-level sentence pairs often consist of multiple sentences. A key difficulty is that step pairs labeled as non-equivalent at a macro level may nevertheless contain sub-pairs of sentences that are semantically equivalent. This characteristic, common in data derived from intermediate mathematical problem-solving steps, can introduce ambiguity and hinder the pruner's ability to learn fine-grained distinctions if not appropriately addressed. The true equivalence status of these sub-sentence pairs can be viewed as a latent aspect of the data.

3.3.2 Pruner Training via Expectation-Maximization (EM)

To effectively train the Lightweight Pruner amidst this data complexity, we employ the Expectation-Maximization (EM) algorithm, which is effective for handling the unobserved equivalence status of sub-sentence pairs within larger, complex training instances. The algorithm alternates between an Expectation (E) step and a Maximization (M) step:

1. E-step (Expectation Step): Given the model parameters θ(t) at iteration t, the pruner predicts the equivalence probability of each sub-sentence pair in multi-sentence samples.
Sub-sentence pairs with probabilities exceeding a threshold are treated as high-confidence equivalents and removed from samples to refine the dataset for the next step.

2. M-step (Maximization Step): The model parameters are updated to θ(t+1) by maximizing the likelihood of the observed data, conditioned on the expectations derived in the E-step.

By training on samples that have been simplified or where latent equivalences have been accounted for, the model can better focus on learning more subtle or challenging distinctions necessary for effective pruning.

4 Experiments

In this section, we present a series of comprehensive experiments designed to validate the efficacy of EquivPruner.

4.1 MathEquiv Dataset Generation

We constructed the MathEquiv dataset for mathematical statement equivalence. The foundation of this dataset consists of 7,500 mathematical problems sourced from the MATH training set (Hendrycks et al., 2021). To prevent data leakage between training, validation, and test phases of EquivPruner, these 7,500 problems were first split into training, validation, and test sets using an 8:1:1 ratio. For each problem in these distinct sets, we generated candidate reasoning step pairs using the Qwen2.5-Math-7B-Instruct model (Yang et al., 2024) via Step-level Beam Search. These pairs were subsequently filtered based on Levenshtein distance, and a balanced sample from each set was then annotated for equivalence by GPT-4o. This process resulted in distinct training, validation, and test sets of annotated step pairs for EquivPruner. The specific parameters for step pair generation, filtering criteria, and the final dataset sizes are
detailed in Appendix A.1.

4.2 Experimental Setup

4.2.1 Models and Datasets

For inference, we utilized several LLMs: Qwen2.5-Math-7B-Instruct (Yang et al., 2024), Mistral-7B-SFT (Shao et al., 2024), and Qwen2.5-Math-1.5B-Instruct (Yang et al., 2024). Given that existing open-source PRMs are predominantly tailored for mathematical reasoning, our current investigation is confined to mathematical tasks. Nevertheless, the EquivPruner framework is designed for generalizability and can be readily extended to other domains like code generation and commonsense reasoning. The Process Reward Model (PRM) employed for guiding the Monte Carlo Tree Search (MCTS) was Math-Shepherd-Mistral-7B-PRM (Shao et al., 2024). As EquivPruner was trained on data generated by Qwen2.5-Math-7B-Instruct, the Mistral-7B-SFT and Qwen2.5-Math-1.5B-Instruct models serve as out-of-distribution (OOD) models in our experiments.

Table 1: Performance comparison of Vanilla MCTS and MCTS + EquivPruner across three language models on the MATH and GSM8K datasets. EquivPruner significantly reduces token consumption (Tokens, Ratio) while generally maintaining or improving accuracy (Acc, %). Best results within each model-dataset block are in bold.

| Methods | MATH Acc | MATH Tokens | MATH Ratio | GSM8K Acc | GSM8K Tokens | GSM8K Ratio |
|---|---|---|---|---|---|---|
| Qwen2.5-Math-7B-Instruct: Vanilla MCTS | 83.40 | 106773 | 100.00% | 96.44 | 34826 | 100.00% |
| + EquivPruner | **84.00** | 74194 | 69.49% | **96.59** | 18071 | 51.89% |
| Mistral-7B-SFT: Vanilla MCTS | 36.60 | 49251 | 100.00% | 83.78 | 20217 | 100.00% |
| + EquivPruner | **37.40** | 38265 | 77.69% | **85.06** | 12537 | 62.01% |
| Qwen2.5-Math-1.5B-Instruct: Vanilla MCTS | 75.60 | 91811 | 100.00% | **91.05** | 39337 | 100.00% |
| + EquivPruner | 75.60 | 71878 | 78.29% | 90.75 | 23752 | 60.38% |

Our evaluation was conducted on two standard benchmark datasets:

- MATH (Hendrycks et al., 2021): Featuring challenging competition-level mathematics problems. Due to computational demands, our evaluation on the MATH dataset was performed on the MATH-500 subset, identical to the test partition used in Lightman et al.
(2023).

- GSM8K (Cobbe et al., 2021): Consisting of grade school mathematics word problems. Its test set has 1319 problems. Since EquivPruner was trained on data derived from MATH dataset problems, GSM8K is considered an OOD dataset.

4.2.2 Implementation Details

The EquivPruner model itself is a fine-tuned Longformer-base (Beltagy et al., 2020), chosen for its efficiency suitable for real-time pruning. During the MCTS inference phase, the determination of equivalence between two reasoning step nodes involves a two-stage process. First, the Levenshtein ratio between the steps is calculated. If the ratio is less than or equal to 0.75, the nodes are immediately considered non-equivalent, acting as a fast filter. Only if the Levenshtein ratio is greater than 0.75 is the EquivPruner model invoked to make the final equivalence prediction. This hierarchical check balances speed and accuracy in the pruning process. The maximum number of newly generated tokens by the LLMs (max_new_tokens) was set to 1024, and the generation temperature was 0.7. All experiments were conducted on NVIDIA GeForce RTX 3090 GPUs. Further details are available in Appendix A.2.

4.3 Evaluation Metrics

We adopted a vanilla MCTS (Chen et al., 2024) as the baseline for comparison. The evaluation of EquivPruner focuses on two primary aspects:

- Effectiveness: Measured using solution accuracy (Acc), the percentage of problems solved correctly.
- Efficiency: Assessed through the total number of tokens generated (Tokens) and a token ratio (Ratio),
https://arxiv.org/abs/2505.16312v1
defined as the ratio of tokens generated by the EquivPruner-enhanced search to those generated by the baseline MCTS.

4.4 Main Results

Table 1 presents our main experimental findings, comparing vanilla MCTS against MCTS augmented with EquivPruner. The results consistently demonstrate that EquivPruner substantially enhances computational efficiency across different language models and datasets, primarily by reducing token generation while largely preserving or even improving solution accuracy.

Efficiency Gains. EquivPruner achieves significant reductions in token counts across all configurations. For instance, with Qwen2.5-Math-7B-Instruct on GSM8K, tokens were reduced by approximately 48.11% (Ratio: 51.89%), and on MATH, by 30.51% (Ratio: 69.49%). Similar substantial token savings were observed for Mistral-7B-SFT (e.g., 37.99% reduction on GSM8K) and Qwen2.5-Math-1.5B-Instruct (e.g., 39.62% reduction on GSM8K). These figures highlight EquivPruner's effectiveness in pruning the search space.

Table 2: Performance of EquivPruner with Step-level Beam Search (SBS) using the Qwen2.5-Math-7B-Instruct model on MATH and GSM8K. EquivPruner enhances accuracy (Acc, %) by promoting diversity among selected nodes, with token counts (Tokens, Ratio) remaining largely stable.

Methods           Acc    Tokens  Ratio
MATH:
  SBS             82.00  21341   100.00%
  + EquivPruner   82.20  20952   98.18%
GSM8K:
  SBS             96.06  8004    100.00%
  + EquivPruner   96.13  7927    99.04%

Accuracy Impact and Resource Optimization. Crucially, these efficiency improvements are generally accompanied by maintained or enhanced accuracy. Qwen2.5-Math-7B-Instruct saw accuracy gains of +0.60% on MATH and +0.15% on GSM8K. With Mistral-7B-SFT, an OOD model relative to EquivPruner's training data source, accuracy improved by +0.80% on MATH and +1.28% on GSM8K (also an OOD dataset for EquivPruner). This suggests that by eliminating redundant explorations, EquivPruner enables MCTS to allocate its search resources more effectively.
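As a quick sanity check, the Ratio values reported in Table 1 follow directly from the token counts; a minimal sketch (numbers taken from the Qwen2.5-Math-7B-Instruct rows of the table, two-decimal rounding assumed):

```python
def token_ratio(pruned_tokens: int, baseline_tokens: int) -> float:
    """Token ratio as a percentage: tokens generated with EquivPruner
    divided by tokens generated by the baseline MCTS search."""
    return 100.0 * pruned_tokens / baseline_tokens

math_ratio = token_ratio(74194, 106773)   # MATH row of Table 1
gsm8k_ratio = token_ratio(18071, 34826)   # GSM8K row of Table 1
print(round(math_ratio, 2), round(gsm8k_ratio, 2))  # 69.49 51.89
```

The corresponding reductions quoted in the text are simply 100% minus these ratios (30.51% and 48.11%).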
For Qwen2.5-Math-1.5B-Instruct (another OOD model), accuracy was maintained on MATH and saw a minor dip of -0.30% on GSM8K, which is a reasonable trade-off given the nearly 40% token reduction.

Generalization. The positive outcomes on OOD models (Mistral-7B-SFT, Qwen2.5-Math-1.5B-Instruct) and the OOD dataset (GSM8K) underscore EquivPruner's generalization capabilities. It effectively identifies and removes equivalent reasoning steps, allowing MCTS to conduct a more focused and efficient search across varied settings.

4.5 Effectiveness in Step-level Beam Search

To demonstrate its versatility beyond MCTS, we evaluated EquivPruner with Step-level Beam Search (SBS) (Chen et al., 2024) using the Qwen2.5-Math-7B-Instruct model. Unlike MCTS, SBS does not construct an extensive search tree; instead, it dynamically selects the top-k child nodes during expansion. Given this mechanism, applying EquivPruner to SBS is not primarily aimed at reducing the total number of generated tokens, as SBS inherently limits the breadth of the search. Instead, our hypothesis is that EquivPruner can enhance the quality of the search by eliminating redundant nodes before the top-k selection occurs. This process ensures that the k chosen candidates are more diverse, potentially leading to the discovery of more effective reasoning paths and thereby improving overall task performance.

The results in Table 2 validate this. On MATH, EquivPruner increased accuracy from 82.00% to 82.20% (+0.20%), and on GSM8K from 96.06% to 96.13% (+0.07%). Concurrently, token counts remained largely unchanged, with ratios of 98.18% on MATH and 99.04% on GSM8K. These findings suggest that even in search algorithms like SBS, where token generation is already constrained, EquivPruner can still offer benefits. By ensuring that the limited slots in the beam are occupied by semantically distinct reasoning steps, EquivPruner promotes a more diverse and potentially more fruitful exploration of the solution space. This demonstrates that EquivPruner is a versatile component that can enhance different types of search strategies in LLM-based reasoning by improving the quality and diversity of explored paths.

[Figure 3: Ablation study of EquivPruner components. The plot illustrates the impact of different pruning strategies within an MCTS framework on Token Consumption (bars, left y-axis) and Accuracy (line, right y-axis).]

4.6 Ablation Study

To investigate the individual contributions of the key components of our EquivPruner—specifically, the fine-tuning process and the use of the EM algorithm—we conducted an ablation study. The experiments were performed using the Qwen2.5-Math-7B-Instruct model on the MATH dataset. We compare our full method, EquivPruner (Finetuned w/ EM), against three variants: (1) No Pruning (the vanilla MCTS baseline); (2) Pruning w/ Original Longformer (using a pre-trained Longformer-base without task-specific fine-tuning for equivalence); and (3) Pruning w/ Finetuned Longformer (w/o EM) (standard supervised fine-tuning without the EM algorithm).

The results in Figure 3 demonstrate the impact of each component. Using the Original Longformer-base for pruning (Setting 2) reduces tokens (106773 to 89998) compared to No Pruning (Setting 1), but at the cost of an accuracy drop (83.4% to 82.4%), indicating that a generic model is insufficient.
Standard fine-tuning without EM (Setting 3) improves accuracy to 83.8% (surpassing No Pruning) while also improving token efficiency over Setting 2 (89998 to 85451), underscoring the necessity of task-specific training. Critically, our full EquivPruner method with EM-based fine-tuning (Setting 4) achieves both the highest accuracy (84.0%) and the most significant token reduction (106773 to 74194). This highlights that both the fine-tuning process and specifically the EM algorithm are vital for maximizing EquivPruner's effectiveness in improving accuracy and token efficiency.

5 Conclusion

In this paper, we introduce EquivPruner, a simple yet effective approach to address inefficient token usage in LLM reasoning search by identifying and pruning semantically equivalent actions. We also introduce MathEquiv, the first dataset specifically designed for mathematical statement equivalence, which enables the training of an effective lightweight equivalence detector. Extensive experiments demonstrate that EquivPruner significantly reduces token consumption—for example, by 48.1% for Qwen2.5-Math-7B-Instruct on GSM8K—while maintaining or often improving reasoning accuracy across various models and tasks. Our findings underscore the substantial benefits of managing semantic redundancy in reasoning search, offering a valuable direction for enhancing the efficiency and effectiveness of LLMs.

Limitations

There are some limitations with our paper, which we reserve for future work. Firstly, due to computational constraints, EquivPruner was not evaluated on language models
significantly larger than the 7B parameter scale. Secondly, our work focused on EquivPruner's application at inference time, and its potential integration with iterative LLM training or refinement strategies remains an area for future exploration. Lastly, while designed for generalizability, our empirical validation was primarily within mathematical reasoning, influenced by the availability of suitable process reward models (PRMs). Extending evaluation to other domains such as science tasks or commonsense reasoning, contingent upon broader PRM availability, offers a promising direction for future research.

References

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.

Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. 2024. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17682–17690.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. 2024. AlphaMath almost zero: Process supervision without process. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Nicola Dainese, Matteo Merler, Minttu Alakuijala, and Pekka Marttinen. 2024.
Generating code world models with large language models guided by Monte Carlo tree search. In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Mehul Damani, Idan Shenfeld, Andi Peng, Andreea Bobu, and Jacob Andreas. 2024. Learning how hard to think: Input-adaptive allocation of LM computation. arXiv preprint arXiv:2410.04707.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874.

Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.

Jinhao Jiang, Zhipeng Chen, Yingqian Min, Jie Chen, Xiaoxue Cheng, Jiapeng Wang, Yiru Tang, Haoxiang Sun, Jia Deng, Wayne Xin Zhao, et al. 2024. Technical report: Enhancing LLM reasoning with reward-guided tree search. arXiv preprint arXiv:2411.11694.

Jikun Kang, Xin Zhe Li, Xi Chen, Amirreza Kazemi, Qianyi Sun, Boxing Chen, Dong Li, Xu He, Quan He, Feng Wen, et al. 2024. MindStar: Enhancing math reasoning in pre-trained LLMs at inference time. arXiv preprint arXiv:2405.16265.

Zixuan Ke, Fangkai Jiao, Yifei Ming, Xuan-Phi Nguyen, Austin Xu, Do Xuan Long, Minzhi Li, Chengwei Qin, Peifeng Wang, Silvio Savarese, et al. 2025. A survey of frontiers in LLM reasoning: Inference scaling, learning to reason, and agentic systems. arXiv preprint arXiv:2504.09037.

Hunter Lightman, Vineet Kosaraju,
Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. In The Twelfth International Conference on Learning Representations.

Jieyi Long. 2023. Large language model guided tree-of-thought. arXiv preprint arXiv:2305.08291.

Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, et al. 2024. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592.

Goutam Majumder, Partha Pakray, Alexander Gelbukh, and David Pinto. 2016. Semantic textual similarity methods, tools, and applications: A survey. Computación y Sistemas, 20(4):647–665.

Shuai Peng, Ke Yuan, Liangcai Gao, and Zhi Tang. 2021. MathBERT: A pre-trained model for mathematical formula understanding. arXiv preprint arXiv:2105.00377.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y.K. Li, Y. Wu, et al. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300.

Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. 2024. Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314.

Christian Steinfeldt and Helena Mihaljević. 2024. Evaluation and domain adaptation of similarity models for short mathematical texts. In International Conference on Intelligent Computer Mathematics, pages 241–260. Springer.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P. Lillicrap, Kenji Kawaguchi, and Michael Shieh. 2024. Monte Carlo tree search boosts reasoning via iterative preference learning. arXiv preprint arXiv:2405.00451.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809–11822.

Li Yujian and Liu Bo. 2007. A normalized Levenshtein distance metric. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1091–1095.

Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, and Jie Tang. 2024a. ReST-MCTS*: LLM self-training via process reward guided tree search. Advances in Neural Information Processing Systems, 37:64735–64772.

Di Zhang, Jianbo Wu, Jingdi Lei, Tong Che, Jiatong Li, Tong Xie, Xiaoshui Huang, Shufei Zhang, Marco Pavone, Yuqiang Li, et al. 2024b. LLaMA-Berry: Pairwise optimization for O1-like olympiad-level mathematical reasoning. arXiv preprint arXiv:2410.02884.

Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, Joshua B. Tenenbaum, and Chuang Gan. 2023.
Planning with large language models for code generation. In The Eleventh International Conference on Learning Representations.

A Experimental Details

A.1 MathEquiv Dataset Generation Details

The MathEquiv dataset was constructed as follows. Problem Sourcing and Splitting: We selected 7,500 problems from the MATH training set (Hendrycks et al., 2021). These problems were divided into three distinct sets for EquivPruner: a training set (6,000 problems, 80%), a validation set (750 problems, 10%), and a test set (750 problems, 10%). This initial split of problems ensures no data leakage between the subsequently generated step-pair datasets for EquivPruner. Step Pair Generation: For each problem within these three sets, we generated candidate reasoning steps using the Qwen2.5-Math-7B-Instruct model (Yang et al., 2024). This generation was performed via a Step-level Beam Search with the following parameters: beam size (k) = 8, temperature = 0.7, maximum search tree width (tree_max_width) = 10, maximum search tree depth (tree_max_depth) = 50, and maximum new tokens for generation (max_new_tokens) = 1024. Filtering: The generated step pairs from each set were then filtered based on their Levenshtein ratio. Only pairs with a ratio between 0.75 and 0.99 (inclusive) were retained. This filtering aimed to capture meaningful variations while excluding nearly identical or overly dissimilar steps. Sampling and Annotation: From the filtered pairs of each set, we randomly sampled a large number for annotation: 80,000 pairs from the training set, 10,000 pairs from the validation set, and 10,000 pairs from the test set. This process resulted in the final training, validation, and test sets for the MathEquiv dataset, with no overlap in the underlying problems from which the step pairs were derived.

A.2 Implementation Environment and MCTS Parameters

All experiments were conducted using PyTorch version 2.4.0.
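The Levenshtein-ratio filtering step described in A.1 can be sketched as follows. This is an illustrative implementation, not the released code; in particular, the exact ratio normalization is an assumption (we use 1 − distance / max(len), a common normalized Levenshtein similarity):

```python
def levenshtein_ratio(a: str, b: str) -> float:
    """Normalized Levenshtein similarity in [0, 1].
    Assumption: ratio = 1 - edit_distance / max(len); the paper does not
    specify which normalization of the Levenshtein ratio is used."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return 1.0 - prev[-1] / max(len(a), len(b))

def keep_pair(step_a: str, step_b: str, lo: float = 0.75, hi: float = 0.99) -> bool:
    """Retain a step pair for annotation only if its ratio lies in [lo, hi]:
    below lo the steps are too dissimilar, above hi they are nearly identical."""
    return lo <= levenshtein_ratio(step_a, step_b) <= hi
```

Pairs outside the [0.75, 0.99] band are discarded before sampling and annotation, matching the filtering rule above.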
The GPU infrastructure consisted of eight NVIDIA GeForce RTX 3090 GPUs, each with 24GB of memory, utilizing CUDA version 12.1. The central processing unit was an Intel(R) Xeon(R) Platinum 8255C CPU equipped with 96 cores.

A.2.1 EquivPruner Training

The EquivPruner model, a fine-tuned Longformer-base (Beltagy et al., 2020), was trained using hyperparameters selected via Bayesian optimization. The optimization aimed to maximize the 'eval/f1' metric over a maximum of 10 trials. The hyperparameter search spaces are detailed in Table 3.

Table 3: Hyperparameter search space for EquivPruner using Bayesian optimization.

Hyperparameter    Value or Range
Learning Rate     [1e-6, 5e-5]
Batch Size        4
Training Epochs   Discrete values {2, 3, 5}
Weight Decay      [0.0, 0.1]

A.2.2 MCTS Parameters

The Monte Carlo Tree Search (MCTS) based evaluation hyperparameters are detailed in Table 4. These MCTS parameters (temperature, tree_max_width, tree_max_depth, simulations, PUCT values) were kept consistent across baseline and EquivPruner-enhanced evaluations unless otherwise specified.

Table 4: Monte Carlo Tree Search (MCTS) hyperparameters.

Hyperparameter               Value
Number of Simulations        20
LLM Generation Temperature   0.7
LLM max_new_tokens           1024
Search Tree Maximum Width    10
Search Tree Maximum Depth    50
PUCT value                   1.25

A.2.3 SBS Parameters

The Step-level Beam Search (SBS) based evaluation hyperparameters are detailed in Table 5. These SBS parameters (beam size, temperature, tree_max_width, tree_max_depth) were kept consistent
across baseline and EquivPruner-enhanced evaluations unless otherwise specified.

Table 5: Step-level Beam Search (SBS) hyperparameters.

Hyperparameter               Value
Beam Size                    3
LLM Generation Temperature   0.7
LLM max_new_tokens           1024
Search Tree Maximum Width    10
Search Tree Maximum Depth    50

Please determine whether the following two sentences are semantically equivalent, and return 0: Not equivalent at all; 1: May not be equivalent; 2: Can't judge; 3: May be equivalent; 4: Exactly equivalent. Please explain the reason, reflect, and provide a more accurate result. Please output in the following Python dictionary format:
{
  "reasoning_step": "The reasoning process of the model",
  "result": "Final result" (int)
}

Question:
Sentence1: The inner sum is a geometric series with the first term $\frac{1}{2^{k^2-k+1}}$ and common ratio $\frac{1}{2}$, and it has $2k$ terms. The sum of a geometric series is given by: $\sum_{n=a}^{a+b-1} ar^n = a\frac{1-r^b}{1-r}$
Sentence2: The inner sum $\sum_{n=k^2-k+1}^{k^2+k} \frac{1}{2^n}$ is a geometric series with the first term $\frac{1}{2^{k^2-k+1}}$ and common ratio $\frac{1}{2}$, and it has $2k$ terms. The sum of a geometric series is given by: $\sum_{n=a}^{a+r-1} ar^n = a\frac{1-r^r}{1-r}$

Output:
{
  "reasoning_step": "Sentence1 uses the formula $\sum_{n=a}^{a+b-1} ar^n = a\frac{1-r^b}{1-r}$, while Sentence2 uses $\sum_{n=a}^{a+r-1} ar^n = a\frac{1-r^r}{1-r}$. The variables and the structure of the formulas are different, which affects the semantic equivalence. Additionally, the limits of the summation in Sentence2 are explicitly given as $\sum_{n=k^2-k+1}^{k^2+k} \frac{1}{2^n}$, which is not present in Sentence1. These differences indicate that the sentences are not semantically equivalent.",
  "result": 0
}

Question:
Sentence1: The amplitude is the distance from the midline to the maximum value (or the minimum value). Therefore, we can calculate the amplitude as follows: Amplitude = Maximum value − Midline = 3 − 1 = 2. Since the amplitude is $|a|$, we have: $|a| = 2$. Since $a$ is a positive constant, we conclude: $a = 2$.
Sentence2: The amplitude of the sine wave is half the distance between the maximum value and the minimum value.
Therefore, we can calculate the amplitude as follows: Amplitude = (Maximum value − Minimum value) / 2 = (3 − (−1)) / 2 = (3 + 1) / 2 = 4 / 2 = 2. Since the amplitude is $|a|$, we have: $|a| = 2$. Since $a$ is a positive constant, we conclude: $a = 2$.

Output:
{
  "reasoning_step": "Sentence1 defines the amplitude as the distance from the midline to the maximum value, calculating it as Amplitude = Maximum value − Midline = 3 − 1 = 2. Sentence2 defines the amplitude as half the distance between the maximum and minimum values, calculating it as Amplitude = (Maximum value − Minimum value) / 2 = (3 − (−1)) / 2 = 2. While both methods yield the same result (a = 2), the definitions and calculations are fundamentally different. This difference in methodology means the sentences are not semantically equivalent.",
  "result": 0
}

Question:
Sentence1: {sentence1}
Sentence2: {sentence2}
Output:

Figure 4: Complete prompt for labeling.
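The hierarchical equivalence check described in Section 4.2.2 (a cheap similarity filter followed by the learned classifier) can be sketched as follows. This is an illustrative sketch, not the released implementation: `SequenceMatcher` stands in for the Levenshtein ratio the paper uses, and `model_predict` abstracts the fine-tuned Longformer classifier as a callback.

```python
from difflib import SequenceMatcher

def are_equivalent(step_a: str, step_b: str, model_predict,
                   threshold: float = 0.75) -> bool:
    """Two-stage equivalence check at MCTS inference time (sketch).
    Stage 1: cheap string-similarity filter; pairs at or below the
    threshold are declared non-equivalent without calling the model.
    Stage 2: the learned equivalence classifier, only for close pairs."""
    if SequenceMatcher(None, step_a, step_b).ratio() <= threshold:
        return False  # fast filter: clearly non-equivalent
    return model_predict(step_a, step_b)  # expensive model only for close pairs

# Example with a stub classifier that accepts every pair surviving the filter:
stub = lambda a, b: True
print(are_equivalent("x = 2", "therefore y = 7", stub))        # False (filtered out)
print(are_equivalent("so x equals 2", "so x equals 2", stub))  # True
```

In practice the callback would run the fine-tuned Longformer over the step pair; the fast filter keeps such model invocations rare.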
arXiv:2505.16315v2 [cs.AI] 23 May 2025

Incentivizing Dual Process Thinking for Efficient Large Language Model Reasoning

Xiaoxue Cheng1*, Junyi Li2*, Zhenduo Zhang3, Xinyu Tang1, Wayne Xin Zhao1†, Xinyu Kong3, Zhiqiang Zhang3
1Gaoling School of Artificial Intelligence, Renmin University of China
2Department of Computer Science, National University of Singapore
3Ant Group
chengxiaoxue@ruc.edu.cn, junyi_cs@nus.edu.sg, batmanfly@gmail.com

Abstract

Large reasoning models (LRMs) have demonstrated strong performance on complex reasoning tasks, but often suffer from overthinking, generating redundant content regardless of task difficulty. Inspired by the dual process theory in cognitive science, we propose Adaptive Cognition Policy Optimization (ACPO), a reinforcement learning framework that enables LRMs to achieve efficient reasoning through adaptive cognitive allocation and dynamic system switch. ACPO incorporates two key components: (1) introducing system-aware reasoning tokens to explicitly represent the thinking modes, thereby making the model's cognitive process transparent, and (2) integrating online difficulty estimation and token length budget to guide adaptive system switch and reasoning during reinforcement learning. To this end, we propose a two-stage training strategy. The first stage begins with supervised fine-tuning to cold start the model, enabling it to generate reasoning paths with explicit thinking modes. In the second stage, we apply ACPO to further enhance adaptive system switch for difficulty-aware reasoning. Experimental results demonstrate that ACPO effectively reduces redundant reasoning while adaptively adjusting cognitive allocation based on task complexity, achieving efficient hybrid reasoning.
*Equal Contribution. †Corresponding author. Preprint. Under review.

1 Introduction

Recent advances in large reasoning models (LRMs) [1] have demonstrated remarkable success on complex tasks such as mathematical reasoning [2, 3, 4, 5], largely attributed to reinforcement learning that encourages the generation of detailed, step-by-step reasoning processes. LRMs improve answer accuracy through self-reflection and self-verification during long reasoning paths. As the reasoning length increases, the performance of the model tends to improve accordingly [2, 6, 7]. Although the long chain-of-thought (CoT) [8] reasoning in LRMs is effective for solving complex problems, it often leads to overthinking [9, 10], producing redundant reasoning paths. Most existing LRMs rely on fixed reasoning strategies, lacking the ability to dynamically switch between different thinking modes based on task complexity. This rigidity results in inefficient inference, particularly for simple problems that could be resolved more effectively with concise and direct reasoning. Several recent efforts have explored long CoT compression for efficient reasoning [11, 12]. One line of work fine-tunes LRMs using supervision from shorter chain-of-thought exemplars [13, 14], encouraging the model to arrive at correct answers with fewer intermediate steps. Another line introduces length penalties into reinforcement learning reward functions [3, 15, 16, 17], explicitly discouraging unnecessarily long reasoning trajectories during training. Since many prior approaches overlook task difficulty and treat all samples uniformly with the sole objective of shortening reasoning paths, some recent methods attempt to address this by estimating length budgets offline and training the model on sampled trajectories accordingly [18]. However, such offline strategies depend on precomputed budgets and fixed preference data, restricting their adaptability and scalability.
Inspired by the dual process theory in cognitive science [19], which states that humans have two systems for thinking — fast, intuitive thinking (System 1) and
slow, deliberate thinking (System 2) — we pose the following research question: Can LRMs learn to dynamically switch between fast and slow thinking modes based on task complexity, enabling more efficient and adaptive reasoning?

In this paper, we propose Adaptive Cognition Policy Optimization (ACPO), a reinforcement learning framework that enables LRMs to perform efficient and adaptive reasoning through dynamic system switch between fast and slow thinking modes. To achieve this, we first introduce system-aware reasoning tokens (e.g., <fast_think>, <slow_think>) to explicitly indicate the model's engagement in fast or slow thinking modes during the reasoning process. Based on these tokens, we construct a dataset in which each reasoning trajectory interleaves fast thinking and slow thinking steps, followed by the final answer. Then, we propose a two-stage training strategy to enable dynamic system switch reasoning. In the first stage, we perform supervised fine-tuning with the constructed dataset as a cold start, establishing the model's foundational ability to generate an explicit thinking process. In the second stage, we apply reinforcement learning with ACPO for further enhancement. Specifically, we introduce an online token length budget (TLB) mechanism that estimates task difficulty based on the model's sampling success rate, providing a real-time signal to adjust the reasoning budget. These estimates are further incorporated into a reward function that guides cognitive allocation through two components: a TLB reward that encourages difficulty-aware length control, and a system pattern reward that incentivizes appropriate system switch between fast and slow thinking modes.

We evaluate ACPO on a range of complex reasoning benchmarks. Unlike traditional reinforcement learning methods that focus solely on accuracy or fixed-length compression, ACPO dynamically adjusts both reasoning length and cognitive effort based on task difficulty.
For challenging problems, it effectively reduces redundant reasoning steps, while for simpler tasks, it avoids overcompression and maintains high accuracy. These results highlight the advantages of difficulty-aware reasoning, demonstrating the effectiveness of dynamic cognitive control through adaptive reward optimization.

The main contributions of this work are as follows:

• We introduce system-aware reasoning tokens to explicitly annotate fast and slow thinking steps in LRMs, enabling transparent reasoning paths and providing a foundation for dynamic control of the model's cognitive strategies.

• We propose ACPO, a reinforcement learning framework that integrates online difficulty estimation and token length budget to dynamically calibrate the reward function and steer difficulty-aware cognitive allocation.

• We demonstrate through extensive experiments that ACPO effectively compresses redundant reasoning content while maintaining accuracy, achieving a robust balance between efficiency and performance by avoiding overcompression and underexploration.

2 Preliminary

2.1 Group Relative Policy Optimization (GRPO)

Group Relative Policy Optimization (GRPO) [20] is a reinforcement learning framework that eliminates the need for a value function by estimating advantages in a group-relative manner. Given a specific question-answer pair $(q, a)$, the behavior policy $\pi_{\theta_{\mathrm{old}}}$ samples a group of $G$ individual responses $\{y_i\}_{i=1}^{G}$. For each token $y_{i,t}$ in response $y_i$, its normalized advantage is computed based on the group-level rewards $\{R_i\}_{i=1}^{G}$ as follows:

$$\hat{A}_{i,t} = \frac{R_i - \mathrm{mean}(\{R_i\}_{i=1}^{G})}{\mathrm{std}(\{R_i\}_{i=1}^{G})}. \tag{1}$$

Similar to Proximal Policy Optimization (PPO) [21], GRPO adopts a clipped surrogate objective with an
additional KL regularization term to stabilize optimization:

$$\mathcal{J}_{\mathrm{GRPO}}(\theta) = \mathbb{E}_{(q,a)\sim\mathcal{D},\,\{y_i\}_{i=1}^{G}\sim\pi_{\theta_{\mathrm{old}}}(\cdot|q)}\left[\frac{1}{G}\sum_{i=1}^{G}\frac{1}{|y_i|}\sum_{t=1}^{|y_i|}\min\left(r_{i,t}(\theta)\hat{A}_{i,t},\ \mathrm{clip}\left(r_{i,t}(\theta),1-\varepsilon,1+\varepsilon\right)\hat{A}_{i,t}\right) - \beta D_{\mathrm{KL}}(\pi_\theta\,\|\,\pi_{\mathrm{ref}})\right], \tag{2}$$

where $r_{i,t}(\theta) = \frac{\pi_\theta(y_{i,t}\mid q, y_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(y_{i,t}\mid q, y_{i,<t})}$ is the importance sampling ratio for the $t$-th token $y_{i,t}$.

2.2 Token Length Budget

The token length budget (TLB) [18] is a method introduced for estimating an appropriate token-level budget for reasoning trajectories. Instead of prescribing a fixed or hard constraint, TLB adaptively adjusts the expected output length based on the sampling accuracy of candidate responses, capturing the inherent difficulty of the task. Specifically, for a given question $q$, $N$ candidate responses are sampled from a language model. The TLB, denoted as $L_{\mathrm{budget}}$, is computed as:

$$L_{\mathrm{budget}} = p \cdot L_r + (1-p) \cdot L_{\mathrm{max}}, \tag{3}$$

where $p = \frac{c}{N}$ represents the sampling success rate, with $c$ denoting the number of correct responses; $L_r$ denotes the average token length among the correct responses, and $L_{\mathrm{max}}$ is the maximum token length among all responses. This formulation allows the budget to adapt to task complexity, providing flexible length guidance based on sampling accuracy without external supervision. In the DAST method [18], TLB is computed in an offline manner by sampling multiple responses for each question before training. These responses are then converted to preference pairs for further fine-tuning according to their estimated TLB scores.

3 Method

In this section, we propose ACPO, a reinforcement learning framework that enables dynamic and adaptive system switch of LRMs to improve reasoning efficiency and adaptability to task complexity.
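The two quantities defined in Section 2 — the group-normalized advantage of Eq. (1) and the token length budget of Eq. (3) — can be sketched in a few lines; the numbers below are illustrative, not from the paper:

```python
from statistics import mean, pstdev

def group_advantages(rewards):
    """Eq. (1): normalize rewards within a sampled group.
    Uses the population std; assumes not all rewards are identical."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / sigma for r in rewards]

def token_length_budget(correct_lengths, max_length, n_samples):
    """Eq. (3): L_budget = p * L_r + (1 - p) * L_max, with p = c / N."""
    p = len(correct_lengths) / n_samples          # sampling success rate
    l_r = mean(correct_lengths) if correct_lengths else 0.0
    return p * l_r + (1 - p) * max_length

# N = 8 samples, 4 correct with average length 200 tokens, longest response 1000:
print(token_length_budget([180, 220, 190, 210], 1000, 8))  # 600.0
```

Note how the budget interpolates with difficulty: when no sampled response is correct (p = 0) the budget defaults to the longest observed response, and as the success rate rises it shrinks toward the average length of correct responses.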
This framework incorporates system-aware reasoning tokens to explicitly represent fast and slow thinking steps of LRMs, and integrates an online length budget estimation in RL training to dynamically adjust reasoning length based on task difficulty, guiding the model to balance accuracy and efficiency through adaptive cognitive effort allocation. We first describe the use of system-aware reasoning tokens for system explicitization, followed by the construction of explicit reasoning paths for supervised fine-tuning. Next, we present the reinforcement learning process of ACPO, providing a detailed overview of the online token length budget estimation and reward design. The overall framework of the proposed ACPO is illustrated in Figure 1.

3.1 Explicit Dual Process Reasoning

The dual process theory [19] models human thinking as a combination of fast, intuitive processes (System 1) and slower, deliberate reasoning (System 2). Inspired by this theory, we introduce system-aware reasoning tokens as explicit indicators of different thinking modes within the model's reasoning process. Specifically, we define four special tokens, <fast_think>, </fast_think>, <slow_think>, and </slow_think>, to explicitly wrap fast and slow reasoning steps, respectively. The introduction of reasoning tokens serves two purposes. First, it makes the model's internal thinking modes explicitly observable, revealing the process of system switch. Second, it provides a fine-grained mechanism for controlling and monitoring the model's cognitive dynamics. With this explicit thinking process, we can train the model for system switch, optimizing its cognitive allocation based on task difficulty.

3.1.1 Data Construction

To enable the model to learn explicit thinking patterns, we construct a training dataset for supervised fine-tuning. Each sample consists of a question paired with a corresponding answer composed of interleaved fast thinking and
https://arxiv.org/abs/2505.16315v2
slow thinking segments. To ensure the quality of the training data, we utilize the LIMO dataset [22], a carefully curated, high-quality dataset of complex mathematical reasoning tasks with high challenge and diverse knowledge coverage. These tasks demand precise, multi-step reasoning to arrive at correct answers, making them well-suited for constructing explicit thinking processes. Specifically, our data construction involves two main steps: Candidate Response Sampling and Response Comparison and Annotation.

[Figure 1: The overall framework of ACPO. The upper section illustrates the system explicitization process and cold start training via SFT. The lower section presents the ACPO training phase.]

• Candidate Response Sampling: Building on the approach in TOPS [23], we prompt the DeepSeek-R1-Distill-Qwen-32B [2] model to generate multiple candidate responses with varying lengths for each question. We adopt the prompting strategies in TOPS to encourage diversity of the reasoning lengths. After generating these candidates, we filter out incorrect responses and select the longest and shortest correct answers for further processing.

• Response Comparison and Annotation: We employ GPT-4 [24] as an evaluator to perform fine-grained comparison and annotation between the selected response pairs.
For those reasoning steps that are present in both the short and long responses, we consider them essential and detailed components that should be marked as slow thinking, as they represent critical, non-omittable reasoning processes. For other steps that appear in the long response but are omitted or summarized in the short response, we view them as trivial steps that should be labeled as fast thinking. Based on this principle, we reframe the short response by enclosing essential reasoning steps with <slow_think></slow_think> tags and trivial reasoning steps with <fast_think></fast_think> tags. An illustrative example is provided in Figure 4.

3.1.2 Supervised Fine-Tuning (Cold Start)

After completing the data construction process, we obtain a dataset containing 745 annotated samples with explicit reasoning tokens. This dataset is used for supervised fine-tuning, allowing the model to learn to generate outputs with clearly interleaved fast and slow thinking modes. This cold start phase equips the model with the foundational ability to explicitly switch between different thinking modes. We train the model with a standard cross-entropy loss to ensure its thinking process aligns with the intended system switch patterns.

3.2 Reinforcement Learning with ACPO

While the SFT process allows the model to align its outputs with the annotated reasoning tokens, explicitly capturing fast and slow thinking patterns, this approach may lead to overfitting and a tendency to memorize fixed reasoning paths. To overcome this limitation, we introduce a reinforcement learning phase, optimizing the fine-tuned model to achieve more flexible and adaptive cognition allocation through interaction and exploration. In the following sections, we present the proposed ACPO method,
including the online token length budget estimation and reward design that are specifically tailored for dynamic system switch.

3.2.1 Reward Design

In this work, we adopt GRPO as the underlying RL algorithm, extending its framework with a customized reward structure to explicitly guide the model's reasoning behavior. Formally, during the ACPO optimization process, we sample a set of $N$ candidate responses $\{y_1, y_2, \ldots, y_N\}$ from the current policy model for each training query $q$. For each response, we compute a composite reward that consists of three components: an accuracy reward, an online TLB reward, and a system pattern reward, to encourage accurate and efficient reasoning with adaptive cognition allocation, as described below.

Accuracy Reward. For each sampled response $y_i$, we assess its correctness by comparing the generated answer with the ground-truth label in the training data. The accuracy reward is assigned as:

$$
R_{\mathrm{acc},i} = \begin{cases} +1, & \text{if } y_i \text{ is correct}, \\ -1, & \text{otherwise}. \end{cases} \quad (4)
$$

Online TLB Reward. To guide the model in generating reasoning paths of appropriate length that align with the task difficulty, we introduce an online token length budget (TLB) reward. Unlike the offline approach in DAST, which requires extensive pre-sampling and pairwise preference construction, our method estimates the length budget $L_{\mathrm{budget}}$ online during training, without additional computational overhead. This is achieved by leveraging the natural group-based sampling process in GRPO, allowing the sampling success rate $p$ and token length budget $L_{\mathrm{budget}}$ to be directly calculated from the existing candidate responses, as defined in Eq. 3. Given a sampled response $y_i$ with actual length $L_i$, we compute its TLB reward based on the deviation from the estimated $L_{\mathrm{budget}}$. The online TLB score $R_{\mathrm{TLB},i}$ for each response is then defined as:

$$
R_{\mathrm{TLB},i} = \begin{cases} \tanh(-\lambda_i), & \text{if } y_i \text{ is correct}, \\ \tanh(\lambda_i), & \text{otherwise}, \end{cases} \quad (5)
$$

where $\lambda_i = \frac{L_i - L_{\mathrm{budget}}}{L_{\mathrm{budget}}}$ and the tanh function is used to smoothly bound the reward within $(-1, 1)$.
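A minimal sketch of this composite reward: the accuracy and online TLB terms above (Eqs. 4–5), plus the system pattern term and weighted clipping of Eqs. 6–7. The whitespace tokenization in `fast_slow_ratios` is an illustrative simplification, and the default threshold and weights are the values reported in the implementation details ($p_{\mathrm{thresh}} = 0.5$, $w_{\mathrm{acc}} = 0.6$, $w_{\mathrm{len}} = 0.3$, $w_{\mathrm{think}} = 0.1$).

```python
import math
import re

def fast_slow_ratios(text):
    """Proportions of reasoning tokens inside <fast_think>/<slow_think> tags.
    Token counts are approximated by whitespace splitting for illustration."""
    def count(tag):
        spans = re.findall(rf"<{tag}>(.*?)</{tag}>", text, flags=re.DOTALL)
        return sum(len(s.split()) for s in spans)
    fast, slow = count("fast_think"), count("slow_think")
    total = fast + slow
    return (fast / total, slow / total) if total else (0.0, 0.0)

def acpo_reward(correct, length, l_budget, rho_fast, rho_slow, p,
                p_thresh=0.5, w_acc=0.6, w_len=0.3, w_think=0.1):
    r_acc = 1.0 if correct else -1.0                         # Eq. 4
    lam = (length - l_budget) / l_budget
    r_tlb = math.tanh(-lam) if correct else math.tanh(lam)   # Eq. 5
    r_think = rho_fast if p > p_thresh else rho_slow         # Eq. 6
    r = w_acc * r_acc + w_len * r_tlb + w_think * r_think
    # Eq. 7: keep correct rewards strictly positive, incorrect strictly negative
    return max(r, 0.1) if correct else min(r, -0.1)
```

For a correct response exactly at budget on an easy query ($p = 0.8$, $\rho_{\mathrm{fast}} = 0.5$), the reward is $0.6 \cdot 1 + 0.3 \cdot \tanh(0) + 0.1 \cdot 0.5 = 0.65$.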
The TLB reward adapts the model's reasoning strategy to the difficulty of each task, promoting efficient fast thinking for easier problems while allowing longer and more deliberate reasoning for harder problems. Additionally, the online TLB estimation enables real-time adaptation and avoids reliance on static preference data, leading to better generalization across diverse tasks.

System Pattern Reward. To further encourage difficulty-aware system switch, we use the sampling success rate $p$ from the TLB estimation as a proxy for task difficulty. For easier queries with higher $p$, the model is encouraged to allocate a larger proportion of its reasoning path to fast thinking. In contrast, for harder queries with lower $p$, the model is incentivized to allocate more steps to slow thinking for more careful deliberation. Given a sampled response $y_i$, we compute the proportions of fast and slow thinking tokens within the total reasoning sequence, denoted as $\rho_{\mathrm{fast},i}$ and $\rho_{\mathrm{slow},i}$, respectively. The system pattern reward is then defined as:

$$
R_{\mathrm{think},i} = \begin{cases} \rho_{\mathrm{fast},i}, & \text{if } p > p_{\mathrm{thresh}}, \\ \rho_{\mathrm{slow},i}, & \text{otherwise}, \end{cases} \quad (6)
$$

where $p_{\mathrm{thresh}}$ is a predefined complexity threshold separating easy and hard questions. The final reward $R_i$ for each sampled response $y_i$ is computed as a weighted combination of the three components:

$$
R_i = \begin{cases} \max\left(w_{\mathrm{acc}} \cdot R_{\mathrm{acc},i} + w_{\mathrm{len}} \cdot R_{\mathrm{TLB},i} + w_{\mathrm{think}} \cdot R_{\mathrm{think},i},\ 0.1\right), & \text{if } y_i \text{ is correct}, \\ \min\left(w_{\mathrm{acc}} \cdot R_{\mathrm{acc},i} + w_{\mathrm{len}} \cdot R_{\mathrm{TLB},i} + w_{\mathrm{think}} \cdot R_{\mathrm{think},i},\ -0.1\right), & \text{if } y_i \text{ is incorrect}. \end{cases} \quad (7)
$$

The weights $w_{\mathrm{acc}}$, $w_{\mathrm{len}}$, and $w_{\mathrm{think}}$ are hyperparameters that balance the importance of the different reward signals
during optimization. To ensure that the reward for correct responses remains strictly positive and the reward for incorrect responses remains strictly negative, we apply max and min operations to clip the final reward within the desired range. Based on the final reward, we leverage the GRPO objective in Eq. 2 to optimize the policy model.

Table 1: Evaluation results of ACPO on three different reasoning models across the MATH 500 and AIME 2024 datasets. Bold fonts indicate the best performance for each reasoning model.

                                 |        MATH 500          |        AIME 2024
Methods                          | Accuracy  #Token  ACU ↑  | Accuracy  #Token  ACU ↑
DeepSeek-R1-Distill-Qwen-1.5B    |   83.9     5708   0.98   |   28.9    16894   0.11
THINKPRUNE                       |   82.9     2356   2.35   |   27.0     7574   0.24
ACPO-1.5B                        |   81.0     1679   3.22   |   30.0     6670   0.30
DeepScaleR-1.5B-Preview          |   87.8     3914   1.50   |   43.1    17206   0.17
L1-Exact                         |   79.8     1044   5.09   |   16.7     1798   0.62
L1-Max                           |   81.8      999   5.46   |   23.3     2230   0.69
DeepSeek-R1-Distill-Qwen-7B      |   92.8     3977   0.33   |   55.5    13254   0.06
SFT_Shortest                     |   91.8     2954   0.44   |   50.0    10757   0.07
SimPO_Shortest                   |   87.8      970   1.29   |   33.3     2737   0.17
SimPO_DAST                       |   92.6     2802   0.47   |   53.3     6337   0.12
ACPO-7B                          |   91.6     1405   0.93   |   52.8     4520   0.17
DeepSeek-R1-Distill-Llama-8B     |   89.1     5003   0.22   |   42.9    16374   0.04
ACPO-8B                          |   87.4     2232   0.49   |   43.3     7405   0.07

4 Experiment

4.1 Experimental Setup

Datasets and Evaluation Metrics. We conduct training on the DeepScaleR-Preview-Dataset [25], a mathematical dataset consisting of 40K question-answer pairs drawn from AIME, AMC, Omni-Math [26], and STILL [27]. For evaluation, we assess model performance on three mathematical datasets: GSM8K [28], AIME 2024, and MATH 500 [29]. For each test question, we generate 16 responses using a sampling temperature of 0.6 and a top-p value of 0.95, and compute pass@1 to measure accuracy. We report the average number of tokens generated per response on each dataset to assess reasoning efficiency.
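This evaluation protocol can be sketched as follows; `generate` and `check_answer` are hypothetical callbacks standing in for model sampling (temperature 0.6, top-p 0.95) and answer verification, not code from the paper.

```python
def evaluate(questions, generate, check_answer, n_samples=16):
    """Pass@1 as the mean per-sample correctness over n_samples draws per
    question, plus the average token count per response."""
    total_correct = total_samples = total_tokens = 0
    for q in questions:
        for _ in range(n_samples):
            answer, n_tokens = generate(q)        # returns text and its token count
            total_correct += check_answer(q, answer)
            total_samples += 1
            total_tokens += n_tokens
    return total_correct / total_samples, total_tokens / total_samples
```

With stub callbacks that always answer "42" in 5 tokens, a two-question set where only one question has answer "42" yields pass@1 of 0.5 and an average length of 5 tokens.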
Moreover, we use the Accuracy per Computation Unit (ACU) metric [30] to capture the balance between reasoning accuracy and efficiency, defined as:

$$
\mathrm{ACU} = \frac{\mathrm{Accuracy}}{\#\mathrm{Params} \times \#\mathrm{Tokens}} \quad (8)
$$

Baselines. We conduct experiments on DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Llama-8B [2], and compare ACPO against the following methods.

• DeepScaleR-1.5B-Preview [25] is a model trained on DeepSeek-R1-Distill-Qwen-1.5B with GRPO. We include it as an important baseline in our evaluations.

• THINKPRUNE [16] trains models with a fixed token limit, pruning unfinished responses beyond this limit with zero reward. We include THINKPRUNE with a 2k token length constraint trained on DeepSeek-R1-Distill-Qwen-1.5B for comparison.

• L1 [17] trains models to follow specified response lengths by introducing exact and maximum length penalties in reinforcement learning, yielding two variants, L1-Exact and L1-Max, both trained on DeepScaleR-1.5B-Preview.

• DAST [18] introduces an offline approach to estimate token length budgets through sampling and constructs length preference data for SimPO [31] training. We include the comparison methods SFT_Shortest, SimPO_Shortest, and SimPO_DAST from DAST in our experiments.

Implementation Details. In the cold start phase, we fine-tune the models for 3 epochs using the 745 annotated samples with explicit reasoning tokens. For ACPO training, we adopt the same hyperparameter settings as used in DeepScaleR-1.5B-Preview. Specifically, we use a learning rate of $1 \times 10^{-6}$, a batch size of 128, and a maximum context length of 8K tokens during training. The
models are trained for one epoch, and both the SFT and RL stages are conducted using the VeRL framework [32]. We set $p_{\mathrm{thresh}} = 0.5$ in Eq. 6 and set the reward weights as $w_{\mathrm{acc}} = 0.6$, $w_{\mathrm{len}} = 0.3$, and $w_{\mathrm{think}} = 0.1$ in Eq. 7.

Table 3: Performance comparison between three reasoning models trained with GRPO and ACPO on the MATH 500 and AIME 2024 datasets.

                                 |        MATH 500          |        AIME 2024
Methods                          | Accuracy  #Token  ACU ↑  | Accuracy  #Token  ACU ↑
DeepSeek-R1-Distill-Qwen-1.5B    |   83.9     5708   0.98   |   28.9    16894   0.11
+GRPO                            |   84.5     3098   1.82   |   29.0    12990   0.15
+ACPO                            |   81.0     1679   3.22   |   30.0     6670   0.30
DeepSeek-R1-Distill-Qwen-7B      |   92.8     3977   0.33   |   55.5    13254   0.06
+GRPO                            |   92.5     3700   0.36   |   53.2     8577   0.08
+ACPO                            |   91.6     1405   0.93   |   52.8     4520   0.17
DeepSeek-R1-Distill-Llama-8B     |   89.1     5003   0.22   |   42.9    16374   0.04
+GRPO                            |   90.4     2172   0.52   |   43.3     8883   0.06
+ACPO                            |   87.4     2232   0.49   |   43.3     7405   0.07

4.2 Main Results

The evaluation results of our method and the baselines are presented in Table 1 and Table 2. For more challenging datasets, such as MATH 500 and AIME 2024 in Table 1, ACPO achieves significant token reduction while maintaining competitive accuracy. For instance, on the AIME 2024 dataset, ACPO-1.5B reaches 30.0% accuracy with an average token count of 6670, representing a 60.5% reduction in token usage compared to the 16,894 tokens required by DeepSeek-R1-Distill-Qwen-1.5B.

Table 2: Evaluation results of ACPO with the three reasoning models on the GSM8K dataset.

Methods               | Accuracy  #Token  ACU ↑
R1-Distill-Qwen-1.5B  |   79.9      643   8.28
ACPO-1.5B             |   81.3      572   9.48
R1-Distill-Qwen-7B    |   86.5      445   2.78
ACPO-7B               |   88.3      413   3.05
R1-Distill-Llama-8B   |   82.9     1026   1.01
ACPO-8B               |   86.7      732   1.48

On the MATH 500 dataset, ACPO reduces the token count for DeepSeek-R1-Distill-Qwen-7B from 3977 to 1405 with only a slight accuracy decrease. Although L1-Max and L1-Exact also achieve substantial token compression, they suffer noticeable accuracy drops compared to DeepScaleR-1.5B-Preview.
This demonstrates that ACPO can effectively shorten reasoning lengths without sacrificing too much accuracy, particularly on complex reasoning tasks. On the simpler GSM8K dataset in Table 2, where the reasoning paths of DeepSeek-R1-Distill-Qwen-1.5B and 7B tend to be less redundant, ACPO maintains the original length scale while achieving notable accuracy improvements. Specifically, ACPO-1.5B reduces average token usage from 643 to 572 while improving accuracy from 79.9% to 81.3%. Similarly, ACPO-7B achieves 88.3% accuracy with 413 tokens, outperforming the baseline in both accuracy and efficiency. However, for DeepSeek-R1-Distill-Llama-8B, which has relatively long reasoning paths of 1026 tokens, ACPO achieves effective compression, reducing the token count to 732 tokens with a 3.8% accuracy improvement. These results highlight the adaptability of the online token length budget estimation, enabling selective compression that preserves necessary reasoning length while avoiding excessive compression.

4.3 Further Analysis

4.3.1 Ablation Analysis

To assess the impact of the reward design in ACPO on both accuracy and reasoning length, we conduct ablation studies by training models with the GRPO reward setting, keeping all other parameters identical to those in ACPO. Specifically, we evaluate three models on the MATH 500 and AIME 2024 datasets: DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, and DeepSeek-R1-Distill-Llama-8B, each trained for one epoch with GRPO and ACPO. We report accuracy, average token length, and ACU scores
for comparison in Table 3. From the results, we observe that both GRPO and ACPO can compress the reasoning length of the original models. However, ACPO consistently achieves more significant compression, often exceeding a 50% reduction in token length. For instance, it reduces the average reasoning length of DeepSeek-R1-Distill-Qwen-1.5B on the MATH 500 dataset to 1679 tokens, compared to 3098 tokens for GRPO. In terms of accuracy, GRPO shows a slight advantage, especially on the MATH 500 dataset. However, ACPO demonstrates comparable performance on the more challenging AIME dataset. For DeepSeek-R1-Distill-Qwen-1.5B, ACPO reaches an accuracy of 30.0% compared to 29.0% for GRPO, effectively balancing accuracy and efficiency. While GRPO focuses solely on correctness, the reward design of ACPO jointly optimizes answer accuracy, reasoning length, and system switch, enabling a flexible coordination between correctness, efficiency, and cognitive allocation.

[Figure 2: Average response length and accuracy across different difficulty levels on MATH 500.]

[Figure 3: Average fast and slow thinking ratios across different difficulty levels on MATH 500.]

4.3.2 Difficulty Adaptability Analysis

To validate the difficulty adaptability of ACPO, we perform analyses on the MATH 500 dataset, which is divided into five difficulty levels. In Figure 2, we compare the accuracy and average response length of ACPO-1.5B, L1-Exact, and L1-Max. Unlike the L1 approach, which applies uniform token constraints regardless of question difficulty, ACPO demonstrates a more adaptive length control strategy.
It effectively shortens responses for simpler problems while preserving the necessary reasoning length for complex questions, resulting in minimal accuracy loss on more challenging levels (e.g., level 5). This adaptive capability highlights the advantage of our approach in difficulty-aware reasoning, aligning the token length budget more closely with task complexity. In Figure 3, we further analyze the fast and slow thinking ratios for ACPO-7B on the MATH 500 dataset. We observe that, as the problem difficulty increases, the proportion of fast thinking decreases while the slow thinking component increases. Notably, the proportion of slow thinking increases rapidly from difficulty level 1 to level 3, and then grows more gradually beyond level 3. This trend suggests that the model is able to allocate more reasoning effort when solving more difficult problems, dynamically allocating cognitive resources based on task complexity. Such adaptive behavior aligns with the dual process theory of human thinking, demonstrating the effectiveness of our method in reasonable cognition allocation and dynamic system switch.

4.3.3 Case Study

In Figure 4, we present an example comparing the reasoning behavior of DeepSeek-R1-Distill-Qwen-1.5B trained with GRPO and ACPO on a number theory problem from the MATH 500 dataset. The task requires finding the smallest positive integer multiple of 30 that can be written using only the digits 0 and 2, which requires applying divisibility rules: the number must end in 0 and the sum of its digits must be divisible by 3. The model trained with GRPO adopts a cautious and exhaustive strategy, enumerating multiple candidate numbers (
e.g., 30, 60, 90) step by step, even after identifying a valid solution. This leads to a lengthy reasoning path of 1555 tokens, filled with redundant self-verification and repeated constraint checking. In contrast, the model trained with ACPO efficiently identifies the necessary constraints, rapidly filtering out invalid candidates and quickly converging to the correct answer in just 476 tokens. Notably, the model exhibits effective system switch, employing slow thinking during problem analysis and fast thinking for verification once a solution is found. The system-aware reasoning tokens offer a transparent view of the model's cognitive process, enabling interpretable analysis of how reasoning behavior aligns with fast and slow thinking modes.

[Figure 4: A case study comparing the reasoning process of DeepSeek-R1-Distill-Qwen-1.5B trained with GRPO (1555 tokens) and with ACPO (476 tokens) on a MATH 500 problem; the ACPO response wraps its reasoning in <slow_think> and <fast_think> tags.]

5 Related Work

Efficient Reasoning in LRMs.
Large reasoning models (LRMs) like DeepSeek-R1 [2] and OpenAI o1 [4] have demonstrated strong reasoning capabilities, but their long reasoning paths often suffer from overthinking, introducing redundant content and reducing inference efficiency [9, 12, 10]. To address this, one line of work focuses on supervised fine-tuning with concise, high-quality data to reduce the length of reasoning paths [13, 14]. Another line of work seeks to improve efficiency through reinforcement learning by modifying reward functions to penalize excessively long reasoning paths [3, 16]. Among these methods, length budgeting is a direct way to explicitly control reasoning length. For example, L1 [17] introduces a fixed length budget to penalize responses that exceed a predefined length during RL training. DAST [18] estimates task difficulty and token length budget through sampling, and constructs preference data for SimPO [31] training. In this work, we integrate online difficulty-aware length budgeting into the reinforcement learning process, enabling real-time reasoning budget estimation and reward allocation for efficient reasoning.

Fast
and Slow Thinking in LLMs. The dual process theory [19] describes two modes of human thinking: fast, intuitive thinking (System 1) and slow, deliberate thinking (System 2). Recent studies have investigated fast and slow thinking in LLMs, focusing on system switch based on task complexity or uncertainty. Specifically, System-1.x [33] adopts a controller and a System-1/2 planner to adjust reasoning systems based on task difficulty. FaST [34] develops a switching adapter that transitions between System 1 and System 2 for visual reasoning, depending on factors like uncertainty. HaluSearch [35] leverages model performance to generate supervised labels, enabling hierarchical dynamic switch between reasoning systems within MCTS. Dyna-Think [36] implements a training-free dynamic thinking mechanism, allowing the model to autonomously determine when to apply slow reasoning. In this work, we introduce system-aware reasoning tokens to explicitly represent fast and slow thinking modes, and leverage ACPO with an online token length budget to enable adaptive cognitive allocation and dynamic system switch based on task difficulty.

6 Conclusion

In this paper, we proposed ACPO, a reinforcement learning framework designed to address the overthinking problem in LRMs through adaptive cognitive allocation and dynamic system switch. In our approach, we first introduced system-aware reasoning tokens to explicitly represent fast and slow thinking modes, making the model's cognitive process transparent and interpretable. Next, we proposed a two-stage training strategy, first fine-tuning the model to establish the ability to generate an explicit thinking process, followed by reinforcement learning with ACPO to enhance adaptive cognition allocation. ACPO integrates online difficulty estimation and token length budgeting during training, guiding dynamic reasoning through a carefully designed reward function.
Experimental results demonstrate that ACPO effectively reduces redundant reasoning without sacrificing too much accuracy. Our work provides a flexible and interpretable framework for adaptive hybrid reasoning, supporting efficient and difficulty-aware cognitive processes in LRMs.

References

[1] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. A survey of large language models. CoRR, abs/2303.18223, 2023.

[2] DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng
Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, and S. S. Li. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. CoRR, abs/2501.12948, 2025.

[3] Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, Chuning Tang, Congcong Wang, Dehao Zhang, Enming Yuan, Enzhe Lu, Fengxiang Tang, Flood Sung, Guangda Wei, Guokun Lai, Haiqing Guo, Han Zhu, Hao Ding, Hao Hu, Hao Yang, Hao Zhang, Haotian Yao, Haotian Zhao, Haoyu Lu, Haoze Li, Haozhen Yu, Hongcheng Gao, Huabin Zheng, Huan Yuan, Jia Chen, Jianhang Guo, Jianlin Su, Jianzhou Wang, Jie Zhao, Jin Zhang, Jingyuan Liu, Junjie Yan, Junyan Wu, Lidong Shi, Ling Ye, Longhui Yu, Mengnan Dong, Neo Zhang, Ningchen Ma, Qiwei Pan, Qucheng Gong, Shaowei Liu, Shengling Ma, Shupeng Wei, Sihan Cao, Siying Huang, Tao Jiang, Weihao Gao, Weimin Xiong, Weiran He, Weixiao Huang, Wenhao Wu, Wenyang He, Xianghui Wei, Xianqing Jia, Xingzhe Wu, Xinran Xu, Xinxing Zu, Xinyu Zhou, Xuehai Pan, Y. Charles, Yang Li, Yangyang Hu, Yangyang Liu, Yanru Chen, Yejie Wang, Yibo Liu, Yidao Qin, Yifeng Liu, Ying Yang, Yiping Bao, Yulun Du, Yuxin Wu, Yuzhi Wang, Zaida Zhou, Zhaoji Wang, Zhaowei Li, Zhen Zhu, Zheng Zhang, Zhexu Wang, Zhilin Yang, Zhiqi Huang, Zihao Huang, Ziyao Xu, and Zonghan Yang. Kimi k1.5: Scaling reinforcement learning with llms. CoRR, abs/2501.12599, 2025.

[4] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.

[5] Qwen Team. QwQ-32B: Embracing the power of reinforcement learning, March 2025.
[6] Charlie Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling llm test-time compute optimally can be more effective than scaling model parameters. arXiv preprint arXiv:2408.03314, 2024.

[7] Mingyu Jin, Qinkai Yu, Dong Shu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang, and Mengnan Du. The impact of reasoning step length on large language models. In Findings of the Association for Computational Linguistics ACL 2024, pages 1830–1842, 2024.

[8] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[9] Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, and Dong Yu. Do NOT think that much for 2+3=? on the overthinking of o1-like llms. CoRR, abs/2412.21187, 2024.

[10] Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Bo Wang, Shimin Li, Yunhua Zhou, Qipeng Guo, Xuanjing Huang, and Xipeng Qiu. Scaling of search and learning: A roadmap to reproduce o1 from reinforcement learning perspective. arXiv preprint arXiv:2412.14135, 2024.

[11] Xiaoye Qu, Yafu Li, Zhaochen Su, Weigao Sun, Jianhao Yan,
Dongrui Liu, Ganqu Cui, Daizong Liu, Shuxian Liang, Junxian He, Peng Li, Wei Wei, Jing Shao, Chaochao Lu, Yue Zhang, Xian-Sheng Hua, Bowen Zhou, and Yu Cheng. A survey of efficient reasoning for large reasoning models: Language, multimodality, and beyond. CoRR, abs/2503.21614, 2025.

[12] Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, and Xia Ben Hu. Stop overthinking: A survey on efficient reasoning for large language models. CoRR, abs/2503.16419, 2025.

[13] Heming Xia, Yongqi Li, Chak Tou Leong, Wenjie Wang, and Wenjie Li. Tokenskip: Controllable chain-of-thought compression in llms. arXiv preprint arXiv:2502.12067, 2025.

[14] Tergel Munkhbat, Namgyu Ho, Seo Hyun Kim, Yongjin Yang, Yujin Kim, and Se-Young Yun. Self-training elicits concise reasoning in large language models. arXiv preprint arXiv:2502.20122, 2025.

[15] Edward Yeo, Yuxuan Tong, Morry Niu, Graham Neubig, and Xiang Yue. Demystifying long chain-of-thought reasoning in llms. arXiv preprint arXiv:2502.03373, 2025.

[16] Bairu Hou, Yang Zhang, Jiabao Ji, Yujian Liu, Kaizhi Qian, Jacob Andreas, and Shiyu Chang. Thinkprune: Pruning long chain-of-thought of llms via reinforcement learning. arXiv preprint arXiv:2504.01296, 2025.

[17] Pranjal Aggarwal and Sean Welleck. L1: Controlling how long a reasoning model thinks with reinforcement learning. arXiv preprint arXiv:2503.04697, 2025.

[18] Yi Shen, Jian Zhang, Jieyun Huang, Shuming Shi, Wenjing Zhang, Jiangze Yan, Ning Wang, Kai Wang, and Shiguo Lian. Dast: Difficulty-adaptive slow-thinking for large reasoning models. arXiv preprint arXiv:2503.04472, 2025.

[19] Daniel Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.

[20] Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. CoRR, abs/2402.03300, 2024.
[21] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 , 2017. [22] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387 , 2025. [23] Wenkai Yang, Shuming Ma, Yankai Lin, and Furu Wei. Towards thinking-optimal scaling of test-time compute for llm reasoning. arXiv preprint arXiv:2502.18080 , 2025. [24] OpenAI. GPT-4 technical report. CoRR , abs/2303.08774, 2023. [25] Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y . Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepscaler: Surpassing o1-preview with a 1.5b model by scaling rl. https://pretty-radio-b75.notion.site/ DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2 , 2025. Notion Blog. 11 [26] Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei Li, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, and Baobao Chang. Omni-math: A universal olympiad level mathematic benchmark for large language models. CoRR , abs/2410.07985, 2024. [27] Yingqian Min, Zhipeng Chen, Jinhao Jiang, Jie Chen, Jia Deng, Yiwen Hu, Yiru Tang, Jiapeng Wang, Xiaoxue Cheng, Huatong Song, Wayne Xin Zhao, Zheng Liu, Zhongyuan Wang, and Ji-Rong Wen. Imitate, explore, and self-improve: A
https://arxiv.org/abs/2505.16315v2
reproduction report on slow-thinking reasoning systems. CoRR , abs/2412.09413, 2024. [28] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR , abs/2110.14168, 2021. [29] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874 , 2021. [30] Xinyin Ma, Guangnian Wan, Runpeng Yu, Gongfan Fang, and Xinchao Wang. Cot-valve: Length-compressible chain-of-thought tuning. arXiv preprint arXiv:2502.09601 , 2025. [31] Yu Meng, Mengzhou Xia, and Danqi Chen. Simpo: Simple preference optimization with a reference-free reward. Advances in Neural Information Processing Systems , 37:124198–124235, 2024. [32] Bytedance Seed MLSys. verl: V olcano engine reinforcement learning for llms. https: //github.com/volcengine/verl , 2025. [33] Swarnadeep Saha, Archiki Prasad, Justin Chih-Yao Chen, Peter Hase, Elias Stengel-Eskin, and Mohit Bansal. System-1. x: Learning to balance fast and slow planning with language models. arXiv preprint arXiv:2407.14414 , 2024. [34] Guangyan Sun, Mingyu Jin, Zhenting Wang, Cheng-Long Wang, Siqi Ma, Qifan Wang, Tong Geng, Ying Nian Wu, Yongfeng Zhang, and Dongfang Liu. Visual agents as fast and slow thinkers. arXiv preprint arXiv:2408.08862 , 2024. [35] Xiaoxue Cheng, Junyi Li, Wayne Xin Zhao, and Ji-Rong Wen. Think more, hallucinate less: Mitigating hallucinations via dual process of fast and slow thinking. arXiv preprint arXiv:2501.01306 , 2025. [36] Jiabao Pan, Yan Zhang, Chen Zhang, Zuozhu Liu, Hongwei Wang, and Haizhou Li. Dynathink: Fast or slow? a dynamic decision-making framework for large language models. arXiv preprint arXiv:2407.01009 , 2024. 
[37] An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. CoRR, abs/2412.15115, 2024.

A Limitation

Although ACPO effectively reduces redundant reasoning and enables adaptive cognitive allocation, it has several limitations. First, its online difficulty estimation and token-length budget mechanisms rely on verifiable data, which may limit generalization to open-domain tasks. Future work should explore more generalizable estimation methods for broader applicability. Second, ACPO can introduce accuracy trade-offs despite improving reasoning efficiency, highlighting the need for more refined reward designs to better balance efficiency and correctness.

B Prompts for Explicit Thinking Annotation

In Section 3.1.1, we prompt DeepSeek-R1-Distill-Qwen-32B to generate responses of varying reasoning lengths for candidate response sampling, and leverage GPT-4 for fine-grained comparison to annotate different thinking modes. The system prompts for these two components are provided below; the system prompts for candidate sampling are adapted from TOPS [23].

System Prompts for Candidate Response Sampling

Low Reasoning Effort: You have extremely limited time to think and respond to the user's query. Every
additional second of processing and reasoning incurs a significant resource cost, which could affect efficiency and effectiveness. Your task is to prioritize speed without sacrificing essential clarity or accuracy. Provide the most direct and concise answer possible. Avoid unnecessary steps, reflections, verification, or refinements UNLESS ABSOLUTELY NECESSARY. Your primary goal is to deliver a quick, clear and correct response.

High Reasoning Effort: You have unlimited time to think and respond to the user's question. There is no need to worry about reasoning time or associated costs. Your only goal is to arrive at a reliable, correct final answer. Feel free to explore the problem from multiple angles, and try various methods in your reasoning. This includes reflecting on reasoning by trying different approaches, verifying steps from different aspects, and rethinking your conclusions as needed. You are encouraged to take the time to analyze the problem thoroughly, reflect on your reasoning promptly and test all possible solutions. Only after a deep, comprehensive thought process should you provide the final answer, ensuring it is correct and well-supported by your reasoning.

System Prompts for Response Comparison and Annotation

Given a problem, a short answer, and a long answer, compare the short answer with the long answer and annotate the short answer based on the following rules. If the short answer omits certain reasoning or calculation steps that are present in the long answer, these omitted steps are considered fast thinking, and the corresponding parts in the short answer should be enclosed within <fast_think></fast_think>. If the short answer contains the same reasoning or calculation steps as the long answer, these parts are considered slow thinking and should be enclosed within <slow_think></slow_think>.
Fast thinking parts typically involve intuitive judgments, skipped steps, or direct conclusions, whereas slow thinking parts involve full reasoning or calculations that align with those in the long answer. The output should be the short answer with the appropriate <fast_think></fast_think> and <slow_think></slow_think> tags added.

#Problem#:
#Long Answer#:
#Short Answer#:
#Annotated Answer#:

C Experimental Setting

C.1 Training and Evaluation Details

In the SFT stage, the learning rate is 1×10^-5, the batch size is 8, and the number of epochs is 3. For evaluation, we use the Qwen2.5 [37] tokenizer to count the tokens in the responses generated by each model, for a fair comparison. All experiments are conducted on 16 NVIDIA A100 GPUs.

C.2 Baseline Details

We present descriptions of the three baseline methods SFT_Shortest, SimPO_Shortest, and SimPO_DAST from DAST in Table 1. We adopt their evaluation results as reported in the original DAST paper [18].

• SFT_Shortest: Supervised fine-tuning using only the shortest correct sampled response of each problem as training data.
• SimPO_Shortest: SimPO with contrastive instance pairs, which take the shortest correct sampled response of each problem as the positive instance and the longest as the negative instance.
• SimPO_DAST: SimPO with contrastive instance pairs from D_pre, constructed in DAST.

C.3 Prompt for ACPO Training

We present the prompt used in the ACPO training process below.

Prompt for ACPO Training

You are a helpful AI Assistant that provides well-reasoned and detailed
arXiv:2505.16322v1 [cs.LG] 22 May 2025

AdaSTaR: Adaptive Data Sampling for Training Self-Taught Reasoners

Woosung Koh1*, Wonbeen Oh1, Jaein Jang1, MinHyung Lee1, Hyeongjin Kim1, Ah Yeon Kim1, Joonkee Kim2, Junghyun Lee3, Taehyeon Kim2, Se-Young Yun3†
1Yonsei University, 2LG AI Research, 3KAIST AI
reiss.koh@yonsei.ac.kr, yunseyoung@kaist.ac.kr

Abstract

Self-Taught Reasoners (STaR), synonymously known as Rejection-sampling Fine-Tuning (RFT), is an integral part of the training pipeline of self-improving reasoning Language Models (LMs). The self-improving mechanism often employs random observation (data) sampling. However, this results in trained-observation imbalance: inefficiently over-training on solved examples while under-training on challenging ones. In response, we introduce Adaptive STaR (AdaSTaR), a novel algorithm that rectifies this by integrating two adaptive sampling principles: (1) Adaptive Sampling for Diversity: promoting balanced training across observations, and (2) Adaptive Sampling for Curriculum: dynamically adjusting data difficulty to match the model's evolving strength. Across six benchmarks, AdaSTaR achieves the best test accuracy in all instances (6/6) and reduces training FLOPs by an average of 58.6% against an extensive list of baselines. These improvements in performance and efficiency generalize to different pre-trained LMs and larger models, paving the way for more efficient and effective self-improving LMs.

1 Introduction

[Figure 1: Average test accuracy (%) versus average log PFLOPs across six datasets for Llama 3.2 3B and three datasets for Qwen 2.5 3B, comparing AdaSTaR (ours), B-STaR* (Zeng et al., 2025), RestEM (Singh et al., 2024), and STaR (Zelikman et al., 2022). Results consistently extend to Gemma 7B as well.]
*We use outcome verification on B-STaR for fair comparison. Thus, the implementation with process verification may perform significantly better.

Language models (LMs) are demonstrating remarkable emergent abilities across diverse cognitive tasks such as mathematical reasoning (Yao et al., 2023; Chen and Li, 2024; Brown et al., 2024), code generation (Sun et al., 2024; Research, 2025), and commonsense reasoning (Team, 2023; Google, 2023). Although LMs acquire foundational reasoning capabilities from large-scale pre-training and supervised fine-tuning (SFT), generating high-quality, explicit reasoning steps, often called Chains-of-Thought (CoT) (Wei et al., 2022a, 2023, 2022b; Wang et al., 2023a), typically requires costly human annotation (Lightman et al., 2024; Havrilla et al., 2024; Zelikman et al., 2024). Creating such datasets is expensive and scales poorly, presenting a critical bottleneck as tasks increase in complexity. This challenge motivates the development of methods that improve LM reasoning without relying on extensive human annotation.

*Work done while an intern at KAIST AI. †Corresponding author. Preprint.

[Figure 2: High-level schematic diagram of AdaSTaR. Other STaR-like approaches are equivalent to this diagram, excluding the win statistic w_i computation and the Adaptive Sampling module.]

Self-improvement mechanisms, such as Self-Taught Reasoners (STaR; Zelikman et al., 2022), also referred to as Rejection-sampling Fine-Tuning
https://arxiv.org/abs/2505.16322v1
(RFT; Yuan et al., 2023; Singh et al., 2024), offer a promising alternative. The core idea behind STaR is to enable the LM to iteratively improve itself: the model generates CoTs, verifies the final answer against the ground truth, and fine-tunes on CoTs that yield correct answers. This iterative inference, verification, and training cycle allows LMs to generate their own training data, circumventing the need for human-annotated CoTs.

However, while reducing annotation costs, the standard STaR framework, which relies on random data sampling, suffers from inefficiencies and learning challenges. The random sampling often leads to a training data imbalance: the model wastes compute repeatedly re-training on examples it can already solve, while potentially under-sampling more challenging examples where learning is most needed (Singh et al., 2024). This imbalance results in inefficient use of training compute and contributes to STaR's significantly slower convergence compared to standard SFT (see Fig. 5 in Appendix §A). Furthermore, STaR's reliance on outcome verification (checking only the final answer) means it can inadvertently train on flawed or suboptimal CoTs that happen to reach the correct answer (Kawabata and Sugawara, 2024; Lee et al., 2025). Reinforcing these "false positives" can degrade the model's underlying reasoning capabilities. While Process Reward Models (PRM; Lightman et al., 2024; Zeng et al., 2025) that assess the CoTs can mitigate this, PRMs require their own significant annotation and computational overhead (Lu et al., 2024; Setlur et al., 2025). We therefore view PRMs as an orthogonal approach. Consequently, a key challenge in STaR-based self-improvement is balancing exposure to diverse problem difficulties with the need to maintain training data quality, as sampling harder examples is more likely to yield noisy or incorrect CoTs.
This leads to a research question: How can STaR achieve efficient and effective self-improvement by balancing diverse learning exposure while maintaining the quality of self-generated CoTs?

Contribution. We propose Adaptive STaR (AdaSTaR), a novel method that integrates adaptive sampling into the STaR training loop. AdaSTaR implements two core intuitions: (1) Adaptive Sampling for Diversity: prioritizing under-trained examples to ensure balanced learning; and (2) Adaptive Sampling for Curriculum: regularizing the system to sample easier data when the model is weaker, early on. We empirically validate the effectiveness and efficiency of AdaSTaR through experiments across six reasoning datasets and against an extensive list of baselines. AdaSTaR consistently improves both performance and computational efficiency. Remarkably, AdaSTaR not only achieves the highest test accuracy on all 6/6 benchmarks, but also simultaneously reduces the required training compute (PFLOPs) by an average of 58.6% compared to the strongest-accuracy baseline (see Fig. 1). These performance and efficiency gains generalize to other pre-trained LMs and larger model sizes, which we discuss further later.

Related Work. Although many works build on STaR, none, to our knowledge, target improving efficiency. Subsequent works improve performance at significant compute cost; AdaSTaR is complementary, improving scalability and accessibility. V-STaR (Hosseini et al., 2024) adds a verifier LM to improve inference-time performance through best-of-N sampling (Snell et al., 2025). Iterative Reasoning Preference Optimization (Pang et al., 2024) incorporates a
Direct Preference Optimization (Rafailov et al., 2023) term in its objective. To curate preference pairs, it increases CoT samples from K = 2 in STaR to K = 30. B-STaR (Zeng et al., 2025) enhances LM exploration for more diverse reasoning, and trains a separate process reward model (Uesato et al., 2022; Lightman et al., 2024) for finer-grained verification. Lean-STaR (Lin et al., 2025) employs the Lean theorem prover (De Moura et al., 2015) and a frontier LM (GPT-4) to extend STaR to mathematical theorem proving.

Reinforcement Learning (RL) offers a parallel approach to enhancing LM reasoning, also leveraging an iterative process. RL's reward-based objective often yields long CoTs (Shao et al., 2024; DeepSeek-AI, 2025; Liu et al., 2025b; Yu et al., 2025b; Sui et al., 2025; Team, 2025; Liu et al., 2025a; Yu et al., 2025a), unlike the short CoTs (NVIDIA, 2025) typical of STaR-style SFT. While the significantly larger token generation of RL-based long CoTs yields top performers, STaR's SFT remains a salient part of the training pipeline (Sui et al., 2025). For instance, Kimi k1.5 (Team, 2025), a representative reasoning model, utilizes STaR to expand its primary SFT dataset. To address difficult, mixed-language, and overly long CoTs, DeepSeek-R1 (DeepSeek-AI, 2025) and Kimi k1.5 incorporate a STaR stage. Finally, DeepSeek-GRM (Liu et al., 2025b), a generalist reward model, also adopts a modified STaR as the first stage of its training. While these RL-based advancements are significant, our work concentrates on enhancing the efficiency and effectiveness of the STaR stage.

2 Preliminary and Motivation

2.1 Preliminary: Self-Taught Reasoner (STaR) and its Variants

Let π_θ^t denote an LM (Vaswani et al., 2017) parameterized by θ at iteration t. We are given a supervised dataset D = {⟨x_i, y_i⟩}_{i=1}^N. Following Wei et al.
(2022b), each task is represented as ⟨x, c, y⟩, where x ∈ X is the query (input), c ∈ C is the CoT reasoning step(s), and y ∈ Y is the final answer. Since ground-truth CoTs C are unavailable, STaR aims to generate appropriate c to improve generalization. To achieve this, π_θ^t generates ⟨ĉ_i, ŷ_i⟩ conditioned on fixed few-shot CoT exemplars e = {⟨x_e, c_e, y_e⟩}_{e=1}^E. However, as no ground-truth c_i is available, we require sampling and verification. Given the supervised dataset, a rule-based verifier defines a reward signal r := I(y_i = ŷ_i), where I(·) is the indicator function. K ∈ N CoT traces are sampled as follows (Fig. 2, blue). For the first k ∈ [K], each observation i is sampled once via ⟨ĉ_i, ŷ_i⟩ ← π_θ^t(e, x_i). If r = 1, the sample is accepted; if r = 0, it is resampled using rationalization (Zelikman et al., 2022): π_θ^t(e, x_i ⊕ y_i), where the ground truth y_i is concatenated. In some extensions of STaR, K > 2 samples are drawn without rationalization (Singh et al., 2024; Hosseini et al., 2024; Pang et al., 2024; Zeng et al., 2025; Lin et al., 2025). Correct samples D_+^t := {⟨x_i, ĉ_i, ŷ_i⟩ | y_i = ŷ_i} are re-random-sampled down to match the per-iteration batch size β^t = σ^t · β, then used for negative log-likelihood (NLL) learning. Here, the step size σ^t is the number of parameter updates per iteration t. All superscripts t indicate the iteration, not a numerical exponent. The initial β^{t=1} = 40 · 8 = 320 as
presented in the original implementation (Zelikman et al., 2022). β^t rises over time, following β^{t+1} := 1.2 · β^t as in the original implementation. However, alternative STaR-based approaches (Hosseini et al., 2024; Pang et al., 2024; Zeng et al., 2025; Lin et al., 2025; Peng et al., 2025) remove this pre-determined β^t and instead set β^t to |D_+^t|. After the gradient updates, π_θ^t transitions to π_θ^{t+1} (Fig. 2, purple). Two inter-iteration strategies exist across STaR-based methods: (1) resetting: always retrain from the base model, π_θ^{t+1} ← Train(π_θ^{t=1}, D_+^t) (Zelikman et al., 2022; Hosseini et al., 2024; Singh et al., 2024); and (2) accumulating: incrementally fine-tune from the previous model, π_θ^{t+1} ← Train(π_θ^t, D_+^t) (Pang et al., 2024; Zeng et al., 2025; Lin et al., 2025; Peng et al., 2025).

2.2 Motivation: Need for Adaptive Data Sampling

STaR's data sampling induces a persistent, inefficient imbalance in training data. A key finding is that STaR's sampling strategy leads to some observations being over-trained while others are under-trained. This training-frequency imbalance is empirically illustrated in Fig. 3a. The pattern of variance in observation training frequency persists across all datasets examined (see Appendix §B for all visualizations). As the filtered set D_+^t consists exclusively of observations for which the LM correctly produced ŷ_i, a high variance naturally arises in how often each distinct observation i is trained. Consequently, more challenging observations (left tail of Fig. 3a) are included in D_+^t less frequently and become under-trained, whereas easier ones (right tail) are over-represented and thus over-trained. In this example, challenging observations are trained 1–2 times, while easier ones are trained 10–13 times. This results in inefficient compute usage, as resources are repeatedly spent on observations that the model π_θ^t already solves reliably.
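As a rough illustration of this data-collection loop (a minimal sketch with a toy stand-in for the LM and dataset, not the authors' implementation), one STaR iteration with K-sample verification and rationalization might look like:

```python
def star_iteration(dataset, generate, K=1, rationalize=True):
    """Collect the verified set D_+^t for one STaR iteration (sketch).

    dataset: list of (x, y) pairs with ground-truth answers.
    generate: stand-in for pi_theta^t; returns a (cot, y_hat) pair.
    """
    d_plus = []
    for x, y in dataset:
        solved = False
        for _ in range(K):
            cot, y_hat = generate(x, hint=None)
            if y_hat == y:                       # reward r = I(y_i == y_hat_i)
                d_plus.append((x, cot, y_hat))
                solved = True
        if not solved and rationalize:
            cot, y_hat = generate(x, hint=y)     # rationalization: x concatenated with y
            if y_hat == y:
                d_plus.append((x, cot, y_hat))
    return d_plus

# Toy stand-in "LM": guesses parity unless the ground-truth hint is given.
def toy_generate(x, hint=None):
    return (f"reasoning for {x}", hint if hint is not None else x % 2)

data = [(1, 1), (2, 0), (3, 0)]                  # (query, answer)
d_plus = star_iteration(data, toy_generate)      # third example needs the hint

beta = 320                                       # beta^{t=1} = 40 * 8
beta_next = int(1.2 * beta)                      # beta^{t+1} := 1.2 * beta^t
```

With rationalization disabled (as done later for the mathematical reasoning datasets), the third example would simply be dropped from D_+^t.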
[Figure 3: Empirical motivation for the need for adaptive sampling of diverse observations (a), regularized with curriculum learning (b). (a) Distribution of how often each observation i is trained in iterations 1 to 10 on ARC-C, with under-trained and over-trained tails. (b) Percentage of wrong CoTs when the answer is correct (y_i = ŷ_i) for AdaD and STaR-Acc on ARC-C, CQA, SVAMP, and GSM8K.]

This situation mirrors the motivation for early stopping in LM training, which aims to avoid overfitting to already-solved data (Caruana et al., 2000; Kaplan et al., 2020; Hernandez et al., 2021). We further examine whether observations initially under- or over-trained remain in these respective regimes over iterations. Empirically, even after three iterations (t + 3), averaging across six datasets, 72.4% of observations initially in the first quartile (Q1) of training frequency remain in Q1, and 91.2% of observations from the fourth quartile (Q4) remain in Q4. Detailed visualizations are available in Appendix §B. This suggests the phenomenon is chronic and does not self-alleviate without intervention.

Prioritizing harder examples for diversity elevates false positives, motivating curriculum-based regularization. However, encouraging training diversity by biasing sampling toward harder observations (left tail of Fig. 3a) can increase false positives. False positives are defined as cases where the predicted answer ŷ is correct but the generated CoT ĉ is flawed (Singh et al.,
2024; Kawabata and Sugawara, 2024; Lee et al., 2025). We empirically observe that sampling more challenging observations leads to poorer-quality CoTs. Following Wei et al. (2025) and Lee et al. (2025), we leverage the strongest available teacher model (Ho et al., 2023) (GPT-4o) to annotate false positives. We compare a method encouraging diversity by sampling challenging observations (AdaD) against its baseline, STaR-Acc. AdaD and STaR-Acc are formally introduced in §3 and §4.1. For each method, 100 observations are randomly sampled (without replacement) from D_+^t for CoT annotation. The precise iteration t for both methods is chosen by taking t := min(BestIter(AdaD), BestIter(STaR-Acc)), where BestIter(·) is the early-stopped iteration. Further details and a qualitative example are provided in Appendix §C. Fig. 3b illustrates that inducing increased training diversity can degrade CoT quality, measured by the rate of false positives across four datasets. On average, sampling more diverse and challenging observations leads to a 9% increase in false positives. Hence, we propose to regularize for model strength to reduce the adverse effects of sampling diverse and challenging observations. To this end, our observation sampling algorithm adopts a curriculum-learning-style approach (Xu et al., 2020; Wettig et al., 2024).

3 Method: AdaSTaR

This section presents AdaSTaR, an adaptive sampling algorithm designed to address the problems highlighted in §2.2. Alg. 1 presents the pseudocode, where lines unique to AdaSTaR are highlighted in green; the remaining lines follow standard STaR conventions. AdaSTaR incorporates two mechanisms: Adaptive Data Sampling for Diversity (AdaD) and Adaptive Data Sampling for Curriculum (AdaC).

3.1 Adaptive Data Sampling for Diversity

Diversity Statistic. We track two statistics for each observation i: the last iteration at which it was sampled, t̃_i ∈ N_0, and a win statistic, w_i ∈ [0, 1].
Prioritizing observations with smaller t̃_i values directly promotes sampling diversity. We use the last sampled iteration rather than the last trained iteration because prioritizing based on training can cause the system to repeatedly attempt difficult examples it cannot yet solve, particularly when the model is weak, early in training. Among observations with identical t̃_i values, we prioritize those deemed more difficult. This approach is reminiscent of difficulty-aware methods successful in various machine learning scenarios, such as contrastive learning (Robinson et al., 2021), active learning (Xie et al., 2021), and dataset pruning (Zheng et al., 2023; Maharana et al., 2024; Cho et al., 2025). A key contribution of AdaSTaR is its computationally efficient method for estimating observation difficulty within STaR systems.

Algorithm 1: AdaSTaR
Input: D, π_θ^{t=1}, e
   /* AdaD (§3.1; lines 1–14) */
 1:  t̃ ← dict{i : t̃_i = 0} for i = 1..N
 2:  w ← dict{i : w_i = 0} for i = 1..N
 3:  init HieMinHeap(D, t̃, w)
 4:  for iteration t = 1, 2, ... do
 5:      D_+^t ← ∅; m ← 0
 6:      w_tmp ← dict{i : w_tmp,i = 0} for i = 1..N
 7:      while |D_+^t| < β^t do
 8:          i ← HieMinHeap.peek_next
 9:          m ← m + 1
10:          for sample k = 1, ..., K do
11:              ⟨ĉ_i, ŷ_i⟩ ← π_θ^t(e, x_i)
12:              w_tmp,i ← ((k−1)/k) · w_tmp,i + (1/k) · I[ŷ_i = y_i]
13:              if ŷ_i = y_i then
14:                  D_+^t ← D_+^t ∪ {⟨x_i, ĉ_i, ŷ_i⟩}
   /* AdaC (§3.2; lines 15–19) */
15:      α, π_θ^{t+1} ← Train(π_θ^t, D_+^t)
16:      for 1, ..., ⌊mα²⌋ do
17:          i ← HieMinHeap.pop
18:          t̃_i ← t; w_i ← w_tmp,i
19:          HieMinHeap.push(i, t̃_i, w_i)

We estimate difficulty
using the win statistic w_i, which is computed from model performance at t̃_i (the last iteration at which i was sampled): w_i ≡ w_i^{t̃_i} := (1/K) Σ_{k=1}^K I[y_i = ŷ_i], where ŷ_i is drawn from π_θ^{t̃_i}(e, x_i). This represents the proportion of correct answers among the K CoT samples generated at iteration t̃_i. Next, we elaborate on why this is a sensible proxy for difficulty.

At each iteration t, we want our model to maximize p_i^t := P(y_i = ŷ_i ← π_θ^t(x_i)) for all i. As the model is fitted via likelihood maximization (Fisher, 1922), we can expect p_i^{t+1} ≥ p_i^t for any i sampled at iteration t. It is therefore sensible to prioritize observations with the lowest p_i^t values, as they require more sampling and can be interpreted as more difficult at iteration t. It now remains to approximate p_i^t. A direct Monte Carlo estimate with K samples gives p_i^t ≈ p̂_i^t := (1/K) Σ_{k=1}^K I[y_i = ŷ_i ← π_θ^t(x_i)]. However, computing this for every i at every iteration t requires K forward passes per observation, which is computationally prohibitive. Instead, we reuse the most recent estimate p̂_i^{t̃_i}. The win-statistic computation at t̃_i incurs no (runtime) compute overhead, as the K samples are an inherent part of the existing STaR system. Recalling that t̃_i < t refers to the last iteration in which i was sampled, p̂_i^{t̃_i} is the most recently available approximation of p̂_i^t. Moreover, as we priority-sort with respect to t̃_i, we can expect t − t̃_i to be small, i.e., p̂_i^t ≈ p̂_i^{t̃_i}.

Implementation. As input, AdaSTaR takes the original dataset D, the base model π_θ^{t=1}, and the n-shot CoT exemplars e. For all observations, the statistics are initialized to 0 (lines 1–2). In line 3, we use a Hierarchical Min Heap, HieMinHeap (Cormen et al., 2022), to order observations by the two statistics: for two observations i, j ∈ HieMinHeap(·, t̃, w),

i ≻ j (i is peeked/popped before j) ⇔ t̃_i < t̃_j (i was last sampled before j) ∨ (t̃_i = t̃_j ∧ w_i < w_j) (i and j were last sampled at the same t, but i is more difficult).
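A minimal sketch of the win statistic and the priority ordering above (toy values; Python's heapq standing in for HieMinHeap, which is not the authors' exact data structure):

```python
import heapq

def win_statistic(verdicts):
    """Monte Carlo estimate: fraction of the K sampled CoTs answered correctly."""
    return sum(verdicts) / len(verdicts)

# Priority: smaller last-sampled iteration t~_i first; ties broken by a
# smaller win statistic w_i, i.e. harder observations first.
heap = []
stats = {            # i -> (t~_i, w_i), toy values
    0: (2, 1.0),     # last sampled at t=2, always correct
    1: (1, 0.5),     # last sampled at t=1, correct half the time
    2: (1, 0.0),     # last sampled at t=1, never correct
}
for i, (t_last, w) in stats.items():
    heapq.heappush(heap, (t_last, w, i))

order = [heapq.heappop(heap)[2] for _ in range(len(stats))]
# Observation 2 precedes 1 (same t~, harder), and both precede 0.

# Curriculum (AdaC, §3.2): of the m sampled observations, only the
# floor(m * alpha^2) highest-priority ones get their statistics refreshed.
m, alpha = 3, 0.6
num_updates = int(m * alpha ** 2)
```

Because tuples compare lexicographically, the (t̃_i, w_i, i) keys reproduce the ordering of Eq. (1) without a custom comparator.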
(1)

For each iteration t, a new empty D_+^t is initialized (line 5) and used for training at the end (line 15). We also initialize m := 0, which counts the number of sampled observations (line 9), and w_tmp, a dictionary of win rates computed at iteration t (line 12). The while loop sequentially samples i from HieMinHeap, then updates the win rate w_tmp,i over K samples of CoT-answer pairs ⟨ĉ_i, ŷ_i⟩ (lines 11–12) and adds ⟨x_i, ĉ_i, ŷ_i⟩ to D_+^t if ŷ_i is correct (lines 13–14).

Remark 1 (Non-excessive sampling in line 7). The while loop terminates once |D_+^t| ≥ β^t. This avoids the overhead of exhaustively sampling all observations before pruning to β^t, a practice in some prior STaR implementations (see Appendix §D for further discussion).

3.2 Adaptive Data Sampling for Curriculum

To avoid over-sampling challenging observations (↓t̃_i, ↓w_i) when the model is weak, we regularize AdaD using an adaptive curriculum. A natural approach is to incorporate curriculum learning (Hacohen and Weinshall, 2019; Kong et al., 2021) by mixing in easier observations when the model is weak, then gradually reducing their ratio as it improves. This strategy aligns with curriculum learning for LM training (Pouransari et al.,
2024; Li et al., 2024; Zhao et al., 2025) and is supported by the data selection literature, which shows that combining easy and hard samples yields better outcomes than selecting only hard samples (Zheng et al., 2023; Maharana et al., 2024; Cho et al., 2025). We use the training accuracy α ∈ [0, 1] from the current iteration t as a proxy for model strength (Alg. 1, line 15). When α is low (indicating a weaker model), a relatively easier mix of observations should be prioritized for subsequent sampling. This regularization is automatically phased out as α increases with training. As with tracking t̃_i and w_i, using α introduces no additional computational overhead, since the training step (which yields α) is integral to the system. This explains our choice over, for instance, validation or test set accuracy; while potentially more robust measures of generalization, these would require additional inference passes not intrinsic to the STaR loop.

Implementation. The curriculum component (Alg. 1, lines 15–19) adjusts the statistic-update frequency based on model strength α. Of the m sampled observations per iteration, only the ⌊mα²⌋ highest-priority ones are popped; their statistics are updated (t̃_i ← t, w_i ← w_tmp,i) before reinsertion.³ Consequently, when α is low (the model is weak), a larger proportion of the m considered observations are not updated. These non-updated observations retain their existing statistics, increasing their likelihood of re-selection in the subsequent iteration. This implicitly mixes in easy observations when α is low, avoiding the cost of explicitly identifying and mixing them.

4 Experiments

4.1 Experimental Protocol

Setup. We conduct our main experiments with Llama 3.2 3B (Team, 2024b). We also evaluate with Qwen 2.5 3B (Team, 2024c) and Gemma 7B (Team, 2024a) to demonstrate the generality of our method across different model families. All base models are pre-trained-only models.
For fairness, we optimize hyperparameters using the original STaR and apply them consistently across all methods. Further experimental details are provided in Appendix §E.

Datasets. We aim for wide coverage of reasoning tasks by using six well-known datasets. We use the AI2 Reasoning Challenge's Challenge set (ARC-C; Clark et al., 2018) for scientific reasoning, CommonsenseQA (CQA; Talmor et al., 2019) for commonsense reasoning, and CLadder 1.5 (Jin et al., 2023) for causal reasoning. For natural language inference we use Adversarial NLI (ANLI; Nie et al., 2020). For mathematical reasoning we use GSM8K (Cobbe et al., 2021) and SVAMP (Patel et al., 2021). For the mathematical reasoning datasets, we disable rationalization (i.e., providing hints), as it meaningfully degrades performance. Moreover, we unavoidably use Qwen 2.5 3B for GSM8K, as all STaR-based methods fail to self-improve with Llama 3.2 3B as the base model. We discuss this further in Appendix §F.

Evaluation. We use two evaluation metrics: Test Set Accuracy (Acc.) and Floating Point Operations (FLOPs). We report the corresponding early-stopped (Caruana et al., 2000) epoch (e) for vanilla SFT and iteration (it) for STaR-based approaches. All methods are given an equal and large compute budget to ensure that the peak value is obtained via early stopping. For reproducibility, we evaluate accuracy using zero-shot greedy decoding unless stated otherwise. We use FLOPs as
https://arxiv.org/abs/2505.16322v1
our computational cost metric, as memory usage remains approximately constant across methods. FLOPs are computed empirically following Kaplan et al. (2020) and Sardana et al. (2024).

Baselines. We categorize our baselines into two groups. (1) Vanilla SFT methods: regular SFT, SFT with 8-shot chain-of-thought prompting (SFT + 8-CoT; Wei et al., 2022b), and SFT with 5-sample self-consistency decoding (SFT + 5-SC; Wang et al., 2023b) at temperature 0.7.

³The choice of f(α) := α² is a hyperparameter. It allows more repetition of easy observations when the model is weak, and rapidly phases out this regularization effect as the model strengthens.

Table 1: Empirical results, where Test Set Accuracy (%, ↑) is reported under zero-shot greedy decoding, excluding the 5-SC evaluation. Total training costs are reported in Peta FLOPs (↓). Best Acc. and PFLOPs are bolded, and second best is underlined in each section (excluding SFT). In (red) we quantify the percent PFLOPs reduction against the highest-accuracy baseline.

                      ARC-C                       CQA                         CLadder 1.5
Method            Acc.(↑)  t    PFLOPs(↓)    Acc.(↑)  t    PFLOPs(↓)    Acc.(↑)  t    PFLOPs(↓)
SFT                61.4   1.0e      7.0       71.8   1.0e     24.0       31.0   7.0e    382.3
SFT + 8-CoT        59.0   1.5e     10.5       71.6   2.5e     60.1       43.6   3.0e    163.9
SFT + 5-SC         63.8   4.5e     31.6       76.4   2.5e     60.1       45.2   8.0e    437.0
STaR               71.6   13it    351.4       72.2   25it   2877.8       53.4   25it   8427.3
STaR-Full          69.8   27it    739.4       72.2   12it   1502.7       53.8   19it   6523.7
STaR-Acc           73.2   18it    639.8       74.6   19it   1745.3       94.2   28it   9663.0
STaR-Acc-Full      71.8    5it    135.8       76.0   10it   1158.3       94.2   15it   4465.4
STaR-Acc-Full-K    71.4    3it    302.2       73.0    4it   1760.9       80.0    6it   6382.3
ReSTEM             70.8    4it    637.1       72.8    2it   1548.4       53.4    5it  10498.3
B-STaR             67.8    2it    222.8       68.4    2it    800.9       52.8    4it   3937.3
AdaSTaR (ours)     73.8   10it    174.4 (↓72.7%)  78.0  20it  779.3 (↓32.7%)  95.6  23it  3610.0 (↓19.2%)

                      ANLI                        GSM8K                       SVAMP
Method            Acc.(↑)  t    PFLOPs(↓)    Acc.(↑)  t    PFLOPs(↓)    Acc.(↑)  t    PFLOPs(↓)
SFT                64.2    4e     262.9       61.0   2.5e    177.3       57.0   5.5e     21.7
SFT + 8-CoT        65.2    5e     328.7       68.0     1e     70.9       61.5   7.5e     29.6
SFT + 5-SC         49.2    2e     131.5       67.2   2.5e    177.3       61.5   5.5e     21.7
STaR               61.0   23it   4195.3       76.0    4it    409.2       71.0   20it    373.8
STaR-Full          57.6   13it   2604.6       72.6    4it    684.8       57.5   37it    348.5
STaR-Acc           64.8   22it   3528.4       77.0    3it    305.2       71.5   10it    106.2
STaR-Acc-Full      64.6    5it    986.0       74.6    2it    333.0       74.0   18it    167.3
STaR-Acc-Full-K    58.8    4it   2528.4       77.0    2it   1456.5       75.0    7it    229.3
ReSTEM             63.0    9it  10938.5       77.0    2it   2229.1       75.0    4it    247.8
B-STaR             59.4   10it   6373.4       73.6    3it   2120.2       72.0    5it    228.9
AdaSTaR (ours)     66.8   21it   1340.9 (↓62.0%)  77.0   2it   19.3 (↓93.7%)  75.5   9it   65.7 (↓71.3%)

(2) STaR variants: First, STaR (Zelikman et al., 2022), and STaR-Acc, where the model is accumulated instead of being reset every iteration t.
Most works that build on STaR choose to accumulate the model over iterations. We incorporate AdaSTaR on STaR-Acc, as STaR consistently performs worse empirically. Next, STaR-Full and STaR-Acc-Full are alternative approaches to eliminating the CoT sampling inefficiency described in Remark 1. In -Full, the predetermined β_t is replaced with the total number of correct samples, i.e., |D_t^+|. Therefore, no adaptive observation sampling scheme can be used when implementing -Full. Peng et al. (2025)'s underlying algorithm can be viewed as STaR-Acc-Full. Additionally, we include STaR-Acc-Full-K, where -K denotes a larger CoT generation sample size K. The majority of STaR-based methods (Hosseini et al., 2024; Pang et al., 2024; Zeng et al., 2025; Lin et al., 2025) adopt -Full-K as their core strategy. In our experiments we set K := 5, as larger K did not meaningfully improve performance while dramatically raising compute cost. Furthermore, for -K, we omit rationalization (i.e., providing the ground truth as a hint), as prior works in this setting do not employ it. We include ReSTEM (Singh et al., 2024), an improvement over RFT (Yuan et al., 2023) that addresses the under- and over-training imbalance we discuss in § 2.2. ReSTEM utilizes a cut-off threshold per observation i to ensure training diversity. Finally, we include B-STaR (Zeng et al., 2025) with outcome verification for insight. B-STaR is the only method that builds on STaR with open-source code, allowing faithful replication. Although Lean-STaR (Lin et al., 2025) is open-source, it is tailored to mathematical theorem proving and thus incompatible with our benchmarks.

4.2 Results

We first briefly discuss the baselines' performance. As organized in Tab. 1, although STaR-based approaches often outperform SFT in accuracy, they incur substantially higher compute costs (measured in FLOPs).
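As a rough illustration of the FLOPs metric (a sketch, not the paper's exact empirical accounting), a common scaling-law estimate following Kaplan et al. (2020) charges about 6 FLOPs per parameter per training token (roughly 2ND for the forward pass and 4ND for the backward pass) and about 2 FLOPs per parameter per generated token at inference; Sardana et al. (2024) extend such accounting to include inference. The numbers below are hypothetical:

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training cost: ~6 FLOPs per parameter per token
    (~2ND forward + ~4ND backward), per Kaplan et al. (2020)."""
    return 6.0 * n_params * n_tokens

def inference_flops(n_params: float, n_tokens: float) -> float:
    """Approximate generation cost: ~2 FLOPs per parameter per token."""
    return 2.0 * n_params * n_tokens

def to_pflops(flops: float) -> float:
    """Convert raw FLOPs to Peta FLOPs (1e15 FLOPs)."""
    return flops / 1e15

# Hypothetical example: fine-tuning a 3e9-parameter model on 1e8 tokens
# costs roughly 6 * 3e9 * 1e8 = 1.8e18 FLOPs, i.e. 1800 PFLOPs.
```

Under this approximation, STaR-style loops pay both terms each iteration: the CoT sampling stage contributes inference FLOPs and the fine-tuning stage contributes training FLOPs, which is why reducing sampled and trained observations translates directly into PFLOPs savings.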
Aligned with the existing literature's tendency to use model accumulation (-Acc), we see that the methods without accumulation (STaR, STaR-Full, and ReSTEM) commonly perform poorly. However, contrary to existing approaches' use of large K, we do not necessarily observe performance improving as we scale K: ReSTEM uses K = 11, STaR-Acc-Full-K and B-STaR use K = 5, and all remaining approaches use K = 2.

[Figure 4: Visualizing the entire learning curve for SVAMP on Llama 3.2 3B (left), Qwen 2.5 3B (center), and Gemma 7B (right). Each method's curve is charted up to its best (early-stopped) iteration. The highest test accuracy is marked as a star, and the second best as a diamond. As some methods converge only after a significant amount of PFLOPs, for legibility of shorter curves, we use dashed lines and annotate the precise PFLOPs cost on the chart.]

Comparing AdaSTaR to the baselines, AdaSTaR performs best in accuracy on 6 of 6 benchmarks relative to 10 baselines, all while reducing training FLOPs by 58.6% on average (minimum of 19.2%, maximum of 93.7%) relative to the strongest accuracy-achieving baseline. If there are numerous tied best baselines, we use the
lowest PFLOPs to be conservative. Finally, for an intuitive visual understanding of our HieMinHeap, we provide empirical visualizations in Appendix § G.

To further evaluate generality, we test AdaSTaR with different base models and sizes on datasets where Llama 3.2 3B performs relatively weakly. On Qwen 2.5 3B, well known to be strong at mathematical reasoning, we experiment on ARC-C, GSM8K, and SVAMP. On Gemma 7B we experiment on ARC-C, ANLI, and SVAMP, as we observe that all methods perform significantly worse on GSM8K relative to Qwen 2.5 3B. Among these five settings (GSM8K is excluded, as it appears in the main text), AdaSTaR achieves the best test accuracy 4 of 5 times, while demonstrating similar levels of training cost (FLOPs) reduction. Comprehensive results are presented in Appendix § H (Qwen 2.5 3B) and § I (Gemma 7B). For an intuitive visualization across base models, we chart the entire learning curve trained on SVAMP for Llama 3.2 3B, Qwen 2.5 3B, and Gemma 7B in Fig. 4. Notably, across all three base models, AdaSTaR achieves faster gains in test accuracy under equal compute budgets. This aligns with the findings of Singh et al. (2024), who empirically demonstrate that performance gains from STaR-based approaches transfer well to larger-scale base models.

4.3 Ablation Study: Role of Diversity and Curriculum Design Choices

Set-up. To gain a more granular understanding of the adaptive sampling mechanism, we evaluate three ablation variants of AdaSTaR and analyze the standard deviation (SD) of observation training frequencies to assess whether the under- and over-training patterns observed in Fig. 3a are mitigated. The first variant is AdaSTaR without (wo.) AdaC, which is synonymous with AdaD. The second is AdaSTaR wo. w_i, which changes the HieMinHeap to a regular MinHeap, considering only the last sampled iteration t̃_i for priority.
Finally, we experiment with a priority-flipped version (AdaSTaR-PF), which prioritizes w_i first and t̃_i second.

Results. We provide empirical results in Tab. 2, including STaR-Acc, as AdaSTaR is mounted on top of STaR-Acc. Aligned with the theory described in § 3, AdaD (AdaSTaR wo. AdaC) most effectively reduces under- and over-training on average (↓SD). However, contrary to the intuitive expectation that increased diversity (↓SD) would improve test accuracy, we observe a sharp decline. Including AdaC solves this problem effectively while simultaneously maintaining high levels of trained-observation diversity (↓SD). AdaSTaR wo. w_i does, on average, reduce SD, but fails to meaningfully improve test accuracy. We therefore conclude that leveraging w_i to induce sampling of more challenging observations within tied t̃_i is a salient part of AdaSTaR. We can decompose the rise in training diversity by quantifying the fall in SD throughout STaR-Acc → AdaSTaR wo. w_i → AdaSTaR: 1.72 → 1.65 → 1.45. AdaSTaR-PF fails to reduce SD, as it aggressively samples challenging observations (↓w_i), resulting in frequent resampling of difficult examples. It also results in worsened test accuracy, likely due to poorer CoT quality (see § 3.2).

Table 2: Ablation results with Accuracy (↑) and Standard Deviation (SD). The SD of observations' trained-frequency distribution is computed from iterations 1 to 2, 1 to 10, or 1 to 20 for benchmarks that converge very quickly (GSM8K), quickly (ARC-C, SVAMP), or slowly (CQA, CLadder 1.5, ANLI), respectively. The largest Acc. and lowest SD are bolded, and second best is underlined.

                     ARC-C        CQA         CLadder 1.5   ANLI         GSM8K        SVAMP        Average
Method             Acc.   SD    Acc.   SD    Acc.   SD     Acc.   SD    Acc.   SD    Acc.   SD    Acc.   SD
STaR-Acc           73.2  1.50   74.6  1.11   94.2  1.36    64.8  1.07   77.0  0.47   71.5  4.78   75.9  1.72
AdaSTaR wo. AdaC   72.0  1.14   74.4  0.90   52.4  1.13    65.8  0.88   75.4  0.00   70.0  4.61   68.3  1.44
AdaSTaR wo. w_i    73.6  1.39   74.6  0.92   93.4  1.13    64.0  0.92   76.8  0.33   73.0  5.19   75.9  1.65
AdaSTaR-PF         72.4  1.82   74.8  1.00   67.8  1.24    64.4  0.98   77.0  0.32   72.0  5.02   71.4  1.73
AdaSTaR (ours)     73.8  1.26   78.0  0.99   95.6  1.12    66.8  1.04   77.0  0.32   75.5  3.98   77.8  1.45

5 Discussion and Additional Empirical Takeaways

We first discuss salient aspects of the adaptive sampling mechanism in AdaSTaR (1, 2), then present additional empirical insights drawn from extensive experiments with datasets and baselines under the STaR framework (3, 4).

(1) Near-Zero-Cost Statistics. Notably, AdaSTaR's observation sampling algorithm adapts based on three statistics, t̃_i, w_i, and α, which cost virtually no run-time overhead to compute. While the HieMinHeap does incur some run-time compute, it is negligible. Our empirical tests indicate that the run-time overhead is near zero relative to the (inference) sampling and training stages. The same holds for the minimal memory footprint. Therefore, AdaSTaR is a lightweight extension that measures and leverages statistics extractable within the existing STaR system.

(2) Balancing Diversity and Difficulty through Adaptive Sampling. Our key finding is that promoting observation diversity (↓SD) while regularizing for model strength consistently improves performance and reduces training compute cost (Tab. 1, 5, 6). Our ablation study (Tab.
2) confirms that encouraging inference diversity alone, without a difficulty measure (AdaSTaR wo. w_i), does not lead to performance improvement. However, we also observe that failing to regularize for difficulty when the model is weaker (AdaSTaR wo. AdaC) leads to even worse outcomes. Thus, adaptively sampling more challenging observations becomes increasingly effective as model strength improves.

(3) Model Accumulation is Generally Better. As seen in Tab. 1 (and also supported by Tab. 5, 6), using model accumulation (-Acc) consistently improves performance. Across all experiments in the main text and Appendix, transitioning from STaR to STaR-Acc, and from STaR-Full to STaR-Acc-Full, yields average accuracy improvements of 73.6% → 79.0% and 67.8% → 78.8%, respectively, along with corresponding average FLOPs reductions of 16.4% and 37.9%. This result is particularly noteworthy given that recent literature is divided on the use of -Acc, with some works adopting it (Pang et al., 2024; Zeng et al., 2025; Lin et al., 2025; Peng et al., 2025) and others omitting it (Zelikman et al., 2022; Hosseini et al., 2024; Singh et al., 2024).

(4) Cold Starting with STaR Does Not Always Work. We
empirically find that the viability of self-improvement via STaR depends on the difficulty of the task relative to the strength of the base model. Therefore, as discussed in § 4.1 and Appendix § F, while STaR-based approaches fail to self-improve on Llama 3.2 3B (on GSM8K), self-improvement can be realized on the better pre-trained Qwen 2.5 3B. This potentially explains why Peng et al. (2025) use an instruction-tuned base model instead of cold-starting from a pre-trained-only model. Similarly, Hosseini et al. (2024) and Zeng et al. (2025) include an SFT stage prior to the self-improvement stage. Aligned with recent large reasoning model training (DeepSeek-AI, 2025; Team, 2025; Liu et al., 2025b), the key takeaway is that a STaR-based algorithm is part of a larger training pipeline. Precisely which stage of the pipeline it should be integrated into is an open problem.

Lastly, we discuss limitations and future avenues in Appendix § J and broader impact in Appendix § K.

Acknowledgments and Disclosure of Funding

This work was improved by collaborating with researchers at LG AI Research. J. Lee and S.-Y. Yun were supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-II220311, Development of Goal-Oriented Reinforcement Learning Techniques for Contact-Rich Robotic Manipulation of Everyday Objects; No. RS-2024-00457882, AI Research Hub Project; and No. RS-2019-II190075, Artificial Intelligence Graduate School Program (KAIST)).

References

Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V Le, Christopher Ré, and Azalia Mirhoseini. Large Language Monkeys: Scaling Inference Compute with Repeated Sampling. arXiv preprint arXiv:2407.21787, 2024. URL https://arxiv.org/abs/2407.21787.

Rich Caruana, Steve Lawrence, and C. Giles. Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping.
In Advances in Neural Information Processing Systems, volume 13, pages 381–387. MIT Press, 2000. URL https://proceedings.neurips.cc/paper_files/paper/2000/file/059fdcd96baeb75112f09fa1dcc740cc-Paper.pdf.

Sijia Chen and Baochun Li. Toward Adaptive Reasoning in Large Language Models with Thought Rollback. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 7033–7056. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr.press/v235/chen24y.html.

Yeseul Cho, Baekrok Shin, Changmin Kang, and Chulhee Yun. Lightweight Dataset Pruning without Full Training via Example Difficulty and Prediction Uncertainty. arXiv preprint arXiv:2502.06905, 2025. URL https://arxiv.org/abs/2502.06905.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. arXiv preprint arXiv:1803.05457, 2018. URL https://arxiv.org/abs/1803.05457.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training Verifiers to Solve Math Word Problems. arXiv preprint arXiv:2110.14168, 2021. URL https://arxiv.org/abs/2110.14168.

Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. MIT Press, Cambridge, MA, 4th edition, 2022. ISBN 978-0-262-04628-2.

Leonardo De Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The Lean Theorem Prover (System Description). In Automated Deduction – CADE-25: 25th International Conference on Automated Deduction,
Berlin, Germany, August 1–7, 2015, Proceedings 25, pages 378–388. Springer, 2015. URL https://link.springer.com/chapter/10.1007/978-3-319-21401-6_26.

DeepSeek-AI. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. arXiv preprint arXiv:2501.12948, 2025. URL https://arxiv.org/abs/2501.12948.

John Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12(61):2121–2159, 2011. URL http://jmlr.org/papers/v12/duchi11a.html.

Ronald A Fisher. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 222(594–604):309–368, 1922. URL https://royalsocietypublishing.org/doi/10.1098/rsta.1922.0009.

Gemini Team Google. Gemini: A Family of Highly Capable Multimodal Models. arXiv preprint arXiv:2312.11805, 2023. URL https://arxiv.org/abs/2312.11805.

Guy Hacohen and Daphna Weinshall. On The Power of Curriculum Learning in Training Deep Networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2535–2544. PMLR, 09–15 Jun 2019. URL https://proceedings.mlr.press/v97/hacohen19a.html.

Alexander Havrilla, Sharath Chandra Raparthy, Christoforos Nalmpantis, Jane Dwivedi-Yu, Maksym Zhuravinskyi, Eric Hambro, and Roberta Raileanu. GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements. In Forty-first International Conference on Machine Learning, 2024. URL https://openreview.net/forum?id=LH6R06NxdB.

Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. Scaling Laws for Transfer. arXiv preprint arXiv:2102.01293, 2021. URL https://arxiv.org/abs/2102.01293.

Namgyu Ho, Laura Schmid, and Se-Young Yun. Large Language Models Are Reasoning Teachers.
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14852–14882, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.830. URL https://aclanthology.org/2023.acl-long.830/.

Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-STaR: Training Verifiers for Self-Taught Reasoners. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=stmqBSW2dV.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-Rank Adaptation of Large Language Models. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.

Zhijing Jin, Yuen Chen, Felix Leeb, Luigi Gresele, Ojasv Kamal, Zhiheng Lyu, Kevin Blin, Fernando Gonzalez Adauto, Max Kleiman-Weiner, Mrinmaya Sachan, and Bernhard Schölkopf. CLadder: Assessing Causal Reasoning in Language Models. In Advances in Neural Information Processing Systems, volume 36, pages 31038–31065. Curran Associates, Inc., 2023. URL https://openreview.net/forum?id=e2wtjx0Yqu.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling Laws for Neural Language Models. arXiv preprint arXiv:2001.08361, 2020. URL https://arxiv.org/abs/2001.08361.

Akira Kawabata and Saku Sugawara. Rationale-Aware Answer Verification by Pairwise Self-Evaluation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 16178–16196, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.emnlp-main.905. URL https://aclanthology.org/2024.emnlp-main.905/.

Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization.
In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.

Yajing Kong, Liu Liu, Jun Wang, and
Dacheng Tao. Adaptive Curriculum Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 5067–5076, October 2021. URL https://openaccess.thecvf.com/content/ICCV2021/html/Kong_Adaptive_Curriculum_Learning_ICCV_2021_paper.html.

Jaehyeok Lee, Keisuke Sakaguchi, and JinYeong Bak. Self-Training Meets Consistency: Improving LLMs' Reasoning with Consistency-Driven Rationale Evaluation. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10519–10539, Albuquerque, New Mexico, April 2025. Association for Computational Linguistics. ISBN 979-8-89176-189-6. URL https://aclanthology.org/2025.naacl-long.528/.

Dongyuan Li, Ying Zhang, Zhen Wang, Shiyin Tan, Satoshi Kosugi, and Manabu Okumura. Active Learning for Abstractive Text Summarization via LLM-Determined Curriculum and Certainty Gain Maximization. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 8959–8971, Miami, Florida, USA, November 2024. Association for Computational Linguistics. doi: 10.18653/v1/2024.findings-emnlp.523. URL https://aclanthology.org/2024.findings-emnlp.523/.

Hunter Lightman, Vineet Kosaraju, Yuri Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let's Verify Step by Step. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=v8L0pN6EOi.

Haohan Lin, Zhiqing Sun, Sean Welleck, and Yiming Yang. Lean-STaR: Learning to Interleave Thinking and Proving. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=SOWZ59UyNc.

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, and Min Lin. Understanding R1-Zero-Like Training: A Critical Perspective. arXiv preprint arXiv:2503.20783, 2025a.
URL https://arxiv.org/abs/2503.20783.

Zijun Liu, Peiyi Wang, Runxin Xu, Shirong Ma, Chong Ruan, Peng Li, Yang Liu, and Yu Wu. Inference-Time Scaling for Generalist Reward Modeling. arXiv preprint arXiv:2504.02495, 2025b. URL https://arxiv.org/abs/2504.02495.

Jianqiao Lu, Zhiyang Dou, Hongru Wang, Zeyu Cao, Jianbo Dai, Yunlong Feng, and Zhijiang Guo. AutoPSV: Automated Process-Supervised Verifier. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. URL https://openreview.net/forum?id=eOAPWWOGs9.

Adyasha Maharana, Prateek Yadav, and Mohit Bansal. D2Pruning: Message Passing for Balancing Diversity & Difficulty in Data Pruning. In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=thbtoAkCe9.

Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. Adversarial NLI: A New Benchmark for Natural Language Understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885–4901, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.441. URL https://aclanthology.org/2020.acl-main.441/.

NVIDIA. Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models, 2025. URL https://arxiv.org/abs/2504.03624.

Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason Weston. Iterative Reasoning Preference Optimization. In Advances in Neural Information Processing Systems, volume 37, pages 116617–116637. Curran Associates, Inc., 2024. URL https://openreview.net/forum?id=4XIKfvNYvx.

Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple math word problems?
In Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, and Yichao Zhou, editors, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 2080–2094, Online, June 2021. Association for Computational Linguistics.
doi: 10.18653/v1/2021.naacl-main.168. URL https://aclanthology.org/2021.naacl-main.168/.

Xiangyu Peng, Congying Xia, Xinyi Yang, Caiming Xiong, Chien-Sheng Wu, and Chen Xing. Regenesis: LLMs can grow into reasoning generalists via self-improvement. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=YUYJsHOf3c.

Hadi Pouransari, Chun-Liang Li, Jen-Hao Rick Chang, Pavan Kumar Anasosalu Vasu, Cem Koc, Vaishaal Shankar, and Oncel Tuzel. Dataset Decomposition: Faster LLM Training with Variable Sequence Length Curriculum. In Advances in Neural Information Processing Systems, volume 37, pages 36121–36147. Curran Associates, Inc., 2024. URL https://openreview.net/forum?id=r8M9SfYMDi.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct Preference Optimization: Your Language Model is Secretly a Reward Model. In Advances in Neural Information Processing Systems, volume 36, pages 53728–53741. Curran Associates, Inc., 2023. URL https://openreview.net/forum?id=HPuSIXJaa9.

LG AI Research. EXAONE Deep: Reasoning Enhanced Language Models. arXiv preprint arXiv:2503.12524, 2025. URL https://arxiv.org/abs/2503.12524.

Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra, and Stefanie Jegelka. Contrastive Learning with Hard Negative Samples. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=CR1XOQ0UTh-.

Nikhil Sardana, Jacob Portes, Sasha Doubov, and Jonathan Frankle. Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 43445–43460. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr.press/v235/sardana24a.html.
Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, and Aviral Kumar. Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=A6Y7AqlzLW.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models. arXiv preprint arXiv:2402.03300, 2024. URL https://arxiv.org/abs/2402.03300.

Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron T Parisi, Abhishek Kumar, Alexander A Alemi, Alex Rizkowsky, Azade Nova, Ben Adlam, Bernd Bohnet, Gamaleldin Fathy Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura A Culp, Lechao Xiao, Maxwell Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, and Noah Fiedel. Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Transactions on Machine Learning Research, 2024. ISSN 2835-8856. URL https://openreview.net/forum?id=lNAyUngGFK. Expert Certification.

Charlie Victor Snell, Jaehoon Lee, Kelvin Xu, and Aviral Kumar. Scaling Test-Time Compute Optimally Can be More Effective than Scaling LLM Parameters. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=4FWAwZtd2n.

Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, and Xia Hu. Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models.
arXiv preprint arXiv:2503.16419, 2025. URL https://arxiv.org/abs/2503.16419.

Zhihong Sun, Chen Lyu, Bolun Li, Yao Wan, Hongyu
Zhang, Ge Li, and Zhi Jin. Enhancing Code Generation Performance of Smaller Models by Distilling the Reasoning Ability of LLMs. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5878–5895, Torino, Italia, May 2024. ELRA and ICCL. URL https://aclanthology.org/2024.lrec-main.521/.

Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421/.

Gemma Team. Gemma: Open Models Based on Gemini Research and Technology. arXiv preprint arXiv:2403.08295, 2024a. URL https://arxiv.org/abs/2403.08295.

Kimi Team. Kimi k1.5: Scaling Reinforcement Learning with LLMs. arXiv preprint arXiv:2501.12599, 2025. URL https://arxiv.org/abs/2501.12599.

Llama Team. The Llama 3 Herd of Models. arXiv preprint arXiv:2407.21783, 2024b. URL https://arxiv.org/abs/2407.21783.

Qwen Team. Qwen Technical Report. arXiv preprint arXiv:2309.16609, 2023. URL https://arxiv.org/abs/2309.16609.

Qwen Team. Qwen2.5 Technical Report. arXiv preprint arXiv:2412.15115, 2024c. URL https://arxiv.org/abs/2412.15115.

Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. Solving math word problems with process- and outcome-based feedback. arXiv preprint arXiv:2211.14275, 2022. URL https://arxiv.org/abs/2211.14275.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is All you Need.
In Advances in Neural Information Processing Systems, volume 30, pages 6000–6010. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and Huan Sun. Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2717–2739, Toronto, Canada, July 2023a. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.153. URL https://aclanthology.org/2023.acl-long.153/.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-Consistency Improves Chain of Thought Reasoning in Language Models. In The Eleventh International Conference on Learning Representations, 2023b. URL https://openreview.net/forum?id=1PL1NIMMrw.

Hui Wei, Shenghua He, Tian Xia, Fei Liu, Andy Wong, Jingyang Lin, and Mei Han. Systematic Evaluation of LLM-as-a-Judge in LLM Alignment Tasks: Explainable Metrics and Diverse Prompt Templates. In ICLR 2025 Workshop on Building Trust in Language Models and Applications, 2025. URL https://openreview.net/forum?id=CAgBCSt8gL.

Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent Abilities of Large Language Models. Transactions on Machine Learning Research, 2022a. ISSN 2835-8856. URL https://openreview.net/forum?id=yzkSU5zdwD. Survey Certification.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.
In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc., 2022b. URL https://openreview.net/forum?id=_VjQlMeSB_J.

https://arxiv.org/abs/2505.16322v1

Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, and Tengyu Ma. Larger language models do in-context learning differently. arXiv preprint arXiv:2303.03846, 2023. URL https://arxiv.org/abs/2303.03846.

Alexander Wettig, Aatmik Gupta, Saumya Malik, and Danqi Chen. QuRating: Selecting High-Quality Data for Training Language Models. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 52915–52971. PMLR, 21–27 Jul 2024. URL https://proceedings.mlr.press/v235/wettig24a.html.

Shuai Xie, Zunlei Feng, Ying Chen, Songtao Sun, Chao Ma, and Mingli Song. DEAL: Difficulty-Aware Active Learning for Semantic Segmentation. In Computer Vision – ACCV 2020, pages 672–688, Cham, 2021. Springer International Publishing. ISBN 978-3-030-69525-5. URL https://link.springer.com/chapter/10.1007/978-3-030-69525-5_40.

Benfeng Xu, Licheng Zhang, Zhendong Mao, Quan Wang, Hongtao Xie, and Yongdong Zhang. Curriculum Learning for Natural Language Understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6095–6104, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.542. URL https://aclanthology.org/2020.acl-main.542/.

Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of Thoughts: Deliberate Problem Solving with Large Language Models. In Advances in Neural Information Processing Systems, volume 36, pages 11809–11822. Curran Associates, Inc., 2023. URL https://openreview.net/forum?id=5Xc1ecxO1h.
Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Guangming Sheng, Yuxuan Tong, Chi Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, Jinhua Zhu, Jiaze Chen, Jiangjie Chen, Chengyi Wang, Hongli Yu, Weinan Dai, Yuxuan Song, Xiangpeng Wei, Hao Zhou, Jingjing Liu, Wei-Ying Ma, Ya-Qin Zhang, Lin Yan, Mu Qiao, Yonghui Wu, and Mingxuan Wang. DAPO: An Open-Source LLM Reinforcement Learning System at Scale. arXiv preprint arXiv:2503.14476, 2025a. URL https://arxiv.org/abs/2503.14476.

Zhaojian Yu, Yinghao Wu, Yilun Zhao, Arman Cohan, and Xiao-Ping Zhang. Z1: Efficient Test-time Scaling with Code. arXiv preprint arXiv:2504.00810, 2025b. URL https://arxiv.org/abs/2504.00810.

Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling Relationship on Learning Mathematical Reasoning with Large Language Models. arXiv preprint arXiv:2308.01825, 2023. URL https://arxiv.org/abs/2308.01825.

Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. STaR: Bootstrapping Reasoning With Reasoning. In Advances in Neural Information Processing Systems, volume 35, pages 15476–15488. Curran Associates, Inc., 2022. URL https://openreview.net/forum?id=_3ELRdg2sgI.

Eric Zelikman, Georges Raif Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, and Noah Goodman. Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking. In First Conference on Language Modeling, 2024. URL https://openreview.net/forum?id=oRXPiSOGH9.

Weihao Zeng, Yuzhen Huang, Lulu Zhao, Yijun Wang, Zifei Shan, and Junxian He. B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=P6dwZJpJ4m.

Zirui Zhao, Hanze Dong, Amrita Saha, Caiming Xiong, and Doyen Sahoo.
Automatic Curriculum Expert Iteration for Reliable LLM Reasoning. In The Thirteenth International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=3ogIALgghF.

Haizhong Zheng, Rui Liu, Fan Lai, and Atul Prakash. Coverage-centric Coreset Selection for High Pruning Rates. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=QwKvL6wC8Yi.

A Total Training Time Comparison

Fig. 5 is conducted using full fine-tuning of Llama 3.2 3B (Team, 2024b). The training run-time is set to the early-stopped epoch (iteration) (Caruana et al., 2000).

[Figure 5: bar charts of total training time in hours; left panel covers CQA, ANLI, and CLadder, right panel covers ARC-C, GSM8K, and SVAMP, with bars for STaR, SFT + 5-SC, SFT + 8-shot CoT, and SFT.] Figure 5 caption: Total training run-time on 4×RTX 3090 24G, across six common reasoning datasets: CLadder 1.5, ANLI, CQA, SVAMP, GSM8K, and ARC-C. STaR, SFT, and SFT 8-shot Chain-of-Thought are evaluated under zero-shot greedy decoding. Training times across SFT 5-sample Self-Consistency, SFT 8-shot Chain-of-Thought, and SFT differ as their best early-stop epochs differ.

B Observation Distribution Visualizations Across All Datasets

[Figure 6, panels (a)–(d). (a) Distribution of the trained frequency of each observation i in iterations 1 to 10 in ARC-C, ranging from under-trained to over-trained. (b) Heatmap of the percent of initial Quartile 1 [Q1] and Quartile 4 [Q4] observations falling in each quartile at iterations t+1, t+2, and t+3 for ARC-C; initial quartiles are computed from iterations 1 to 10. (c) Trained-frequency distribution for CQA. (d) Corresponding Q1/Q4 quartile-persistence heatmap for CQA.]
[Figure 6, panels (e)–(h). (e) Distribution of the trained frequency of each observation i in iterations 1 to 10 in CLadder 1.5. (f) Q1/Q4 quartile-persistence heatmap for CLadder 1.5, with initial quartiles computed from iterations 1 to 10. (g) Trained-frequency distribution for ANLI. (h) Corresponding Q1/Q4 heatmap for ANLI.]

Figure 6 caption: ARC-C (a, b), CQA (c, d), CLadder 1.5 (e, f), and ANLI (g, h) datasets illustrate persistent relative under- and over-training across observations.
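The quartile bookkeeping behind Figure 6 can be sketched as follows. This is a hedged illustration, not the authors' code: it assumes that per-observation training counts are recorded for each iteration, buckets observations into quartiles by accumulated frequency, and then measures how many initial Q1/Q4 members remain in their quartile once later iterations are included.

```python
import numpy as np

def quartile_labels(counts):
    """Assign quartile labels (0 = Q1, least trained ... 3 = Q4, most trained)
    to each observation based on its accumulated training count."""
    ranks = counts.argsort().argsort()          # rank of each observation
    return (4 * ranks // len(counts)).clip(0, 3)

def persistence(counts_by_iter, init_upto=10, later=11):
    """Fraction of initial-Q1 and initial-Q4 observations that are still in
    the same quartile once iterations beyond `init_upto` are accumulated."""
    init = quartile_labels(counts_by_iter[:init_upto].sum(axis=0))
    cur = quartile_labels(counts_by_iter[:later].sum(axis=0))
    stay_q1 = (cur[init == 0] == 0).mean()
    stay_q4 = (cur[init == 3] == 3).mean()
    return stay_q1, stay_q4

# Toy example: 8 observations over 12 iterations with skewed training frequencies.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=np.linspace(0.2, 3.0, 8), size=(12, 8))
print(persistence(counts))
```

High values of both returned fractions correspond to the persistent under- and over-training pattern the figure reports.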
[Figure 7, panels (a)–(d). (a) Distribution of the trained frequency of each observation i in iterations 1 to 10 in GSM8K. (b) Q1/Q4 quartile-persistence heatmap for GSM8K, with initial quartiles computed from iterations 1 to 10. (c) Trained-frequency distribution for SVAMP. (d) Corresponding Q1/Q4 heatmap for SVAMP.]

Figure 7 caption: GSM8K (a, b) and SVAMP (c, d) datasets also show consistent patterns of over- and under-training across sample quartiles.

C AdaD Induces Poor Quality CoT

We use the following prompt.

Prompt for GPT-4o Annotator
System: You are an expert QA/CoT reviewer. For each entry, check whether the chain-of-thought (CoT) logically supports the answer. Respond ONLY with an array of the indices (0-based) of entries whose CoT is WRONG. Please output like: [1, 6], and explain why
User: [ { question: ..., chain_of_thought: ..., answer: ... }, ..., { question: ..., chain_of_thought: ..., answer: ... } ]

We now provide a qualitative example. Consider this example question.

Question (x_i) Example: There are 6 houses on a block. If a mailman has to give 24 pieces of junk mail to each block, how many pieces of junk mail should he give in each house?

Using this question, we examine the CoTs generated by AdaD (red container) and STaR-Acc (green container).

Incorrect CoT (AdaD)
Given CoT: 6 blocks, and 24 pieces of junk mail for each block. So, 24 pieces for 6 blocks.
Then it calculates 24/6 = 4 pieces per house.
Error Explanation:
• Misinterpreted "6 houses on a block" as "6 blocks."
• Incorrectly calculated the division using blocks instead of houses.

Correct CoT (STaR-Acc)
Given CoT: 6 houses per block, total of 24 pieces per block. Thus, each house gets 24/6 = 4 pieces.
Correct Steps:
• Correctly identified the number of houses per block.
• Correctly distributed the mail equally to each house.

D Excessive CoT Sampling Inefficiency

D.1 Problem: Excessive Sampling, then Filtering

A salient observation we make is that an unnecessarily large number of CoT samples go unused in training. Recall that STaR runs inference over the entire dataset D = {⟨x_1, ĉ_1, ŷ_1⟩, · · · , ⟨x_N, ĉ_N, ŷ_N⟩},^4 then filters down to the correct samples D_+^t := {⟨x_i, ĉ_i, ŷ_i⟩ | I(y_i = ŷ_i)}. We denote the size |D_+^t| = M_t. Next, it throws away or re-uses parts of D_+^t ^5 to fit the predetermined per-iteration batch size β_t. As mentioned in § 2.1, β_t = σ_t × β, where σ_t is the number of gradient update steps per iteration and β
is the batch size for each gradient update step. In the case that M_t > β_t, some ⟨x_i, ĉ_i, ŷ_i⟩ are discarded. Such discarded samples cannot be cached and used in the next iteration t+1, because the fundamental idea of iterations is that an improved model π_θ^{t+1} is used to generate new samples. The compute and memory wastage, especially in earlier iterations, is significant. For a concrete understanding, we visualize this sampling inefficiency empirically across the datasets in Fig. 8.

D.2 Existing Solution

However, as mentioned in § 2.1, all methods that resolve this excessive-sampling (M_t − β_t) problem of STaR (Hosseini et al., 2024; Pang et al., 2024; Zeng et al., 2025; Lin et al., 2025; Peng et al., 2025) simply remove this predetermined β_t and instead set β_t to |D_+^t|. This approach can be viewed as bringing the blue curve up to the red curve; i.e., β_t ← |D_+^t|. We name this approach -Full in our experiments (§ 4). B-STaR (Zeng et al., 2025) also embodies this approach. ReST^EM (Singh et al., 2024) does not resolve this problem, as it keeps the filtering mechanism, which is useful in its algorithm.

^4 This can be known from line 3 in Alg. 1 in the STaR paper. This is also the case in their open-source code.
^5 Excluding some rare edge scenarios where M_t = β_t.

[Figure 8, panels (a)–(f): per-iteration data-sample curves for (a) ARC-C, (b) CQA, (c) CLadder 1.5, (d) ANLI, (e) GSM8K, and (f) SVAMP.] Figure 8 caption: Visualizing the CoT sampling inefficiencies in STaR across numerous datasets. Initial β_{t=1} = 40 as presented in the original implementation. β_t rises over time as we follow the +20% of gradient update steps σ every iteration in the original implementation. That is, β_{t+1} := 1.2 β_t.
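The filter-then-truncate loop of D.1 and the -Full fix of D.2 can be sketched as below. This is an illustrative sketch under assumed names, not the papers' code: `correct_samples` stands for M_t (the count of CoT triples passing the check I(y_i = ŷ_i)), and the β schedule follows the β_{t+1} := 1.2 β_t growth described for Fig. 8.

```python
import math

def beta_schedule(beta1=40, iterations=5, growth=1.2):
    """Per-iteration batch sizes beta_t, growing +20% per iteration."""
    return [math.ceil(beta1 * growth ** t) for t in range(iterations)]

def star_iteration(correct_samples, beta_t, full=False):
    """Return (kept, wasted) sample counts for one STaR iteration.

    correct_samples: M_t, the number of filtered-correct CoT triples.
    full=False -> original STaR: truncate to the predetermined beta_t.
    full=True  -> the '-Full' variant: beta_t is set to |D_+^t|, so nothing is wasted.
    """
    if full:
        return correct_samples, 0
    kept = min(correct_samples, beta_t)
    return kept, correct_samples - kept

# Toy run: M_t is large in early iterations, so original STaR discards many correct CoTs.
for m_t, b_t in zip([900, 700, 500], beta_schedule(beta1=40, iterations=3)):
    print(star_iteration(m_t, b_t), star_iteration(m_t, b_t, full=True))
```

The second tuple in each printed pair shows why the -Full variant eliminates the waste term M_t − β_t, at the cost of abandoning the predetermined batch-size curve.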
If M_t > β_t, there is an inference sampling inefficiency, as M_t − β_t data samples are not used.

D.3 Our Approach

Alternatively, as we aim to reach peak performance as computationally efficiently as possible, we keep STaR's original β_t curve and instead sample CoTs ⟨x_i, ĉ_i, ŷ_i⟩ ← π_θ^t(e, x_i) only until |D_+^t| = β_t is filled. This approach can be viewed as bringing the red curve down to the blue curve; i.e., |D_+^t| ← β_t.

E Further Details on Experimental Configuration and Setting

Common Configuration. We primarily conduct our experiments on numerous nodes with 8×RTX 3090 24G, with equivalent hardware specifications across nodes. For a few compute-heavy experiments we use nodes with 8×A100 40G. All training is done with the same arbitrary seed value of 10. This value has never been changed. Hyperparameters are organized in Tab. 3.

Parameters                       ARC-C   CQA     CLadder 1.5  ANLI    GSM8K   SVAMP
Batch size                       8       8       8            8       8       8
Learning rate                    1e-5    1e-5    1e-5         1e-5    1e-5    1e-5
Weight decay                     0.01    0.01    0.01         0.01    0.01    0.01
Warm-up steps                    100     100     100          100     100     100
Optimizer                        Adam    Adam    Adam         Adam    Adam    Adam
Model precision                  bf16    bf16    bf16         bf16    bf16    bf16
Samples for self-consistency     5       5       5            5       5       5
Inference decoding temperature   1.0     1.0     1.0          1.0     1.0     1.0
Evaluation decoding temperature  0       0       0            0       0       0
Rationalization (default)        True    True    True         True    False   False

Table 3: Hyperparameters across datasets.

Dataset Configuration. For ARC-C, we combined the train and validation sets for training. The ANLI dataset is comprised of the R1, R2, and R3 versions. For our experiments, we used R1 and randomly sampled (without replacement) 10,000 samples for efficient evaluation. In GSM8K, high-quality ground-truth c is already available in the SFT dataset. To compare whether STaR is able to improve on the SFT case where high-quality ground-truth c is unavailable, we do not include c in the SFT dataset. That is, we only train on ⟨x, y⟩, as all STaR-like approaches are not given access to c. Dataset and evaluation sizes are provided in Tab. 4.

Dataset       Train set  Test set
ARC-C         1,418      1,172
CQA           9,741      1,140
CLadder 1.5   8,089      2,023
ANLI (R1)     10,000     1,000
GSM8K         7,473      1,319
SVAMP         800        300

Table 4: Train and test set sizes for each dataset.

ReST^EM Configuration. We follow the original implementation's ReST^EM configuration (Singh et al., 2024) as closely as possible. The only change we make is reducing K := 32 and the cut-off threshold value of 10 to K := 11 and a cut-off threshold value of 3. This is done because larger K and cut-off threshold values resulted in worsened performance with a dramatic rise in compute cost. We kept the ratio of K to the cut-off threshold as close to the paper's implementation as possible. For instance, when sampling K = 11, an easy observation i may result in 8 correct samples, while more challenging ones may result in 2. In this case, if the threshold is set to 3, the observation with 8 correct ⟨x_i, ĉ_i, ŷ_i⟩ will be reduced to a maximum of 3, shrinking the imbalance from 8:2 to 3:2.

B-STaR Configuration.
We follow the original implementation's B-STaR configuration presented in their paper (Zeng et al., 2025) as closely as possible. For any implementation detail that is not explicitly specified in the paper, we use their official open-source implementation. We set the range of the temperature search space as [0.4, 1.1] in increments of 0.1, as in the paper. We set K := 5 as in the paper. We set their balancing hyperparameter n* := 6 as in the paper. The only change we make is their number of training queries (M) per iteration. We first experimented by setting M := 2627, as they did for their experiments that did not include an SFT stage before STaR training. However, this resulted in poor performance. In response, we set M to the entire original dataset size, which helped performance.

F Llama 3.2 3B Fails to Self-Improve on GSM8K

STaR-based methods fail to self-improve on GSM8K using Llama 3.2 3B as the base model (Fig. 9). Therefore, we use Qwen 2.5 3B instead in the main text.

[Figure 9: accuracy (roughly 0.26–0.34) over iterations 2–10 for STaR-Acc and STaR-Acc-Full.] Figure 9 caption: Visualizing the learning curve for STaR-Acc and STaR-Acc-Full on GSM8K using Llama 3.2 3B as the base model.

G Visualizing Empirical Heaps

[Figure: stacked-bar visualizations of the last sampled iteration t (1–29), shown as percentages across training iterations, for (a) ARC-C, (b) CQA, (c) CLadder 1.5, (d) ANLI, (e) GSM8K, and (f) SVAMP.]

H Qwen 2.5 3B Base Model Empirical Results

Refer to Tab. 5 for empirical results using Qwen 2.5 3B as the base model. Experiment settings are equivalent to the main experiments. GSM8K in Tab. 5 is equivalent to that of Tab. 1, as the main text's GSM8K is Qwen 2.5 3B based. We describe why Tab. 1 is Qwen 2.5 3B based in § 4.1.

Table 5: Qwen 2.5 3B empirical results, where Test Set Accuracy (%, ↑) is reported under zero-shot greedy decoding, excluding the 5-SC evaluation. Total training costs are reported in Peta FLOPs (↓). The best Acc. and PFLOPs are bolded, and the second best is underlined, in each section (excluding SFT). In (red) we quantify the percent PFLOPs reduction against the highest-accuracy baseline.

Evaluation        ARC-C                        GSM8K                         SVAMP
Metric            Acc.(↑)  t     PFLOPs(↓)     Acc.(↑)  t     PFLOPs(↓)      Acc.(↑)  t     PFLOPs(↓)
SFT               33.8     6.5e  43.9          61.0     2.5e  177.3          68.5     0.5e  1.89
SFT + 8-CoT       75.2     7.5e  50.6          68.0     1e    70.9           86.5     4.0e  15.2
SFT + 5-SC        67.4     6.0e  40.5          67.2     2.5e  177.3          73.5     0.5e  1.89
STaR              80.4     20it  825.9         76.0     4it   409.2          92.5     8it   96.2
STaR-Full         83.2     22it  606.2         72.6     4it   684.8          91.5     16it  196.2
STaR-Acc          84.4     11it  264.1         77.0     3it   305.2          95.0     10it  129.6
STaR-Acc-Full     84.6     4it   110.8         74.6     2it   333.0          93.5     6it   73.0
STaR-Acc-Full-K   82.2     2it   225.1         77.0     2it   1456.5         92.0     2it   105.3
ReST^EM           81.0     8it   874.6         77.0     2it   2229.1         92.0     10it  677.2
B-STaR            83.2     10it  583.3         72.6     2it   1185.7         91.0     3it   150.4
AdaSTaR (ours)    85.0     12it  239.9 (↓0%)   77.0     2it   19.3 (↓93.7%)  94.5     8it   83.9 (↓35.3%)

I Gemma 7B Base Model Empirical Results

Refer to Tab. 6 for empirical results using Gemma 7B as the base model. We use Low-Rank Adaptation (LoRA; Hu et al., 2022) fine-tuning set to rank = 32. All other settings are equivalent to the main experiments.

Table 6: Gemma 7B empirical results, where Test Set Accuracy (%, ↑) is reported under zero-shot greedy decoding, excluding the 5-SC evaluation. Total training costs are reported in Peta FLOPs (↓). The best Acc. and PFLOPs are bolded, and the second best is underlined, in each section (excluding SFT). In (red) we quantify the percent PFLOPs reduction against the highest-accuracy baseline.

Evaluation        ARC-C                          ANLI                            SVAMP
Metric            Acc.(↑)  t     PFLOPs(↓)       Acc.(↑)  t     PFLOPs(↓)        Acc.(↑)  t     PFLOPs(↓)
SFT               49.2     0.5e  0.01            66.0     5e    1.1              61.0     3e    0.04
SFT + 8-CoT       76.6     5.5e  0.13            53.0     5.5e  1.2              82.0     4e    0.05
SFT + 5-SC        66.0     0.5e  0.01            67.6     7e    1.6              61.0     3e    0.04
STaR              82.0     17it  530.6           62.4     28it  13105.8          87.0     15it  332.8
STaR-Full         76.2     3it   93.5            43.8     12it  5536.6           67.5     36it  832.9
STaR-Acc          85.4     13it  383.0           62.0     20it  9298.8           87.0     13it  281.3
STaR-Acc-Full     84.6     20it  533.0           61.8     8it   3751.0           87.5     24it  510.6
STaR-Acc-Full-K   85.0     12it  1334.4          65.0     12it  20743.3          87.5     6it   485.7
ReST^EM           81.6     5it   1221.0          62.2     17it  43060.8          82.5     7it   1245.6
B-STaR            84.8     15it  1936.0          63.8     22it  27315.1          84.0     7it   2610.2
AdaSTaR (ours)    85.4     14it  321.0 (↓16.2%)  65.2     20it  3055.4 (↓85.3%)  89.5     14it  207.0 (↓57.4%)

J Limitation and Future Work

We discuss relevant limitations, to the best of our knowledge, and avenues for future research. First, a natural direction for future work is to explore combinations of AdaSTaR with other advanced STaR-based methods. For instance, investigating the performance of AdaSTaR combined with an inference-time verifier, such as that in V-STaR (Hosseini et al., 2024), presents a promising research avenue. Such explorations are beyond the scope of the current study. Second, while our experiments demonstrate AdaSTaR's efficacy, a larger computational budget would have permitted evaluation on even larger-scale models. Nevertheless, our empirical study provides robust evidence of AdaSTaR's effectiveness across three distinct models: Llama 3.2 3B, Qwen 2.5 3B, and Gemma 7B. Moreover, existing work (Singh et al., 2024) suggests that gains from STaR-based training on smaller models often amplify at larger scales, implying our findings may well extend or even strengthen with increased model size. Furthermore, the model sizes used in our study (up to 7B parameters) are comparable to those in related STaR literature (Zelikman et al., 2022, 2024; Zeng et al., 2025), which uses 6B to 7B base models. Third, similar to other adaptive methods such as Adam (Kingma and Ba, 2015) and AdaGrad (Duchi et al., 2011), AdaSTaR introduces a new hyperparameter f(α) := α².
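For intuition on the quadratic choice f(α) := α² (the quantity α is defined in the main text, which is not reproduced in this excerpt), note that a convex f accentuates large values of its argument relative to a linear weighting. The comparison below is purely illustrative and is not the authors' tuning procedure:

```python
def weight(alpha, exponent=2.0):
    """Candidate weighting f(alpha) = alpha ** exponent; exponent=2 matches f(alpha) := alpha^2."""
    return alpha ** exponent

# Relative emphasis of a large vs. a small alpha under linear, quadratic, and cubic f.
for p in (1.0, 2.0, 3.0):
    ratio = weight(0.9, p) / weight(0.3, p)
    print(f"f(a)=a^{p:g}: f(0.9)/f(0.3) = {ratio:.1f}")
```

Under the linear choice the two values differ by 3×, while the quadratic default sharpens this to 9×, which is the kind of shape difference a more granular tuning of f could explore.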
A more granular tuning is deferred to future work. It is anticipated that such tuning could lead to further enhancements in AdaSTaR's performance and efficiency. Finally, building upon our discussion (§ 5), a salient direction for future work is to investigate the optimal integration of various STaR-based methods within end-to-end training pipelines incorporating RL-style long CoT generation. This investigation is particularly pertinent given the current divergence in methodologies: the STaR stage is implemented either prior to RL (Team, 2025; Liu et al., 2025b) or subsequent to it (DeepSeek-AI, 2025). Furthermore, a critical open question is whether, and to what extent, enhancements achieved during the STaR phase directly propagate to performance gains in the subsequent RL stage.

K Broader Impact

The development of AdaSTaR presents notable positive societal benefits stemming from its ability to achieve strong performance with significantly reduced PFLOPs.

• Environmental Sustainability: By lowering the computational requirements (FLOPs) for training effective models, AdaSTaR contributes to more environmentally sustainable AI practices. This reduction directly translates to lower energy consumption and a diminished carbon footprint associated with model development and deployment.

• Economic Value and Accessibility: The substantial computational savings unlock eco-