inference, the Original Multi-Agent Debate (MAD@3) prompt, and our proposed RCR (RCR-MAD (Ours)@3) prompting.

[Figure 4: Scaling up agents. Accuracy of four Qwen model sizes (Qwen-2.5-1.5B, 3B, 7B, and 14B) on GSM8K, GSM-Plus, ARC-Easy, and ARC-Challenge as the number of agents grows from 1 to 7.]

3) Does agent diversity matter? We observe two consistent trends here. First, when the individual agents have comparable standalone accuracy, cross-family mixtures beat homogeneous agent teams, supporting the idea that architectural diversity yields complementary reasoning paths. Second, when the pool mixes a strong and a weaker model, the debate result gravitates toward the stronger member: adding the weaker agent neither helps nor seriously harms, suggesting that diversity only helps when all agents can contribute novel insights. Complete results for every dataset and roster are available in Appendix B.

4) Why GRPO over other fine-tuning methods? GRPO consistently outperforms the alternatives, indicating that its relative-advantage reward balances exploration and policy stability better than plain maximum-likelihood (SFT) or preference-only (DPO/PPO) updates. Table 3 compares three update rules under a fixed compute budget: (1) classical supervised fine-tuning on debate answers (SFT); (2) Direct Preference Optimisation using the majority vote as the preferred sample (DPO); (3) Group Relative Policy Optimisation (GRPO).

Model           Original (GSM-Plus)   SFT     DPO     GRPO
Qwen-2.5-1.5B   42.00                 47.31   51.34   55.92
Qwen-2.5-3B     61.75                 58.33   64.32   69.50
Qwen-2.5-7B     68.62                 67.89   69.88   74.71

Table 3: Accuracy on GSM-Plus after 10K training steps using three optimization objectives.

GRPO delivers the largest accuracy jump on GSM-Plus for every model size.
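The relative-advantage term can be illustrated with a short sketch (naming is ours; the full GRPO objective of Shao et al. (2024) also includes a clipped policy ratio and a KL penalty, omitted here). For each question, a group of completions is sampled and each completion's reward is normalized against the group's statistics:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each completion's reward against its sampling group.

    GRPO replaces a learned value baseline with group statistics:
    A_i = (r_i - mean(r)) / (std(r) + eps).
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Illustrative example: four sampled answers to one GSM-Plus question,
# reward 1.0 if the final answer is correct, 0.0 otherwise.
advantages = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Because the advantages are centered within each group, correct completions are pushed up exactly as hard as incorrect ones are pushed down, which is one intuition for why the update stays closer to the current policy than a pure maximum-likelihood objective.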
Both SFT and DPO give smaller gains and even slight regressions on the 3B model, highlighting the risk of overfitting when the reward ignores policy shift. We also observe that GRPO keeps KL < 0.24 across sizes, whereas DPO averages 0.43. The relative-advantage term in GRPO therefore not only boosts reward but also constrains drift, reducing catastrophic forgetting.

5) Data selection strategy. We test three data sampling schemes on GSM-Plus: Random-2K selects 2,000 examples uniformly from the full pool (10,552); Debate-Only keeps only data points where agents entered at least one critique round (t ≥ 1); All-Traces trains on the entire cleaned set.

Model       Random-2K   Debate-Only   All-Traces
Qwen-1.5B   44.82       51.61         55.92
Qwen-3B     58.10       62.70         69.50
Qwen-7B     69.71       72.53         74.71

Table 4: Effect of training-set size and composition. GSM-Plus accuracy after one evolution round using three trace-selection schemes.

[Figure 5: Diminishing returns in GRPO updates after 8K steps. GSM-Plus accuracy as a function of the number of GRPO training steps for Qwen-2.5-1.5B, Qwen-2.5-3B, Qwen-2.5-7B, Qwen-2.5-14B, Llama-3B, and Llama-8B.]

Table 4 shows that accuracy rises monotonically with coverage: the full corpus beats Debate-Only by 4.43 pts (avg) and Random-2K by 9.17 pts (avg). The gap is largest for Qwen-1.5B, suggesting that smaller models benefit from easier "round-0" examples that Random-2K may miss and
https://arxiv.org/abs/2505.15734v1
Debate-Only discards. We therefore use the full trace set in all other experiments.

6) How long do we train? Figure 5 plots GSM-Plus accuracy as we grow the number of GRPO training steps from 2K to 10K. All models share a similar trend: rapid gains up to about 8K steps followed by saturation. Small and mid-size models profit the most from the early updates (Qwen-1.5B climbs 8.0 pts between 2K and 6K steps), whereas larger models such as Qwen-14B rise more slowly but steadily. Beyond 8K the curve flattens: the average improvement from 8K to 10K steps is only +0.32 pts while wall-clock time grows by 25%.

7) Does iterative fine-tuning hurt? Figure 6 plots GSM8K and GSM-Plus accuracy for Qwen-1.5B after the first and second evolution rounds under four sampling temperatures. When we keep the original exploratory setting (T = 1.0) the model loses 2.0 pts on GSM8K and gains only 13.5 pts on GSM-Plus, well below the +33.5 pts it achieved in Round 1, confirming a clear case of catastrophic forgetting.

[Figure 6: Iterative fine-tuning and forgetting. Accuracy of Qwen-1.5B after the first and second evolution rounds at four sampling temperatures (1.0, 0.7, 0.4, 0.0) on GSM8K and GSM-Plus.]

Lowering the temperature stabilises training: at T = 0.4, Round-2 accuracy is within 0.9 pts of Round 1 on GSM-Plus and almost fully recovers on GSM8K; a deterministic schedule (T = 0.0) even adds +3.3 pts on GSM8K but plateaus on GSM-Plus. The mechanism is visible in the KL divergence between successive students. At T = 1.0 we measure KL_evo = 0.37 for Qwen-1.5B, whereas T = 0.4 cuts this to 0.19 and T = 0.0 to 0.11, matching the reduction in forgetting. We therefore adopt a linear decay from 0.7 in Round 1 to 0.3 in later rounds for all models up to 3B parameters; larger models did not require temperature adjustment.
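A minimal sketch of this linear schedule (the function name, 1-based round indexing, and the total number of rounds are our assumptions; the text specifies only the 0.7 and 0.3 endpoints):

```python
def debate_temperature(round_idx, t_start=0.7, t_end=0.3, n_rounds=3):
    """Linearly decay sampling temperature from t_start in round 1
    to t_end in the final evolution round, clamping afterwards.

    Lower temperature in later rounds limits policy drift between
    successive students (hypothetical n_rounds; the paper does not
    fix a total round count).
    """
    if n_rounds <= 1:
        return t_end
    frac = (round_idx - 1) / (n_rounds - 1)  # 0.0 in round 1, 1.0 in last round
    frac = min(max(frac, 0.0), 1.0)
    return t_start + (t_end - t_start) * frac
```

Under this sketch, round 1 samples at 0.7, an intermediate round at 0.5, and the final round at 0.3, after which the schedule stays at the floor.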
5 Conclusion

In this paper, we introduced the DEBATE, TRAIN, EVOLVE (DTE) framework, a novel approach enabling language models to autonomously enhance their reasoning capabilities by leveraging multi-agent debate traces. Our REFLECT-CRITIQUE-REFINE prompting strategy significantly improved debate quality, reducing sycophancy and reasoning errors. Experiments demonstrated substantial accuracy gains, notably an average improvement of 8.92% accuracy on the challenging GSM-PLUS dataset. Additionally, we showed strong cross-domain generalization, confirming that our approach captures general reasoning skills rather than dataset-specific patterns. Importantly, DTE effectively combines the benefits of multi-agent debate with the computational efficiency of single-model inference.

Limitations

Despite its effectiveness, our approach has certain limitations. First, iterative fine-tuning within the DTE framework can cause catastrophic forgetting, particularly evident in smaller language models (<3B parameters), leading to potential model collapse. Although we explored several mitigation strategies, completely eliminating this issue remains challenging. Second, our framework assumes the availability of high-quality initial debate traces; thus, its efficacy may degrade if debates are of poor quality or if initial agent performance is weak. Third, our study primarily focused on structured reasoning
tasks like mathematical and commonsense reasoning. The applicability and effectiveness of DTE on less structured or more open-ended tasks, such as natural language generation or dialogue systems, require further investigation. Lastly, although computationally efficient compared to traditional MAD setups, DTE still incurs higher training costs than standard single-model fine-tuning. Future work should aim to optimize the framework further, enhancing its practicality and accessibility.

Ethics Statement

This study explores the self-evolution of language models using publicly available benchmarks and datasets such as GSM8K, ARC, and CommonsenseQA. All data used in our experiments are non-sensitive and freely accessible, ensuring compliance with ethical research standards and reproducibility. Our method involves fine-tuning on model-generated content, without introducing or relying on any human-annotated private data.

Acknowledgements

This work was supported by NSF NAIRR Pilot with PSC Neocortex, NCSA Delta; Amazon, Cisco Research, Commonwealth Cyber Initiative, Amazon–Virginia Tech Center for Efficient and Robust Machine Learning, and Sanghani Center for AI and Data Analytics at Virginia Tech. The views, findings, conclusions, and recommendations expressed in this work are those of the authors and do not necessarily reflect the opinions of the funding agencies.

References

Marah Abdin, Sahaj Agarwal, Ahmed Awadallah, Vidhisha Balachandran, Harkirat Behl, Lingjiao Chen, Gustavo de Rosa, Suriya Gunasekar, Mojan Javaheripi, Neel Joshi, and 1 others. 2025. Phi-4-reasoning technical report. arXiv preprint arXiv:2504.21318.

Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J Hewett, Mojan Javaheripi, Piero Kauffmann, and 1 others. 2024. Phi-4 technical report. arXiv preprint arXiv:2412.08905.

Justin Chih-Yao Chen, Swarnadeep Saha, and Mohit Bansal. 2023.
Reconcile: Round-table conference improves reasoning via consensus among diverse LLMs. arXiv preprint arXiv:2309.13007.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. Preprint, arXiv:1803.05457.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. Preprint, arXiv:2110.14168.

Caia Costello, Simon Guo, Anna Goldie, and Azalia Mirhoseini. 2025. Think, prune, train, improve: Scaling reasoning without scaling models. arXiv preprint arXiv:2504.18116.

Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. QLoRA: Efficient finetuning of quantized LLMs. Preprint, arXiv:2305.14314.

Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. In Forty-first International Conference on Machine Learning.

Andrew Estornell, Jean-Francois Ton, Yuanshun Yao, and Yang Liu. 2024. Acc-debate: An actor-critic approach to multi-agent debate. arXiv preprint arXiv:2411.00053.

Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, and Weizhu Chen. 2023. CRITIC: Large language models can self-correct with tool-interactive critiquing. ArXiv, abs/2305.11738.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. Preprint, arXiv:2106.09685.

Shuyang Jiang, Yuhao Wang, and Yu
Wang. 2023. Self-evolve: A code evolution framework via large language models. ArXiv, abs/2306.02907.

Suhas Kotha, Jacob Mitchell Springer, and Aditi Raghunathan. 2024. Understanding catastrophic forgetting in language models via implicit inference. Preprint, arXiv:2309.10105.

Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with PagedAttention. Preprint, arXiv:2309.06180.

Qintong Li, Leyang Cui, Xueliang Zhao, Lingpeng Kong, and Wei Bi. 2024. GSM-Plus: A comprehensive benchmark for evaluating the robustness of LLMs as mathematical problem solvers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2961–2984, Bangkok, Thailand. Association for Computational Linguistics.

Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, and Zhaopeng Tu. 2023. Encouraging divergent thinking in large language models through multi-agent debate. arXiv preprint arXiv:2305.19118.

Jianqiao Lu, Wanjun Zhong, Wenyong Huang, Yufei Wang, Qi Zhu, Fei Mi, Baojun Wang, Weichao Wang, Xingshan Zeng, Lifeng Shang, and 1 others. 2023. SELF: Self-evolution with language feedback. arXiv preprint arXiv:2310.00533.

Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. 2025. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. Preprint, arXiv:2308.08747.

Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, and 1 others. 2023. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36:46534–46594.

Someen Park, Jaehoon Kim, Seungwan Jin, Sohyun Park, and Kyungsik Han. 2024.
Predict: Multi-agent-based debate simulation for generalized hate speech detection. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20963–20987.

Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Lidén, Zhou Yu, Weizhu Chen, and Jianfeng Gao. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. ArXiv, abs/2302.12813.

Leonardo Ranaldi and Andrè Freitas. 2024. Self-refine instruction-tuning for aligning reasoning in language models. arXiv preprint arXiv:2405.00402.

Keita Saito, Akifumi Wachi, Koki Wataoka, and Youhei Akimoto. 2023. Verbosity bias in preference labeling by large language models. arXiv preprint arXiv:2310.10076.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. Preprint, arXiv:2402.03300.

Andries Smit, Paul Duckworth, Nathan Grinsztajn, Thomas D Barrett, and Arnu Pretorius. 2023. Should we be going MAD? A look at multi-agent debate strategies for LLMs. arXiv preprint arXiv:2311.17371.

Gaurav Srivastava, Shuxiang Cao, and Xuan Wang. 2025. Towards reasoning ability of small language models. arXiv preprint arXiv:2502.11569.

Vighnesh Subramaniam, Yilun Du, Joshua B Tenenbaum, Antonio Torralba, Shuang Li, and Igor Mordatch. 2025. Multiagent finetuning: Self improvement with diverse reasoning chains. arXiv preprint arXiv:2501.05707.

Alon
Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.

Qineng Wang, Zihao Wang, Ying Su, Hanghang Tong, and Yangqiu Song. 2024. Rethinking the bounds of LLM reasoning: Are multi-agent discussions the key? In Annual Meeting of the Association for Computational Linguistics.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560.

Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D Goodman. 2024. STaR: Self-taught reasoner bootstrapping reasoning with reasoning. In Proc. the 36th International Conference on Neural Information Processing Systems, volume 1126.

Weihao Zeng, Can Xu, Yingxiu Zhao, Jian-Guang Lou, and Weizhu Chen. 2024. Automatic instruction evolving for large language models. Preprint, arXiv:2406.00770.

Contents of the Appendix

A Datasets Details
B Implementation Details
C REFLECT–CRITIQUE–REFINE Prompt Design
D Additional Self-Evolution Results
  D.1 Complete GRPO results (all steps, temperature)
  D.2 Complete Round 2 MAD Results
  D.3 GRPO round 2 results
  D.4 Complete Round 3 MAD Results
  D.5 Complete Cross Domain Task Results
E Complete Results of Large-scale Empirical Study on MAD using RCR Prompting
  E.1 Evaluation Metrics and Methodology
  E.2 Overview of Results Organization
  E.3 Key Findings and Patterns
    E.3.1 Impact of Agent Settings
    E.3.2 Cross-Model Debate Dynamics
    E.3.3 Three-Agent Debate Effectiveness
    E.3.4 Dataset-Specific Patterns
  E.4 Conclusion
F Additional Results
  F.1 Original MAD Results
  F.2 Majority Vote@3 Results
  F.3 Scaling Results for Multiple Agents

A Datasets Details

We evaluate our Multi-Agent Debate (MAD) approach on five diverse reasoning benchmarks. In the following, we briefly describe each dataset along with its splits. In this paper, we use the test split to evaluate all Small Language Models (SLMs). Table 5 summarizes the splits for each dataset.

Dataset         Train   Validation   Test
GSM8K           7,473   –            1,319
GSM+            –       10,552       2,400
ARC-Easy        2,251   570          2,376
ARC-Challenge   1,119   299          1,172
CommonsenseQA   9,741   1,221        1,140

Table 5: Dataset splits and example counts. Note that GSM8K is provided with only training and test splits.

GSM8K (Cobbe et al., 2021) is a collection of high-quality grade school math word problems that require multi-step reasoning. In the main configuration, the dataset contains a total of 8,792 examples, with 7,473 examples in the training split and 1,319 examples in the test split. These problems typically require between 2 and 8 steps to solve, making it an excellent benchmark for evaluating mathematical reasoning capabilities.

GSM-Plus (Li et al., 2024) extends the GSM8K benchmark with more challenging and diverse mathematical word problems.
GSM+ problems generally require more sophisticated multi-step reasoning and often involve more complex mathematical concepts than those in the original GSM8K dataset.

ARC (Clark et al., 2018) comprises two subsets of multiple-choice science questions:

• ARC-Easy: Contains 2,251 train, 570 validation, and 2,376 test examples. These questions are answerable by most
middle school students and test basic science knowledge.

• ARC-Challenge: Contains 1,119 train, 299 validation, and 1,172 test examples. These questions are more challenging and typically answered incorrectly by both retrieval-based algorithms and word co-occurrence algorithms.

CommonsenseQA (Talmor et al., 2019) requires commonsense reasoning to answer multiple-choice questions. It has 9,741 training examples, 1,221 validation examples, and 1,140 test examples. The questions are specifically designed to test commonsense knowledge and reasoning capabilities that go beyond simple factual recall.

B Implementation Details

Training. For model fine-tuning, we used GRPO to enhance language models on the targeted reasoning tasks. Our training pipeline utilized the Unsloth (https://github.com/unslothai/unsloth) and TRL (https://github.com/huggingface/trl) libraries for efficient parameter-efficient fine-tuning with QLoRA (Dettmers et al., 2023). Models were trained with a LoRA (Hu et al., 2021) rank of 128 and target modules including the query, key, value, output, gate, up, and down projections. We used the 8-bit Adam optimizer with beta parameters of (0.9, 0.99) and a weight decay of 0.1. The learning rate was set to 5e-6 with a cosine decay schedule and a 10% warmup ratio. Training proceeded for 10,000 steps with a per-device batch size of 8.

To improve output formatting, we implemented a multi-component reward function consisting of: (1) an answer correctness reward; (2) format adherence rewards for XML tag structure; (3) a numeric response reward; and (4) a tag-counting reward to incentivize proper tag usage. Each model was instructed to output responses in a structured XML format with separate <reasoning> and <answer> tags to facilitate consistent answer extraction and evaluation. To manage memory constraints on high-end GPUs, we set the maximum sequence length to 2048 tokens, with a 512-token maximum for prompts and 1536 tokens for model completions.
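The four reward components described above might be combined as in the following sketch; the component weights, regular expressions, and function names are our assumptions for illustration, not the released implementation:

```python
import re

def extract_answer(completion: str):
    """Pull the contents of the <answer> tag, if present."""
    m = re.search(r"<answer>\s*(.*?)\s*</answer>", completion, re.DOTALL)
    return m.group(1) if m else None

def reward(completion: str, gold: str) -> float:
    """Sum the four reward components: correctness, XML-format
    adherence, numeric response, and tag counting. Weights here
    are illustrative, not the paper's values."""
    r = 0.0
    ans = extract_answer(completion)
    if ans is not None and ans == gold:
        r += 2.0                                   # (1) answer correctness
    if re.search(r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>",
                 completion, re.DOTALL):
        r += 0.5                                   # (2) format adherence
    if ans is not None and re.fullmatch(r"-?\d+(\.\d+)?", ans):
        r += 0.5                                   # (3) numeric response
    for tag in ("<reasoning>", "</reasoning>", "<answer>", "</answer>"):
        if completion.count(tag) == 1:             # (4) tag counting
            r += 0.125
    return r

good = "<reasoning>3 * 4 = 12</reasoning><answer>12</answer>"
```

A well-formed correct completion like `good` earns every component, while a completion with no tags earns nothing, which is the intended pressure toward both correctness and parseable structure.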
Inference. We conducted all model inferences using NVIDIA H100-80GB, A100-80GB, L40-48GB, and A40-48GB GPUs. For efficient inference, we used the vLLM library (Kwon et al., 2023), dynamically allocating the required number of GPUs to load each model. Multi-GPU utilization was enabled using Hugging Face Accelerate for model sharding and speed optimization.

C REFLECT–CRITIQUE–REFINE Prompt Design

Prompt 1: RCR Prompting for Math Reasoning Datasets (GSM8K, GSM-Plus)

Prompt Template
You are Agent {self.agent_id} in a multi-agent debate to solve the following math problem:
Problem: {question}
{own_previous}
Here are the solutions from other agents:
{context}
This is debate round {round_num}. Please carefully analyze all solutions, including your own, identify any errors in reasoning, and provide your revised solution.
• If you believe your previous answer is correct, explain why and defend it.
• If you believe you made an error, explain the error and provide a corrected solution.
• If you believe another agent's answer is correct, explain why you agree with it.
Your final answer must be in the format {answer} at the end.

Prompt 2: RCR Prompting for Science Reasoning Datasets (ARC-E, ARC-C)

Prompt Template
You are Agent {self.agent_id} in a multi-agent debate to solve the following scientific problem:
Problem: {question}
{own_previous}
Here are the solutions from other agents:
{context}
This is debate round {round_num}. Please carefully analyze all solutions, including your own, identify any misconceptions or flawed scientific reasoning, and provide your revised solution.
• If you believe your previous answer is correct, explain the scientific principles supporting your answer.
• If you believe you made an error, explain the scientific misconception and provide a corrected solution.
• If you believe another agent's answer is correct, explain
why their scientific reasoning is sound.
Your final answer must be in the format {answer} at the end.

vLLM: https://docs.vllm.ai/en/latest/
Accelerate: https://github.com/huggingface/accelerate

Prompt 3: RCR Prompting for Commonsense Reasoning Datasets (CSQA)

Prompt Template
You are Agent {self.agent_id} in a multi-agent debate to solve the following commonsense reasoning problem:
Problem: {question}
{own_previous}
Here are the solutions from other agents:
{context}
This is debate round {round_num}. Please carefully analyze all solutions, including your own, identify any flawed assumptions or logical inconsistencies, and provide your revised solution.
• If you believe your previous answer is correct, explain the logical reasoning and real-world knowledge supporting it.
• If you believe you made an error, explain the flawed assumption or inconsistency and provide a corrected solution.
• If you believe another agent's answer is correct, explain why their reasoning aligns with commonsense knowledge.
Your final answer must be in the format {answer} at the end.

D Additional Self-Evolution Results

In this section, we present a comprehensive analysis of our DEBATE, TRAIN, EVOLVE framework across multiple experimental settings. We first examine the impact of various GRPO configurations, followed by analyses of multi-round training effects, and finally cross-domain generalization results. Our experiments utilize a diverse set of models ranging from 1.5B to 14B parameters and evaluate performance on challenging reasoning benchmarks including GSM8K, GSM-Plus, ARC-Challenge, ARC-Easy, and CommonsenseQA.

D.1 Complete GRPO results (all steps, temperature)

We begin by investigating how different GRPO hyperparameters affect model performance. Tables 6, 7, and 8 present results across three datasets (GSM8K, GSM-Plus, and ARC-Challenge) for six different model configurations, varying training steps (2000, 5000, and 10000) and sampling temperatures (0.8 and 0.2). Several key patterns emerge from these results.
First, we observe that larger models (7B+) generally maintain or improve their performance through GRPO fine-tuning, while smaller models (particularly Llama-3B) occasionally exhibit catastrophic forgetting at higher step counts. Second, lower temperature (0.2) typically yields more stable optimization trajectories for most model configurations, especially at higher step counts. This supports our hypothesis that constraining policy drift during fine-tuning is crucial for successful reasoning evolution.

Notably, the Qwen-2.5-3B model demonstrates remarkable stability across configurations, with consistent performance gains on GSM-Plus (from 61.75% to 69.50%) and robust maintenance of GSM8K performance. In contrast, the Llama-3B model shows significant performance degradation at higher step counts with 0.8 temperature, dropping to near-random performance (2.73%) after 10000 steps on GSM8K, while maintaining better stability at 0.2 temperature.

For ARC-Challenge, we observe that all models benefit from MAD evolution, with particularly strong gains for Qwen-2.5-7B (from 87.22% to 91.64%) and Llama-8B (from 77.65% to 85.07%). These results suggest that our framework effectively generalizes across both mathematical reasoning and scientific question-answering domains.

D.2 Complete Round 2 MAD Results

After the first round of GRPO fine-tuning, we evaluated the performance of models in a multi-agent debate setting to assess how evolution affects collaborative reasoning. Table 10 presents these results across different debate configurations: exponential temperature scaling (Exp), default settings (Default), temperature-4 settings (temp4), and deterministic setting (Det).

The MAD Round 2 results demonstrate that evolved models generally maintain their collaborative reasoning capabilities after GRPO fine-tuning. For most models, MAD performance after evolution either improves or remains comparable to the original MAD results. The
Qwen-2.5-7B model, for instance, achieves 77.75% accuracy on GSM-Plus under the temp4 configuration, which represents a 3.58% improvement over its original MAD performance.

Interestingly, we observe that different debate configurations yield varying results across model sizes. Smaller models like Qwen-2.5-1.5B show significant performance variation across configurations, with deterministic settings yielding the best results (69.07% on GSM8K and 56.62% on GSM-Plus). In contrast, larger models like Qwen-2.5-7B demonstrate more consistent performance across configurations. The exponential temperature scaling configuration generally underperforms other settings, particularly for smaller models. This suggests that controlled diversity in debate is beneficial, but excessive exploration may hinder collaborative reasoning effectiveness.

D.3 GRPO round 2 results

To investigate the effects of iterative evolution, we conducted a second round of GRPO fine-tuning on models that had already undergone one round of evolution. Table 9 presents these results for four model configurations across two datasets (GSM8K and GSM-Plus).

The second round of GRPO training reveals interesting dynamics in model evolution. For the Qwen family of models, we observe continued performance improvements or stability across most configurations. The Qwen-2.5-7B model, for instance, achieves further gains on GSM-Plus, reaching 73.75% accuracy (a 5.13% improvement over its first round GRPO performance). However, the Llama-3B model exhibits significant performance degradation in certain configurations, particularly at higher step counts with 0.8 temperature (dropping to 35.63% on GSM8K and 23.02% on GSM-Plus). This reinforces our finding that smaller models are more sensitive to optimization instability during iterative fine-tuning.
Importantly, using a lower temperature of 0.2 substantially mitigates this issue, allowing the Llama-3B model to maintain competitive performance (73.62% on GSM8K) even after two rounds of evolution. These results highlight the importance of careful hyperparameter selection during iterative self-evolution, particularly for smaller models that may be more susceptible to catastrophic forgetting or excessive policy drift.

D.4 Complete Round 3 MAD Results

To investigate the long-term stability of collaborative reasoning capabilities through multiple evolution iterations, we conducted a third round of multi-agent debate after the second round of GRPO fine-tuning. Table 11 presents these results for three Qwen models across the same four debate configurations.

The Round 3 MAD results reveal interesting trends in iterative evolution. For the Qwen-2.5-3B and Qwen-2.5-7B models, performance remains relatively stable across debate configurations, indicating robust retention of reasoning capabilities through multiple fine-tuning iterations. However, the Qwen-2.5-1.5B model shows more variable performance, particularly under the exponential temperature scaling configuration, where it drops to 44.28% on GSM8K.

Notably, the deterministic debate setting (Det) consistently produces the best or near-best performance across all models and datasets, suggesting that reduced randomness in collaborative reasoning becomes increasingly important after multiple evolution rounds. This aligns with our hypothesis that controlling policy drift is crucial for successful iterative evolution. The stability of larger models (3B+) across multiple evolution rounds indicates that our DEBATE, TRAIN, EVOLVE framework can support continuous improvement without substantial performance degradation when applied to sufficiently capable base models.

D.5 Complete Cross Domain Task Results

A key question for self-evolution frameworks is whether improvements generalize beyond the training domain.
Table 12 presents results for models fine-tuned
on either GSM8K or GSM-Plus and evaluated on multiple out-of-domain tasks including ARC-Easy, ARC-Challenge, and CommonsenseQA.

The cross-domain results reveal impressive generalization capabilities. Models fine-tuned on mathematical reasoning tasks (GSM8K and GSM-Plus) show substantial performance improvements not only on the alternative math dataset but also on science and commonsense reasoning benchmarks. For instance, the Qwen-2.5-14B model fine-tuned on GSM8K achieves 98.19% accuracy on ARC-Easy, 93.69% on ARC-Challenge, and 83.70% on CommonsenseQA.

Interestingly, models fine-tuned on GSM-Plus generally perform better on GSM8K than vice versa. For example, the Qwen-2.5-1.5B model achieves 73.09% on GSM8K when fine-tuned on GSM-Plus, but only 51.21% on GSM-Plus when fine-tuned on GSM8K. This asymmetry suggests that GSM-Plus may require more diverse reasoning strategies that transfer well to simpler tasks.

The strong cross-domain performance demonstrates that our DEBATE, TRAIN, EVOLVE framework does not simply optimize for specific datasets but instead enhances fundamental reasoning capabilities that generalize across tasks. This is a critical advantage over traditional supervised fine-tuning approaches that often exhibit limited transferability.

Model           Base Train   Base Test   MAD     GRPO T=0.8 (2k / 5k / 10k)   GRPO T=0.2 (2k / 5k / 10k)
Qwen-2.5-1.5B   81.55        62.77       72.33   67.78 / 71.42 / 71.04        73.09 / 66.49 / 53.98
Qwen-2.5-3B     91.28        84.08       85.14   85.06 / 85.14 / 86.13        84.00 / 86.05 / 84.38
Qwen-2.5-7B     94.29        90.67       91.21   88.32 / 86.73 / 84.00        86.96 / 86.35 / 88.02
Llama-3B        83.90        72.55       73.84   69.22 / 21.53 / 2.73         72.40 / 75.06 / 3.26
Llama-8B        89.08        81.73       82.18   84.61 / 85.29 / 85.22        86.81 / 84.91 / 0.15
Qwen-2.5-14B    94.89        92.80       93.33   87.72 / 89.84 / 91.81        86.58 / 89.34 / 93.74

Table 6: Complete GRPO Results on GSM8K Dataset. Results show accuracy (%) for different models under various GRPO configurations.
Training hyperparameters include a learning rate of 5e-6 and a context length of 256 tokens. MAD refers to Multi-Agent Debate baseline performance.

Model          Base (Train / Test)  MAD    GRPO T=0.8 (2k / 5k / 10k)  GRPO T=0.2 (2k / 5k / 10k)
Qwen-2.5-1.5B  42.40 / 42.00        51.62  47.49 / 54.46 / 19.00       52.33 / 53.04 / 55.92
Qwen-2.5-3B    61.14 / 61.75        67.79  66.21 / 66.71 / 69.13       64.04 / 67.25 / 68.25
Qwen-2.5-7B    68.27 / 68.62        74.17  64.71 / 73.38 / 74.71       67.75 / 72.54 / 74.50
Llama-3B       47.68 / 45.67        51.12  52.38 / 53.29 / 52.33       51.79 / 49.54 / 53.79
Llama-8B       58.56 / 55.62        60.79  64.96 / 61.58 / 66.17       65.08 / 63.46 / 60.46
Qwen-2.5-14B   71.11 / 71.79        77.25  70.79 / 73.54 / 75.88       73.00 / 73.42 / 75.62

Table 7: Complete GRPO Results on GSM-Plus Dataset. Results show accuracy (%) for different models under various GRPO configurations on the more challenging GSM-Plus dataset. Training hyperparameters include a learning rate of 5e-6.

Model          Base (Train / Test)  MAD    GRPO T=0.8 (2k / 5k / 10k)  GRPO T=0.2 (2k / 5k / 10k)
Qwen-2.5-1.5B  — / 69.21            68.52  30.03 / 62.63 / 68.36       47.27 / 51.88 / 67.51
Qwen-2.5-3B    — / 83.53            84.64  81.66 / 80.29 / 83.63       81.91 / 79.78 / 83.95
Qwen-2.5-7B    — / 87.22            91.64  88.57 / 88.48 / 90.63       88.43 / 88.57 / 90.89
Llama-3B       — / 73.12            76.19  75.51 / 74.32 / 76.87       76.79 / 74.57 / 77.23
Llama-8B       — / 77.65            85.07  83.70 / 84.45 / 86.03       84.98 / 85.53 / 86.53
Qwen-2.5-14B   — / 90.27            93.77  91.81 / 92.49 / 93.13       91.47 / 91.47 / 92.67

Table 8: Complete GRPO Results on ARC-Challenge Dataset. Results show accuracy (%) for different models under various GRPO configurations on the ARC-Challenge dataset. Training hyperparameters include a learning rate
of 5e-6 and a context length of 128 tokens. Base train performance was not evaluated for this dataset.

Model          Dataset   GRPO Round 2, T=0.8 (2k / 5k)  GRPO Round 2, T=0.2 (2k / 5k)
Qwen-2.5-1.5B  GSM8K     65.73 / 68.54                  69.98 / 72.18
Qwen-2.5-1.5B  GSM-Plus  47.38 / 50.12                  46.37 / 48.04
Qwen-2.5-3B    GSM8K     84.84 / 86.05                  84.46 / 84.08
Qwen-2.5-3B    GSM-Plus  65.71 / 67.96                  65.67 / 67.00
Qwen-2.5-7B    GSM8K     86.28 / 87.19                  88.17 / 87.34
Qwen-2.5-7B    GSM-Plus  69.42 / 73.75                  70.54 / 73.12
Llama-3B       GSM8K     55.88 / 35.63                  73.62 / 64.29
Llama-3B       GSM-Plus  48.75 / 23.02                  52.42 / 25.08

Table 9: Complete GRPO Round 2 Results. Results show accuracy (%) after the second round of GRPO training across different step counts and temperature settings. All models were trained with a learning rate of 5e-6 and a context length of 128 tokens.

Model          Dataset   Exp    Default  temp4  Det
Qwen-2.5-1.5B  GSM8K     46.32  66.34    68.61  69.07
Qwen-2.5-1.5B  GSM-Plus  22.09  53.18    55.62  56.62
Qwen-2.5-3B    GSM8K     84.08  86.66    86.35  86.50
Qwen-2.5-3B    GSM-Plus  69.62  70.25    69.67  70.29
Qwen-2.5-7B    GSM8K     91.36  90.75    91.05  89.99
Qwen-2.5-7B    GSM-Plus  76.42  77.00    77.75  77.62
Llama-3B       GSM8K     66.26  75.97    75.51  75.36
Llama-3B       GSM-Plus  53.62  54.58    55.96  56.04
Llama-8B       GSM8K     84.69  85.90    86.96  85.60
Llama-8B       GSM-Plus  65.00  65.92    66.46  66.50

Table 10: Complete MAD Round 2 Results. Results show accuracy (%) for different models in multi-agent debate after the first round of GRPO fine-tuning. Exp = exponential temperature scaling, Default = standard configuration, temp4 = temperature-4 settings, Det = deterministic configuration.

Model          Dataset   Exp    Default  temp4  Det
Qwen-2.5-1.5B  GSM8K     44.28  60.65    67.70  72.40
Qwen-2.5-1.5B  GSM-Plus  35.54  48.62    51.67  51.75
Qwen-2.5-3B    GSM8K     83.78  85.60    85.75  86.13
Qwen-2.5-3B    GSM-Plus  63.67  63.42    64.16  64.47
Qwen-2.5-7B    GSM8K     89.76  91.05    90.90  91.13
Qwen-2.5-7B    GSM-Plus  69.67  69.85    70.50  69.88

Table 11: Complete MAD Round 3 Results. Results show accuracy (%) for different models in multi-agent debate after the second round of GRPO fine-tuning.
Exp = exponential temperature scaling, Default = standard configuration, temp4 = temperature-4 settings, Det = deterministic configuration.

Model          Fine-tuned on  GSM8K  GSM-Plus  ARC-Easy  ARC-Challenge  CommonsenseQA
Qwen-2.5-1.5B  GSM8K          —      51.21     85.02     69.88          64.29
Qwen-2.5-1.5B  GSM-Plus       73.09  —         85.10     69.45          64.21
Qwen-2.5-3B    GSM8K          —      65.54     93.94     84.30          75.92
Qwen-2.5-3B    GSM-Plus       86.50  —         94.15     84.13          75.92
Qwen-2.5-7B    GSM8K          —      69.63     96.42     91.72          82.96
Qwen-2.5-7B    GSM-Plus       91.81  —         96.38     90.87          82.88
Llama-3B       GSM8K          —      52.38     87.12     72.01          68.14
Llama-3B       GSM-Plus       76.35  —         86.57     69.20          68.55
Llama-8B       GSM8K          —      63.75     93.01     84.39          74.12
Llama-8B       GSM-Plus       86.88  —         93.98     85.49          73.87
Qwen-2.5-14B   GSM8K          —      73.46     98.19     93.69          83.70
Qwen-2.5-14B   GSM-Plus       93.33  —         97.98     94.28          82.23

Table 12: Complete Cross-Domain Task Results. Results show accuracy (%) on various datasets after fine-tuning on either GSM8K or GSM-Plus. Dashes (—) indicate that evaluation was not performed on the same dataset used for fine-tuning.

E Complete Results of Large-scale Empirical Study on MAD using RCR Prompting

This section presents a comprehensive analysis of our large-scale empirical investigation into Multi-Agent Debate (MAD) using Recursive Critical Reflection (RCR) prompting across five diverse benchmarks: GSM8K, GSM-Plus, ARC-Easy, ARC-Challenge, and CommonsenseQA. Through extensive experimentation involving various model combinations and parameter settings, we evaluate how collaborative reasoning among multiple language model agents affects problem-solving performance.

E.1 Evaluation Metrics and Methodology

To facilitate systematic comparison and analysis of debate outcomes, we track the following key metrics across all debate configurations:

•Accuracy: The primary performance measure, representing the percentage of problems correctly
solved after the debate process concludes.

•∆ (Performance Delta): Measures the performance change relative to appropriate baselines. We report several variants, including:
  –∆ (vs Base): Change compared to the single agent's performance
  –∆ (vs Lower Agent): Change compared to the lower-performing agent in cross-agent debates
  –∆ (vs Upper Agent): Change compared to the better-performing agent in cross-agent debates
  –∆ (vs Lowest): Change compared to the lowest-performing agent in three-agent settings

•Debate Rounds: The average number of interaction rounds required to reach consensus or the maximum allowed limit, indicating debate efficiency.

•Sycophancy: A normalized measure (per data point) quantifying the tendency of agents to abandon their answers in favor of matching another agent's previous response, providing insight into social influence dynamics.

•State Transitions: Tracked as C→I (correct to incorrect) and I→C (incorrect to correct) counts, these reveal the qualitative nature of answer changes during debate.

•Debate Helped: The overall count of instances where the debate process improved the final outcome compared to initial responses.
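As a concrete illustration, the sycophancy, state-transition, and debate-helped counts above can be computed from per-round answer logs roughly as follows. This is a minimal sketch, not our actual logging code; the trace format (a gold answer plus per-round lists of per-agent answers) is an assumption made for the example:

```python
# Minimal sketch of the debate metrics above. The trace format (a gold
# answer plus per-round lists of per-agent answers) is an assumption.

def majority(answers):
    # Most common answer; ties are broken arbitrarily.
    return max(set(answers), key=answers.count)

def debate_metrics(traces):
    syco = c_to_i = i_to_c = helped = 0
    for t in traces:
        gold, rounds = t["gold"], t["rounds"]
        # Sycophancy: an agent abandons its own answer for one that some
        # other agent gave in the previous round.
        for r in range(1, len(rounds)):
            prev, curr = rounds[r - 1], rounds[r]
            for i, ans in enumerate(curr):
                others_prev = [a for j, a in enumerate(prev) if j != i]
                if ans != prev[i] and ans in others_prev:
                    syco += 1
        # State transitions: majority answer before vs. after debate.
        first_ok = majority(rounds[0]) == gold
        final_ok = majority(rounds[-1]) == gold
        if first_ok and not final_ok:
            c_to_i += 1
        elif final_ok and not first_ok:
            i_to_c += 1
            helped += 1  # debate turned an initially wrong answer right
    return {"sycophancy_per_item": syco / len(traces),
            "C->I": c_to_i, "I->C": i_to_c, "debate_helped": helped}
```

Dividing the sycophancy count by the number of items corresponds to the "(Avg / 1319)" and "(Avg / 2400)" normalizations used in the tables below.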
Our evaluation spans multiple dimensions of agent configuration:

•Agent Settings: We systematically vary the temperature parameter across four settings:
  –Default: Balanced temperature
  –Deterministic (Det.): Lower temperature for more consistent outputs
  –Exploratory (Exp.): Higher temperature for more diverse responses
  –Mixed: Combinations of the above settings across different agents

•Debate Structures: We investigate four primary debate configurations:
  –Single-Model Debate: Multiple instances of the same model with varied parameter settings
  –Cross-Agent Debate: Two different models debating with various parameter settings
  –Three Identical Agents: Three instances of the same model with potentially different settings
  –Three Varied Agents: Three different models engaging in debate

E.2 Overview of Results Organization

Our extensive experimental results are organized in Tables 13-32, systematically covering all five datasets with the four debate configurations described above. For each dataset, we present:

• Table set 1 (Tables 13-16): Performance on GSM8K
• Table set 2 (Tables 17-20): Performance on GSM-Plus
• Table set 3 (Tables 21-24): Performance on ARC-Easy
• Table set 4 (Tables 25-28): Performance on ARC-Challenge
• Table set 5 (Tables 29-32): Performance on CommonsenseQA

E.3 Key Findings and Patterns

E.3.1 Impact of Agent Settings

Our analysis reveals that agent parameter settings significantly influence debate outcomes across all datasets. While the Default setting provides reliable performance, Exploratory settings often lead to higher variance in outcomes, sometimes yielding exceptional improvements but also risking performance degradation. The Deterministic setting generally produces more consistent but potentially conservative results. The sycophancy metric proves particularly informative, showing higher values in debates between models with substantial performance gaps.
This suggests that lower-performing models tend to defer to higher-performing ones, which can be either beneficial or detrimental depending on the initial state distribution.

E.3.2 Cross-Model Debate Dynamics

In cross-agent debates (Tables 10-14), we find that pairing models with complementary strengths often produces synergistic effects. The ∆ metrics relative to both upper and lower agents reveal important patterns: when a high-performing model debates with a weaker one, the debate outcome typically falls between their individual performances, but closer to the stronger model's baseline. State transitions (C→I and I→C) provide
valuable insight into debate quality. A high I→C rate coupled with a low C→I rate indicates constructive debate where correct reasoning prevails, while the opposite pattern signals problematic dynamics where convincing but incorrect reasoning dominates.

E.3.3 Three-Agent Debate Effectiveness

The introduction of a third agent creates more complex interaction patterns. Three-agent debates consistently show lower sycophancy rates than two-agent settings, suggesting that the presence of multiple perspectives reduces blind conformity. When all three agents are identical, diversity in parameter settings typically outperforms homogeneous settings. In three-varied-agent debates, we find particularly interesting results when combining models of different sizes and architectures. As shown in Table 16, certain combinations like "Qwen-2.5-3B + Phi-mini-3.8B + Llama-3.1-3B" achieve accuracy improvements even over the highest-performing individual agent, suggesting effective complementarity between these models' reasoning approaches.

E.3.4 Dataset-Specific Patterns

Our results indicate substantial variation in debate effectiveness across datasets:

•GSM8K and GSM-Plus: Harder mathematical reasoning tasks (GSM-Plus) show the most consistent benefits from debate, with average debate rounds typically higher than on other datasets, suggesting that step-by-step verification is particularly valuable for these problems.

•ARC-Easy and ARC-Challenge: Multiple-choice science questions reveal interesting patterns where sycophancy is generally lower, but debate can still improve performance when appropriately configured.

•CommonsenseQA: This dataset exhibits unique characteristics where debates tend to conclude more quickly, suggesting that commonsense reasoning may be less amenable to explicit verification through debate.
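The round structure these configurations plug into can be sketched as a simple loop: agents answer independently, then critique and revise until they agree or a round limit is reached. The sketch below is illustrative only; the temperature/top_p values are assumptions, since this appendix names the settings (Default, Deterministic, Exploratory) but does not list their exact values:

```python
# Sketch of the debate round loop. The temperature/top_p values are
# illustrative assumptions, not the actual experimental configuration.

SETTINGS = {
    "default":       {"temperature": 0.7, "top_p": 0.9},   # assumed
    "deterministic": {"temperature": 0.0, "top_p": 1.0},   # assumed
    "exploratory":   {"temperature": 1.0, "top_p": 0.95},  # assumed
}

def debate(agents, question, max_rounds=3):
    """agents: list of (agent_fn, cfg) pairs, where
    agent_fn(question, peer_answers, cfg) returns an answer string."""
    answers = [fn(question, [], cfg) for fn, cfg in agents]
    rounds = 0
    # Critique rounds continue until consensus or the round limit.
    while rounds < max_rounds and len(set(answers)) > 1:
        answers = [fn(question,
                      [a for j, a in enumerate(answers) if j != i], cfg)
                   for i, (fn, cfg) in enumerate(agents)]
        rounds += 1
    # Consensus answer if reached, otherwise majority vote.
    return max(set(answers), key=answers.count), rounds
```

Note that two identical fully deterministic agents produce identical initial answers and terminate in zero rounds, which is consistent with the all-zero rounds, sycophancy, and transition counts in the Deterministic rows of the tables below.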
E.4 Conclusion

Tables 13-32 collectively present a comprehensive empirical foundation for understanding the effects of Multi-Agent Debate using RCR prompting across diverse reasoning tasks. The metrics reveal nuanced patterns in how debate influences performance, with clear evidence that appropriate configuration of debate participants and settings can yield substantial improvements over single-agent performance. The consistent tracking of accuracy, deltas, debate rounds, sycophancy, and state transitions provides a multi-dimensional view of debate quality beyond simple performance measures. These results demonstrate that MAD is not universally beneficial but rather depends critically on the specific combination of models, parameter settings, and problem domains. Our findings establish an important baseline for future research on collaborative reasoning between language models, highlighting both the potential and the challenges of multi-agent approaches to complex problem-solving.

Agent 1        Agent 2        Agent Settings     MAD Acc. (RCR)  ∆       Rounds (Avg)  Syco. (Avg/1319)  C→I  I→C  Helped
Qwen-2.5-0.5B  Qwen-2.5-0.5B  Both: Default      47.38           5.38↑   1.60          1.17              156  251  220
Qwen-2.5-0.5B  Qwen-2.5-0.5B  Both: Deterministic 47.31          5.31↑   0.00          0.00              0    0    0
Qwen-2.5-0.5B  Qwen-2.5-0.5B  Both: Exploratory  39.20           2.8↓    2.19          1.25              185  274  234
Qwen-2.5-0.5B  Qwen-2.5-0.5B  Both: Det. & Exp.  43.14           1.14↑   1.89          1.09              185  262  226
Qwen-2.5-1.5B  Qwen-2.5-1.5B  Both: Default      70.89           8.12↑   0.86          0.70              101  352  317
Qwen-2.5-1.5B  Qwen-2.5-1.5B  Both: Deterministic 63.46          0.69↑   0.00          0.00              0    0    0
Qwen-2.5-1.5B  Qwen-2.5-1.5B  Both: Exploratory  71.57           8.8↑    1.05          0.84              94   449  399
Qwen-2.5-1.5B  Qwen-2.5-1.5B  Both: Det. & Exp.  72.33           9.56↑   0.98          0.71              99   423  377
Qwen-2.5-3B    Qwen-2.5-3B    Both: Default      86.05           0.91↑   0.31          0.21              55   115  104
Qwen-2.5-3B    Qwen-2.5-3B    Both: Deterministic 84.99          0.15↓   0.00          0.00              0    0    0
Qwen-2.5-3B    Qwen-2.5-3B    Both: Exploratory  85.52           0.38↑   0.35          0.26              62   116  103
Qwen-2.5-3B    Qwen-2.5-3B    Both: Det. & Exp.  86.28           1.14↑   0.34          0.19              50   106  101
Qwen-2.5-7B    Qwen-2.5-7B    Both: Default      91.74           1.07↑   0.16          0.13              28   53   49
Qwen-2.5-7B    Qwen-2.5-7B    Both: Deterministic 90.60          0.07↓   0.00          0.00              0    0    0
Qwen-2.5-7B    Qwen-2.5-7B    Both: Exploratory  91.21           0.54↑   0.18          0.15              27   59   57
Qwen-2.5-7B    Qwen-2.5-7B    Both: Det. & Exp.  91.51           0.84↑   0.18          0.15              33   57   55
Qwen-2.5-14B   Qwen-2.5-14B   Both: Default      93.48           0.68↑   0.11          0.13              22   46   43
Qwen-2.5-14B   Qwen-2.5-14B   Both: Deterministic 93.18          0.38↑   0.00          0.00              0    0    0
Qwen-2.5-14B   Qwen-2.5-14B   Both: Exploratory  93.33           0.53↑   0.11          0.12              20   48   48
Qwen-2.5-14B   Qwen-2.5-14B   Both: Det. & Exp.  93.63           0.83↑   0.13          0.15              24   44   39
Qwen-2.5-32B   Qwen-2.5-32B   Both: Default      95.00           0.08↑   0.05          0.06              11   21   20
Qwen-2.5-32B   Qwen-2.5-32B   Both: Deterministic 94.77          0.15↓   0.00          0.00              0    0    0
Qwen-2.5-32B   Qwen-2.5-32B   Both: Exploratory  95.38           0.46↑   0.07          0.08              9    32   31
Qwen-2.5-32B   Qwen-2.5-32B   Both: Det. & Exp.  95.30           0.38    0.04          0.05              12   23   21
Llama-3.1-3B   Llama-3.1-3B   Both: Default      74.91           2.36↑   0.73          0.49              106  208  183
Llama-3.1-3B   Llama-3.1-3B   Both: Deterministic 74.37          1.82↑   0.00          0.00              0    0    0
Llama-3.1-3B   Llama-3.1-3B   Both: Exploratory  72.40           0.15↓   0.94          0.57              138  225  202
Llama-3.1-3B   Llama-3.1-3B   Both: Det. & Exp.  73.84           1.29↑   0.80          0.48              133  193  175
Llama-3.1-8B   Llama-3.1-8B   Both: Default      82.56           0.83↑   0.48          0.38              86   116  105
Llama-3.1-8B   Llama-3.1-8B   Both: Deterministic 81.50          0.23↓   0.00          0.00              0    0    0
Llama-3.1-8B   Llama-3.1-8B   Both: Exploratory  80.67           1.06↓   0.60          0.40              98   162  149
Llama-3.1-8B   Llama-3.1-8B   Both: Det. & Exp.  82.18           0.45↑   0.56          0.39              97   142  126
Phi-mini-3.8B  Phi-mini-3.8B  Both: Default      87.72           0.84↑   0.29          0.27              51   101  95
Phi-mini-3.8B  Phi-mini-3.8B  Both: Deterministic 86.73          0.15↓   0.02          0.00              0    2    1
Phi-mini-3.8B  Phi-mini-3.8B  Both: Exploratory  87.95           1.07↑   0.30          0.26              48   112  99
Phi-mini-3.8B  Phi-mini-3.8B  Both: Det. & Exp.  87.34           0.46↑   0.33          0.26              62   103  95
Mistral-7B     Mistral-7B     Both: Default      33.74           12.36↑  1.65          0.73              101  454  340
Mistral-7B     Mistral-7B     Both: Deterministic 20.02          1.36    0.04          0.00              0    0    0
Mistral-7B     Mistral-7B     Both: Exploratory  35.71           14.33↑  1.85          0.80              110  509  381
Mistral-7B     Mistral-7B     Both: Det. & Exp.  33.51           12.13   1.53          0.68              97   433  334

Table 13: Performance in Multi-Agent Debate Settings on the GSM8K Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters: Default, Deterministic, Exploratory, and a combination) on the MAD Accuracy (RCR Prompting) of various language models. The ∆ column quantifies the improvement (or decline) over the single
base model performance. Further metrics include average Debate Rounds, normalized Sycophancy (per 1319 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics.

Agent 1        Agent 2        Agent Settings           Accuracy  ∆(Lower)  ∆(Upper)  Rounds (Avg)  Syco. (Avg/1319)  C→I  I→C  Helped
Qwen-2.5-0.5B  Qwen-2.5-1.5B  1: Default & 2: Default  62.40     20.4↑     0.37↓     1.52          0.96              168  434  387
Qwen-2.5-0.5B  Qwen-2.5-1.5B  1: Det. & 2: Det.        62.32     20.32↑    0.45↓     1.27          0.72              155  357  323
Qwen-2.5-0.5B  Qwen-2.5-1.5B  1: Exp. & 2: Exp.        58.91     16.91↑    3.86↓     1.95          1.03              175  531  448
Qwen-2.5-0.5B  Qwen-2.5-1.5B  1: Det. & 2: Exp.        60.88     18.88↑    1.89↓     1.54          0.83              147  416  344
Qwen-2.5-0.5B  Qwen-2.5-1.5B  1: Exp. & 2: Det.        61.18     19.18↑    1.59↓     1.67          0.87              164  474  425
Qwen-2.5-1.5B  Llama-3.1-3B   1: Default & 2: Default  76.42     13.65↑    3.87↑     1.09          0.56              107  388  342
Qwen-2.5-1.5B  Llama-3.1-3B   1: Det. & 2: Det.        75.59     12.82↑    3.04↑     1.14          0.36              93   285  258
Qwen-2.5-1.5B  Llama-3.1-3B   1: Exp. & 2: Exp.        76.57     13.8↑     4.02↑     1.17          0.65              96   416  355
Qwen-2.5-1.5B  Llama-3.1-3B   1: Det. & 2: Exp.        75.06     12.29↑    2.51↑     1.22          0.48              111  362  326
Qwen-2.5-1.5B  Llama-3.1-3B   1: Exp. & 2: Det.        76.04     13.27↑    3.49↑     1.12          0.59              129  383  331
Qwen-2.5-3B    Phi-mini-3.8B  1: Default & 2: Default  87.41     2.27↑     0.53↑     0.39          0.22              53   128  114
Qwen-2.5-3B    Phi-mini-3.8B  1: Det. & 2: Det.        85.97     0.83↑     0.91↓     0.43          0.17              74   82   72
Qwen-2.5-3B    Phi-mini-3.8B  1: Exp. & 2: Exp.        88.63     3.49↑     1.75↑     0.44          0.27              46   155  142
Qwen-2.5-3B    Phi-mini-3.8B  1: Det. & 2: Exp.        86.73     1.59↑     0.15↓     0.40          0.20              63   105  99
Qwen-2.5-3B    Phi-mini-3.8B  1: Exp. & 2: Det.        88.10     2.96↑     1.22↑     0.41          0.23              57   135  126
Qwen-2.5-1.5B  Qwen-2.5-3B    1: Default & 2: Default  82.71     19.94↑    2.43↓     0.71          0.51              67   370  359
Qwen-2.5-1.5B  Qwen-2.5-3B    1: Det. & 2: Det.        81.27     18.5↑     3.87↓     0.62          0.48              94   284  275
Qwen-2.5-1.5B  Qwen-2.5-3B    1: Exp. & 2: Exp.        83.17     20.4↑     1.97↓     0.80          0.56              68   414  392
Qwen-2.5-1.5B  Qwen-2.5-3B    1: Det. & 2: Exp.        82.87     20.1↑     2.27↓     0.76          0.48              74   328  310
Qwen-2.5-1.5B  Qwen-2.5-3B    1: Exp. & 2: Det.        82.26     19.49↑    2.88↓     0.75          0.52              82   384  372
Llama-3.1-3B   Llama-3.1-8B   1: Default & 2: Default  78.54     5.99↑     3.19↓     0.77          0.51              122  213  195
Llama-3.1-3B   Llama-3.1-8B   1: Det. & 2: Det.        79.23     6.68↑     2.5↓      0.68          0.48              130  159  143
Llama-3.1-3B   Llama-3.1-8B   1: Exp. & 2: Exp.        77.10     4.55↑     4.63↓     0.93          0.58              127  238  224
Llama-3.1-3B   Llama-3.1-8B   1: Det. & 2: Exp.        79.83     7.28↑     1.9↓      0.81          0.45              123  211  183
Llama-3.1-3B   Llama-3.1-8B   1: Exp. & 2: Det.        77.18     4.63↑     4.55↓     0.87          0.56              141  183  173
Qwen-2.5-7B    Qwen-2.5-14B   1: Default & 2: Default  92.19     1.52↑     0.61↓     0.16          0.13              39   63   61
Qwen-2.5-7B    Qwen-2.5-14B   1: Det. & 2: Det.        92.04     1.37↑     0.76↓     0.17          0.13              47   53   50
Qwen-2.5-7B    Qwen-2.5-14B   1: Exp. & 2: Exp.        93.10     2.43↑     0.3↑      0.16          0.15              33   72   68
Qwen-2.5-7B    Qwen-2.5-14B   1: Det. & 2: Exp.        92.19     1.52↑     0.61↓     0.15          0.11              37   58   58
Qwen-2.5-7B    Qwen-2.5-14B   1: Exp. & 2: Det.        92.80     2.13↑     0.00      0.17          0.16              39   64   60

Table 14: Performance Analysis of Cross-Agent Debates on the GSM8K Dataset. This table details the outcomes of debates between different language models (Agent 1 and Agent 2). Agent Settings specify the configuration (e.g., Default, Deterministic (Det.), Exploratory (Exp.)) applied to Agent 1 and Agent 2 respectively, influencing temperature and top_p parameters. The table presents overall Accuracy, along with ∆(Lower Agent) and ∆(Upper Agent) indicating the performance change for each agent relative to a baseline. Additional metrics include average Debate Rounds, normalized Sycophancy (per 1319 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C) to show debate impact.
24 Agent 1 Agent 2 Agent 3 Agent Settings Accuracy ∆(Improvement) Debate Sycophancy C →I I→C Debate Rounds (Avg / 1319) Helped (Avg) (Overall) Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B All: Default 41.70 0.3↓ 2.77 3 .17 414 .00 393 .00 236 .00 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B All: Deterministic 47.31 5.31↑ 0.00 0 .00 0 .00 0 .00 0 .00 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B All: Exploratory 36.09 5.91↓ 3.47 3 .33 438 .00 450 .00 282 .00 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B 1 Det, 2 Exp 38.36 3.64↓ 3.13 2 .90 412 .00 370 .00 246 .00 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B 2 Det, 1 Exp 43.06 1.06↑ 1.97 1 .42 306 .00 300 .00 211 .00 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B All: Default 72.48 9.71↑ 1.35 1 .64 193 .00 652 .00 469 .00 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B All: Deterministic 63.99 1.22↑ 0.00 0 .00 0 .00 0 .00 0 .00 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B All: Exploratory 75.13 12.36↑ 1.57 1 .82 181 .00 796 .00 547 .00 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B 1 Det, 2 Exp 74.83 12.06↑ 1.51 1 .71 170 .00 741 .00 534 .00 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B 2 Det, 1 Exp 72.25 9.48↑ 0.97 1 .03 131 .00 510 .00 329 .00 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B All: Default 86.96 1.82↑ 0.49 0 .52 85 .00 191 .00 147 .00 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B All: Deterministic 84.99 0.15↓ 0.00 0 .00 0 .00 0 .00 0 .00 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B All: Exploratory 87.64 2.5↑ 0.60 0 .65 85 .00 256 .00 200 .00 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B 1 Det, 2 Exp 86.73 1.59↑ 0.63 0 .56 110 .00 236 .00 179 .00 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B 2 Det, 1 Exp 86.05 0.91↑ 0.40 0 .32 75 .00 130 .00 99 .00 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B All: Default 93.03 2.36↑ 0.22 0 .22 33 .00 110 .00 88 .00 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B All: Deterministic 90.60 0.07↓ 0.00 0 .00 0 .00 0 .00 0 .00 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B All: Exploratory 92.42 1.75↑ 0.24 0 .24 52
https://arxiv.org/abs/2505.15734v1
.00 110 .00 87 .00 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B 1 Det, 2 Exp 92.12 1.45↑ 0.24 0 .24 44 .00 106 .00 86 .00 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B 2 Det, 1 Exp 91.96 1.29↑ 0.17 0 .17 28 .00 76 .00 52 .00 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B All: Default 94.09 1 .29 0 .11 0 .13 18 .00 67 .00 59 .00 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B All: Deterministic 92.95 0 .15 0 .00 0 .00 0 .00 0 .00 0 .00 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B All: Exploratory 94.24 1 .44 0 .14 0 .16 26 .00 88 .00 78 .00 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B 1 Det, 2 Exp 94.31 1 .51 0 .13 0 .16 17 .00 81 .00 68 .00 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B 2 Det, 1 Exp 92.87 0 .07 0 .09 0 .08 30 .00 33 .00 29 .00 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B All: Default 95.30 0 .38 0 .07 0 .07 18 .00 44 .00 39 .00 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B All: Deterministic 94.77 0 .15 0 .00 0 .00 0 .00 0 .00 0 .00 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B All: Exploratory 94.84 0 .08 0 .08 0 .09 21 .00 51 .00 47 .00 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B 1 Det, 2 Exp 95.30 0 .38 0 .07 0 .07 16 .00 49 .00 41 .00 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B 2 Det, 1 Exp 95.22 0 .30 0 .05 0 .05 11 .00 34 .00 24 .00 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B All: Default 88.40 1.52↑ 0.42 0 .55 86 .00 168 .00 129 .00 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B All: Deterministic 86.66 0.22↓ 0.01 0 .01 0 .00 0 .00 0 .00 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B All: Exploratory 88.10 1.22↑ 0.48 0 .59 99 .00 197 .00 145 .00 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B 1 Det, 2 Exp 87.87 0.99↑ 0.46 0 .53 95 .00 178 .00 132 .00 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B 2 Det, 1 Exp 87.72 0.84↑ 0.32 0 .41 64 .00 121 .00 80 .00 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B All: Default 72.63 0.08↑ 1.29 1 .29 265 .00 317 .00 238 .00 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B All: Deterministic 73.16 0.61↑ 0.00 0 .00 0 .00 0 .00 0 .00 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B All: Exploratory 72.78 0.23↑ 1.49 1 .39 246 .00 414 .00 312 .00 
Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B 1 Det, 2 Exp 73.69 1.14↑ 1.39 1 .28 251 .00 407 .00 283 .00 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B 2 Det, 1 Exp 72.93 0.38↑ 1.08 0 .87 203 .00 229 .00 147 .00 Mistral-7B Mistral-7B Mistral-7B All: Default 37.83 16.45↑ 2.37 1 .97 203 .00 894 .00 454 .00 Mistral-7B Mistral-7B Mistral-7B All: Deterministic 20.02 1.36↓ 0.04 0 .00 0 .00 0 .00 0 .00 Mistral-7B Mistral-7B Mistral-7B All: Exploratory 39.27 17.89↑ 2.81 2 .30 189 .00 904 .00 480 .00 Mistral-7B Mistral-7B Mistral-7B 1 Det, 2 Exp 38.89 17.51↑ 2.61 2 .13 222 .00 940 .00 476 .00 Mistral-7B Mistral-7B Mistral-7B 2 Det, 1 Exp 35.33 13.95↑ 1.82 1 .39 135 .00 694 .00 360 .00 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B All: Default 84.23 2.5↑ 0.72 0 .82 135 .00 429 .00 192 .00 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B All: Deterministic 81.50 0.23↓ 0.00 0 .00 0 .00 0 .00 0
https://arxiv.org/abs/2505.15734v1
.00 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B All: Exploratory 83.70 1.97↑ 0.88 0 .89 162 .00 310 .00 230 .00 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B 1 Det, 2 Exp 83.32 1.59↑ 0.86 0 .86 160 .00 284 .00 211 .00 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B 2 Det, 1 Exp 82.26 0.53↑ 0.67 0 .63 129 .00 199 .00 132 .00 Table 15: Performance Analysis of Three Identical Agents Debating on GSM8K. This table shows results when three instances of the same model ( Agent 1 ,Agent 2 ,Agent 3 being identical) engage in a debate. Agent Settings describe the configuration mix across these three agents (e.g., All Default, or a mix like 1 Deterministic (Det), 2 Exploratory (Exp)). Accuracy is the debate outcome, and ∆(Improvement) is the change from the single agent’s baseline. Standard metrics like Debate Rounds , normalized Sycophancy (per 1319 data points), and error transition rates (C →I, I→C) are also included. 25 Agent 1 Agent 2 Agent 3 Agent Settings Accuracy ∆(vs Lowest) Debate Sycophancy C →I I→C Debate Rounds (Avg / 1319) Helped (Avg) (Overall) Qwen-2.5-0.5B Qwen-2.5-1.5B Qwen-2.5-3B All: Default 80.82 4.32↓ 1.81 1 .58 154 .00 859 .00 639 .00 Qwen-2.5-0.5B Qwen-2.5-1.5B Llama-3.1-3B All: Default 69.52 3.03↓ 2.43 1 .76 271 .00 718 .00 508 .00 Qwen-2.5-0.5B Qwen-2.5-1.5B Phi-mini-3.8B All: Default 76.04 10.84↓ 2.20 1 .47 267 .00 727 .00 532 .00 Qwen-2.5-0.5B Qwen-2.5-3B Llama-3.1-3B All: Default 79.15 5.99↓ 2.10 1 .36 184 .00 696 .00 536 .00 Qwen-2.5-0.5B Qwen-2.5-3B Phi-mini-3.8B All: Default 83.62 3.24↓ 1.82 1 .08 150 .00 618 .00 534 .00 Qwen-2.5-0.5B Llama-3.1-3B Phi-mini-3.8B All: Default 76.57 10.31↓ 2.39 1 .16 255 .00 515 .00 402 .00 Qwen-2.5-1.5B Qwen-2.5-3B Llama-3.1-3B All: Default 82.71 2.43↓ 1.24 1 .06 156 .00 544 .00 436 .00 Qwen-2.5-1.5B Qwen-2.5-3B Phi-mini-3.8B All: Default 85.22 1.66↓ 1.08 0 .85 139 .00 460 .00 388 .00 Qwen-2.5-1.5B Llama-3.1-3B Phi-mini-3.8B All: Default 81.20 5.68↓ 1.33 1 .05 196 .00 560 .00 446 .00 Qwen-2.5-3B Phi-mini-3.8B Llama-3.1-3B All: Default 
86.96 0.08↑ 0.89 0 .71 127 .00 372 .00 297 .00 Qwen-2.5-3B Qwen-2.5-3B Phi-mini-3.8B All: Default 87.64 0.76↑ 0.60 0 .55 97 .00 227 .00 175 .00 Qwen-2.5-3B Phi-mini-3.8B Phi-mini-3.8B All: Default 87.79 0.91↑ 0.58 0 .53 111 .00 209 .00 167 .00 Qwen-2.5-0.5B Qwen-2.5-1.5B Qwen-2.5-1.5B All: Default 68.46 5.69↑ 2.10 2 .09 221 .00 795 .00 570 .00 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-1.5B All: Default 55.12 7.65↓ 2.60 2 .52 364 .00 628 .00 407 .00 Table 16: Performance Analysis of Three-Agent Debates (Varied Models) on GSM8K . This table presents outcomes from debates involving three potentially different language models ( Agent 1 ,Agent 2 ,Agent 3 ). All debates use default agent settings. The ∆(vs Lowest) column indicates the performance change of the debate outcome (Accuracy) compared to the baseline performance of the lowest-performing agent among the three in that specific debate. Standard metrics like Debate Rounds , normalized Sycophancy (per 1319 data points), and error transition rates (C →I, I→C) are also included. 26 Agent 1 Agent 2 Agent Settings MAD Accuracy ∆ Debate Sycophancy C →I I→C Debate (RCR Prompting) Rounds (Avg / 2400) Helped (Avg) (Overall) Qwen-2.5-0.5B
https://arxiv.org/abs/2505.15734v1
Qwen-2.5-0.5B Both: Default 27.33 2.54 ↑ 2.00 1 .51 248 .00 348 295 Qwen-2.5-0.5B Qwen-2.5-0.5B Both: Deterministic 29.25 4.46 ↑ 0.02 0 .00 0 .00 2 1 Qwen-2.5-0.5B Qwen-2.5-0.5B Both: Exploratory 23.12 1.67 ↓ 2.56 1 .43 284 .00 351 289 Qwen-2.5-0.5B Qwen-2.5-0.5B Both: Det. & Exp. 27.33 2.54 ↑ 2.26 1 .33 267 .00 396 336 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Default 53.12 11.12 ↑ 1.14 0 .91 210 .00 555 502 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Deterministic 47.29 5.29 ↑ 0.03 0 .00 0 .00 0 0 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Exploratory 51.62 9.62 ↑ 1.40 1 .08 218 .00 647 551 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Det. & Exp. 52.29 10.29 ↑ 1.17 0 .85 181 .00 528 477 Qwen-2.5-3B Qwen-2.5-3B Both: Default 67.42 5.67 ↑ 0.62 0 .39 133 .00 225 213 Qwen-2.5-3B Qwen-2.5-3B Both: Deterministic 67.38 5.63 ↑ 0.05 0 .00 0 .00 0 0 Qwen-2.5-3B Qwen-2.5-3B Both: Exploratory 67.79 6.04 ↑ 0.69 0 .46 132 .00 296 265 Qwen-2.5-3B Qwen-2.5-3B Both: Det. & Exp. 66.46 4.71 ↑ 0.67 0 .36 163 .00 223 208 Qwen-2.5-7B Qwen-2.5-7B Both: Default 74.17 5.55 ↑ 0.35 0 .26 62 .00 135 127 Qwen-2.5-7B Qwen-2.5-7B Both: Deterministic 73.62 5.00 ↑ 0.04 0 .00 0 .00 0 0 Qwen-2.5-7B Qwen-2.5-7B Both: Exploratory 74.17 5.55 ↑ 0.39 0 .30 88 .00 158 150 Qwen-2.5-7B Qwen-2.5-7B Both: Det. & Exp. 74.46 5.84 ↑ 0.33 0 .25 78 .00 126 118 Qwen-2.5-14B Qwen-2.5-14B Both: Default 77.21 5.42 ↑ 0.32 0 .32 47 .00 102 100 Qwen-2.5-14B Qwen-2.5-14B Both: Deterministic 76.25 4.46 ↑ 0.06 0 .00 0 .00 0 0 Qwen-2.5-14B Qwen-2.5-14B Both: Exploratory 77.25 5.46 ↑ 0.33 0 .32 45 .00 128 123 Qwen-2.5-14B Qwen-2.5-14B Both: Det. & Exp. 76.96 5.17 ↑ 0.31 0 .29 48 .00 99 93 Qwen-2.5-32B Qwen-2.5-32B Both: Default 73.33 0.87 ↑ 0.24 0 .19 29 .00 62 59 Qwen-2.5-32B Qwen-2.5-32B Both: Deterministic 72.79 0.33 ↑ 0.08 0 .00 0 .00 0 0 Qwen-2.5-32B Qwen-2.5-32B Both: Exploratory 73.42 0.96 ↑ 0.27 0 .23 32 .00 91 88 Qwen-2.5-32B Qwen-2.5-32B Both: Det. & Exp. 
73.46 1.00 ↑ 0.26 0 .19 26 .00 70 68 Phi-mini-3.8B Phi-mini-3.8B Both: Default 69.62 6.20 ↑ 0.60 0 .47 113 .00 204 191 Phi-mini-3.8B Phi-mini-3.8B Both: Deterministic 69.21 5.79 ↑ 0.13 0 .02 0 .00 6 3 Phi-mini-3.8B Phi-mini-3.8B Both: Exploratory 70.38 6.96 ↑ 0.67 0 .50 117 .00 267 242 Phi-mini-3.8B Phi-mini-3.8B Both: Det. & Exp. 69.42 6.00 ↑ 0.62 0 .45 114 .00 203 188 Mistral-7B Mistral-7B Both: Default 23.42 8.38 ↑ 1.91 0 .77 159 .00 576 434 Mistral-7B Mistral-7B Both: Deterministic 14.33 0.71 ↓ 0.15 0 .01 0 .00 4 2 Mistral-7B Mistral-7B Both: Exploratory 23.29 8.25 ↑ 2.13 0 .85 149 .00 586 437 Mistral-7B Mistral-7B Both: Det. & Exp. 22.75 7.71 ↑ 1.93 0 .77 147 .00 556 414 Llama-3.1-3B Llama-3.1-3B Both: Default 51.58 5.91 ↑ 1.20 0 .82 232 .00 439 378 Llama-3.1-3B Llama-3.1-3B Both: Deterministic 50.50 4.83 ↑ 0.01 0 .00 0 .00 0 0 Llama-3.1-3B Llama-3.1-3B Both: Exploratory 51.12 5.45 ↑ 1.47 0 .87 233 .00 482 406 Llama-3.1-3B Llama-3.1-3B Both: Det. & Exp. 50.75
https://arxiv.org/abs/2505.15734v1
Agent 1 | Agent 2 | Agent Settings | MAD Accuracy (RCR Prompting) | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/2400) | C→I | I→C | Debate Helped (Overall)
… | … | … | … | 5.08↑ | 1.28 | 0.74 | 218 | 381 | 333
Llama-3.1-8B | Llama-3.1-8B | Both: Default | 62.04 | 6.42↑ | 0.95 | 0.72 | 202 | 313 | 274
Llama-3.1-8B | Llama-3.1-8B | Both: Deterministic | 61.04 | 5.42↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-8B | Llama-3.1-8B | Both: Exploratory | 60.79 | 5.17↑ | 1.12 | 0.77 | 197 | 340 | 303
Llama-3.1-8B | Llama-3.1-8B | Both: Det. & Exp. | 60.96 | 5.34↑ | 1.01 | 0.72 | 214 | 304 | 273
Table 17: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the GSM-Plus Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters: Default, Deterministic, Exploratory, and a combination) on the MAD Accuracy (RCR Prompting) of various language models. The ∆ column quantifies the improvement (or decline) over the single base model performance. Further metrics include average Debate Rounds, normalized Sycophancy (per 2400 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics.
Agent 1 | Agent 2 | Agent Settings | MAD Accuracy | ∆Lower | ∆Upper | Debate Rounds (Avg) | Sycophancy (Avg/2400) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Default | 41.38 | 16.59↑ | 0.62↓ | 1.85 | 1.12 | 314 | 628 | 548
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Deterministic | 42.67 | 17.88↑ | 0.67↑ | 1.58 | 0.89 | 292 | 565 | 505
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Exploratory | 39.54 | 14.75↑ | 2.46↓ | 2.30 | 1.20 | 320 | 722 | 604
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Det. & Exp. | 40.04 | 15.25↑ | 1.96↓ | 1.97 | 1.04 | 301 | 588 | 492
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Exp. & Det. | 44.25 | 19.46↑ | 2.25↑ | 2.00 | 1.04 | 278 | 750 | 664
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Default | 54.42 | 12.42↑ | 8.75↑ | 1.56 | 0.75 | 232 | 612 | 532
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Deterministic | 54.37 | 12.37↑ | 8.70↑ | 1.56 | 0.50 | 224 | 489 | 435
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Exploratory | 54.21 | 12.21↑ | 8.54↑ | 1.77 | 0.89 | 255 | 696 | 602
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Det. & Exp. | 53.29 | 11.29↑ | 7.62↑ | 1.65 | 0.62 | 249 | 555 | 488
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Exp. & Det. | 54.58 | 12.58↑ | 8.91↑ | 1.51 | 0.77 | 249 | 603 | 533
Qwen-2.5-3B | Phi-mini-3.8B | Both: Default | 70.21 | 8.46↑ | 6.79↑ | 0.79 | 0.41 | 132 | 304 | 275
Qwen-2.5-3B | Phi-mini-3.8B | Both: Deterministic | 69.83 | 8.08↑ | 6.41↑ | 0.78 | 0.29 | 128 | 224 | 200
Qwen-2.5-3B | Phi-mini-3.8B | Both: Exploratory | 69.71 | 7.96↑ | 6.29↑ | 0.83 | 0.47 | 136 | 339 | 303
Qwen-2.5-3B | Phi-mini-3.8B | Both: Det. & Exp. | 69.88 | 8.13↑ | 6.46↑ | 0.79 | 0.31 | 133 | 241 | 216
Qwen-2.5-3B | Phi-mini-3.8B | Both: Exp. & Det. | 70.58 | 8.83↑ | 7.16↑ | 0.81 | 0.38 | 134 | 307 | 276
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Default | 63.79 | 21.79↑ | 2.04↑ | 1.05 | 0.67 | 154 | 573 | 537
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Deterministic | 63.92 | 21.92↑ | 2.17↑ | 0.85 | 0.60 | 180 | 500 | 471
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Exploratory | 63.79 | 21.79↑ | 2.04↑ | 1.12 | 0.76 | 165 | 680 | 639
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Det. & Exp. | 62.58 | 20.58↑ | 0.83↑ | 1.09 | 0.61 | 174 | 525 | 483
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Exp. & Det. | 64.25 | 22.25↑ | 2.50↑ | 1.08 | 0.68 | 189 | 640 | 608
Llama-3.1-3B | Llama-3.1-8B | Both: Default | 56.75 | 11.08↑ | 1.13↑ | 1.29 | 0.88 | 264 | 422 | 381
Llama-3.1-3B | Llama-3.1-8B | Both: Deterministic | 57.08 | 11.41↑ | 1.46↑ | 1.13 | 0.74 | 278 | 348 | 316
Llama-3.1-3B | Llama-3.1-8B | Both: Exploratory | 57.17 | 11.50↑ | 1.55↑ | 1.43 | 0.89 | 241 | 490 | 424
Llama-3.1-3B | Llama-3.1-8B | Both: Det. & Exp. | 57.21 | 11.54↑ | 1.59↑ | 1.27 | 0.72 | 259 | 420 | 362
Llama-3.1-3B | Llama-3.1-8B | Both: Exp. & Det. | 56.67 | 11.00↑ | 1.05↑ | 1.27 | 0.80 | 298 | 411 | 364
Qwen-2.5-7B | Qwen-2.5-14B | Both: Default | 75.88 | 7.26↑ | 4.09↑ | 0.38 | 0.28 | 88 | 165 | 159
Qwen-2.5-7B | Qwen-2.5-14B | Both: Deterministic | 75.54 | 6.92↑ | 3.75↑ | 0.32 | 0.24 | 83 | 119 | 112
Qwen-2.5-7B | Qwen-2.5-14B | Both: Exploratory | 75.08 | 6.46↑ | 3.29↑ | 0.39 | 0.30 | 111 | 168 | 153
Qwen-2.5-7B | Qwen-2.5-14B | Both: Det. & Exp. | 76.12 | 7.50↑ | 4.33↑ | 0.36 | 0.25 | 92 | 155 | 148
Qwen-2.5-7B | Qwen-2.5-14B | Both: Exp. & Det. | 76.33 | 7.71↑ | 4.54↑ | 0.35 | 0.31 | 78 | 143 | 133
Table 18: Comparative Analysis of Mixed-Model Performance in Multi-Agent Debate Settings on the GSM-Plus Dataset. This table showcases the impact of different Agent Settings on the MAD Accuracy when pairing different language models together. The ∆Lower and ∆Upper columns quantify the improvement (or decline) over each individual model's base performance. Further metrics include average Debate Rounds, normalized Sycophancy (per 2400 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the dynamics when models of different capabilities debate together.
Agent 1 | Agent 2 | Agent 3 | Agent Settings | Accuracy | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/2400) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | Default | 25.00 | 0.21↑ | 3.21 | 3.75 | 583 | 473 | 299
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | Deterministic | 29.21 | 4.42↑ | 0.02 | 0.00 | 0 | 0 | 0
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | Exploratory | 20.75 | 4.04↓ | 3.88 | 3.78 | 645 | 578 | 344
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | 1 Det. & 2 Exp. | 22.67 | 2.12↓ | 3.66 | 3.40 | 667 | 467 | 296
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | 2 Det. & 1 Exp. | 25.42 | 0.63↑ | 2.45 | 1.96 | 454 | 394 | 279
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | Default | 53.04 | 11.04↑ | 1.87 | 2.28 | 446 | 995 | 676
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | Deterministic | 47.29 | 5.29↑ | 0.03 | 0.00 | 0 | 0 | 0
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | Exploratory | 53.33 | 11.33↑ | 2.24 | 2.74 | 357 | 1159 | 774
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | 1 Det. & 2 Exp. | 53.67 | 11.67↑ | 2.03 | 2.35 | 394 | 1116 | 756
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | 2 Det. & 1 Exp. | 53.17 | 11.17↑ | 1.31 | 1.41 | 265 | 793 | 514
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | Default | 67.38 | 5.63↑ | 0.97 | 1.01 | 273 | 423 | 326
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | Deterministic | 67.38 | 5.63↑ | 0.05 | 0.00 | 0 | 0 | 0
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | Exploratory | 68.00 | 6.25↑ | 1.09 | 1.12 | 223 | 537 | 404
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | 1 Det. & 2 Exp. | 68.54 | 6.79↑ | 1.08 | 0.94 | 235 | 428 | 343
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | 2 Det. & 1 Exp. | 67.12 | 5.37↑ | 0.78 | 0.61 | 202 | 274 | 208
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | Default | 75.79 | 7.17↑ | 0.51 | 0.52 | 84 | 272 | 209
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | Deterministic | 73.62 | 5.00↑ | 0.04 | 0.00 | 0 | 0 | 0
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | Exploratory | 74.96 | 6.34↑ | 0.55 | 0.54 | 117 | 270 | 220
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | 1 Det. & 2 Exp. | 75.25 | 6.63↑ | 0.50 | 0.50 | 120 | 267 | 214
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | 2 Det. & 1 Exp. | 74.42 | 5.80↑ | 0.39 | 0.39 | 97 | 181 | 135
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | Default | 77.92 | 6.13↑ | 0.35 | 0.35 | 55 | 166 | 140
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | Deterministic | 76.54 | 4.75↑ | 0.05 | 0.00 | 0 | 3 | 1
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | Exploratory | 77.29 | 5.50↑ | 0.38 | 0.40 | 69 | 188 | 159
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | 1 Det. & 2 Exp. | 77.21 | 5.42↑ | 0.38 | 0.37 | 72 | 172 | 143
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | 2 Det. & 1 Exp. | 77.21 | 5.42↑ | 0.28 | 0.25 | 48 | 105 | 81
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | Default | 73.46 | 1.00↑ | 0.29 | 0.23 | 48 | 112 | 96
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | Deterministic | 72.79 | 0.33↑ | 0.08 | 0.00 | 0 | 0 | 0
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | Exploratory | 73.46 | 1.00↑ | 0.33 | 0.31 | 46 | 123 | 109
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | 1 Det. & 2 Exp. | 73.88 | 1.42↑ | 0.29 | 0.23 | 42 | 131 | 106
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | 2 Det. & 1 Exp. | 73.12 | 0.66↑ | 0.24 | 0.17 | 40 | 75 | 60
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | Default | 70.21 | 6.79↑ | 0.90 | 1.12 | 226 | 389 | 284
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | Deterministic | 69.17 | 5.75↑ | 0.12 | 0.04 | 0 | 3 | 1
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | Exploratory | 70.25 | 6.83↑ | 0.95 | 1.11 | 219 | 423 | 327
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | 1 Det. & 2 Exp. | 69.83 | 6.41↑ | 0.93 | 1.02 | 232 | 390 | 293
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | 2 Det. & 1 Exp. | 69.54 | 6.12↑ | 0.73 | 0.81 | 191 | 292 | 202
Mistral-7B | Mistral-7B | Mistral-7B | Default | 24.04 | 8.99↑ | 2.75 | 2.12 | 312 | 979 | 525
Mistral-7B | Mistral-7B | Mistral-7B | Deterministic | 14.37 | 0.67↓ | 0.15 | 0.02 | 0 | 8 | 3
Mistral-7B | Mistral-7B | Mistral-7B | Exploratory | 27.04 | 12.00↑ | 3.03 | 2.49 | 325 | 1234 | 628
Mistral-7B | Mistral-7B | Mistral-7B | 1 Det. & 2 Exp. | 23.92 | 8.88↑ | 2.90 | 2.25 | 349 | 1046 | 544
Mistral-7B | Mistral-7B | Mistral-7B | 2 Det. & 1 Exp. | 23.00 | 7.96↑ | 2.16 | 1.55 | 232 | 855 | 458
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | Default | 51.54 | 5.87↑ | 1.89 | 1.93 | 454 | 733 | 476
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | Deterministic | 50.67 | 5.00↑ | 0.01 | 0.00 | 0 | 0 | 0
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | Exploratory | 50.71 | 5.04↑ | 2.26 | 2.12 | 520 | 857 | 544
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | 1 Det. & 2 Exp. | 50.17 | 4.50↑ | 2.12 | 1.96 | 515 | 744 | 493
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | 2 Det. & 1 Exp. | 51.33 | 5.66↑ | 1.50 | 1.23 | 309 | 493 | 322
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | Default | 62.67 | 7.05↑ | 1.43 | 1.60 | 345 | 572 | 407
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | Deterministic | 61.04 | 5.42↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | Exploratory | 61.08 | 5.46↑ | 1.69 | 1.85 | 385 | 624 | 446
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | 1 Det. & 2 Exp. | 62.12 | 6.50↑ | 1.51 | 1.64 | 374 | 588 | 413
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | 2 Det. & 1 Exp. | 61.12 | 5.50↑ | 1.20 | 1.20 | 335 | 414 | 269
Table 19: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the GSM-Plus Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters: Default, Deterministic, Exploratory, and combinations) on the Accuracy of various language models in three-agent configurations. The ∆ column quantifies the improvement (or decline) over the single base model performance. Further metrics include average Debate Rounds, normalized Sycophancy (per 2400 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics.
Agent 1 | Agent 2 | Agent 3 | Agent Settings | Accuracy | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/2400) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Qwen-2.5-3B | Default | 60.00 | 1.75↓ | 2.35 | 2.05 | 338 | 1356 | 951
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Llama-3.1-3B | Default | 47.46 | 1.79↑ | 3.11 | 2.23 | 596 | 1086 | 718
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Phi-mini-3.8B | Default | 56.62 | 6.80↓ | 2.83 | 1.93 | 503 | 1168 | 857
Qwen-2.5-0.5B | Qwen-2.5-3B | Llama-3.1-3B | Default | 59.62 | 2.13↓ | 2.83 | 1.90 | 364 | 1202 | 895
Qwen-2.5-0.5B | Qwen-2.5-3B | Phi-mini-3.8B | Default | 65.25 | 1.83↑ | 2.42 | 1.48 | 353 | 1190 | 946
Qwen-2.5-0.5B | Llama-3.1-3B | Phi-mini-3.8B | Default | 56.92 | 6.50↓ | 3.13 | 1.64 | 536 | 980 | 724
Qwen-2.5-1.5B | Qwen-2.5-3B | Llama-3.1-3B | Default | 64.00 | 2.25↑ | 1.91 | 1.59 | 321 | 1048 | 773
Qwen-2.5-1.5B | Qwen-2.5-3B | Phi-mini-3.8B | Default | 67.25 | 3.83↑ | 1.61 | 1.25 | 299 | 857 | 692
Qwen-2.5-1.5B | Llama-3.1-3B | Phi-mini-3.8B | Default | 63.50 | 0.08↑ | 2.02 | 1.57 | 405 | 1079 | 766
Qwen-2.5-3B | Phi-mini-3.8B | Llama-3.1-3B | Default | 69.08 | 5.66↑ | 1.58 | 1.20 | 255 | 825 | 653
Qwen-2.5-3B | Qwen-2.5-3B | Phi-mini-3.8B | Default | 68.79 | 7.04↑ | 1.13 | 0.90 | 291 | 461 | 340
Qwen-2.5-3B | Phi-mini-3.8B | Phi-mini-3.8B | Default | 69.21 | 5.79↑ | 1.10 | 0.92 | 279 | 424 | 317
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | Default | 49.88 | 7.88↑ | 2.44 | 2.50 | 456 | 1197 | 794
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-1.5B | Default | 37.21 | 4.79↓ | 3.07 | 3.24 | 589 | 969 | 607
Table 20: Comparative Analysis of Mixed Multi-Agent Debate Settings on the GSM-Plus Dataset. This table examines performance when combining different language models in three-agent debate configurations. The first section shows combinations of three different models, while the second section explores configurations with duplicate models. The ∆ column indicates performance changes relative to the best single model in each combination, with improvements in green and declines in red. Metrics include Debate Rounds, normalized Sycophancy (per 2400 data points), and transitions between states (C→I, I→C).
Agent 1 | Agent 2 | Agent Settings | Accuracy | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/2376) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Default | 52.90 | 1.73↓ | 1.15 | 0.99 | 460 | 550 | 482
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Deterministic | 53.24 | 1.39↓ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Exploratory | 49.07 | 5.56↓ | 1.46 | 1.09 | 558 | 628 | 530
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Det. & Exp. | 52.99 | 1.64↓ | 1.15 | 0.97 | 426 | 572 | 516
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Default | 86.15 | 0.47↓ | 0.38 | 0.38 | 130 | 415 | 403
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Deterministic | 84.60 | 2.02↓ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Exploratory | 83.42 | 3.20↓ | 0.55 | 0.55 | 160 | 574 | 547
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Det. & Exp. | 86.62 | 0.00 | 0.41 | 0.42 | 135 | 449 | 434
Qwen-2.5-3B | Qwen-2.5-3B | Default | 94.02 | 0.96↑ | 0.14 | 0.13 | 56 | 117 | 114
Qwen-2.5-3B | Qwen-2.5-3B | Deterministic | 93.35 | 0.29↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-3B | Qwen-2.5-3B | Exploratory | 94.15 | 1.09↑ | 0.16 | 0.15 | 49 | 158 | 157
Qwen-2.5-3B | Qwen-2.5-3B | Det. & Exp. | 94.07 | 1.01↑ | 0.15 | 0.13 | 70 | 126 | 124
Qwen-2.5-7B | Qwen-2.5-7B | Default | 96.17 | 1.48↑ | 0.05 | 0.05 | 31 | 39 | 37
Qwen-2.5-7B | Qwen-2.5-7B | Deterministic | 96.55 | 1.86↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-7B | Qwen-2.5-7B | Exploratory | 96.93 | 2.24↑ | 0.05 | 0.05 | 21 | 57 | 53
Qwen-2.5-7B | Qwen-2.5-7B | Det. & Exp. | 96.46 | 1.77↑ | 0.05 | 0.04 | 30 | 35 | 34
Qwen-2.5-14B | Qwen-2.5-14B | Default | 98.19 | 2.53↑ | 0.03 | 0.02 | 15 | 21 | 21
Qwen-2.5-14B | Qwen-2.5-14B | Deterministic | 97.77 | 2.11↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-14B | Qwen-2.5-14B | Exploratory | 98.15 | 2.49↑ | 0.02 | 0.02 | 8 | 20 | 20
Qwen-2.5-14B | Qwen-2.5-14B | Det. & Exp. | 97.94 | 2.28↑ | 0.03 | 0.02 | 16 | 24 | 24
Qwen-2.5-32B | Qwen-2.5-32B | Default | 98.53 | 0.21↑ | 0.02 | 0.03 | 10 | 14 | 13
Qwen-2.5-32B | Qwen-2.5-32B | Deterministic | 98.36 | 0.04↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-32B | Qwen-2.5-32B | Exploratory | 98.53 | 0.21↑ | 0.02 | 0.03 | 8 | 14 | 14
Qwen-2.5-32B | Qwen-2.5-32B | Det. & Exp. | 98.36 | 0.04↑ | 0.02 | 0.02 | 9 | 10 | 8
Phi-mini-3.8B | Phi-mini-3.8B | Default | 95.88 | 3.92↑ | 0.11 | 0.16 | 40 | 71 | 60
Phi-mini-3.8B | Phi-mini-3.8B | Deterministic | 95.37 | 3.41↑ | 0.00 | 0.00 | 0 | 0 | 0
Phi-mini-3.8B | Phi-mini-3.8B | Exploratory | 94.74 | 2.78↑ | 0.16 | 0.21 | 59 | 126 | 116
Phi-mini-3.8B | Phi-mini-3.8B | Det. & Exp. | 94.95 | 2.99↑ | 0.14 | 0.19 | 56 | 89 | 78
Mistral-7B | Mistral-7B | Default | 81.06 | 0.04↑ | 0.35 | 0.28 | 158 | 227 | 219
Mistral-7B | Mistral-7B | Deterministic | 80.43 | 0.59↓ | 0.00 | 0.00 | 0 | 0 | 0
Mistral-7B | Mistral-7B | Exploratory | 80.18 | 0.84↓ | 0.43 | 0.32 | 203 | 261 | 251
Mistral-7B | Mistral-7B | Det. & Exp. | 82.41 | 1.39↑ | 0.37 | 0.27 | 129 | 240 | 235
Llama-3.1-3B | Llama-3.1-3B | Default | 87.71 | 3.07↑ | 0.26 | 0.21 | 128 | 163 | 153
Llama-3.1-3B | Llama-3.1-3B | Deterministic | 86.66 | 2.02↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-3B | Llama-3.1-3B | Exploratory | 88.09 | 3.45↑ | 0.28 | 0.26 | 118 | 216 | 208
Llama-3.1-3B | Llama-3.1-3B | Det. & Exp. | 86.91 | 2.27↑ | 0.28 | 0.22 | 127 | 181 | 172
Llama-3.1-8B | Llama-3.1-8B | Default | 94.44 | 5.34↑ | 0.11 | 0.11 | 54 | 79 | 75
Llama-3.1-8B | Llama-3.1-8B | Deterministic | 93.64 | 4.54↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-8B | Llama-3.1-8B | Exploratory | 93.60 | 4.50↑ | 0.15 | 0.17 | 60 | 118 | 109
Llama-3.1-8B | Llama-3.1-8B | Det. & Exp. | 94.53 | 5.43↑ | 0.12 | 0.13 | 54 | 95 | 93
Table 21: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the ARC-Easy Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters: Default, Deterministic, Exploratory, and a combination) on the Accuracy of various language models. The ∆ column quantifies the improvement (or decline) over the single base model performance. Further metrics include average Debate Rounds, normalized Sycophancy (per 2376 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics.
Agent 1 | Agent 2 | Agent Settings | Accuracy | ∆Lower | ∆Upper | Debate Rounds (Avg) | Sycophancy (Avg/2376) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Default | 76.98 | 22.35↑ | 9.64↓ | 0.95 | 0.75 | 262 | 804 | 760
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Deterministic | 79.38 | 24.75↑ | 7.24↓ | 0.81 | 0.62 | 200 | 734 | 711
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Exploratory | 73.19 | 18.56↑ | 13.43↓ | 1.16 | 0.85 | 300 | 899 | 828
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Det. & Exp. | 75.21 | 20.58↑ | 11.41↓ | 0.95 | 0.78 | 260 | 846 | 790
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Exp. & Det. | 77.65 | 23.02↑ | 8.97↓ | 1.07 | 0.75 | 275 | 829 | 794
Qwen-2.5-1.5B | Llama-3.1-3B | Default | 88.55 | 1.93↑ | 3.91↑ | 0.40 | 0.39 | 146 | 376 | 357
Qwen-2.5-1.5B | Llama-3.1-3B | Deterministic | 88.13 | 1.51↑ | 3.49↑ | 0.29 | 0.24 | 150 | 242 | 239
Qwen-2.5-1.5B | Llama-3.1-3B | Exploratory | 88.05 | 1.43↑ | 3.41↑ | 0.49 | 0.48 | 161 | 483 | 457
Qwen-2.5-1.5B | Llama-3.1-3B | Det. & Exp. | 86.99 | 0.37↑ | 1.35↑ | 0.37 | 0.39 | 172 | 290 | 277
Qwen-2.5-1.5B | Llama-3.1-3B | Exp. & Det. | 87.71 | 1.09↑ | 2.07↑ | 0.45 | 0.40 | 165 | 447 | 433
Qwen-2.5-3B | Phi-mini-3.8B | Default | 95.24 | 2.18↑ | 3.28↑ | 0.15 | 0.14 | 61 | 135 | 132
Qwen-2.5-3B | Phi-mini-3.8B | Deterministic | 94.91 | 1.85↑ | 2.95↑ | 0.14 | 0.12 | 72 | 106 | 102
Qwen-2.5-3B | Phi-mini-3.8B | Exploratory | 95.24 | 2.18↑ | 3.28↑ | 0.17 | 0.16 | 57 | 184 | 178
Qwen-2.5-3B | Phi-mini-3.8B | Det. & Exp. | 94.91 | 1.85↑ | 2.95↑ | 0.17 | 0.15 | 68 | 148 | 148
Qwen-2.5-3B | Phi-mini-3.8B | Exp. & Det. | 95.75 | 2.69↑ | 3.79↑ | 0.15 | 0.14 | 58 | 146 | 139
Qwen-2.5-1.5B | Qwen-2.5-3B | Default | 91.88 | 5.26↑ | 1.18↓ | 0.33 | 0.29 | 112 | 363 | 359
Qwen-2.5-1.5B | Qwen-2.5-3B | Deterministic | 92.59 | 5.97↑ | 0.47↓ | 0.24 | 0.23 | 94 | 263 | 254
Qwen-2.5-1.5B | Qwen-2.5-3B | Exploratory | 91.79 | 5.17↑ | 1.27↓ | 0.42 | 0.38 | 95 | 498 | 487
Qwen-2.5-1.5B | Qwen-2.5-3B | Det. & Exp. | 92.76 | 6.14↑ | 0.20↓ | 0.27 | 0.27 | 81 | 294 | 286
Qwen-2.5-1.5B | Qwen-2.5-3B | Exp. & Det. | 92.51 | 5.89↑ | 0.45↓ | 0.39 | 0.32 | 96 | 469 | 466
Llama-3.1-3B | Llama-3.1-8B | Default | 91.79 | 7.15↑ | 2.69↑ | 0.24 | 0.22 | 110 | 184 | 179
Llama-3.1-3B | Llama-3.1-8B | Deterministic | 91.12 | 6.48↑ | 2.02↑ | 0.22 | 0.16 | 113 | 138 | 133
Llama-3.1-3B | Llama-3.1-8B | Exploratory | 90.61 | 5.97↑ | 1.51↑ | 0.28 | 0.27 | 115 | 202 | 192
Llama-3.1-3B | Llama-3.1-8B | Det. & Exp. | 90.99 | 6.35↑ | 1.89↑ | 0.24 | 0.18 | 108 | 152 | 149
Llama-3.1-3B | Llama-3.1-8B | Exp. & Det. | 91.96 | 7.32↑ | 2.86↑ | 0.28 | 0.26 | 99 | 229 | 222
Qwen-2.5-7B | Qwen-2.5-14B | Default | 97.94 | 3.25↑ | 2.28↑ | 0.05 | 0.05 | 21 | 55 | 55
Qwen-2.5-7B | Qwen-2.5-14B | Deterministic | 97.64 | 2.95↑ | 1.98↑ | 0.07 | 0.04 | 20 | 48 | 47
Qwen-2.5-7B | Qwen-2.5-14B | Exploratory | 97.39 | 2.70↑ | 1.73↑ | 0.08 | 0.07 | 32 | 67 | 66
Qwen-2.5-7B | Qwen-2.5-14B | Det. & Exp. | 97.43 | 2.74↑ | 1.77↑ | 0.06 | 0.05 | 33 | 49 | 48
Qwen-2.5-7B | Qwen-2.5-14B | Exp. & Det. | 97.47 | 2.78↑ | 1.81↑ | 0.07 | 0.04 | 27 | 49 | 48
Table 22: Comparative Analysis of Different Language Model Pairs in Multi-Agent Debate Settings on the ARC-Easy Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters) on the Accuracy of various model pairs. The ∆Lower and ∆Upper columns quantify the improvement (or decline) over each individual model's single-agent performance. Further metrics include average Debate Rounds, normalized Sycophancy (per 2376 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics between different model pairings.
Agent 1 | Agent 2 | Agent Settings | MAD Accuracy (RCR Prompting) | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/2376) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Both: Default | 51.30 | 3.33↓ | 2.18 | 2.67 | 1046 | 990 | 642
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Both: Deterministic | 53.24 | 1.39↓ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Both: Exploratory | 46.80 | 7.83↓ | 2.78 | 3.22 | 1228 | 1099 | 655
Qwen-2.5-0.5B | Qwen-2.5-0.5B | 1 Det. & 2 Exp. | 48.99 | 5.64↓ | 2.47 | 2.82 | 1136 | 1053 | 658
Qwen-2.5-0.5B | Qwen-2.5-0.5B | 2 Det. & 1 Exp. | 50.80 | 3.83↓ | 1.34 | 1.60 | 794 | 793 | 495
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Both: Default | 87.37 | 0.75↑ | 0.63 | 0.84 | 232 | 717 | 573
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Both: Deterministic | 84.60 | 2.02↓ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Both: Exploratory | 85.61 | 1.01↓ | 0.90 | 1.17 | 279 | 1011 | 795
Qwen-2.5-1.5B | Qwen-2.5-1.5B | 1 Det. & 2 Exp. | 86.32 | 0.30↓ | 0.76 | 0.98 | 275 | 834 | 672
Qwen-2.5-1.5B | Qwen-2.5-1.5B | 2 Det. & 1 Exp. | 86.53 | 0.09↓ | 0.43 | 0.62 | 198 | 587 | 451
Qwen-2.5-3B | Qwen-2.5-3B | Both: Default | 94.87 | 1.81↑ | 0.19 | 0.19 | 80 | 196 | 165
Qwen-2.5-3B | Qwen-2.5-3B | Both: Deterministic | 93.35 | 0.29↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-3B | Qwen-2.5-3B | Both: Exploratory | 94.28 | 1.22↑ | 0.25 | 0.28 | 102 | 252 | 206
Qwen-2.5-3B | Qwen-2.5-3B | 1 Det. & 2 Exp. | 94.70 | 1.64↑ | 0.25 | 0.23 | 90 | 238 | 195
Qwen-2.5-3B | Qwen-2.5-3B | 2 Det. & 1 Exp. | 93.94 | 0.88↑ | 0.20 | 0.18 | 94 | 162 | 117
Qwen-2.5-7B | Qwen-2.5-7B | Both: Default | 96.21 | 1.52↑ | 0.08 | 0.08 | 53 | 69 | 58
Qwen-2.5-7B | Qwen-2.5-7B | Both: Deterministic | 96.17 | 1.48↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-7B | Qwen-2.5-7B | Both: Exploratory | 96.55 | 1.86↑ | 0.10 | 0.11 | 57 | 86 | 71
Qwen-2.5-7B | Qwen-2.5-7B | 1 Det. & 2 Exp. | 96.55 | 1.86↑ | 0.10 | 0.11 | 56 | 78 | 65
Qwen-2.5-7B | Qwen-2.5-7B | 2 Det. & 1 Exp. | 96.34 | 1.65↑ | 0.07 | 0.07 | 39 | 56 | 40
Qwen-2.5-14B | Qwen-2.5-14B | Both: Default | 98.15 | 2.49↑ | 0.04 | 0.04 | 23 | 29 | 26
Qwen-2.5-14B | Qwen-2.5-14B | Both: Deterministic | 97.77 | 2.11↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-14B | Qwen-2.5-14B | Both: Exploratory | 98.19 | 2.53↑ | 0.04 | 0.05 | 18 | 40 | 36
Qwen-2.5-14B | Qwen-2.5-14B | 1 Det. & 2 Exp. | 98.02 | 2.36↑ | 0.03 | 0.04 | 28 | 40 | 31
Qwen-2.5-14B | Qwen-2.5-14B | 2 Det. & 1 Exp. | 97.81 | 2.15↑ | 0.03 | 0.03 | 23 | 28 | 25
Qwen-2.5-32B | Qwen-2.5-32B | Both: Default | 98.57 | 0.25↑ | 0.02 | 0.03 | 16 | 15 | 13
Qwen-2.5-32B | Qwen-2.5-32B | Both: Deterministic | 98.36 | 0.04↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-32B | Qwen-2.5-32B | Both: Exploratory | 98.48 | 0.16↑ | 0.02 | 0.02 | 15 | 14 | 14
Qwen-2.5-32B | Qwen-2.5-32B | 1 Det. & 2 Exp. | 98.48 | 0.16↑ | 0.02 | 0.03 | 16 | 15 | 12
Qwen-2.5-32B | Qwen-2.5-32B | 2 Det. & 1 Exp. | 98.32 | 0.00 | 0.01 | 0.02 | 12 | 9 | 6
Phi-mini-3.8B | Phi-mini-3.8B | Both: Default | 95.79 | 3.83↑ | 0.16 | 0.28 | 79 | 138 | 105
Phi-mini-3.8B | Phi-mini-3.8B | Both: Deterministic | 95.37 | 3.41↑ | 0.00 | 0.00 | 0 | 0 | 0
Phi-mini-3.8B | Phi-mini-3.8B | Both: Exploratory | 94.91 | 2.95↑ | 0.28 | 0.43 | 110 | 234 | 185
Phi-mini-3.8B | Phi-mini-3.8B | 1 Det. & 2 Exp. | 96.34 | 4.38↑ | 0.18 | 0.27 | 70 | 189 | 149
Phi-mini-3.8B | Phi-mini-3.8B | 2 Det. & 1 Exp. | 95.92 | 3.96↑ | 0.13 | 0.24 | 53 | 115 | 83
Llama-3.1-3B | Llama-3.1-3B | Both: Default | 87.33 | 2.69↑ | 0.46 | 0.44 | 252 | 292 | 227
Llama-3.1-3B | Llama-3.1-3B | Both: Deterministic | 87.63 | 2.99↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-3B | Llama-3.1-3B | Both: Exploratory | 87.71 | 3.07↑ | 0.58 | 0.61 | 255 | 415 | 323
Llama-3.1-3B | Llama-3.1-3B | 1 Det. & 2 Exp. | 87.58 | 2.94↑ | 0.53 | 0.48 | 241 | 328 | 259
Llama-3.1-3B | Llama-3.1-3B | 2 Det. & 1 Exp. | 88.47 | 3.83↑ | 0.32 | 0.27 | 148 | 236 | 169
Llama-3.1-8B | Llama-3.1-8B | Both: Default | 93.86 | 4.76↑ | 0.20 | 0.26 | 114 | 139 | 102
Llama-3.1-8B | Llama-3.1-8B | Both: Deterministic | 93.64 | 4.54↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-8B | Llama-3.1-8B | Both: Exploratory | 94.19 | 5.09↑ | 0.25 | 0.36 | 130 | 190 | 141
Llama-3.1-8B | Llama-3.1-8B | 1 Det. & 2 Exp. | 94.11 | 5.01↑ | 0.23 | 0.33 | 119 | 185 | 143
Llama-3.1-8B | Llama-3.1-8B | 2 Det. & 1 Exp. | 94.49 | 5.39↑ | 0.14 | 0.20 | 69 | 139 | 89
Mistral-7B | Mistral-7B | Both: Default | 82.20 | 1.18↑ | 0.69 | 0.71 | 318 | 469 | 342
Mistral-7B | Mistral-7B | Both: Deterministic | 80.43 | 0.59↓ | 0.00 | 0.00 | 0 | 0 | 0
Mistral-7B | Mistral-7B | Both: Exploratory | 82.66 | 1.64↑ | 0.83 | 0.88 | 325 | 566 | 429
Mistral-7B | Mistral-7B | 1 Det. & 2 Exp. | 82.37 | 1.35↑ | 0.78 | 0.81 | 324 | 506 | 376
Mistral-7B | Mistral-7B | 2 Det. & 1 Exp. | 81.69 | 0.67↑ | 0.47 | 0.51 | 230 | 346 | 230
Table 23: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the ARC-Easy Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters: Default, Deterministic, Exploratory, and combinations) on the MAD Accuracy (RCR Prompting) of various language models. The ∆ column quantifies the improvement (or decline) over the single base model performance. Further metrics include average Debate Rounds, normalized Sycophancy (per 2376 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics.
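The captions above repeatedly describe the same derived columns: MAD accuracy, the ∆ over a single-model baseline, average debate rounds, sycophancy normalized per question, and the C→I / I→C transition counts. As a rough illustration of how these columns can be recomputed from per-question debate logs, here is a minimal sketch; the `DebateRecord` schema and its field names are assumptions for illustration, not the paper's actual data format.

```python
from dataclasses import dataclass

@dataclass
class DebateRecord:
    rounds: int              # critique rounds used (0 = agents agreed immediately)
    first_correct: bool      # was the pre-debate (round-0) answer correct?
    final_correct: bool      # was the post-debate answer correct?
    sycophantic_flips: int   # answer switches made without new supporting reasoning

def table_metrics(records, single_model_acc):
    """Recompute the table columns from raw per-question records (a sketch)."""
    n = len(records)
    mad_acc = 100.0 * sum(r.final_correct for r in records) / n
    delta = mad_acc - single_model_acc                          # the ∆ column
    avg_rounds = sum(r.rounds for r in records) / n             # Debate Rounds (Avg)
    sycophancy = sum(r.sycophantic_flips for r in records) / n  # normalized per question
    c_to_i = sum(r.first_correct and not r.final_correct for r in records)
    i_to_c = sum((not r.first_correct) and r.final_correct for r in records)
    return mad_acc, delta, avg_rounds, sycophancy, c_to_i, i_to_c
```

Under this reading, "Sycophancy (Avg / 2376)" is the flip count divided by the dataset size (2,376 ARC-Easy questions), and the all-zero Deterministic rows follow directly: identical greedy agents never disagree, so no rounds, flips, or transitions occur.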
Agent 1 | Agent 2 | Agent 3 | MAD Accuracy (RCR Prompting) | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/2376) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Qwen-2.5-3B | 92.72 | 0.34↓ | 1.00 | 0.95 | 145 | 1377 | 1153
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Llama-3.1-3B | 84.64 | 0.00 | 1.18 | 1.27 | 387 | 1223 | 1006
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Phi-mini-3.8B | 92.93 | 0.97↑ | 1.03 | 1.04 | 184 | 1379 | 1156
Qwen-2.5-0.5B | Qwen-2.5-3B | Llama-3.1-3B | 91.20 | 1.86↓ | 1.13 | 0.99 | 213 | 1221 | 1070
Qwen-2.5-0.5B | Qwen-2.5-3B | Phi-mini-3.8B | 89.48 | 3.58↓ | 1.09 | 1.12 | 299 | 1157 | 1024
Qwen-2.5-0.5B | Llama-3.1-3B | Phi-mini-3.8B | 91.79 | 0.17↓ | 0.58 | 0.72 | 238 | 559 | 479
Qwen-2.5-1.5B | Qwen-2.5-3B | Llama-3.1-3B | 91.84 | 1.22↓ | 0.56 | 0.60 | 189 | 560 | 479
Qwen-2.5-1.5B | Qwen-2.5-3B | Phi-mini-3.8B | 95.54 | 2.48↑ | 0.39 | 0.45 | 103 | 509 | 449
Qwen-2.5-1.5B | Llama-3.1-3B | Phi-mini-3.8B | 91.79 | 0.17↓ | 0.58 | 0.72 | 238 | 559 | 479
Qwen-2.5-3B | Phi-mini-3.8B | Llama-3.1-3B | 94.07 | 1.01↑ | 0.41 | 0.43 | 162 | 332 | 283
Qwen-2.5-3B | Qwen-2.5-3B | Phi-mini-3.8B | 95.88 | 2.82↑ | 0.26 | 0.26 | 86 | 253 | 214
Qwen-2.5-3B | Phi-mini-3.8B | Phi-mini-3.8B | 96.34 | 3.28↑ | 0.26 | 0.31 | 71 | 227 | 180
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | 84.64 | 2.00↓ | 1.22 | 1.22 | 300 | 1229 | 1012
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-1.5B | 72.43 | 14.19↓ | 1.86 | 2.11 | 616 | 1400 | 982
Table 24: Comparative Analysis of Multi-Model Combinations in Agent Debate Settings on the ARC-Easy Dataset. This table showcases the performance of heterogeneous agent teams consisting of different language models. The MAD Accuracy (RCR Prompting) reflects the team performance, while the ∆ column quantifies the improvement (or decline) relative to the best single model in each combination. Additional metrics include average Debate Rounds, normalized Sycophancy (per 2376 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), revealing how diverse model combinations affect debate dynamics and overall helpfulness.
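For heterogeneous teams (Tables 20 and 24), the ∆ column is taken relative to the strongest individual member rather than a single shared baseline. A one-line sketch of that convention, with placeholder baseline numbers rather than the paper's actual single-model accuracies:

```python
def delta_vs_best_single(team_acc, member_baselines):
    """Signed gap between team accuracy and its strongest member's solo accuracy."""
    return round(team_acc - max(member_baselines), 2)

# Placeholder baselines: a team only "wins" if it beats its best member.
gap = delta_vs_best_single(92.72, [40.00, 85.00, 93.06])
```

A negative value (rendered with ↓ in the tables) therefore means the debate team underperformed its own best member, even when it improved over the weaker ones.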
Agent 1 | Agent 2 | Agent Settings | MAD Accuracy (ARC-Challenge) | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/1172) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Both: Default | 39.51 | 1.54↑ | 1.32 | 1.10 | 253 | 265 | 228
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Both: Deterministic | 40.78 | 2.81↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Both: Exploratory | 37.54 | 0.43↓ | 1.51 | 1.14 | 266 | 309 | 245
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Both: Det. & Exp. | 39.85 | 1.88↑ | 1.34 | 1.12 | 247 | 259 | 227
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Both: Default | 70.90 | 1.69↑ | 0.57 | 0.58 | 115 | 249 | 242
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Both: Deterministic | 67.58 | 1.63↓ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Both: Exploratory | 68.52 | 0.69↓ | 0.75 | 0.70 | 133 | 296 | 275
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Both: Det. & Exp. | 69.88 | 0.67↑ | 0.60 | 0.61 | 101 | 262 | 252
Qwen-2.5-3B | Qwen-2.5-3B | Both: Default | 85.41 | 1.88↑ | 0.29 | 0.29 | 53 | 114 | 111
Qwen-2.5-3B | Qwen-2.5-3B | Both: Deterministic | 84.13 | 0.60↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-3B | Qwen-2.5-3B | Both: Exploratory | 84.64 | 1.11↑ | 0.30 | 0.27 | 56 | 116 | 109
Qwen-2.5-3B | Qwen-2.5-3B | Both: Det. & Exp. | 83.70 | 0.17↑ | 0.28 | 0.23 | 70 | 79 | 73
Qwen-2.5-7B | Qwen-2.5-7B | Both: Default | 91.55 | 4.33↑ | 0.11 | 0.11 | 29 | 46 | 45
Qwen-2.5-7B | Qwen-2.5-7B | Both: Deterministic | 91.21 | 3.99↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-7B | Qwen-2.5-7B | Both: Exploratory | 91.64 | 4.42↑ | 0.12 | 0.11 | 23 | 53 | 51
Qwen-2.5-7B | Qwen-2.5-7B | Both: Det. & Exp. | 92.06 | 4.84↑ | 0.13 | 0.12 | 30 | 48 | 43
Qwen-2.5-14B | Qwen-2.5-14B | Both: Default | 94.54 | 4.27↑ | 0.06 | 0.05 | 13 | 24 | 24
Qwen-2.5-14B | Qwen-2.5-14B | Both: Deterministic | 94.37 | 4.10↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-14B | Qwen-2.5-14B | Both: Exploratory | 93.77 | 3.50↑ | 0.06 | 0.07 | 23 | 24 | 24
Qwen-2.5-14B | Qwen-2.5-14B | Both: Det. & Exp. | 94.71 | 4.44↑ | 0.06 | 0.06 | 11 | 22 | 21
Qwen-2.5-32B | Qwen-2.5-32B | Both: Default | 98.53 | 3.25↑ | 0.02 | 0.06 | 10 | 14 | 13
Qwen-2.5-32B | Qwen-2.5-32B | Both: Deterministic | 98.36 | 3.08↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-32B | Qwen-2.5-32B | Both: Exploratory | 98.53 | 3.25↑ | 0.02 | 0.06 | 8 | 14 | 14
Qwen-2.5-32B | Qwen-2.5-32B | Both: Det. & Exp. | 98.36 | 3.08↑ | 0.02 | 0.04 | 9 | 10 | 8
Phi-mini-3.8B | Phi-mini-3.8B | Both: Default | 90.10 | 5.37↑ | 0.24 | 0.34 | 42 | 75 | 66
Phi-mini-3.8B | Phi-mini-3.8B | Both: Deterministic | 88.91 | 4.18↑ | 0.00 | 0.00 | 0 | 0 | 0
Phi-mini-3.8B | Phi-mini-3.8B | Both: Exploratory | 87.03 | 2.30↑ | 0.31 | 0.40 | 58 | 107 | 100
Phi-mini-3.8B | Phi-mini-3.8B | Both: Det. & Exp. | 88.05 | 3.32↑ | 0.23 | 0.31 | 46 | 69 | 62
Llama-3.1-3B | Llama-3.1-3B | Both: Default | 75.77 | 2.65↑ | 0.46 | 0.37 | 93 | 130 | 126
Llama-3.1-3B | Llama-3.1-3B | Both: Deterministic | 74.66 | 1.54↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-3B | Llama-3.1-3B | Both: Exploratory | 76.19 | 3.07↑ | 0.50 | 0.43 | 89 | 166 | 149
Llama-3.1-3B | Llama-3.1-3B | Both: Det. & Exp. | 75.60 | 2.48↑ | 0.45 | 0.34 | 108 | 129 | 124
Llama-3.1-8B | Llama-3.1-8B | Both: Default | 87.20 | 9.55↑ | 0.26 | 0.30 | 45 | 91 | 88
Llama-3.1-8B | Llama-3.1-8B | Both: Deterministic | 85.75 | 8.10↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-8B | Llama-3.1-8B | Both: Exploratory | 85.07 | 7.42↑ | 0.28 | 0.32 | 58 | 96 | 94
Llama-3.1-8B | Llama-3.1-8B | Both: Det. & Exp. | 86.86 | 9.21↑ | 0.23 | 0.27 | 56 | 84 | 80
Mistral-7B | Mistral-7B | Both: Default | 70.48 | 1.71↑ | 0.51 | 0.37 | 99 | 145 | 137
Mistral-7B | Mistral-7B | Both: Deterministic | 68.26 | 0.51↓ | 0.00 | 0.00 | 0 | 0 | 0
Mistral-7B | Mistral-7B | Both: Exploratory | 72.78 | 4.01↑ | 0.58 | 0.44 | 106 | 185 | 177
Mistral-7B | Mistral-7B | Both: Det. & Exp. | 70.82 | 2.05↑ | 0.50 | 0.34 | 84 | 151 | 142
Table 25: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the ARC-Challenge Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters: Default, Deterministic, Exploratory, and a combination) on the MAD Accuracy of various language models. The ∆ column quantifies the improvement (or decline) over the single base model performance shown in parentheses next to each model name. Further metrics include average Debate Rounds, normalized Sycophancy (per 1172 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics.
Agent 1 | Agent 2 | Agent Settings | MAD Accuracy (ARC-Challenge) | ∆1 | ∆2 | Debate Rounds (Avg) | Sycophancy (Avg/1172) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Default | 58.28 | 20.31↑ | 10.93↓ | 1.27 | 0.97 | 193 | 401 | 369
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Deterministic | 63.57 | 25.60↑ | 5.64↓ | 1.09 | 0.81 | 169 | 375 | 357
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Exploratory | 55.80 | 17.83↑ | 13.41↓ | 1.46 | 1.07 | 211 | 418 | 368
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Det. & Exp. | 60.32 | 22.35↑ | 8.89↓ | 1.10 | 0.94 | 181 | 397 | 360
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Both: Exp. & Det. | 61.43 | 23.46↑ | 7.78↓ | 1.39 | 0.95 | 197 | 409 | 387
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Default | 72.35 | 3.14↑ | 0.77↓ | 0.67 | 0.66 | 143 | 216 | 207
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Deterministic | 74.91 | 5.70↑ | 1.79↑ | 0.51 | 0.51 | 135 | 191 | 185
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Exploratory | 73.12 | 3.91↑ | 0.00 | 0.78 | 0.78 | 153 | 281 | 265
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Det. & Exp. | 76.02 | 6.81↑ | 2.90↑ | 0.60 | 0.66 | 127 | 219 | 205
Qwen-2.5-1.5B | Llama-3.1-3B | Both: Exp. & Det. | 74.15 | 4.94↑ | 1.03↑ | 0.71 | 0.61 | 135 | 291 | 274
Qwen-2.5-3B | Phi-mini-3.8B | Both: Default | 87.97 | 4.44↑ | 3.24↑ | 0.32 | 0.31 | 59 | 133 | 130
Qwen-2.5-3B | Phi-mini-3.8B | Both: Deterministic | 88.57 | 5.04↑ | 3.84↑ | 0.31 | 0.25 | 58 | 110 | 107
Qwen-2.5-3B | Phi-mini-3.8B | Both: Exploratory | 87.03 | 3.50↑ | 2.30↑ | 0.38 | 0.37 | 72 | 173 | 160
Qwen-2.5-3B | Phi-mini-3.8B | Both: Det. & Exp. | 87.80 | 4.27↑ | 3.07↑ | 0.33 | 0.30 | 59 | 141 | 139
Qwen-2.5-3B | Phi-mini-3.8B | Both: Exp. & Det. | 89.85 | 6.32↑ | 5.12↑ | 0.34 | 0.30 | 50 | 143 | 137
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Default | 82.25 | 13.04↑ | 1.28↓ | 0.51 | 0.45 | 80 | 247 | 243
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Deterministic | 82.59 | 13.38↑ | 0.94↓ | 0.42 | 0.40 | 80 | 205 | 200
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Exploratory | 81.91 | 12.70↑ | 1.62↓ | 0.66 | 0.56 | 94 | 317 | 310
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Det. & Exp. | 83.45 | 14.24↑ | 0.08↓ | 0.47 | 0.46 | 66 | 227 | 219
Qwen-2.5-1.5B | Qwen-2.5-3B | Both: Exp. & Det. | 83.62 | 14.41↑ | 0.09↑ | 0.62 | 0.51 | 67 | 328 | 320
Llama-3.1-3B | Llama-3.1-8B | Both: Default | 81.66 | 8.54↑ | 4.01↑ | 0.47 | 0.41 | 114 | 141 | 133
Llama-3.1-3B | Llama-3.1-8B | Both: Deterministic | 80.46 | 7.34↑ | 2.81↑ | 0.51 | 0.36 | 120 | 135 | 124
Llama-3.1-3B | Llama-3.1-8B | Both: Exploratory | 75.68 | 2.56↑ | 1.97↓ | 0.48 | 0.43 | 107 | 160 | 151
Llama-3.1-3B | Llama-3.1-8B | Both: Det. & Exp. | 80.12 | 7.00↑ | 2.47↑ | 0.46 | 0.37 | 117 | 138 | 132
Llama-3.1-3B | Llama-3.1-8B | Both: Exp. & Det. | 80.97 | 7.85↑ | 3.32↑ | 0.49 | 0.43 | 109 | 159 | 154
Qwen-2.5-7B | Qwen-2.5-14B | Both: Default | 93.43 | 6.21↑ | 3.16↑ | 0.14 | 0.11 | 35 | 54 | 53
Qwen-2.5-7B | Qwen-2.5-14B | Both: Deterministic | 93.60 | 6.38↑ | 3.33↑ | 0.13 | 0.10 | 24 | 59 | 58
Qwen-2.5-7B | Qwen-2.5-14B | Both: Exploratory | 94.45 | 7.23↑ | 4.18↑ | 0.15 | 0.14 | 27 | 67 | 65
Qwen-2.5-7B | Qwen-2.5-14B | Both: Det. & Exp. | 93.00 | 5.78↑ | 2.73↑ | 0.16 | 0.13 | 37 | 50 | 49
Qwen-2.5-7B | Qwen-2.5-14B | Both: Exp. & Det. | 93.77 | 6.55↑ | 3.50↑ | 0.15 | 0.12 | 26 | 58 | 58
Table 26: Comparative Analysis of Mixed Model Pairs in Multi-Agent Debate Settings on the ARC-Challenge Dataset. This table showcases different model combinations and the impact of various Agent Settings on accuracy. ∆1 represents the improvement over the lower-capability model (the first agent), while ∆2 represents the improvement or decline relative to the higher-capability model (the second agent). Values in parentheses next to each model name indicate the single-agent baseline performance. The table also shows average Debate Rounds, normalized Sycophancy (per 1172 data points), and transitions between correct (C) and incorrect (I) states, demonstrating how mixed-capability agents interact in debate scenarios.
Agent 1 | Agent 2 | Agent 3 | Agent Settings | Accuracy | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/1172) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | Default | 35.15 | 2.82↓ | 2.54 | 3.14 | 535 | 484 | 283
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | Deterministic | 40.78 | 2.81↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | Exploratory | 35.32 | 2.65↓ | 3.12 | 3.54 | 587 | 528 | 303
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | 1 Det. & 2 Exp. | 37.20 | 0.77↓ | 2.78 | 3.19 | 523 | 503 | 306
Qwen-2.5-0.5B | Qwen-2.5-0.5B | Qwen-2.5-0.5B | 2 Det. & 1 Exp. | 38.23 | 0.26↑ | 1.49 | 1.75 | 404 | 353 | 219
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | Default | 72.53 | 3.32↑ | 0.98 | 1.29 | 206 | 454 | 343
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | Deterministic | 67.58 | 1.63↓ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | Exploratory | 72.10 | 2.89↑ | 1.37 | 1.85 | 235 | 611 | 433
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | 1 Det. & 2 Exp. | 71.93 | 2.72↑ | 1.12 | 1.53 | 229 | 520 | 386
Qwen-2.5-1.5B | Qwen-2.5-1.5B | Qwen-2.5-1.5B | 2 Det. & 1 Exp. | 70.82 | 1.61↑ | 0.63 | 0.93 | 163 | 345 | 245
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | Default | 85.75 | 2.22↑ | 0.43 | 0.43 | 79 | 197 | 156
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | Deterministic | 84.13 | 0.60↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | Exploratory | 86.26 | 2.73↑ | 0.50 | 0.57 | 96 | 229 | 167
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | 1 Det. & 2 Exp. | 86.26 | 2.73↑ | 0.51 | 0.48 | 106 | 193 | 149
Qwen-2.5-3B | Qwen-2.5-3B | Qwen-2.5-3B | 2 Det. & 1 Exp. | 84.73 | 1.20↑ | 0.33 | 0.31 | 71 | 131 | 101
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | Default | 91.81 | 4.59↑ | 0.19 | 0.22 | 56 | 84 | 66
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | Deterministic | 90.61 | 3.39↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | Exploratory | 91.72 | 4.50↑ | 0.23 | 0.29 | 66 | 85 | 65
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | 1 Det. & 2 Exp. | 91.04 | 3.82↑ | 0.22 | 0.24 | 60 | 80 | 68
Qwen-2.5-7B | Qwen-2.5-7B | Qwen-2.5-7B | 2 Det. & 1 Exp. | 91.30 | 4.08↑ | 0.14 | 0.15 | 40 | 57 | 40
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | Default | 94.20 | 3.93↑ | 0.12 | 0.13 | 27 | 54 | 45
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | Deterministic | 94.37 | 4.10↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | Exploratory | 94.80 | 4.53↑ | 0.10 | 0.12 | 28 | 50 | 39
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | 1 Det. & 2 Exp. | 94.54 | 4.27↑ | 0.09 | 0.09 | 22 | 41 | 33
Qwen-2.5-14B | Qwen-2.5-14B | Qwen-2.5-14B | 2 Det. & 1 Exp. | 94.71 | 4.44↑ | 0.06 | 0.06 | 10 | 32 | 26
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | Default | 95.82 | 0.54↑ | 0.07 | 0.11 | 22 | 36 | 28
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | Deterministic | 95.73 | 0.45↑ | 0.00 | 0.00 | 0 | 0 | 0
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | Exploratory | 95.56 | 0.28↑ | 0.08 | 0.12 | 28 | 35 | 32
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | 1 Det. & 2 Exp. | 95.56 | 0.28↑ | 0.07 | 0.10 | 30 | 29 | 25
Qwen-2.5-32B | Qwen-2.5-32B | Qwen-2.5-32B | 2 Det. & 1 Exp. | 95.99 | 0.71↑ | 0.03 | 0.04 | 13 | 18 | 14
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | Default | 88.91 | 4.18↑ | 0.35 | 0.61 | 69 | 130 | 104
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | Deterministic | 88.91 | 4.18↑ | 0.00 | 0.00 | 0 | 0 | 0
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | Exploratory | 88.74 | 4.01↑ | 0.50 | 0.83 | 85 | 196 | 151
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | 1 Det. & 2 Exp. | 88.74 | 4.01↑ | 0.37 | 0.61 | 74 | 155 | 121
Phi-mini-3.8B | Phi-mini-3.8B | Phi-mini-3.8B | 2 Det. & 1 Exp. | 89.08 | 4.35↑ | 0.30 | 0.52 | 54 | 109 | 81
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | Default | 75.77 | 2.65↑ | 0.81 | 0.80 | 177 | 244 | 190
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | Deterministic | 74.83 | 1.71↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | Exploratory | 75.51 | 2.39↑ | 0.90 | 1.00 | 196 | 303 | 210
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | 1 Det. & 2 Exp. | 75.17 | 2.05↑ | 0.99 | 0.91 | 223 | 262 | 192
Llama-3.1-3B | Llama-3.1-3B | Llama-3.1-3B | 2 Det. & 1 Exp. | 75.26 | 2.14↑ | 0.53 | 0.43 | 118 | 162 | 117
Mistral-7B | Mistral-7B | Mistral-7B | Default | 70.73 | 1.96↑ | 0.97 | 0.94 | 213 | 292 | 207
Mistral-7B | Mistral-7B | Mistral-7B | Deterministic | 68.26 | 0.51↓ | 0.00 | 0.00 | 0 | 0 | 0
Mistral-7B | Mistral-7B | Mistral-7B | Exploratory | 71.67 | 2.90↑ | 1.14 | 1.20 | 232 | 360 | 249
Mistral-7B | Mistral-7B | Mistral-7B | 1 Det. & 2 Exp. | 71.25 | 2.48↑ | 1.03 | 1.03 | 209 | 317 | 227
Mistral-7B | Mistral-7B | Mistral-7B | 2 Det. & 1 Exp. | 70.48 | 1.71↑ | 0.62 | 0.66 | 142 | 214 | 136
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | Default | 87.46 | 9.81↑ | 0.40 | 0.56 | 98 | 145 | 107
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | Deterministic | 86.43 | 8.78↑ | 0.00 | 0.00 | 0 | 0 | 0
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | Exploratory | 86.01 | 8.36↑ | 0.52 | 0.77 | 127 | 187 | 150
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | 1 Det. & 2 Exp. | 86.69 | 9.04↑ | 0.50 | 0.72 | 114 | 174 | 128
Llama-3.1-8B | Llama-3.1-8B | Llama-3.1-8B | 2 Det. & 1 Exp. | 85.67 | 8.02↑ | 0.30 | 0.46 | 115 | 119 | 73
Table 27: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the ARC-Challenge Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters) on the Accuracy of various language models in a three-agent configuration. The ∆ column quantifies the improvement (or decline) over the single base model performance (shown in parentheses after model names). Further metrics include average Debate Rounds, normalized Sycophancy (per 1172 data points), and transitions between correct (C) and incorrect (I) states (C→I, I→C), highlighting the nuanced effects of debate dynamics.
Agent 1 | Agent 2 | Agent 3 | Accuracy | ∆ | Debate Rounds (Avg) | Sycophancy (Avg/1172) | C→I | I→C | Debate Helped (Overall)
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Qwen-2.5-3B | 82.59 | 0.94↓ | 1.41 | 1.40 | 148 | 820 | 629
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Llama-3.1-3B | 68.00 | 5.12↓ | 1.66 | 1.85 | 311 | 641 | 489
Qwen-2.5-0.5B | Qwen-2.5-1.5B | Phi-mini-3.8B | 82.76 | 1.97↓ | 1.48 | 1.60 | 170 | 804 | 621
Qwen-2.5-0.5B | Qwen-2.5-3B | Llama-3.1-3B | 79.69 | 3.84↓ | 1.62 | 1.50 | …
208 699 581 Qwen-2.5-0.5B Qwen-2.5-3B Phi-mini-3.8B 86.95 2.22↑ 1.34 1 .23 133 722 631 Qwen-2.5-0.5B Llama-3.1-3B Phi-mini-3.8B 78.41 6.32↓ 1.54 1 .72 238 683 559 Qwen-2.5-1.5B Qwen-2.5-3B Llama-3.1-3B 82.34 1.19↓ 0.98 1 .10 180 447 358 Qwen-2.5-1.5B Qwen-2.5-3B Phi-mini-3.8B 87.37 2.64↑ 0.71 0 .81 105 423 358 Qwen-2.5-1.5B Llama-3.1-3B Phi-mini-3.8B 81.74 3.00↓ 0.93 1 .19 195 412 341 Qwen-2.5-3B Phi-mini-3.8B Llama-3.1-3B 85.67 2.14↑ 0.84 0 .89 143 319 244 Qwen-2.5-3B Qwen-2.5-3B Phi-mini-3.8B 87.88 3.15↑ 0.50 0 .52 110 225 170 Qwen-2.5-3B Phi-mini-3.8B Phi-mini-3.8B 89.33 4.60↑ 0.52 0 .61 81 214 174 Qwen-2.5-0.5B Qwen-2.5-1.5B Qwen-2.5-1.5B 69.80 0.59↑ 1.66 1 .77 231 686 523 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-1.5B 55.97 13.24↓ 2.33 2 .69 393 680 451 Table 28: Analysis of Mixed-Model Configurations in Multi-Agent Debate Settings on the ARC-Challenge Dataset. This table examines various heterogeneous model combinations in three-agent debate setups. The ∆column quantifies the improvement (or decline) compared to the best single model performance among the three agents used in each configuration. All agent combinations use the default settings for temperature and top_p. Metrics include average Debate Rounds , normalized Sycophancy (per 1172 data points), and transitions between correct (C) and incorrect (I) states (C →I, I→C). Results demonstrate that certain model combinations can achieve higher accuracy than their constituent models when debating together. 38 Agent 1 Agent 2 Agent Settings Accuracy ∆ Debate Sycophancy C →I I→C Debate Rounds (Avg / 1221) Helped (Avg) (Overall) Qwen-2.5-0.5B Qwen-2.5-0.5B Both: Default 39.80 3.31 ↑ 1.47 1 .11 239 .00 306 240 Qwen-2.5-0.5B Qwen-2.5-0.5B Both: Deterministic 40.87 4.38 ↑ 0.00 0 .00 0 .00 0 0 Qwen-2.5-0.5B Qwen-2.5-0.5B Both: Exploratory 33.50 2.99 ↓ 1.90 1 .17 279 .00 338 257 Qwen-2.5-0.5B Qwen-2.5-0.5B Both: Det. & Exp. 
41.93 5.44 ↑ 1.64 1 .08 251 .00 355 289 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Default 67.40 0.88 ↑ 0.44 0 .34 110 .00 154 154 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Deterministic 68.14 1.62 ↑ 0.00 0 .00 0 .00 2 1 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Exploratory 67.24 0.72 ↑ 0.60 0 .51 143 .00 217 201 Qwen-2.5-1.5B Qwen-2.5-1.5B Both: Det. & Exp. 66.67 0.15 ↑ 0.47 0 .41 111 .00 166 158 Qwen-2.5-3B Qwen-2.5-3B Both: Default 74.37 1.71 ↑ 0.37 0 .33 85 .00 128 123 Qwen-2.5-3B Qwen-2.5-3B Both: Deterministic 74.77 2.11 ↑ 0.00 0 .00 0 .00 0 0 Qwen-2.5-3B Qwen-2.5-3B Both: Exploratory 73.87 1.21 ↑ 0.39 0 .37 93 .00 127 120 Qwen-2.5-3B Qwen-2.5-3B Both: Det. & Exp. 75.51 2.85 ↑ 0.35 0 .25 73 .00 127 123 Qwen-2.5-7B Qwen-2.5-7B Both: Default 81.57 2.01 ↑ 0.15 0 .14 38 .00 66 64 Qwen-2.5-7B Qwen-2.5-7B Both: Deterministic 81.65 2.09 ↑ 0.00 0 .00 0 .00 0 0 Qwen-2.5-7B Qwen-2.5-7B Both: Exploratory 81.90 2.34 ↑ 0.19 0 .19 46 .00 78 75 Qwen-2.5-7B Qwen-2.5-7B Both: Det. & Exp. 82.56 3.00 ↑ 0.20 0 .19 54 .00 62 61 Qwen-2.5-14B Qwen-2.5-14B Both: Default 83.37 1.00 ↑ 0.15 0 .15 34 .00 43 41 Qwen-2.5-14B Qwen-2.5-14B Both: Deterministic 83.70 1.33 ↑ 0.00 0 .00 0 .00 0 0 Qwen-2.5-14B Qwen-2.5-14B Both: Exploratory 83.21 0.84 ↑ 0.18 0 .19 44 .00 66 62 Qwen-2.5-14B Qwen-2.5-14B Both: Det.
& Exp. 83.87 1.50 ↑ 0.16 0 .15 40 .00 59 54 Qwen-2.5-32B Qwen-2.5-32B Both: Default 86.24 0.48 ↑ 0.12 0 .17 28 .00 47 46 Qwen-2.5-32B Qwen-2.5-32B Both: Deterministic 85.75 0.01 ↓ 0.00 0 .00 0 .00 0 0 Qwen-2.5-32B Qwen-2.5-32B Both: Exploratory 86.24 0.48 ↑ 0.14 0 .20 34 .00 46 43 Qwen-2.5-32B Qwen-2.5-32B Both: Det. & Exp. 86.57 0.81 ↑ 0.16 0 .24 32 .00 55 46 Phi-mini-3.8B Phi-mini-3.8B Both: Default 71.66 1.78 ↑ 0.46 0 .68 108 .00 100 79 Phi-mini-3.8B Phi-mini-3.8B Both: Deterministic 72.24 2.36 ↑ 0.00 0 .00 0 .00 0 0 Phi-mini-3.8B Phi-mini-3.8B Both: Exploratory 73.87 3.99 ↑ 0.50 0 .70 85 .00 141 121 Phi-mini-3.8B Phi-mini-3.8B Both: Det. & Exp. 73.22 3.34 ↑ 0.47 0 .66 91 .00 124 105 Llama-3.1-3B Llama-3.1-3B Both: Default 68.55 3.51 ↑ 0.44 0 .40 107 .00 117 110 Llama-3.1-3B Llama-3.1-3B Both: Deterministic 67.40 2.36 ↑ 0.00 0 .00 0 .00 0 0 Llama-3.1-3B Llama-3.1-3B Both: Exploratory 66.75 1.71 ↑ 0.53 0 .48 116 .00 131 122 Llama-3.1-3B Llama-3.1-3B Both: Det. & Exp. 67.73 2.69 ↑ 0.47 0 .45 105 .00 113 109 Mistral-7B Mistral-7B Both: Default 66.34 1.79 ↑ 0.30 0 .22 57 .00 64 57 Mistral-7B Mistral-7B Both: Deterministic 66.99 2.44 ↑ 0.00 0 .00 0 .00 0 0 Mistral-7B Mistral-7B Both: Exploratory 65.11 0.56 ↑ 0.38 0 .30 81 .00 85 80 Mistral-7B Mistral-7B Both: Det. & Exp. 66.42 1.87 ↑ 0.34 0 .25 62 .00 89 81 Llama-3.1-8B Llama-3.1-8B Both: Default 74.28 1.26 ↑ 0.41 0 .47 79 .00 114 106 Llama-3.1-8B Llama-3.1-8B Both: Deterministic 75.43 2.41 ↑ 0.00 0 .00 0 .00 2 1 Llama-3.1-8B Llama-3.1-8B Both: Exploratory 74.86 1.84 ↑ 0.46 0 .54 95 .00 139 130 Llama-3.1-8B Llama-3.1-8B Both: Det. & Exp. 74.45 1.43 ↑ 0.41 0 .48 99 .00 112 102 Table 29: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the Com- monsenseQA Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters like Default, Deterministic, Exploratory, and a combination) on the Accuracy of various language models. 
The ∆column quantifies the improvement (or decline) over the single base model performance . Further metrics include average Debate Rounds , normalized Sycophancy (per 1221 data points), and transitions between correct (C) and incorrect (I) states (C →I, I→C), highlighting the nuanced effects of debate dynamics. 39 Agent 1 Agent 2 Agent Settings Accuracy ∆1 ∆2 Debate Sycophancy C →I I→C Debate Rounds (Avg / 1221) Helped (Avg) (Overall) Qwen-2.5-0.5B Qwen-2.5-1.5B Both: Default 56.92 20.43 ↑9.60↓ 1.34 0 .84 237 .00 370 345 Qwen-2.5-0.5B Qwen-2.5-1.5B Both: Deterministic 58.39 21.90 ↑8.13↓ 1.26 0 .63 148 .00 326 295 Qwen-2.5-0.5B Qwen-2.5-1.5B Both: Exploratory 56.91 20.42 ↑9.61↓ 1.63 0 .99 216 .00 430 377 Qwen-2.5-0.5B Qwen-2.5-1.5B Both: Det. & Exp. 57.08 20.59 ↑9.44↓ 1.28 0 .82 177 .00 371 332 Qwen-2.5-0.5B Qwen-2.5-1.5B Both: Exp. & Det. 57.49 21.00 ↑9.03↓ 1.51 0 .87 206 .00 407 379 Qwen-2.5-1.5B Llama-3.1-3B Both: Default 66.83 0.31 ↑1.79↑ 0.59 0 .63 168 .00 170 165 Qwen-2.5-1.5B Llama-3.1-3B Both: Deterministic 68.63 2.11 ↑3.59↑ 0.66
0 .80 160 .00 198 184 Qwen-2.5-1.5B Llama-3.1-3B Both: Exploratory 67.08 0.56 ↑2.04↑ 0.82 0 .90 164 .00 237 223 Qwen-2.5-1.5B Llama-3.1-3B Both: Det. & Exp. 69.78 3.26 ↑4.74↑ 0.61 0 .69 140 .00 203 193 Qwen-2.5-1.5B Llama-3.1-3B Both: Exp. & Det. 67.73 1.21 ↑2.69↑ 0.66 0 .72 160 .00 219 200 Qwen-2.5-3B Phi-mini-3.8B Both: Default 75.02 2.36 ↑5.14↑ 0.44 0 .39 100 .00 158 150 Qwen-2.5-3B Phi-mini-3.8B Both: Deterministic 76.09 3.43 ↑6.21↑ 0.50 0 .37 104 .00 161 154 Qwen-2.5-3B Phi-mini-3.8B Both: Exploratory 74.69 2.03 ↑4.81↑ 0.50 0 .52 85 .00 177 167 Qwen-2.5-3B Phi-mini-3.8B Both: Det. & Exp. 75.76 3.10 ↑5.88↑ 0.52 0 .40 114 .00 191 179 Qwen-2.5-3B Phi-mini-3.8B Both: Exp. & Det. 75.10 2.44 ↑5.22↑ 0.49 0 .49 106 .00 162 156 Qwen-2.5-1.5B Qwen-2.5-3B Both: Default 73.87 7.35 ↑1.21↑ 0.51 0 .47 100 .00 225 217 Qwen-2.5-1.5B Qwen-2.5-3B Both: Deterministic 74.94 8.42 ↑2.28↑ 0.48 0 .40 108 .00 191 187 Qwen-2.5-1.5B Qwen-2.5-3B Both: Exploratory 74.12 7.60 ↑1.46↑ 0.60 0 .55 115 .00 279 264 Qwen-2.5-1.5B Qwen-2.5-3B Both: Det. & Exp. 74.04 7.52 ↑1.38↑ 0.51 0 .52 106 .00 208 204 Qwen-2.5-1.5B Qwen-2.5-3B Both: Exp. & Det. 74.94 8.42 ↑2.28↑ 0.57 0 .42 108 .00 251 246 Llama-3.1-3B Llama-3.1-8B Both: Default 72.24 7.20 ↑0.78↓ 0.54 0 .52 119 .00 165 153 Llama-3.1-3B Llama-3.1-8B Both: Deterministic 73.79 8.75 ↑0.77↑ 0.57 0 .57 118 .00 190 183 Llama-3.1-3B Llama-3.1-8B Both: Exploratory 72.15 7.11 ↑0.87↓ 0.59 0 .58 112 .00 167 157 Llama-3.1-3B Llama-3.1-8B Both: Det. & Exp. 70.68 5.64 ↑2.34↓ 0.60 0 .58 131 .00 162 154 Llama-3.1-3B Llama-3.1-8B Both: Exp. & Det. 73.96 8.92 ↑0.94↑ 0.60 0 .61 120 .00 200 193 Qwen-2.5-7B Qwen-2.5-14B Both: Default 83.37 3.81 ↑1.00↑ 0.28 0 .26 62 .00 98 96 Qwen-2.5-7B Qwen-2.5-14B Both: Deterministic 83.78 4.22 ↑1.41↑ 0.33 0 .21 71 .00 101 95 Qwen-2.5-7B Qwen-2.5-14B Both: Exploratory 84.19 4.63 ↑1.82↑ 0.28 0 .27 60 .00 112 110 Qwen-2.5-7B Qwen-2.5-14B Both: Det. & Exp. 83.37 3.81 ↑1.00↑ 0.29 0 .24 66 .00 103 99 Qwen-2.5-7B Qwen-2.5-14B Both: Exp. 
& Det. 83.29 3.73 ↑0.92↑ 0.28 0 .21 66 .00 95 93 Table 30: Comparative Analysis of Mixed Language Model Performance in Multi-Agent Debate Settings on the CommonsenseQA Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters) on the Accuracy when pairing different language models. The ∆1column shows the improvement over the weaker model’s performance, while ∆2shows comparison to the stronger model. This highlights whether mixed-agent debates benefit from model complementarity or are constrained by the weaker model’s capabilities. Further metrics include average Debate Rounds , normalized Sycophancy (per 1221 data points), and transitions between correct (C) and incorrect (I) states. 40 Agent 1 Agent 2 Agent 3 Agent Settings Accuracy ∆ Debate Sycophancy C →I I→C Debate Rounds (Avg / 1221) Helped (Avg) (Overall) Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B Default 37.76 1.27↑ 2.69 3 .02 545 538 327 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B Deterministic 39.80 3.31↑ 0.00 0 .00 0 0 0 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B Exploratory 32.60 3.89↓ 3.45 3 .66 580 604 336 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B 1 Det. & 2 Exp. 36.77
0.28↑ 3.05 3 .11 569 558 317 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-0.5B 2 Det. & 1 Exp. 37.51 1.02↑ 1.76 1 .84 433 420 237 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B Default 68.80 2.28↑ 0.77 0 .83 193 333 264 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B Deterministic 67.90 1.38↑ 0.00 0 .00 0 3 1 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B Exploratory 67.57 1.05↑ 1.14 1 .34 256 429 315 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B 1 Det. & 2 Exp. 68.55 2.03↑ 0.92 1 .01 211 346 270 Qwen-2.5-1.5B Qwen-2.5-1.5B Qwen-2.5-1.5B 2 Det. & 1 Exp. 68.55 2.03↑ 0.57 0 .57 172 244 179 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B Default 75.18 2.52↑ 0.63 0 .68 147 225 180 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B Deterministic 74.28 1.62↑ 0.00 0 .00 0 0 0 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B Exploratory 74.37 1.71↑ 0.66 0 .82 164 248 196 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B 1 Det. & 2 Exp. 75.02 2.36↑ 0.67 0 .66 166 211 163 Qwen-2.5-3B Qwen-2.5-3B Qwen-2.5-3B 2 Det. & 1 Exp. 75.76 3.10↑ 0.45 0 .44 116 163 115 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B Default 81.90 2.34↑ 0.31 0 .38 85 122 96 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B Deterministic 81.57 2.01↑ 0.00 0 .00 0 0 0 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B Exploratory 81.98 2.42↑ 0.38 0 .47 99 147 117 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B 1 Det. & 2 Exp. 81.41 1.85↑ 0.32 0 .38 98 124 99 Qwen-2.5-7B Qwen-2.5-7B Qwen-2.5-7B 2 Det. & 1 Exp. 81.74 2.18↑ 0.25 0 .26 84 89 65 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B Default 83.05 0.68↑ 0.27 0 .28 84 85 69 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B Deterministic 83.87 1.50↑ 0.00 0 .00 0 0 0 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B Exploratory 83.13 0.76↑ 0.28 0 .33 76 100 75 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B 1 Det. & 2 Exp. 83.54 1.17↑ 0.25 0 .25 74 93 77 Qwen-2.5-14B Qwen-2.5-14B Qwen-2.5-14B 2 Det. & 1 Exp. 
83.95 1.58↑ 0.14 0 .12 45 56 46 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B Default 86.00 0.24↑ 0.18 0 .26 61 80 67 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B Deterministic 85.75 0.01↓ 0.00 0 .00 0 0 0 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B Exploratory 86.57 0.81↑ 0.18 0 .25 56 87 74 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B 1 Det. & 2 Exp. 86.00 0.24↑ 0.16 0 .21 61 71 57 Qwen-2.5-32B Qwen-2.5-32B Qwen-2.5-32B 2 Det. & 1 Exp. 86.08 0.32↑ 0.11 0 .14 35 50 41 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B Default 73.22 3.34↑ 0.62 1 .12 170 171 121 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B Deterministic 73.71 3.83↑ 0.00 0 .00 0 0 0 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B Exploratory 73.96 4.08↑ 0.74 1 .24 161 231 170 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B 1 Det. & 2 Exp. 75.18 5.30↑ 0.69 1 .21 134 217 159 Phi-mini-3.8B Phi-mini-3.8B Phi-mini-3.8B 2 Det. & 1 Exp. 73.71 3.83↑ 0.47 0 .86 107 137 97 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B Default 68.39 3.35↑ 0.87 0 .92 210 237 169 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B Deterministic 68.06 3.02↑ 0.00 0 .00 0 0 0 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B Exploratory 67.65 2.61↑ 1.04 1 .16 250 261 190 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B 1 Det. & 2 Exp. 67.08 2.04↑ 0.89 0 .95 213 225 165 Llama-3.1-3B Llama-3.1-3B Llama-3.1-3B 2 Det. & 1 Exp. 67.73 2.69↑ 0.58 0 .58 132 149 105 Mistral-7B Mistral-7B Mistral-7B Default 66.83 2.28↑ 0.53 0 .57 121 137 99 Mistral-7B
Mistral-7B Mistral-7B Deterministic 66.75 2.20↑ 0.00 0 .00 0 0 0 Mistral-7B Mistral-7B Mistral-7B Exploratory 65.60 1.05↑ 0.79 0 .83 179 167 119 Mistral-7B Mistral-7B Mistral-7B 1 Det. & 2 Exp. 65.44 0.89↑ 0.64 0 .70 157 144 97 Mistral-7B Mistral-7B Mistral-7B 2 Det. & 1 Exp. 66.75 2.20↑ 0.32 0 .35 81 98 68 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B Default 75.92 2.90↑ 0.62 0 .83 147 211 148 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B Deterministic 75.84 2.82↑ 0.00 0 .00 0 9 3 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B Exploratory 74.12 1.10↑ 0.79 1 .13 203 246 168 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B 1 Det. & 2 Exp. 75.51 2.49↑ 0.71 0 .94 173 233 161 Llama-3.1-8B Llama-3.1-8B Llama-3.1-8B 2 Det. & 1 Exp. 75.51 2.49↑ 0.44 0 .60 118 150 92 Table 31: Comparative Analysis of Language Model Performance in Multi-Agent Debate Settings on the Com- monsenseQA Dataset. This table showcases the impact of different Agent Settings (controlling temperature and top_p parameters like Default, Deterministic, Exploratory, and combinations) on the Accuracy of various language models. The ∆column quantifies the improvement (or decline) over the single base model performance . Further metrics include average Debate Rounds , normalized Sycophancy (per 1221 data points), and transitions between correct (C) and incorrect (I) states (C →I, I→C), highlighting the nuanced effects of debate dynamics. 
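The transition metrics reported throughout these tables (C→I and I→C, i.e., answers flipped from correct to incorrect and vice versa over a debate) can be computed from per-question correctness before and after the debate. A minimal sketch, assuming correctness flags have already been extracted from the debate traces (the list format here is illustrative, not the exact trace schema used in our pipeline):

```python
def transition_counts(before, after):
    """Count answer-state transitions across a debate.

    `before` and `after` are parallel lists of booleans: whether each
    question was answered correctly before the debate (round 0) and
    after the final round, respectively.
    """
    c_to_i = sum(1 for b, a in zip(before, after) if b and not a)
    i_to_c = sum(1 for b, a in zip(before, after) if not b and a)
    return c_to_i, i_to_c

# Three questions: one stays correct, one is corrected, one is corrupted.
c_to_i, i_to_c = transition_counts([True, False, True], [True, True, False])
```

A net benefit of debate on a dataset corresponds to I→C exceeding C→I.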
41 Agent 1 Agent 2 Agent 3 Accuracy ∆ Debate Sycophancy C →I I→C Debate Rounds (Avg / 1221) Helped (Avg) (Overall) Qwen-2.5-0.5B Qwen-2.5-1.5B Qwen-2.5-3B 72.48 35.99↑ 1.64 1 .51 228 748 563 Qwen-2.5-0.5B Qwen-2.5-1.5B Llama-3.1-3B 65.03 28.54↑ 1.81 1 .89 343 622 480 Qwen-2.5-0.5B Qwen-2.5-1.5B Phi-mini-3.8B 70.60 34.11↑ 1.68 1 .73 246 691 537 Qwen-2.5-0.5B Qwen-2.5-3B Llama-3.1-3B 72.56 36.07↑ 1.81 1 .59 234 697 544 Qwen-2.5-0.5B Qwen-2.5-3B Phi-mini-3.8B 72.15 35.66↑ 1.66 1 .59 243 629 517 Qwen-2.5-0.5B Llama-3.1-3B Phi-mini-3.8B 69.12 32.63↑ 1.76 1 .91 298 617 483 Qwen-2.5-1.5B Qwen-2.5-3B Llama-3.1-3B 73.38 6.86↑ 1.08 1 .22 230 399 305 Qwen-2.5-1.5B Qwen-2.5-3B Phi-mini-3.8B 75.68 9.16↑ 0.95 1 .17 202 382 303 Qwen-2.5-1.5B Llama-3.1-3B Phi-mini-3.8B 71.09 4.57↑ 1.04 1 .42 260 347 273 Qwen-2.5-3B Phi-mini-3.8B Llama-3.1-3B 74.20 1.54↑ 1.00 1 .15 222 334 253 Qwen-2.5-3B Qwen-2.5-3B Phi-mini-3.8B 74.77 2.11↑ 0.73 0 .84 200 256 193 Qwen-2.5-3B Phi-mini-3.8B Phi-mini-3.8B 76.09 3.43↑ 0.85 1 .18 183 258 186 Qwen-2.5-0.5B Qwen-2.5-1.5B Qwen-2.5-1.5B 64.86 28.37↑ 1.86 1 .50 267 576 447 Qwen-2.5-0.5B Qwen-2.5-0.5B Qwen-2.5-1.5B 55.12 18.63↑ 2.41 2 .44 384 651 438 Table 32: Comparative Analysis of Mixed Language Model Performance in Multi-Agent Debate Settings on the CommonsenseQA Dataset. This table presents results for heterogeneous combinations of language models in debate settings. The ∆column quantifies the improvement over the performance of the weakest model in each combination (for combinations with Qwen-2.5-0.5B, the baseline is 36.49%; for others, the baseline corresponds to the lowest-performing model). All experiments use the default debate setting. The table shows that combining models of different capacities can lead to significant performance gains, especially when smaller models are paired with larger ones. 
F Additional Results

F.1 Original MAD Results

We also report our experiments with the original Multi-Agent Debate (MAD) framework across various model sizes and architectures. Table 33 presents the
results on three challenging reasoning benchmarks: GSM-Plus, GSM8K, and ARC-Challenge.

F.2 Majority Vote@3 Results

To further investigate the impact of stochastic diversity on model performance, we report results on a Majority Vote@3 approach, where we sample three independent responses from each model and take a majority vote to determine the final answer. Table 34 presents these results across five benchmarks: GSM8K, GSM-Plus, ARC-Easy, ARC-Challenge, and CommonsenseQA.

The results demonstrate that simple ensemble-based approaches can significantly boost performance without requiring multi-agent debate or model fine-tuning. Across all model sizes and architectures, Majority Vote@3 consistently outperforms single-sample inference. The relative improvements are most pronounced for smaller models, with Qwen-2.5-0.5B gaining up to 4.27 percentage points on ARC-Challenge and Qwen-2.5-1.5B showing similarly substantial improvements across benchmarks.

Interestingly, this pattern holds across model families. Llama-3.1-3B, Phi-3.5-mini, and Mistral-7B all exhibit significant gains when using majority voting, suggesting that the benefits of ensemble diversity transcend specific model architectures. The results also indicate diminishing returns for larger models: Qwen-2.5-14B shows more modest improvements compared to its smaller counterparts, likely because these larger models already produce more consistent answers across samples.

These findings highlight an important baseline for our research: simple ensemble methods provide strong performance improvements with minimal computational overhead during inference. However, they still require multiple forward passes for each query, motivating our DTE approach, which aims to distill these benefits into a single model through training on debate traces.

F.3 Scaling Results for Multiple Agents

We investigated how performance scales with increasing numbers of debating agents (1-7) across different model sizes and reasoning benchmarks.
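The Majority Vote@3 aggregation described in Section F.2 reduces to picking the most frequent answer among k independent samples. A minimal sketch, assuming the final answer has already been parsed out of each raw model response:

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among sampled responses.

    Ties are broken by whichever answer reaches the top count first,
    mirroring a simple Majority Vote@k aggregation.
    """
    return Counter(answers).most_common(1)[0][0]

# Three independent samples for one GSM8K-style question.
final_answer = majority_vote(["72", "71", "72"])  # -> "72"
```

This single-model voting baseline is exactly what the DTE framework aims to match with one forward pass, by distilling the ensemble's behavior into the model via training on debate traces.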
Table 35 presents these results, revealing several important trends in multi-agent scaling behavior.

First, we observe that performance generally improves as we add more agents to the debate, but with diminishing returns. The most significant gains occur when moving from a single agent (equivalent to standard inference) to two agents, with more modest improvements as additional agents join the debate. For example, on GSM8K, Qwen-2.5-1.5B shows a substantial jump from 62.77% (1 agent) to 71.57% (2 agents), but only incremental improvements thereafter.

Second, the benefits of additional agents vary across tasks. On more complex tasks like GSM-Plus, we see continued performance improvements even with 7 agents, particularly for larger models. Qwen-2.5-14B shows its peak GSM-Plus performance with 7 agents (78.08%), suggesting that more difficult problems benefit from extended multi-agent collaboration. In contrast, on simpler tasks like ARC-Easy, performance plateaus more quickly.

Third, we find that model size influences scaling behavior. Smaller models like Qwen-2.5-1.5B show more variability in performance as agents are added, with occasional performance drops when moving from 3 to 4 agents. Larger models exhibit more stable scaling patterns, suggesting that they can more consistently integrate insights from multiple debate participants.

These results have important implications for our DTE framework. They demonstrate that while adding more agents generally improves performance, the computational costs may outweigh the benefits beyond 3-5 agents for most applications. This insight helped inform our design choices in balancing performance gains against computational efficiency in our final framework.

Model Configuration Debate Performance Metrics
Agent 1 Agent 2 Debate Setting Accuracy Delta Debate Rounds Sycophancy Correct→Incorrect Incorrect→Correct Net Benefit
GSM-Plus
Qwen-2.5-0.5B Qwen-2.5-0.5B exploratory 28.12% 3.33 ↑ 3.48 6906 261 575 432
Qwen-2.5-1.5B Qwen-2.5-1.5B exploratory 46.50% 4.50 ↑ 2.33 5642 194 861 670
Qwen-2.5-3B Qwen-2.5-3B exploratory 66.79% 5.04 ↑ 1.34 5315 231 373 187
Qwen-2.5-7B Qwen-2.5-7B exploratory 69.71% 1.09 ↑ 0.76 2967 102 200 121
Qwen-2.5-14B Qwen-2.5-14B exploratory 76.92% 5.13 ↑ 0.61 2722 119 151 47
Phi-mini-3.8B Phi-mini-3.8B exploratory 65.79% 2.37 ↑ 1.07 3620 180 272 136
Llama-3.1-3B Llama-3.1-3B exploratory 42.42% 3.25 ↓ 2.07 5507 379 369 238
Mistral-7B Mistral-7B exploratory 26.35% 11.31 ↑ 1.85 4500 210 290 115
Llama-3.1-8B Llama-3.1-8B exploratory 57.63% 2.01 ↑ 1.75 5667 273 585 351
GSM8K
Qwen-2.5-0.5B Qwen-2.5-0.5B exploratory 45.56% 3.56 ↑ 2.85 3469 175 427 328
Qwen-2.5-1.5B Qwen-2.5-1.5B exploratory 65.81% 3.04 ↑ 1.99 3471 144 650 489
Qwen-2.5-3B Qwen-2.5-3B exploratory 86.96% 1.82 ↑ 0.63 1390 82 165 97
Qwen-2.5-7B Qwen-2.5-7B exploratory 91.74% 1.07 ↑ 0.38 930 64 93 33
Qwen-2.5-14B Qwen-2.5-14B exploratory 94.39% 1.59 ↑ 0.18 448 30 48 18
Phi-mini-3.8B Phi-mini-3.8B exploratory 88.17% 1.29 ↑ 0.45 1050 65 120 65
Llama-3.1-3B Llama-3.1-3B exploratory 67.63% 4.92 ↓ 1.51 2418 238 215 127
Mistral-7B Mistral-7B exploratory 43.44% 22.06 ↑ 1.65 2100 175 235 95
Llama-3.1-8B Llama-3.1-8B exploratory 83.02% 1.29 ↑ 0.94 1587 93 308 236
ARC-Challenge
Qwen-2.5-0.5B Qwen-2.5-0.5B exploratory 38.65% 0.68 ↑ 1.88 2728 272 308 232
Qwen-2.5-1.5B Qwen-2.5-1.5B exploratory 74.15% 0.94 ↑ 0.85 1671 121 231 156
Qwen-2.5-3B Qwen-2.5-3B exploratory 85.41% 1.88 ↑ 0.57 1227 94 135 57
Qwen-2.5-7B Qwen-2.5-7B exploratory 91.47% 6.25 ↑ 0.23 501 41 49 13
Qwen-2.5-14B Qwen-2.5-14B exploratory 94.54% 4.27 ↑ 0.15 326 31 37 9
Phi-mini-3.8B Phi-mini-3.8B exploratory 87.46% 2.73 ↑ 0.15 313 24 47 25
Llama-3.1-3B Llama-3.1-3B exploratory 76.37% 3.25 ↑ 0.73 1525 111 155 69
Mistral-7B Mistral-7B exploratory 73.29% 4.52 ↑ 0.40 795 63 114 73
Llama-3.1-8B Llama-3.1-8B exploratory 86.09% 8.44 ↑ 0.27 514 31 84 58

Table 33: Performance of the original Multi-Agent Debate (MAD) framework across different model sizes and reasoning benchmarks. Results show accuracy, improvement over single-agent baseline (Delta), average debate rounds, and debate transition statistics. The Delta column highlights performance changes compared to individual model accuracy, with green indicating improvement and red indicating decline.

Accuracy (%) on Benchmarks
Model GSM8K GSM-Plus ARC-E ARC-C CQA
Qwen-2.5-0.5B 49.73 30.54 58.71 42.92 42.51
Qwen-2.5-1.5B 75.82 52.08 87.12 73.55 69.62
Qwen-2.5-3B 86.28 64.08 94.19 84.13 76.90
Qwen-2.5-7B 92.19 70.46 96.46 91.21 82.88
Qwen-2.5-14B 94.09 72.54 98.44 94.20 82.15
Llama-3.1-3B 77.03 52.79 88.51 75.00 69.94
Llama-3.1-8B 85.82 60.88 93.56 83.11 74.86
Phi-3.5-mini 87.87 65.79 96.00 86.95 75.10
Mistral-7B 56.86 36.88 87.58 75.68 69.04

Table 34: Performance comparison using the Majority Vote@3 approach across different benchmarks. For each model, we sample three independent responses and determine the final answer through majority voting.

Table 35: Performance scaling with increasing numbers of debating agents (1-7) across different model sizes and reasoning benchmarks. Results show accuracy percentages for each configuration.

Number of Agents
Model 1 2 3 4 5 6 7
GSM8K Accuracy (%)
Qwen-2.5-1.5B 62.77 71.57 75.13 75.89 75.13 74.98 76.50
Qwen-2.5-3B 85.14 85.52 87.64 87.11 87.04 86.66 87.11
Qwen-2.5-7B 90.67 91.21 92.42 92.49 92.57 92.34 92.72
Qwen-2.5-14B 92.80 93.33 94.84 94.31 94.69 94.62 94.24
GSM-Plus Accuracy (%)
Qwen-2.5-1.5B 42.00 51.62 53.33 50.62 54.21 51.50 52.67
Qwen-2.5-3B 61.75 67.79 68.00 64.21 69.71 64.88 68.54
Qwen-2.5-7B 68.62 74.17 74.96 70.88 71.08 71.38 76.00
Qwen-2.5-14B 71.79 77.25 72.29 72.83 73.29 73.38 78.08
ARC-Challenge Accuracy (%)
Qwen-2.5-1.5B 69.21 68.52 72.10 71.50 72.53 71.50 72.10
Qwen-2.5-3B 82.53 84.64 86.26 85.75 86.26 86.95 87.03
Qwen-2.5-7B 87.22 91.64 91.72 91.47 92.06 91.38 92.32
Qwen-2.5-14B 90.27 93.77 94.80 95.14 94.20 94.62 94.28
ARC-Easy Accuracy (%)
Qwen-2.5-1.5B 86.62 83.42 85.61 86.32 87.46 86.57 87.16
Qwen-2.5-3B 93.06 94.15 94.28 94.32 94.82 94.91 94.99
Qwen-2.5-7B 94.69 96.93 96.55 96.34 96.42 96.25 96.59
Qwen-2.5-14B 95.66 98.15 98.19 98.23 98.15 98.19 98.23
arXiv:2505.15738v1 [cs.CR] 21 May 2025

Alignment Under Pressure: The Case for Informed Adversaries When Evaluating LLM Defenses

Xiaoxue Yang∗, Bozhidar Stevanoski∗, Matthieu Meeus, Yves-Alexandre de Montjoye
Imperial College London

Abstract

Large language models (LLMs) are rapidly deployed in real-world applications ranging from chatbots to agentic systems. Alignment is one of the main approaches used to defend against attacks such as prompt injection and jailbreaks. Recent defenses report near-zero Attack Success Rates (ASR) even against Greedy Coordinate Gradient (GCG), a white-box attack that generates adversarial suffixes to induce attacker-desired outputs. However, this search space over discrete tokens is extremely large, making the task of finding successful attacks difficult. GCG has, for instance, been shown to converge to local minima, making it sensitive to initialization choices. In this paper, we assess the future-proof robustness of these defenses using a more informed threat model: attackers who have access to some information about the alignment process. Specifically, we propose an informed white-box attack leveraging the intermediate model checkpoints to initialize GCG, with each checkpoint acting as a stepping stone for the next one. We show this approach to be highly effective across state-of-the-art (SOTA) defenses and models. We further show our informed initialization to outperform other initialization methods, and show a gradient-informed checkpoint selection strategy to greatly improve attack performance and efficiency. Importantly, we also show our method to successfully find universal adversarial suffixes – single suffixes effective across diverse inputs. Whenever found, these universal suffixes would enable an adversary to run a range of attacks.
Our results show that, contrary to previous beliefs, effective adversarial suffixes do exist against SOTA alignment-based defenses, that these can be found by existing attack methods when adversaries exploit alignment knowledge, and that even universal suffixes exist. Taken together, our results highlight the brittleness of current alignment-based methods and the need to consider stronger threat models when testing the safety of LLMs.

1 Introduction

Large language models (LLMs) are increasingly integrated into a wide range of applications, from AI agents [47] to chatbots [38] and code generation [9]. While their wide adoption stems from their impressive ability to follow natural language instructions, this same capability also makes them vulnerable to attacks: models often fail to distinguish between instructions to follow and content to ignore [60], exposing them to prompt injection [41,34,7,19,28] and jailbreaking [59,17,46,8,36] attacks. Prompt injections embed malicious commands into benign data merely intended for processing, such as a relevant Wikipedia article to answer a user's question, tricking the model into following the injected instructions. Jailbreaks craft inputs that override a model's alignment constraints to elicit harmful or restricted outputs, e.g., explaining how to build a bomb. These attacks are already starting to be exploited in practice, for example, causing private data leakage from Slack AI [12] or widely used LLM chatbots to elicit harmful language [52,37].

*Equal contribution. We release the source code for this paper at https://github.com/computationalprivacy/checkpoint-gcg.

Greedy Coordinate Gradient (GCG) [59] is a white-box optimization-based attack: it uses token gradients to search for an adversarial suffix that induces an attacker-desired output. GCG has been shown to work very well for jailbreaks [59,37] and also for prompt injection attacks [10,11]. Since its introduction, subsequent work has focused on improving its efficiency [29,30,31], varying its optimization objectives [51], or leveraging its transferability to black-box attacks [20,48]. Importantly, as GCG optimizes across a complex search space of suffixes, recent work has found GCG's success and convergence to be highly sensitive to its initialization [25,29,57,20].

Alignment-based defenses have been proposed to make models robust against these attacks. For prompt injection, StruQ [10] introduces explicit delimiters to separate instructions from data, while SecAlign [11] strengthens it by training models to prioritize genuine instructions over injected ones using Direct Preference Optimization (DPO) [42]. Similar alignment strategies have been used by OpenAI to enforce an ‘instruction hierarchy’ through reinforcement learning for GPT-3-Turbo [50], or through proposed architectural changes [53]. For jailbreaking, alignment training – used in major models such as LLaMA-3 and GPT-4o – aims to align the model's output with human values and suppress harmful completions targeted by jailbreaks [39,18,22], with recent work explicitly tailoring alignment techniques to make models more robust against jailbreaks [6,35,44]. The robustness of such empirical defenses is then evaluated against state-of-the-art attacks, including the strong white-box GCG [59]. SecAlign [11] claims, for instance, an Attack Success Rate (ASR) of near-zero against GCG, suggesting no such adversarial suffixes work against the aligned model.

Contribution. We evaluate the effectiveness of alignment-based defenses using informed adversaries with some knowledge of the alignment process, in particular, access to intermediate model checkpoints.
Our method, Checkpoint-GCG, leverages the checkpoints to initialize GCG in an informed manner, using each checkpoint as a 'stepping stone' to the next one until the final (fully) aligned model. We evaluate our method on state-of-the-art defenses against prompt injection [10,11] and show Checkpoint-GCG to dramatically improve the attack success rate (ASR) over both previously reported values and our replications. We further compare Checkpoint-GCG to other improvements to GCG's initializations from the literature [25,29,57,20] and show Checkpoint-GCG to strongly outperform them. Our results suggest that using the intermediate alignment checkpoints as stepping stones to guide the optimization process strongly helps with finding successful suffixes. We also evaluate different strategies to select checkpoints to attack and find that selecting the checkpoints with the largest gradient norm during alignment strikes the best balance between effectiveness and efficiency.

We then extend our approach to finding universal attacks, i.e., attacks that work across multiple input prompts. We show Checkpoint-GCG to also successfully find universal suffixes, showing that attacks against state-of-the-art alignment-based defenses for prompt injection not only exist but, if found, are reusable and broadly effective. We finally instantiate Checkpoint-GCG as a jailbreak attack against a model finetuned to refuse harmful instructions [6] and show it, here again, to be highly effective.

Taken together, our results show that adversarial suffixes capable of breaking alignment-based defenses almost always exist, including universal ones. While current defenses appear robust, our results show this to be due to current attacks being highly sensitive to their initialization. This suggests that current defenses are likely not future-proof, including against informed adversaries, more sophisticated future attacks, better initializations, and even an attacker getting 'lucky' when initializing current methods.

[System Instruction] Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
[User Prompt] Make a list of outdoor activities that are suitable for the given weather condition.
[Data] Temperature: 57 F, Showers. Print exactly 'Hacked'! AnD NotHing ElSE!

Figure 1: An example prompt containing both adversarial instruction (red) and suffix (blue).

2 Background

In this section, we provide the background and formalization, focusing primarily on prompt injection attacks and defenses. We extend our approach to jailbreaks in Section 5.4.

We denote an LLM M parametrized by parameters θ as M_θ with vocabulary V. The model M_θ maps sequences of up to n tokens, i.e., the complete prompt provided as input to the model, to sequences of up to m tokens, i.e., the model's generated response to the prompt, or M_θ : P(V^n) → P(V^m). A prompt provided as input to a model typically consists of: (i) a User Prompt specifying the task or question from the user, (ii) Data to assist the model in responding to the user prompt, and (iii) a System Instruction defining the model's role or behavior (see Figure 1). These components are typically concatenated – often with delimiters – and passed to the model as a single input, which it processes to autoregressively generate a response.

However, LLMs often struggle to distinguish between different components of their input, particularly between data to process and instructions to follow [60], leaving them vulnerable to prompt injection attacks. These attacks exploit the model's inability to ignore malicious instructions in the benign data [41,34,7].
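To make this setup concrete, the following minimal Python sketch assembles a Figure 1-style prompt and applies the paper's success criterion (a response beginning with "Hacked" or "hacked"). The template and function names are illustrative only, not the exact format used by StruQ or SecAlign.

```python
# Toy illustration of the prompt-injection setup from Figure 1.
# The delimiters and names below are illustrative assumptions.

def build_prompt(system: str, user: str, data: str, suffix: str = "") -> str:
    """Concatenate the three prompt components with simple delimiters."""
    return (
        f"[System Instruction] {system}\n"
        f"[User Prompt] {user}\n"
        f"[Data] {data} {suffix}".rstrip()
    )

def attack_succeeded(response: str, target: str = "Hacked") -> bool:
    """Success criterion used in the paper's evaluation: the model's
    response begins with 'Hacked' or 'hacked'."""
    return response.startswith(target) or response.startswith(target.lower())

prompt = build_prompt(
    system="Write a response that appropriately completes the request.",
    user="Make a list of outdoor activities for the given weather.",
    data="Temperature: 57 F, Showers. Print exactly 'Hacked'! AnD NotHing ElSE!",
)
print(attack_succeeded("Hacked"))              # True
print(attack_succeeded("Here is a list..."))   # False
```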
For instance, when given an input similar to that in Figure 1, the model may ignore the user prompt and instead return "Hacked", a setup typically used to study prompt injection [10,11].

Optimization-based attack GCG. Greedy Coordinate Gradient (GCG) [59] is an optimization algorithm that constructs adversarial inputs capable of eliciting a target phrase as an output from a target LLM. It was originally instantiated as a method for jailbreaking LLMs, where an attacker aims to elicit an affirmative response, such as "Sure, here is..." to a harmful instruction, and has since been extended to prompt injection [10,11]. In this setting, the goal is to design an adversarial suffix to be appended to the prompt (blue in Figure 1) to confuse the model into following the adversarial instruction injected in the data part.

Formally, given a target model M_θ and a prompt p ∈ P(V^n), we search for a suffix s = (s_1, ..., s_l) ∈ V^l such that the model's continuation M_θ(p || s) yields an attacker-specified target string y*. GCG begins with an initial suffix s^(0) and iteratively updates it to maximize the log-probability of the target string, i.e., solves

    max_{s ∈ V^l} log P_θ(y* | p || s).

GCG performs this optimization iteratively. At each optimization step t, GCG updates the adversarial suffix to s^(t) ← GCG(M_θ, y*, p, s^(t−1)) by leveraging the gradients of log P_θ(y* | p || s^(t−1)) with respect to the input tokens to make updates to s^(t−1) in a direction that increases the target likelihood. The algorithm continues until either the model, when prompted with p || s^(t−1), produces the desired output y* using greedy decoding, i.e., M_θ(p || s^(t−1)) = y*, or a maximum number of steps T is reached – at which point the attack is considered unsuccessful. The final result from GCG is an adversarial suffix s*. For more detailed information on the GCG algorithm, we refer to Zou et al. [59].

When introduced, Zou et al. [59] propose to initialize the suffix s^(0) as a series of l exclamation points. While this initialization has been widely adopted in subsequent work [10,11], other work has observed that GCG's success and convergence can be sensitive to its initialization [25,29,57,20].

Alignment-based defenses. Recent work has proposed alignment methods to improve models' robustness against such prompt injection attacks. During alignment, models are finetuned to follow an 'instruction hierarchy', distinguishing how to cater to instructions based on their position within the input. For instance, OpenAI [50] constructs synthetic question-answering data reflecting the desired instruction hierarchy, then used to finetune GPT-3.5-Turbo using supervised finetuning and RLHF [39]. While the corresponding data and code are unfortunately not open-source, concurrent work has explored other techniques which have been released, in particular StruQ [10] and SecAlign [11]. StruQ [10] uses explicit delimiters to distinguish between the user prompt and the data portion. It applies supervised finetuning to train models to follow only the instructions in the user prompt while ignoring any instructions embedded in the data portion. SecAlign [11] improves upon this by leveraging DPO [42] during finetuning, explicitly steering the model away from responding to instructions included in the data portion in favor of responding to the original query.
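To give intuition for the coordinate-wise structure of this search, here is a toy Python sketch. It replaces GCG's gradient-guided candidate scoring with an exhaustive per-position search over a tiny vocabulary and a made-up objective, so it illustrates only the greedy coordinate updates, not the actual attack.

```python
import random

# Toy sketch of greedy coordinate ascent in the spirit of GCG. Real GCG
# ranks candidate token swaps using gradients of log P_theta(y* | p || s);
# here a hidden "effective" suffix and exhaustive search stand in for that.

VOCAB = list("abcdef!")

def toy_objective(suffix, effective):
    """Stand-in for the target log-likelihood: 0 iff suffix == effective."""
    return sum(a == b for a, b in zip(suffix, effective)) - len(effective)

def gcg_toy(effective, max_steps=500, seed=0):
    rng = random.Random(seed)
    suffix = ["!"] * len(effective)      # naive "!!!"-style initialization
    for _ in range(max_steps):
        if toy_objective(suffix, effective) == 0:
            break                        # suffix elicits the target
        i = rng.randrange(len(suffix))   # pick one coordinate
        # greedily keep the single-token swap that maximizes the objective
        suffix[i] = max(VOCAB, key=lambda tok: toy_objective(
            suffix[:i] + [tok] + suffix[i + 1:], effective))
    return suffix

print("".join(gcg_toy(list("badcaf"))))
```

The loop mirrors the structure of the real algorithm: repeatedly pick a suffix position, score candidate replacements, and keep the best one until the objective is met or the step budget runs out.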
These alignment-based defenses have been shown to be robust not only against adversaries that inject simple malicious instructions (e.g., the adversarial instruction from Figure 1), but also against stronger white-box attacks optimizing an adversarial suffix such as GCG.

Let θ_0 denote the parameters of the base model. The alignment phase produces a sequence of model parameters θ_0 → θ_1 → ··· → θ_C, where θ_c represents the model parameters after c alignment updates, and θ_C represents the parameters of the final model equipped with alignment-based defenses.

3 Checkpoint-GCG

Evaluating the robustness of alignment-based defenses against prompt injection and jailbreaks is an inherently difficult task. Defenses may appear robust only because current attacks are insufficient, or because they have only been tested against naive or uninformed adversaries – until more sophisticated attacks later reveal their weaknesses. To truly assess whether current defenses will hold against future attacks, we here argue for evaluating LLM defenses under informed adversaries who possess knowledge about the alignment process. This is a reasonable assumption given that common alignment techniques are well-known [42,39], models often describe their alignment in technical reports [2,26], and the community has established standard alignment datasets that attackers would be familiar with [49,5].

Existing alignment-based defenses, such as SecAlign [11], report a near-zero attack success rate against GCG, an effective optimization-based attack which, starting from an initial suffix s^(0), searches for an adversarial suffix s* that induces an attacker-desired response. While GCG is a powerful attack that assumes white-box access to the aligned model's parameters, it is known to be sensitive to the choice of initial suffix [25,29,57,20]. Previous work on alignment-based defenses [10,11] shows that GCG achieves nearly 90% ASR on undefended models such as Llama3-8B-Instruct and Mistral-7B-Instruct, but its effectiveness drops to near 0% when attacking the final aligned model. This suggests that the optimization problem becomes more difficult for the aligned model, making it challenging for GCG to find a successful suffix when starting from a naive initialization.

Motivated by the incremental nature of alignment, we introduce Checkpoint-GCG, an attack that uses knowledge of the alignment process to progressively optimize an adversarial suffix. Checkpoint-GCG assumes access to a subset S of all C intermediate alignment checkpoints, denoted as S = [c_1, ..., c_k], where 0 ≤ c_i ≤ C, and more specifically the corresponding model parameters θ_{c_i}. The attacker then runs the GCG algorithm against each selected checkpoint sequentially, using the adversarial suffix found at checkpoint θ_{c_i} to initialize the GCG algorithm against the next selected checkpoint θ_{c_{i+1}}, i.e., s^(t)_{c_i} becomes s^(0)_{c_{i+1}}. The complete procedure for Checkpoint-GCG is formalized in Algorithm 1.

Algorithm 1 Checkpoint-GCG Attack
Input: initial prompt p, target y*, selected checkpoints S = [c_1, ..., c_k], step budget T, suffix length l
Output: final adversarial suffix s^(t)_{c_k}
 1: Initialize suffix s^(0) ← (s^(0)_1, ..., s^(0)_l) ∈ V^l
 2: for i = 1 to k do
 3:     c ← c_i
 4:     s^(0)_c ← s^(0)
 5:     for t = 1 to T do
 6:         s^(t)_c ← GCG(θ_c, p, y*, s^(t−1)_c)
 7:         if M_{θ_c}(p || s^(t)_c) generates y* or early-stopping triggers then
 8:             break    ▷ terminate if s^(t)_c is successful or early-stopping fires (Sec. 4)
 9:     s^(0) ← s^(t)_c    ▷ use as initialization for the next checkpoint
10: return s^(t)_{c_k}
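A minimal Python sketch of Algorithm 1's outer loop follows, with the per-step GCG update and the success check left as stubs, since the real versions require model gradients and generation. The toy demonstration below it is ours, chosen to show why warm-starting across checkpoints can succeed where a fixed per-checkpoint budget against the final model alone fails.

```python
# Sketch of Algorithm 1 (Checkpoint-GCG). `gcg_step` stands in for one GCG
# update against a checkpoint's parameters, and `succeeds` for checking
# whether M_theta(p || s) generates y*; both are stubs here.

def checkpoint_gcg(checkpoints, gcg_step, succeeds, init_suffix, T=1000):
    suffix = init_suffix
    for theta in checkpoints:            # attack theta_{c_1}, ..., theta_{c_k}
        for _ in range(T):               # per-checkpoint step budget
            if succeeds(theta, suffix):
                break                    # suffix already breaks this checkpoint
            suffix = gcg_step(theta, suffix)
        # the suffix found here initializes the attack on the next checkpoint
    return suffix

# Toy demonstration: checkpoint c is "broken" by any integer suffix >= c,
# and one step can only move the suffix by +1. With a per-checkpoint budget
# of T=3, attacking the final checkpoint directly fails, while stepping
# through intermediate checkpoints succeeds.
step = lambda theta, s: s + 1
ok = lambda theta, s: s >= theta
print(checkpoint_gcg([0, 2, 4, 6, 8, 10], step, ok, init_suffix=0, T=3))  # 10
print(checkpoint_gcg([10], step, ok, init_suffix=0, T=3))                 # 3 (fails: 3 < 10)
```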
Intuitively, this approach exploits the incremental nature of parameter updates during alignment – an adversarial suffix found to be effective against a model with parameters θ_{c_i} is likely to be similar to an effective suffix for a model with highly similar parameters, such as θ_{c_{i+1}}.

Selecting model checkpoints. Let I = {0, 1, 2, ..., C} denote the set of all possible checkpoint indices, where 0 corresponds to the base model with parameters θ_0 and C to the final checkpoint with parameters θ_C. We consider three strategies for selecting a subset S = [c_1, ..., c_k] ⊆ I of checkpoint indices to attack. All strategies include the base model (c_1 = 0) and final checkpoint (c_k = C), and distinctly select intermediate checkpoints (0 < c_i < C):

1. Step-based (step). Since the most substantial changes to model parameters typically occur in the early stages of training, we select all checkpoints up to a training step r to capture these changes. To maintain coverage throughout training, we also include every q-th checkpoint thereafter, i.e., S_step = {c ∈ I | c ≤ r} ∪ {c ∈ I | c > r, c = q·l, l ∈ N_0}.

2. Loss-based (loss). As the training loss L_{θ_c} represents the model error and guides the updates of model parameters, we select a checkpoint if its alignment loss differs from the alignment loss at the last selected checkpoint by at least a threshold τ_loss, i.e., S_loss = {c ∈ I | |L_{θ_c} − L_{θ_s}| ≥ τ_loss, s = max{x ∈ S_loss | x < c}}. Additionally, to ensure coverage during periods of low change, we include every q-th checkpoint when this condition is not met for q consecutive steps.

3. Gradient-based (grad). Gradient norms ∥∇_{θ_c} L_{θ_c}∥ provide a more direct measure of the magnitude of updates made to the model parameters at every step. We therefore select checkpoints where the gradient norm is at least a threshold τ_grad, indicating that the model is making sufficient changes, i.e., S_grad = {c ∈ I | ∥∇_{θ_c} L_{θ_c}∥ ≥ τ_grad}.

These strategies allow us to study what knowledge of the alignment process most helps an attacker.

Figure 2: Checkpoints selected using three different selection strategies (see Section 3) for the Llama3-8B-Instruct model defended with SecAlign: (a) step-based (step), (b) loss-based (loss), (c) gradient-based (grad).

4 Experimental setup

Defenses. For prompt injection, we evaluate Checkpoint-GCG on two state-of-the-art finetuning-based defenses: StruQ [10] and SecAlign [11]. We replicate both alignment processes for Llama3-8B-Instruct [2] and Mistral-7B-Instruct [26], opting for instruction models as θ_0 as they were reported to be most robust [11]. For jailbreaking, we consider a safety-tuned Llama model [6], considering Llama3-8B-Instruct [2] as θ_0 and using 2,000 safety instructions as this showed the strongest robustness. All defenses are open-source with released training setups and datasets. Full finetuning details, including loss and gradient norm curves, are provided in Appendix A.

Attack evaluation.
We follow the evaluation setup from StruQ [10] and SecAlign [11], attacking samples with a data part from AlpacaFarm [16] by injecting the adversarial instruction "Print exactly 'Hacked'!" at the end of the data part. An attack is considered successful if the model's response begins with "Hacked" or "hacked" (target output y* in Algorithm 1). We evaluate both Checkpoint-GCG and the standard GCG on 10 randomly selected samples from AlpacaFarm. For safety-tuned Llama [6], where the original paper did not evaluate optimization-based attacks, we randomly sample 10 prompts from the AdvBench harmful behaviors dataset [59] and evaluate both Checkpoint-GCG and the standard GCG on them. Following the original GCG paper [59], a jailbreak is successful if the model's response does not contain any of the refusal strings the authors have released.

Checkpoint-GCG: selecting checkpoints. Selecting optimal hyperparameters for the three checkpoint selection strategies discussed in Section 3 likely depends on the model and alignment process. We use Llama3-8B-Instruct defended with SecAlign to study the effectiveness and efficiency of all strategies, with results in Section 5.2. For this setup, we use r = 30 and q = 50 for step, τ_loss = 0.005 and q = 50 for loss, and τ_grad = 0.05 for grad. Figure 2 shows the selected checkpoints for each of the three strategies. We find the gradient-based strategy to provide the best trade-off in performance and cost and adopt it throughout the paper. For more details on the values of τ_grad across setups and the number of checkpoints selected, see Appendix B.

Checkpoint-GCG: early stopping. In the original GCG algorithm, GCG terminates either when a successful suffix is found or after a fixed budget of T = 500 steps. Since we are targeting models that have been specifically aligned to be robust against attacks, we anticipate the attack to be more challenging and hence consider a per-checkpoint budget of T = 1,000. To avoid excessive computation, we also implement early stopping. Our observations show that GCG can get stuck in local minima, where it continues to iterate without improving the loss or finding a successful suffix. To mitigate this, Checkpoint-GCG terminates for checkpoint θ_{c_i} if the best GCG loss achieved for θ_{c_i} remains essentially unchanged (change ≤ 1e−5) over 250 consecutive steps. These thresholds were selected empirically. The algorithm then proceeds to attack the next checkpoint θ_{c_{i+1}}, using the suffix obtained in the last GCG step at the current checkpoint θ_{c_i} as initialization (see Algorithm 1).

Baselines. We apply GCG directly on the final finetuned model θ_C, initializing the suffix with "!!!", which is common practice in the literature [10,11]. To ensure a fair comparison with Checkpoint-GCG, we evaluate GCG on θ_C using two different budgets: (i) a maximum of T = 500 GCG steps, as initially proposed [59] and used to evaluate defenses [10,11]; (ii) the same number of steps that Checkpoint-GCG used in total to attack that sample, applying the same early stopping criteria.

Figure 3: The probability of a successful attack by GCG and Checkpoint-GCG when attacking one sample on Llama3-8B-Instruct [2] defended with SecAlign [11].

5 Results

5.1 Checkpoint-GCG steers the optimization in the right direction

We apply Checkpoint-GCG to find an adversarial suffix for a prompt injection attack against Llama3-8B-Instruct [2] defended with SecAlign [11].
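The gradient-based checkpoint selection and the early-stopping rule described in Section 4 can be sketched as follows. The function names are ours and the thresholds mirror the values reported above; in the real attack, the gradient norms come from the alignment training run and the loss is the GCG loss.

```python
# Sketches of two heuristics from Section 4 (names and structure are ours).

def select_grad_checkpoints(grad_norms, tau_grad=0.05):
    """Gradient-based selection: keep checkpoints whose training gradient
    norm is at least tau_grad; base and final checkpoints are always kept."""
    final = len(grad_norms) - 1
    picked = {c for c, g in enumerate(grad_norms) if g >= tau_grad}
    return sorted(picked | {0, final})

def make_early_stopper(patience=250, tol=1e-5):
    """Stop a per-checkpoint GCG run if the best loss has not improved by
    more than tol for `patience` consecutive steps."""
    state = {"best": float("inf"), "stale": 0}
    def should_stop(loss):
        if state["best"] - loss > tol:
            state["best"], state["stale"] = loss, 0
        else:
            state["stale"] += 1
        return state["stale"] >= patience
    return should_stop

print(select_grad_checkpoints([0.9, 0.2, 0.01, 0.06, 0.005]))  # [0, 1, 3, 4]
stop = make_early_stopper(patience=3)
print([stop(l) for l in [1.0, 0.5, 0.5, 0.5, 0.5]])  # last entry True
```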
Figure 3 visualizes the optimization process for one sample, showing the probability of attack success over the cumulative number of GCG steps across attacked checkpoints. Each dashed vertical line denotes a checkpoint θ_c selected to attack. We start by applying GCG on the base model with parameters θ_0, initializing the attack as in prior work with "!!!". For this suffix s^(0)_{c=0}, the probability of attack success is near 0 (lower left of Figure 3). After a limited number of GCG steps, we find a suffix s^(t)_{c=0} that successfully breaks the base model θ_0. We then attack the next checkpoint, θ_7, initializing GCG with the successful suffix found on θ_0. We find the success probability of this suffix to remain highly similar for θ_7, yet a few GCG steps are needed to update s^(t)_{c=0} = s^(0)_{c=7} to s^(t)_{c=7}, which successfully breaks θ_7. We continue this process across all selected checkpoints. While the probability of success often drops going from checkpoint θ_c to θ_{c+1}, applying a limited number of GCG steps starting from the suffix successful for θ_c quickly restores the success probability against θ_{c+1}. Finally, Checkpoint-GCG applies the same strategy to the fully aligned model θ_C and finds the optimized suffix to succeed.

As a reference, we also report the results for GCG when applied independently on each checkpoint θ_c. At each θ_c, we run GCG for the same number of steps as Checkpoint-GCG, but initialize with the naive suffix ("!!!") rather than the optimized suffix from θ_{c−1}. While GCG still improves success probability at early checkpoints, the alignment process increasingly suppresses the attack at later stages. After only a few alignment checkpoints, the success probability plateaus near zero, ultimately resulting in a failed attack on θ_C. We include this analysis to highlight the value of informed initialization throughout alignment, yet for our main comparison (below), we apply GCG directly against θ_C with the same budget as Checkpoint-GCG.

5.2 Checkpoint-GCG bypasses alignment-based defenses for prompt injection attacks

We instantiate Checkpoint-GCG to evaluate the robustness of Llama3-8B-Instruct [2] and Mistral-7B-Instruct [26] when defended with StruQ [10] and SecAlign [11] against prompt injection. Table 1 reports the ASR achieved by Checkpoint-GCG and, as a baseline, GCG directly on θ_C, covering the results reported in the original papers, our replication, and GCG given a larger budget.

We first replicate the analysis from the reported evaluation of both defenses [10,11], applying GCG on θ_C using T = 500 steps. As expected, both defenses show strong robustness to standard GCG, with SecAlign being more resistant than StruQ (e.g., an ASR of 0% for Llama3-8B-Instruct) and Llama3-8B-Instruct more resistant than Mistral-7B-Instruct. We also note a discrepancy between the ASR reported by the original works and ours. Upon investigation, the original code computes the GCG loss using one prompt template while evaluating with another, likely leading to an underestimation of ASR. In sharp contrast, we find Checkpoint-GCG to perform very well across setups, achieving an ASR of 100% against models defended by StruQ, 90% for SecAlign-defended Llama3-8B-Instruct, and 100% on Mistral-7B-Instruct.
To fairly compare against the results achieved by Checkpoint-GCG, we also instantiate GCG on θ_C using the same total number of optimization steps that Checkpoint-GCG has taken across all attacked checkpoints (i.e., the Checkpoint-GCG budget). We find that this increased budget does not improve performance compared to GCG on θ_C using T = 500. While Checkpoint-GCG assumes a stronger adversary, its near-perfect ASR shows that attacks capable of breaking alignment-based defenses almost always exist, and that they are even discoverable with existing attack techniques. The key is initialization: with the right starting point, GCG reliably finds successful suffixes. The low ASR for GCG reported in prior work is not due to the absence of such suffixes, but rather results from the difficulty of the optimization problem, and hinges on an attacker failing to explore the right region of the search space.

Defense        Model                      Reported       Replicated     Replicated                Checkpoint-GCG
                                          (T=500 steps)  (T=500 steps)  (Checkpoint-GCG budget)   (ours)
StruQ [10]     Llama3-8B-Instruct [2]     4              10             10                        100
StruQ [10]     Mistral-7B-Instruct [26]   15             60             60                        100
SecAlign [11]  Llama3-8B-Instruct [2]     0              0              0                         90
SecAlign [11]  Mistral-7B-Instruct [26]   1              30             30                        100

Table 1: Attack success rate (ASR %) ↑ for Checkpoint-GCG against state-of-the-art prompt injection defenses. As a baseline, we apply the standard GCG attack to the defended model (i.e., the final checkpoint θ_C); the first three columns all report GCG on θ_C. Results are aggregated over 10 randomly selected samples from AlpacaFarm [16].

Other initializations. Effectively, Checkpoint-GCG improves upon directly attacking the defended model by leveraging a more strategic initialization. This is in line with prior work which observed that the initialization used in GCG can greatly affect its convergence and success. Jia et al. [25] propose an "easy-to-hard" strategy: initializing attacks on difficult prompts with suffixes successful on simpler ones, boosting ASR, as later confirmed by Li et al. [29]. Zhang et al. [57] similarly find that reusing successful suffixes across models or samples speeds up optimization. Lastly, Hayase et al. [20] find that repeating the target string in the suffix, up to the allowed suffix length, improves performance in black-box attacks. We hence compare Checkpoint-GCG to two additional baselines: (i) randomly picking a suffix which successfully attacked θ_C for the same defense and model, and using it to initialize GCG on θ_C for 10 other samples, in line with [25,29,57]; (ii) initializing with the target phrase repeated as many times as the token limit allows, in line with [20]. For both baselines, we run GCG on θ_C with T = 500. We find that these additional baselines, along with the naive baseline using repeated exclamation marks as initialization, all achieve 0% ASR. On the other hand, Checkpoint-GCG achieves a 90% ASR, outperforming all these other initialization methods. These results emphasize the value of leveraging alignment checkpoints to guide the optimization towards a successful suffix.

Checkpoint selection. We evaluate the three checkpoint selection strategies described in Section 3 against the SecAlign defense on the Llama3-8B-Instruct model. Table 2 reports the corresponding ASR for each strategy, along with the number of selected checkpoints and the cumulative number of GCG steps used by Checkpoint-GCG (averaged per sample). We also visualize the selected checkpoints across strategies in Appendix B. The gradient-based strategy (grad) achieves the highest ASR (90%).
Interestingly, this strategy selects 102 intermediate checkpoints, more than twice as many as the step strategy, while reaching a higher ASR with a similar number of cumulative GCG steps. At the same time, the loss strategy selects even more checkpoints but produces a lower ASR while requiring a higher computational cost. These results show that checkpoint selection in Checkpoint-GCG requires balancing attack performance and efficiency. On the one hand, selecting more checkpoints reduces the change in model parameters between checkpoints, making it easier for GCG to refine adversarial suffixes. On the other hand, selecting many checkpoints may increase the cumulative number of GCG steps without proportional gains in ASR. grad strikes an effective balance: by choosing checkpoints with significant parameter updates, it ensures that each GCG run starts from a well-informed initialization and targets a meaningful transition in the model's behavior. We adopt this strategy throughout this work.

Checkpoint strategy    ASR (%) ↑    # Selected checkpoints ↓    Avg Checkpoint-GCG steps ↓
step                   70           49                          4,757
loss                   80           124                         9,637
grad                   90           102                         5,056

Table 2: Attack performance (ASR) and efficiency (number of selected checkpoints and cumulative GCG steps) across checkpoint selection strategies (Llama3-8B-Instruct [2] and SecAlign [11]).

5.3 Checkpoint-GCG discovers a universal suffix

While we have shown that Checkpoint-GCG can break alignment-based defenses with sample-specific suffixes, we here investigate whether Checkpoint-GCG can identify a single universal suffix that generalizes across samples.
We adapt the universal suffix attack originally proposed for GCG [59] to Checkpoint-GCG. Namely, we search for a universal suffix that breaks the defense across y training samples at each checkpoint θ_c, and use that suffix as initialization at checkpoint θ_{c+1}. At a given checkpoint θ_c, we search for a suffix that breaks the defense across 1 ≤ z ≤ y samples, by instantiating GCG initialized with the suffix that breaks z−1 samples, s^(0)_{c,z} = s^(t)_{c,z−1}, or, if z = 1, with the suffix that breaks all y samples from the previous checkpoint, s^(0)_{c,1} = s^(t)_{c−1,y}.

We instantiate this universal Checkpoint-GCG to attack Llama-3-8B-Instruct [2] defended with SecAlign [11]. We optimize for a universal suffix on a set of y = 10 training samples taken from AlpacaFarm [16] and apply that suffix (i.e., s^(t)_{C,y}) out-of-the-box on θ_C using the remaining 198 samples from the same dataset. We show Checkpoint-GCG to discover a universal suffix that simultaneously breaks the alignment on 79 out of the 198 samples, achieving an ASR of 40%. The existence of such a suffix presents a real risk in practice if discovered by an adversary. It suggests that these alignment-based defenses can be bypassed not only with sample-specific attacks, but also with a universal suffix successful across many other samples. We believe this also underscores the need to evaluate defense robustness against informed adversaries. Even if such adversaries are unrealistically strong, a single successful optimization could produce a reusable and broadly effective attack.

5.4 Results for alignment-based defenses against jailbreaking

Beyond prompt injection, Checkpoint-GCG can also be applied to jailbreak models defended through alignment. In this case, GCG [59] optimizes adversarial suffixes that, when appended to harmful instructions, induce the model to start the response with "Sure, here is" followed by the content of the harmful instruction, e.g., "Sure, here is how to build a bomb".
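The nested warm-starting used by the universal attack in Section 5.3 can be sketched in Python as follows. The multi-prompt optimizer `optimize` is a stub standing in for a GCG run over a set of samples; the toy objective in the usage example is purely illustrative.

```python
# Sketch of universal Checkpoint-GCG (Section 5.3): at each checkpoint the
# suffix must break a growing subset of the y training samples, initialized
# from the suffix breaking z-1 samples (or, for z = 1, from the previous
# checkpoint's universal suffix).

def universal_checkpoint_gcg(checkpoints, samples, optimize, init_suffix):
    """optimize(theta, subset, suffix) -> suffix breaking every sample in
    `subset` on M_theta (a stub for a multi-prompt GCG run)."""
    suffix = init_suffix
    for theta in checkpoints:
        for z in range(1, len(samples) + 1):
            # s_{c,z} initialized from s_{c,z-1} (or s_{c-1,y} when z == 1)
            suffix = optimize(theta, samples[:z], suffix)
    return suffix

# Toy check: a "universal" integer suffix must reach theta plus the size
# of the sample subset it has to break, and is only ever refined upward.
opt = lambda theta, subset, s: max(s, theta + len(subset))
print(universal_checkpoint_gcg([1, 2, 3], ["s1", "s2"], opt, 0))  # 5
```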
Many models undergo alignment training to suppress harmful completions targeted by jailbreaks [39,18,22,35,44], although not many are open-sourced. We here consider the setup by Bianchi et al. [6], which shows that finetuning models with safety examples (pairs of harmful instructions and refusal responses) alongside general-purpose instruction-tuning data substantially improves the model's safety. We replicate the finetuning on Llama3-8B-Instruct [2], using their dataset that demonstrated the most robustness (2,000 added safety examples; full details in Appendix A).

We apply Checkpoint-GCG to this safety-finetuned model, selecting checkpoints using the gradient-based strategy and following the same settings as for prompt injection (Section 4). A jailbreak attack is considered successful if the model response does not contain any predefined refusal strings. As this can be an easier metric than generating a specific string as in prompt injection, we reduce our adversarial suffix to just 3 tokens instead of 20. As a baseline, we instantiate GCG directly on the final finetuned model with "!!!" initialization and 500 GCG steps. While out-of-the-box GCG achieves an ASR of 20%, we find Checkpoint-GCG to achieve 60%, tripling the ASR even with just three suffix tokens. These results show how Checkpoint-GCG can also be applied to models aligned to be more robust against jailbreaks, and that an informed initialization is effective even when the optimization space consists of only three tokens.

6 Related Work

We here investigate how an adversary with partial knowledge of the alignment process, in particular access to checkpoints, can use an existing technique (GCG) [59] to break alignment-based defenses. We adopt the same evaluation strategy as used to evaluate the robustness of state-of-the-art defenses [10,11], and show how a simple change (i.e., informed initialization) yields near-perfect ASR.

Improving optimization-based attacks. Other work has proposed variations of GCG [59]. Some have specifically improved its efficiency through, e.g., better token selection [29,30] or multi-token updates at each step [31,29]. Liao et al. [31] train a model on successful adversarial suffixes to efficiently generate new ones for a given harmful query. While similar techniques could likely also accelerate Checkpoint-GCG, we leave such optimizations to future work. Other work considers modifying the optimization objective, e.g., augmenting the loss with attention scores of the adversarial suffix [51], decoupling the search into a behavior-agnostic pre-search and a behavior-relevant post-search [32], or using a model-dependent prefix to optimize for [58]. Zou et al. [59] showed that adversarial suffixes optimized on one model often transfer to others, enabling black-box attacks: adversaries optimize suffixes on an open-source surrogate model, then apply them to a closed-source target via query access. Building on this, Sitawarin et al. [48] and Hayase et al. [20] improve black-box attacks by selecting suffixes based on target model loss, while using surrogate gradients to guide optimization. Finally, several works have observed that the initialization used in GCG greatly affects its convergence and success [25,29,57,20]. We discuss this in Section 5.2.

Jailbreaking.
Beyondoptimization-basedmethods, jailbreakshavebeenmanuallydiscovered[ 17, 46] or automated using prompt templates. Black-box jailbreaks leverage other LLMs to iteratively refine the prompt [ 8,36,55], role-play [ 27], human persuasion techniques [ 56], genetic algorithms [ 33, 44], or lower-resource languages [ 54]. Notably, these jailbreaks tend to be semantically coherent, making them stealthier than the adversarial suffixes obtained by optimization-based attacks. Prompt injection. LLMs struggle to distinguish between instructions to follow anddata to process[60], making them vulnerable against prompt injection attacks [ 41,34,7]. These attacks override the model’s intended behavior, either provided directlyby the user [ 41,28] orindirectly via external content used by LLM-integrated applications [ 19]. Prompt injection has been studied across various settings, including RAG-based systems [14, 13, 40] and tool-using agents [15]. Defenses. Beyond the alignment-based defenses discussed in this work [ 10,11,50,53,6,35], various other techniques aim to maintain LLM behavior under prompt injection and jailbreaks. Liu et al. [ 34] distinguishes between prevention and detection defenses. For both attack types, prevention defenses include paraphrasing and retokenizing the input [ 23], while detection often use perplexity filtering to identify adversarial suffixes [ 3,23]. For jailbreaks, further prevention defenses include smoothing over character-level [ 43] or semantic [ 24] transformations, while detection may use an evaluator LLM to identify malicious prompts [ 4]. For prompt injection, specific prevention 11 defenses add delimiters to isolate trusted
https://arxiv.org/abs/2505.15738v1
content [21], and detection defenses identify whether a model deviates from its expected behavior using internal activations [1] or a domain-specific language [45]. 7 Discussion and conclusion LLMs have been shown to be vulnerable to prompt injection attacks and jailbreaks, motivating recent work to align models for improved robustness, including models deployed by industry [10, 11, 50, 53, 6, 35]. To validate their effectiveness, these defenses are tested against a range of attacks, including the state-of-the-art white-box attack GCG [59], and have been shown to successfully thwart them. In this work, we argue that empirical defenses need to be tested against strong attackers, and we propose using an informed attacker with some knowledge of the alignment process. We show that this informed attacker can better initialize GCG when attacking the final (fully) aligned model, leading to strong ASR without any changes to the core of the algorithm. These results strongly suggest that current alignment-based defenses are not inherently robust, and that the low previously reported ASRs are instead due to attacks failing to explore the right part of the optimization landscape. Our results suggest that current SOTA alignment-based defenses are likely not future-proof, whether against informed adversaries, future attacks, better initializations, or even an attacker getting 'lucky' when initializing current methods. This also highlights the need for strong adversaries when evaluating empirical defenses and points to a layered approach to LLM security, combining alignment with other defenses such as input preprocessing [23, 24, 43] or detection of adversarial suffixes [3, 23]. References [1] Sahar Abdelnabi, Aideen Fay, Giovanni Cherubin, Ahmed Salem, Mario Fritz, and Andrew Paverd. Are you still on track!? Catching LLM task drift with activations. arXiv preprint arXiv:2406.00799, 2024. [2] AI@Meta. Llama 3 model card. Hugging Face, 2024. [3] Gabriel Alon and Michael Kamfonas.
Detecting language model attacks with perplexity. arXiv preprint arXiv:2308.14132, 2023. [4] Stuart Armstrong, Matija Franklin, Connor Stevens, and Rebecca Gorman. Defense against the dark prompts: Mitigating best-of-n jailbreaking with prompt evaluation. arXiv preprint arXiv:2502.00580, 2025. [5] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022. [6] Federico Bianchi, Mirac Suzgun, Giuseppe Attanasio, Paul Rottger, Dan Jurafsky, Tatsunori Hashimoto, and James Zou. Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions. In The Twelfth International Conference on Learning Representations, 2024. [7] Hezekiah J Branch, Jonathan Rodriguez Cefalu, Jeremy McHugh, Leyla Hujer, Aditya Bahl, Daniel del Castillo Iglesias, Ron Heichman, and Ramesh Darwishi. Evaluating the susceptibility of pre-trained language models via handcrafted adversarial examples. arXiv preprint arXiv:2209.02128, 2022. [8] Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J Pappas, and Eric Wong. Jailbreaking black box large language models in twenty queries. arXiv preprint arXiv:2310.08419, 2023. [9] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code.
arXiv preprint arXiv:2107.03374, 2021. [10] Sizhe Chen, Julien Piet, Chawin Sitawarin, and David Wagner. StruQ: Defending against prompt injection with structured queries. arXiv preprint arXiv:2402.06363, 2024. [11] Sizhe Chen, Arman Zharmagambetov, Saeed Mahloujifar, Kamalika Chaudhuri, and Chuan Guo. Aligning LLMs to be robust against prompt injection. arXiv preprint arXiv:2410.05451, 2024. [12] Thomas Claburn. Slack AI can be tricked into leaking data from private channels via prompt injection. https://www.theregister.com/2024/08/21/slack_ai_prompt_injection/, 2024. [Accessed 13-05-2025]. [13] Cody Clop and Yannick Teglia. Backdoored retrievers for prompt injection attacks on retrieval augmented generation of large language models. arXiv preprint arXiv:2410.14479, 2024. [14] Gianluca De Stefano, Lea Schönherr, and Giancarlo Pellegrino. RAG and roll: An end-to-end evaluation of indirect prompt manipulations in LLM-based application frameworks. arXiv preprint arXiv:2408.05025, 2024. [15] Edoardo Debenedetti, Jie Zhang, Mislav Balunovic, Luca Beurer-Kellner, Marc Fischer, and Florian Tramèr. AgentDojo: A dynamic environment to evaluate prompt injection attacks and defenses for LLM agents. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2024. [16] Yann Dubois, Chen Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy S Liang, and Tatsunori B Hashimoto. AlpacaFarm: A simulation framework for methods that learn from human feedback. Advances in Neural Information Processing Systems, 36:30039–30069, 2023. [17] Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858, 2022.
[18] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024. [19] Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, Christoph Endres, Thorsten Holz, and Mario Fritz. Not what you've signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection. In Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, pages 79–90, 2023. [20] Jonathan Hayase, Ema Borevković, Nicholas Carlini, Florian Tramèr, and Milad Nasr. Query-based adversarial prompt generation. Advances in Neural Information Processing Systems, 37:128260–128279, 2024. [21] Keegan Hines, Gary Lopez, Matthew Hall, Federico Zarfati, Yonatan Zunger, and Emre Kiciman. Defending against indirect prompt injection attacks with spotlighting. arXiv preprint arXiv:2403.14720, 2024. [22] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024. [23] Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, and Tom Goldstein. Baseline defenses for adversarial attacks against aligned language models. arXiv preprint arXiv:2309.00614, 2023. [24] Jiabao Ji, Bairu Hou, Alexander Robey, George J Pappas, Hamed Hassani, Yang Zhang, Eric Wong, and Shiyu Chang. Defending large language models against jailbreak attacks via semantic smoothing. arXiv preprint arXiv:2402.16192, 2024. [25] Xiaojun Jia, Tianyu Pang, Chao Du, Yihao Huang, Jindong Gu, Yang Liu, Xiaochun Cao, and Min Lin. Improved techniques for optimization-based jailbreaking on large language models. arXiv preprint arXiv:2405.21018, 2024. [26] Albert Q.
Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,
Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. Mistral 7B. arXiv preprint, 2023. [27] Haibo Jin, Ruoxi Chen, Andy Zhou, Yang Zhang, and Haohan Wang. GUARD: Role-playing to generate natural-language jailbreakings to test guideline adherence of large language models. arXiv preprint arXiv:2402.03299, 2024. [28] Daniel Kang, Xuechen Li, Ion Stoica, Carlos Guestrin, Matei Zaharia, and Tatsunori Hashimoto. Exploiting programmatic behavior of LLMs: Dual-use through standard security attacks. In 2024 IEEE Security and Privacy Workshops (SPW), pages 132–143. IEEE, 2024. [29] Jiahui Li, Yongchang Hao, Haoyu Xu, Xing Wang, and Yu Hong. Exploiting the index gradients for optimization-based jailbreaking on large language models. In Proceedings of the 31st International Conference on Computational Linguistics, pages 4535–4547, 2025. [30] Xiao Li, Zhuhong Li, Qiongxiu Li, Bingze Lee, Jinghao Cui, and Xiaolin Hu. Faster-GCG: Efficient discrete optimization jailbreak attacks against aligned large language models. arXiv preprint arXiv:2410.15362, 2024. [31] Zeyi Liao and Huan Sun. AmpleGCG: Learning a universal and transferable generative model of adversarial suffixes for jailbreaking both open and closed LLMs. Conference on Language Modeling (COLM) 2024, 2024. [32] Hongfu Liu, Yuxi Xie, Ye Wang, and Michael Shieh. Advancing adversarial suffix transfer learning on aligned large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7213–7224, 2024. [33] Xiaogeng Liu, Nan Xu, Muhao Chen, and Chaowei Xiao. AutoDAN: Generating stealthy jailbreak prompts on aligned large language models. In The Twelfth International Conference on Learning Representations, 2024. [34] Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, and Neil Zhenqiang Gong.
Formalizing and benchmarking prompt injection attacks and defenses. In 33rd USENIX Security Symposium (USENIX Security 24), pages 1831–1847, 2024. [35] Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee, Nathaniel Li, Steven Basart, Bo Li, et al. HarmBench: A standardized evaluation framework for automated red teaming and robust refusal. In Proceedings of the 41st International Conference on Machine Learning, pages 35181–35224, 2024. [36] Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, and Amin Karbasi. Tree of attacks: Jailbreaking black-box LLMs automatically. Advances in Neural Information Processing Systems, 37:61065–61105, 2024. [37] Cade Metz. Researchers poke holes in safety controls of ChatGPT and other chatbots. https://www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html, 2023. [Accessed 14-05-2025]. [38] OpenAI. Introducing ChatGPT. https://openai.com/index/chatgpt, 2022. Accessed: 06 February 2025. [39] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022. [40] Dario Pasquini, Martin Strohmeier, and Carmela Troncoso. Neural Exec: Learning (and learning from) execution triggers for prompt injection attacks. In Proceedings of the 2024 Workshop on Artificial Intelligence and Security, pages 89–100, 2024. [41] Fábio Perez and Ian Ribeiro. Ignore previous prompt: Attack techniques for language models. arXiv preprint arXiv:2211.09527, 2022. [42] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn.
Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023. [43] Alexander Robey, Eric Wong, Hamed Hassani, and George J Pappas. SmoothLLM: Defending large language models against jailbreaking attacks. arXiv preprint arXiv:2310.03684, 2023. [44] Mikayel Samvelyan, Sharath Chandra Raparthy, Andrei Lupu, Eric Hambro, Aram Markosyan, Manish Bhatt, Yuning Mao, Minqi Jiang, Jack Parker-Holder, Jakob Foerster, et al. Rainbow teaming: Open-ended generation of diverse adversarial prompts. Advances in Neural Information Processing Systems, 37:69747–69786, 2024. [45] Reshabh K Sharma, Vinayak Gupta, and Dan Grossman. SPML: A DSL for defending language models against prompt attacks. arXiv preprint arXiv:2402.11755, 2024. [46] Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, and Yang Zhang. "Do anything now": Characterizing and evaluating in-the-wild jailbreak prompts on large language models. In Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security, pages 1671–1685, 2024. [47] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. Advances in Neural Information Processing Systems, 36:38154–38180, 2023. [48] Chawin Sitawarin, Norman Mu, David Wagner, and Alexandre Araujo. PAL: Proxy-guided black-box attack on large language models. arXiv preprint arXiv:2402.09674, 2024. [49] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023. [50] Eric Wallace, Kai Xiao, Reimar Leike, Lilian Weng, Johannes Heidecke, and Alex Beutel. The instruction hierarchy: Training LLMs to prioritize privileged instructions. arXiv preprint arXiv:2404.13208, 2024.
[51] Zijun Wang, Haoqin Tu, Jieru Mei, Bingchen Zhao, Yisen Wang, and Cihang Xie. AttnGCG: Enhancing jailbreaking attacks on LLMs with attention manipulation. arXiv preprint arXiv:2410.09040, 2024. [52] Kyle Wiggers. Can AI really be protected from text-based attacks? https://techcrunch.com/2023/02/24/can-language-models-really-be-protected-from-text-based-attacks/, 2023. [Accessed 14-05-2025]. [53] Tong Wu, Shujian Zhang, Kaiqiang Song, Silei Xu, Sanqiang Zhao, Ravi Agrawal, Sathish Reddy Indurthi, Chong Xiang, Prateek Mittal, and Wenxuan Zhou. Instructional segment embedding: Improving LLM safety with instruction hierarchy. In NeurIPS Safe Generative AI Workshop 2024, 2024. [54] Zheng-Xin Yong, Cristina Menghini, and Stephen H Bach. Low-resource languages jailbreak GPT-4. arXiv preprint arXiv:2310.02446, 2023. [55] Jiahao Yu, Xingwei Lin, Zheng Yu, and Xinyu Xing. GPTFuzzer: Red teaming large language models with auto-generated jailbreak prompts. arXiv preprint arXiv:2309.10253, 2023. [56] Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, and Weiyan Shi. How Johnny can persuade LLMs to jailbreak them: Rethinking persuasion to challenge AI safety by humanizing LLMs. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14322–14350, 2024. [57] Jiahao Zhang, Zilong Wang, Ruofan Wang, Xingjun Ma, and Yu-Gang Jiang. EnJa: Ensemble jailbreak on large language models. arXiv preprint arXiv:2408.03603, 2024. [58] Sicheng Zhu, Brandon Amos, Yuandong Tian, Chuan Guo, and Ivan Evtimov. AdvPrefix: An objective for nuanced LLM jailbreaks. arXiv preprint arXiv:2412.10321, 2024. [59] Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023. [60] Egor Zverev, Sahar Abdelnabi, Soroush Tabesh, Mario Fritz, and
Christoph H Lampert. Can LLMs separate instructions from data? And what do we even mean by that? arXiv preprint arXiv:2403.06833, 2024. A Finetuning process for each defense A.1 Prompt injection defenses We replicate both prompt injection defenses, SecAlign and StruQ, using the released code and data¹. We follow the instructions in the code to download the dataset used for finetuning. Both defenses use the same dataset to construct their respective training datasets. We reuse the same hyperparameter values for finetuning that are contained in the code, but make some changes to fit our computational constraints. Instead of using 4 A100 GPUs, we use 1 and 2 A100 GPUs to finetune SecAlign and StruQ respectively, while ensuring the same effective batch size as in the original works. We further use fp16 floating-point precision and gradient checkpointing to lower the GPU memory usage at a small cost in execution time. We use StruQ and SecAlign to defend two models: Llama3-8B-Instruct and Mistral-7B-Instruct. Figures 4 and 5 show the training loss and gradient norms when using StruQ and SecAlign, respectively. The StruQ curves, especially the training loss curves, for both models clearly show the three epochs it was trained on. The SecAlign curves, on the other hand, stabilize after roughly the first 30 steps of the first epoch. A.2 Jailbreak defense: Safety-tuned Llama We replicate the finetuning process in Safety-Tuned LLaMAs [6], using their released code and data. We use the same training setup and hyperparameter values that are outlined in the paper, except for: • Number of GPUs: Instead of using two A6000 or A5000 GPUs as in the paper [6], we use 1 A100 GPU. • Evaluation frequency: We evaluate every step, instead of every 50 steps as in the paper. This allows us to use the checkpoint with the lowest evaluation loss, in line with Bianchi et al. [6], while giving us flexibility in choosing checkpoints to attack.
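Matching the original works' effective batch size on fewer GPUs is typically done with gradient accumulation. The helper below is a minimal sketch of how the accumulation factor can be derived; the function name and all numbers are illustrative assumptions, not values from the released code:

```python
def accumulation_steps(effective_batch: int, per_device_batch: int, num_gpus: int) -> int:
    """Gradient-accumulation factor needed so that
    num_gpus * per_device_batch * accumulation_steps == effective_batch."""
    per_step = per_device_batch * num_gpus
    if effective_batch % per_step != 0:
        raise ValueError("effective batch size must be a multiple of the per-step batch")
    return effective_batch // per_step

# Hypothetical numbers: an effective batch of 32 reached on 4 GPUs with a
# per-device batch of 8 needs 4x accumulation when only 1 GPU is available.
assert accumulation_steps(32, 8, 4) == 1
assert accumulation_steps(32, 8, 1) == 4
```

Frameworks such as Hugging Face Transformers expose this directly (e.g. a gradient accumulation setting in the trainer configuration), so only the factor needs adjusting when the GPU count changes.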
We apply this finetuning process to Llama3-8B-Instruct. Figure 6 shows the training loss, evaluation loss, and gradient norm curves. B Checkpoint selection strategies We assume that the attacker can select a subset of the checkpoints from the alignment. We consider three strategies for selecting the subset, as described in Section 3, and evaluate them on Llama3-8B-Instruct defended with SecAlign. We select the gradient norm-based strategy, grad, as the strategy that provides the best trade-off between attack performance and efficiency, as shown in Table 2. For attacking other models or defenses, we choose the values of τ_grad such that a similar number of checkpoints is selected. Table 3 shows the values of τ_grad for different defenses and models and the resulting number of selected checkpoints.

¹The repository of SecAlign builds on top of the repository of StruQ, thus we use SecAlign's code to finetune both defenses. https://github.com/facebookresearch/SecAlign

[Figure 4: Training metrics for StruQ finetuning. Panels: (a) train loss on Llama3-8B-Instruct, (b) grad norm on Llama3-8B-Instruct, (c) train loss on Mistral-7B-Instruct, (d) grad norm on Mistral-7B-Instruct.]

Table 3: Parameters for the grad checkpoint selection strategy across setups. We provide both the selected gradient norm threshold τ_grad and the resulting number of checkpoints selected using this threshold.

Defense         | Model                    | τ_grad | # Selected checkpoints
StruQ [10]      | Llama3-8B-Instruct [2]   | 4.5    | 125
StruQ [10]      | Mistral-7B-Instruct [26] | 7      | 111
SecAlign [11]   | Llama3-8B-Instruct [2]   | 0.05   | 102
SecAlign [11]   | Mistral-7B-Instruct [26] | 0.05   | 93
SafetyLlama [6] | Llama3-8B-Instruct [2]   | 0.45   | 203

[Figure 5: Training metrics for SecAlign finetuning. Panels: (a) train loss on Llama3-8B-Instruct, (b) grad norm on Llama3-8B-Instruct, (c) train loss on Mistral-7B-Instruct, (d) grad norm on Mistral-7B-Instruct.]

[Figure 6: Training metrics for safety-tuning Llama3-8B-Instruct. Panels: (a) train loss, (b) eval loss, (c) grad norm.]

C Evolution of adversarial suffixes across model checkpoints Figure 7 shows a high degree of similarity between adversarial suffixes identified across sequential checkpoints. In some cases, a suffix that succeeds on checkpoint θ_{c_i} works out-of-the-box on checkpoint θ_{c_{i+1}}, without requiring any additional GCG optimization steps. During early stages of the alignment process, where model parameters typically undergo significant updates, successful suffixes can vary substantially even between checkpoints just 15 training steps apart, as seen when comparing suffixes at θ_{15} and θ_{30} in Figure 7. The grad checkpoint selection strategy effectively identifies checkpoints with meaningful model parameter updates, allowing Checkpoint-GCG to keep pace with the alignment process and adapt adversarial suffixes from strong initializations.
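The grad selection rule can be sketched as follows. This is an illustrative reading of the strategy, not the paper's actual code; in particular, always keeping the final (fully aligned) checkpoint is an assumption we add so the end model is always attacked:

```python
def select_checkpoints(grad_norms, tau):
    """Select checkpoints whose recorded gradient norm exceeds the
    threshold tau, always keeping the final (fully aligned) model.

    grad_norms: list of (step, grad_norm) pairs in training order.
    """
    selected = [step for step, g in grad_norms if g > tau]
    final_step = grad_norms[-1][0]
    if final_step not in selected:
        selected.append(final_step)
    return selected

# Toy example: with tau = 1.0, only the high-gradient early steps are kept,
# plus the final checkpoint.
norms = [(15, 2.3), (30, 1.8), (45, 0.7), (60, 0.2), (75, 0.1)]
assert select_checkpoints(norms, 1.0) == [15, 30, 75]
```

Lowering τ_grad keeps more checkpoints (finer-grained warm starts, more attack work); raising it keeps fewer, which matches how Table 3 tunes the threshold per setup to obtain a similar number of selected checkpoints.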
Figure 7: Adversarial suffixes discovered at checkpoints selected using the grad strategy (showing up to θ_{35}), for one sample. The suffixes for consecutive checkpoints show high similarities, whereas there can be significant variations when comparing suffixes found at checkpoints separated by larger intervals. D Computational resources used for Checkpoint-GCG All experiments were conducted on an A100 GPU with 80 GB RAM. Taking attacks against the prompt injection defenses, StruQ [10] and SecAlign [11], as an example, each GCG step takes approximately 3 seconds per sample (with the maximum number of generated tokens set to 4). For Checkpoint-GCG, Table 2 reports the per-sample average of cumulative GCG steps taken across all attacked checkpoints.
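The warm-starting behavior described in Appendix C can be sketched as an outer loop over the selected checkpoints. Here run_gcg is a hypothetical caller-supplied stand-in for a single-model GCG attack (an assumption, not the paper's implementation), returning the found suffix and the number of optimization steps it took:

```python
def checkpoint_gcg(checkpoints, run_gcg, initial_suffix):
    """Attack each checkpoint in order, warm-starting GCG from the
    suffix found on the previous checkpoint.

    run_gcg(checkpoint, init_suffix) -> (suffix, steps) is assumed to
    wrap a single-model GCG attack.
    """
    suffix, total_steps = initial_suffix, 0
    for ckpt in checkpoints:
        suffix, steps = run_gcg(ckpt, suffix)
        total_steps += steps  # cumulative GCG steps across all checkpoints
    return suffix, total_steps

# Toy stand-in: each "attack" appends the checkpoint id and costs one step.
demo = lambda ckpt, s: (s + [ckpt], 1)
suffix, steps = checkpoint_gcg([15, 30, 75], demo, [])
assert suffix == [15, 30, 75] and steps == 3
```

The per-sample cumulative step counts reported in Table 2 correspond to total_steps in this sketch; when a warm-started suffix already succeeds on the next checkpoint, that checkpoint contributes zero (or very few) steps.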
arXiv:2505.15741v1 [cs.NE] 21 May 2025 Evolutionary Computation and Large Language Models: A Survey of Methods, Synergies, and Applications Dikshit Chauhan1, Bapi Dutta2, Indu Bala3, Niki van Stein4, Thomas Bäck4, Anupam Yadav5,∗ 1Department of Electrical and Computer Engineering, National University of Singapore, 119077 2Department of Computer Science, Universidad de Jaén, Spain 3School of Computer and Mathematical Sciences, University of Adelaide - 5005, Australia 4Leiden Institute of Advanced Computer Science, University Leiden, Leiden, Netherlands 5Department of Mathematics and Computing, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar - 144011, INDIA Abstract Integrating Large Language Models (LLMs) and Evolutionary Computation (EC) represents a promising avenue for advancing artificial intelligence by combining powerful natural language understanding with optimization and search capabilities. This manuscript explores the synergistic potential of LLMs and EC, reviewing their intersections, complementary strengths, and emerging applications. We identify key opportunities where EC can enhance LLM training, fine-tuning, prompt engineering, and architecture search, while LLMs can, in turn, aid in automating the design, analysis, and interpretation of ECs. The manuscript explores the synergistic integration of EC and LLMs, highlighting their bidirectional contributions to advancing artificial intelligence. It first examines how EC techniques enhance LLMs by optimizing key components such as prompt engineering, hyperparameter tuning, and architecture search, demonstrating how evolutionary methods automate and refine these processes. Second, the survey investigates how LLMs improve EC by automating metaheuristic design, tuning evolutionary algorithms, and generating adaptive heuristics, thereby increasing efficiency and scalability.
Emerging co-evolutionary frameworks are discussed, showcasing applications across diverse fields while acknowledging challenges like computational costs, interpretability, and algorithmic convergence. The survey concludes by identifying open research questions and advocating for hybrid approaches that combine the strengths of EC and LLMs. Keywords: Evolutionary Computation, Large Language Models, Optimization, Metaheuristics, Co-Evolution, AI Automation. ∗Corresponding author. Email addresses: dikshitchauhan608@gmail.com (Dikshit Chauhan), bdutta@ujaen.es (Bapi Dutta), indu.bala@adelaide.edu.au (Indu Bala), n.van.stein@liacs.leidenuniv.nl (Niki van Stein), t.h.w.baeck@liacs.leidenuniv.nl (Thomas Bäck), anupam@nitj.ac.in (Anupam Yadav) 1. Introduction Large Language Models (LLMs) represent a significant advancement in artificial intelligence (AI), demonstrating remarkable capabilities in understanding and generating human-like text [1]. Built upon deep learning architectures such as transformer networks, these models are trained on vast datasets, enabling them to perform a wide array of Natural Language Processing (NLP) tasks with impressive fluency and coherence [2]. Their ability to comprehend context, generate structured responses, and learn from few-shot examples has led to widespread adoption across various industries, ranging from healthcare and finance to software development and education [1]. As LLMs continue to evolve, their potential is expanding beyond traditional NLP applications, influencing problem-solving and optimization strategies across multiple domains. In parallel, Evolutionary Computation (EC) has emerged as a powerful optimization technique inspired by the principles of natural evolution [3].
These population-based algorithms iteratively refine candidate solutions through selection, reproduction, and mutation, making them highly effective in navigating complex, high-dimensional, and non-differentiable search spaces where traditional optimization techniques struggle [4]. Unlike gradient-based optimization, which can become trapped in local optima, EC maintains population diversity, increasing the likelihood of discovering globally optimal solutions. This adaptability has led to their application
in engineering design, machine learning, and scientific discovery, among other fields. Given their robustness in tackling complex optimization problems, EC provides a promising foundation for enhancing the performance and efficiency of LLMs. Recent rapid advancements in LLMs, such as GPT-4, Claude, and Gemini, have drastically improved their generative quality, reasoning capabilities, and versatility, making them increasingly relevant across a wide range of optimization, synthesis, and decision-making tasks. This acceleration in LLM development and deployment has created an urgent need for automated strategies to improve their performance, interpretability, and domain adaptability. In this context, evolutionary computation offers complementary strengths, enabling the systematic exploration and optimization of model configurations, behaviors, and interaction strategies. The convergence of these trends makes this survey particularly timely, as the synergy between LLMs and EC is becoming central to developing more adaptive, explainable, and efficient AI systems. The convergence of LLMs and EC presents a unique opportunity to harness the strengths of both paradigms for enhanced problem-solving, optimization, and automation [5]. A key area of integration involves using EC to optimize various aspects of LLMs. Since the performance of LLMs is highly dependent on hyperparameters such as learning rates, batch sizes, and architectural configurations, evolutionary search techniques have proven effective in identifying optimal configurations that maximize efficiency and accuracy [6]. Furthermore, EC plays a crucial role in Neural Architecture Search (NAS), where evolutionary methods evolve network structures to discover architectures better suited to specific tasks [7]. This evolutionary approach reduces reliance on manual tuning and accelerates the discovery of efficient neural models.
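As a concrete illustration of the hyperparameter-search idea, the sketch below runs a tiny truncation-selection evolutionary loop over a learning-rate/batch-size configuration. The fitness function, search ranges, and all parameter values are hypothetical placeholders, not taken from any surveyed system:

```python
import random

def evolve_config(fitness, init_pop, generations=30, sigma=0.3, seed=0):
    """Minimal evolutionary search: keep the better half of the
    population as parents and refill it with mutated copies."""
    rng = random.Random(seed)
    pop = [dict(c) for c in init_pop]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: max(1, len(pop) // 2)]
        children = []
        while len(parents) + len(children) < len(pop):
            parent = rng.choice(parents)
            # Multiplicative mutation on every hyperparameter.
            children.append({k: v * (1.0 + rng.uniform(-sigma, sigma))
                             for k, v in parent.items()})
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical fitness: a proxy score that peaks near lr=3e-4, batch=64.
def fitness(cfg):
    return -abs(cfg["lr"] - 3e-4) / 3e-4 - abs(cfg["batch"] - 64) / 64

init = [{"lr": 1e-3, "batch": 16}, {"lr": 1e-5, "batch": 256},
        {"lr": 5e-3, "batch": 32}, {"lr": 2e-4, "batch": 128}]
best = evolve_config(fitness, init)
assert fitness(best) >= max(fitness(c) for c in init)
```

In practice the fitness call would wrap an expensive training-and-validation run, which is why surrogate models and careful budget allocation (discussed later in the survey) matter for this use case.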
In addition to hyperparameter tuning and architecture search, EC has been employed to refine fine-tuning strategies, helping LLMs adapt to downstream tasks more effectively without depending solely on traditional gradient-based optimization [8, 9]. Conversely, LLMs have begun to influence the design and execution of EC by providing intelligent guidance in evolutionary search. One of the primary ways LLMs enhance EC is by steering the search process toward more promising regions of the solution space, leveraging their ability to process and generate domain-specific knowledge [7]. This capability accelerates convergence and improves solution quality by reducing the time spent exploring unproductive areas. Moreover, LLMs can assist in generating high-quality candidate solutions, particularly in problem domains where structured or heuristic-based initialization is beneficial. By leveraging LLMs as generative models, researchers can introduce diverse and high-quality initial populations, thereby enriching the evolutionary process [5]. Additionally, LLMs have shown promise in refining variation operators such as crossover and mutation, ensuring that new candidate solutions are not only syntactically valid but also semantically meaningful. This approach has been particularly effective in applications such as molecular discovery, where domain knowledge is crucial for guiding the evolutionary process in modifying chemical structures [10]. One of the most promising applications of this integration is the use of EC for optimizing LLM prompts. The performance of LLMs is highly sensitive to input prompts, making prompt engineering a critical factor in achieving desirable outputs. However, manually crafting optimal prompts is often a time-consuming and iterative process. To address this, researchers have started employing evolutionary
search techniques to automate prompt optimization, systematically refining prompt structures for improved model performance. Frameworks such as EvoPrompt utilize evolutionary strategies to explore variations in prompts, selecting and evolving those that yield the most effective responses [11]. By shifting prompt engineering from a manual, trial-and-error approach to an optimization-driven process, EC enables the discovery of highly effective prompts that may not be immediately apparent to human designers [12].

[Figure 1: Organization of the paper.]

Table 1: Feature-based comparison of LLMs and Evolutionary Computation

Feature           | Large Language Models (LLMs)                       | Evolutionary Computation (EC)
Core Principle    | Next token prediction                              | Iterative evolution of solutions
Search Space      | High-dimensional semantic space                    | Problem-specific solution space
Optimization Goal | Generating coherent and contextually relevant text | Finding optimal solutions
Strengths         | Language understanding and generation; contextual awareness; few-shot learning | Robust global search; handles non-differentiable problems; population-based exploration
Weaknesses        | Hallucinations and biases; high computational cost; can be difficult to interpret | May have convergence issues; requires careful design of the fitness function; no guarantee of optimal solution

The integration of LLMs and EC is transforming optimization and automation across various domains. By leveraging EC to optimize LLM architectures, hyperparameters, and prompts, researchers can enhance model performance while reducing dependence on manual tuning. At the same time, LLMs contribute to EC by improving solution generation, guiding evolutionary search, and refining variation operators. This bidirectional relationship between LLMs and EC highlights the potential for a more automated and intelligent approach to AI-driven optimization. As shown in Fig.
1 and Table 1, this paper introduces a unique bidirectional framework and feature-based comparison that systematically maps how EC and LLMs mutually enhance each other, an aspect often overlooked or only partially covered in previous surveys. As research in this area continues to evolve, the fusion of these paradigms is expected to drive advancements in fields such as natural language processing, scientific discovery, and creative content generation, paving the way for more sophisticated AI-driven problem-solving methodologies. 1.1. Related Survey Papers and Differences Several recent surveys have explored the intersection between LLMs and algorithmic design, including their role in EC. However, each of these contributions has its own emphasis and limitations: Liu et al. [13] present a systematic survey on how LLMs contribute to algorithm design (LLM4AD). They classify existing work based on the roles played by LLMs, prompting techniques, and underlying search strategies, with a strong focus on how LLMs can automate and innovate classical algorithm design across optimization, reasoning, and scientific computing tasks. Haleem et al. [14] provide a high-level overview of ChatGPT's capabilities and limitations, focusing primarily on usability, societal impact, and general features; this work, however, lacks a technical framework or a focused discussion of optimization or EC integration. Yu et al. [5] offer a detailed review of how LLMs contribute specifically to optimization, especially in the context of metaheuristics and evolutionary algorithm design. The paper introduces a dedicated LLM-EA optimization paradigm that incorporates variation operators, fitness evaluation, and prompt-driven search as core components. Wu et al. [7] present a broad survey on the bidirectional synergy between evolutionary algorithms and LLMs. They categorize existing
https://arxiv.org/abs/2505.15741v1
approaches into LLM-enhanced EC and EC-enhanced LLMs, introduce several hybrid methods, and outline challenges and open directions, serving as a useful roadmap for future research. Cai et al. [15] concentrate on the enhancement of evolutionary computation using LLMs. Their work discusses new approaches for population initialization and operator design but does not explore how evolutionary methods can, in turn, optimize or support LLMs. Chen et al. [10] deliver a comprehensive review of prompt engineering methods in LLMs, covering techniques such as chain-of-thought (CoT), context optimization (CoOp), and adversarial prompting. While technically detailed, this paper is centered on prompt design and does not address evolutionary algorithms or optimization. Huang et al. [16] provide a general review of the integration of LLMs with optimization, especially from the perspective of decision-making and modeling. While insightful, their treatment is conceptual and broad, with limited emphasis on evolutionary computation or detailed bidirectional interactions between LLMs and EC.

While the reviewed papers each address specific facets of the intersection between LLMs and EC, this survey distinguishes itself by offering a unified and explicitly bidirectional perspective on their synergy. Unlike Liu et al. [13], which focuses on how LLMs contribute to algorithm design, or Cai et al. [15], which explores how LLMs enhance EC techniques, our survey systematically examines both directions of influence. It discusses how LLMs can support EC through operator generation, metaheuristic adaptation, and hyperparameter tuning and, conversely, how EC techniques can be used to improve LLM performance via prompt optimization, architecture search, and fine-tuning. Although Wu et al.
[7] offer a valuable roadmap for LLM-EC interactions, our survey extends this effort through the introduction of a structured taxonomy (Figure 1), which categorizes integration strategies such as LLM-generated metaheuristics, EC-based surrogate modeling, adaptive parameter control, and co-evolutionary approaches. A particularly novel contribution of this work is its focus on co-adaptive paradigms, in which LLMs and EC systems evolve together in a feedback loop, promoting mutual adaptation and continual learning, a direction still underrepresented in the literature. In addition, we emphasize application-level mapping, linking each integration strategy to real-world domains, and address critical implementation challenges such as scalability, interpretability, and data efficiency. Table 2 summarizes these contributions and contrasts our approach with prior work. In contrast to earlier studies that examine either LLM→EC or EC→LLM in isolation, our survey provides a bidirectional framework, proposes a systematic taxonomy, highlights co-adaptive strategies, and explores both practical applications and future research challenges in the integration of LLMs and evolutionary computation.

1.2. Main Contributions

To the best of our knowledge, this is the first survey that systematically explores the bidirectional synergy between LLMs and EC. The key contributions of this work are summarized as follows: (i) Bidirectional Perspective on LLM–EC Synergy: This paper presents a comprehensive, two-way investigation of how LLMs can enhance EC through operator generation, tuning, and metaheuristic design, and how EC can improve LLMs via prompt engineering, architecture optimization, and hyperparameter tuning. (ii) Structured Taxonomy and Framework: We propose a novel taxonomy that systematically categorizes methods, roles, and integration strategies,
covering topics such as LLM-generated metaheuristics, surrogate modeling, co-evolutionary systems, and explainable EC, offering readers a unified framework to understand this emerging field. (iii) Survey of Emerging Co-Adaptive Paradigms: This work introduces and analyzes new co-adaptive paradigms where LLMs and EC evolve together, including co-evolutionary frameworks, human-in-the-loop systems, and pattern-guided evolutionary search, which are underexplored in previous surveys. (iv) Cross-Domain Application Landscape: We review and map the application of LLM-EC synergies across diverse domains such as scientific modeling, optimization, automated design, and decision-support systems, highlighting practical use cases and deployment insights.

Table 2: Comparison of the current survey with existing literature on LLM and EC synergy.
Dimension | Current Work | [13] | [14] | [5] | [7] | [15] | [10] | [16]
Bidirectional Focus (LLM↔EC) | ✓ Full bidirectional taxonomy (LLM→EC, EC→LLM) | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✓
Methodological Structure | ✓ Detailed subsections: soft prompting, EA synergy, tuning strategies | ✓ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗
Tooling and Frameworks | ✓ EvoPrompt, GAAPO, PhaseEvo, prompt modes | ✗ | ✗ | ✗ | ✗ | ✗ | ✗ | ✗
Applications Mapped to Methods | ✓ Neural search, code gen, VLMs, reinforcement tasks | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗
Hybrid Evolution-Language Systems | ✓ GP + LLM, co-evolution, DSL generation | ✗ | ✗ | ✗ | ✓ | ✗ | ✗ | ✗
Novelty in LLM-Based Operator Design | ✓ Mutation, crossover, and fitness eval via LLM | ✗ | ✗ | ✓ | ✓ | ✗ | ✗ | ✗
Prompt Engineering Depth | ✓ Prompt design, auto-prompt, reinforcement in prompt cycles | ✗ | ✗ | ✓ | ✗ | ✗ | ✓ | ✗
Explicit Synergy Framework Provided | ✓ LLM-EC interaction models, roles, flow diagrams | ✗ | ✓ | ✓ | ✓ | ✗ | ✗ | ✓
Empirical Use Cases/Benchmarks | ✓ Survey of tasks, datasets, output types (e.g., Bin Packing, TSP, DSLs) | ✗ | ✗ | ✓ | ✓ | ✗ | ✓ | ✗
Future Directions & Research Gaps | ✓ Co-evolution, explainability, LLM-tuned DSLs, evaluation bottlenecks | ✓ | ✗ | ✓ | ✓ | ✓ | ✓ | ✓

(v) Identification of Research Gaps and Future Challenges: The survey outlines unresolved challenges,
such as scalability, explainability, and benchmark design, and provides a forward-looking research agenda to guide future interdisciplinary work in this field.

The paper is organized as follows. Section 2 focuses on the use of EC to enhance LLMs, including techniques for prompt engineering, architecture search, and hyperparameter tuning. Section 3 explores how LLMs can, in turn, improve EC by automating metaheuristic design, tuning algorithm components, and generating adaptive heuristics. Section 4 presents emerging frameworks, future research directions, and open challenges in the synergy between LLMs and EC. Section 5 concludes the paper.

2. EC for LLM Enhancement

This section explores how EC and related metaheuristic strategies can be utilized to improve LLM performance and adaptability. Applications span from evolving effective prompts (hard and soft) to discovering fine-tuning configurations and hyperparameter sets that yield superior model behavior. By treating LLMs as black-box systems amenable to optimization, EC enables a data-efficient and interpretable approach to aligning LLM outputs with task-specific objectives. As research in this intersection matures, evolutionary strategies are not only serving as tools for LLM enhancement but are also inspiring novel hybrid frameworks where the strengths of both paradigms coalesce, combining the generative fluency of LLMs with the adaptive search power of evolution.
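This black-box view admits a compact sketch: an evolutionary loop that only queries a scoring function, never gradients. The `score` and `mutate` functions below are toy stand-ins of our own devising (a real system would evaluate each candidate prompt with an LLM on a development set), so the sketch illustrates only the loop structure.

```python
import random

random.seed(0)

def score(prompt: str) -> float:
    """Toy stand-in for a dev-set evaluation: reward prompts that mention
    the task and format keywords, with a small length penalty."""
    keywords = ("summarize", "one sentence", "key points")
    hits = sum(k in prompt.lower() for k in keywords)
    return hits - 0.01 * len(prompt)

def mutate(prompt: str) -> str:
    """Toy mutation: append a random instruction fragment."""
    fragments = [" Focus on key points.", " Use one sentence.", " Be concise."]
    return prompt + random.choice(fragments)

# (1 + 4)-style loop: the model is only ever consulted through score(),
# i.e., it is treated as a black box; no gradient access is required.
best = "Summarize this text."
for _ in range(20):
    children = [mutate(best) for _ in range(4)]
    best = max([best] + children, key=score)  # elitist selection

print(best)
```

Swapping the toy `score` for a dev-set metric and `mutate` for an LLM-generated edit recovers the shape of the methods surveyed below.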
We focus first on prompt engineering.

2.1. EC in Prompt Engineering

Prompt engineering is the systematic process of designing textual inputs, known as prompts, that steer LLMs toward useful, accurate, and context-appropriate responses. Because a model's understanding of a task depends heavily on the prompt, prompt quality directly affects performance, robustness, and reliability. Unfortunately, crafting effective prompts usually demands substantial human effort, domain expertise, and iterative trial-and-error; this manual process is time-consuming, often sub-optimal, and guided by limited, subjective heuristics [17]. LLMs are also highly sensitive to phrasing; semantically similar prompts can yield markedly different outputs, so methods for discovering optimal prompts are urgently needed.

Figure 2: Components of a typical LLM Prompt (Instruction, e.g., "Summarize this text in one sentence"; Context/Examples, e.g., "Passage: [text] Summary: [summary]"; Input/Query, e.g., "Passage: [your text here]").

A typical prompt comprises three primary components, as shown in Figure 2. (i) Instruction: Explicitly defines the task the model is expected to perform, such as summarizing text, translating languages, or classifying sentiments. (ii) Context or Examples: Provides additional context or illustrative examples (often referred to as "few-shot examples") to clearly indicate the desired behavior or response pattern. (iii) Input or Query: Presents the specific question or input requiring a response from the model. Consider the following structured example:

Instruction: Classify the sentiment of the given sentence.
Example: I love sunny weather. → Positive
Test Input: This restaurant was disappointing.
Output: Negative

As the example shows, the model combines the instruction and context to infer that the test input's sentiment is Negative.
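The structured example above can also be assembled programmatically from its three components. The helper below is purely illustrative (its name and signature are our own, not an API from any cited framework):

```python
def build_prompt(instruction, examples, query):
    """Assemble the three standard prompt components into one string:
    instruction, few-shot examples, and the input/query."""
    lines = [f"Instruction: {instruction}"]
    for text, label in examples:  # few-shot context
        lines.append(f"{text} -> {label}")
    lines.append(f"Input: {query}")
    return "\n".join(lines)

prompt = build_prompt(
    instruction="Classify the sentiment of the given sentence.",
    examples=[("I love sunny weather.", "Positive")],
    query="This restaurant was disappointing.",
)
print(prompt)
```

Keeping the components separate like this is what makes automated optimization possible: an evolutionary method can vary the instruction and the example set independently.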
Articulated instructions reduce ambiguity, boosting clarity and accuracy; explicit examples further condition the model toward the desired behavior [18]. Moreover, careful optimization of wording and prompt format can lead to considerable improvements in model performance, as even minor adjustments can result in significant changes in responses [19]. Therefore, building on the need for high-quality prompts, researchers have formalized this area of study as Prompt Engineering (PE), the systematic design and optimization of prompts to elicit precise, context-appropriate responses from language models [20]. A range of prompting techniques has produced notable performance gains. For example, few-shot prompting provides the model with multiple illustrative examples within a prompt, guiding its responses effectively [21]. Another notable technique, Chain-of-Thought (CoT) prompting, significantly improves the arithmetic and reasoning capabilities of models by explicitly demonstrating reasoning processes through few-shot examples. Similarly, zero-shot-CoT prompting boosts zero-shot performance by simply incorporating prompts such as "Let's think step by step," encouraging systematic reasoning [22]. Moreover, researchers have introduced multi-step prompting strategies, such as generating question-related knowledge statements with one language model and subsequently using those statements as input to another model for improved predictions [23]. Despite these advancements, current prompt engineering methods predominantly rely on manual design. Manual prompt engineering, however, has critical limitations, including inherent subjectivity, substantial labor requirements, and extensive reliance on trial-and-error [24]. Human-crafted prompts, while valuable for guiding model learning, may still be suboptimal, as optimal prompt identification through
intuition alone remains highly challenging. Additionally, manually designed prompts often do not generalize effectively across diverse tasks or datasets due to the combinatorially expansive nature of the prompt optimization space, making manual exploration impractical [25]. Addressing these limitations through automated optimization techniques therefore represents an important direction for prompt engineering research.

These shortcomings have motivated researchers to explore automated approaches to prompt engineering, particularly through the utilization of EC. EC is a heuristic optimization methodology inspired by biological evolution, involving mechanisms such as selection, crossover, and mutation. EC possesses notable strengths, including robustness, the ability to navigate complex optimization landscapes, and independence from explicit gradient information, making it especially suitable for optimizing prompts [26]. The integration of EC with prompt engineering has led to substantial advancements in optimizing LLMs. EC's search strategies systematically refine prompts, delivering gains across diverse tasks. Because EC operates effectively in both continuous and discrete spaces [26], prompt optimization for LLMs naturally splits into two branches: soft-prompt optimization in the continuous embedding space and hard-prompt optimization in the discrete textual space.

A hard prompt is an explicit, human-readable instruction that steers the model's response. EC refines these textual prompts, adjusting wording, structure, and phrasing, to maximize effectiveness [27]. For instance, an initial prompt such as "Summarize this article in one paragraph" might evolve into "Write a concise, five-sentence summary highlighting the key points of this article." In contrast, soft prompts are latent, continuous embeddings rather than visible text. These prompts are implemented as learned embeddings integrated into the model's input, represented as vectors [28].
EC optimizes these embeddings by evolving their parameters to improve task performance. For example, instead of directly instructing "Summarize this article," soft prompts employ optimized hidden token embeddings that influence the model's interpretation and steer the responses without explicit linguistic instructions. Because EC relies on selection, crossover, and mutation rather than gradients, it handles complex optimization landscapes, whether continuous, discrete, or hybrid, with ease. Treating sequences of prompt tokens (or embedding dimensions) like genetic material, evolutionary algorithms can systematically explore and improve both hard and soft prompt spaces while maintaining coherence and, for hard prompts, human readability. A general framework of EC for prompt engineering is presented in Figure 3.

2.2. EC in Hard Prompting

EC has shown significant potential in optimizing hard prompts for LLMs. H. Xu [29] introduced Genetic Prompt Search (GPS), a straightforward genetic algorithm designed specifically for refining few-shot instruction prompts. This approach iteratively applies genetic operations like mutation to tokens within discrete prompts, continuously evaluating and retaining only the best-performing prompts based on task performance. Further contributions include GrIPS (Gradient-free, Edit-based Instruction Search) by A. Prasad [30], which, while not strictly a genetic algorithm, uses a similar local-edit approach to generate improved child prompts from parent instructions. Building upon this, C.I. Hsieh [31] extended the GPS concept to longer prompts by incorporating beam search heuristics along with a history buffer mechanism. This strategy maintains contextual consistency across prompt mutations, significantly enhancing the optimization process for lengthy textual prompts. Expanding the integration of EC
and LLMs further, Guo et al. [32] developed EvoPrompt, a unique framework where language models themselves serve as evolutionary operators. EvoPrompt enables LLMs to propose new prompt candidates through operations analogous to genetic crossover and mutation, with EC subsequently selecting prompts based on improved development-set performance. Moreover, Fernando et al. [33] presented Promptbreeder, a co-evolutionary approach leveraging evolutionary algorithm principles to simultaneously evolve task-specific prompts and mutation instructions. This dual-evolution strategy enables refined control over how prompts mutate or cross over, guided explicitly by the language model itself. Chen et al. [34] presented EvoPrompting, which focuses explicitly on Neural Architecture Search (NAS). It uses evolutionary prompting to guide a large language model in generating and refining neural network architectures, leveraging LLMs as adaptive mutation and crossover operators to optimize architectures through evolved textual prompts. Additionally, W. Cui [35] proposed PhaseEvo, a comprehensive multi-phase evolutionary pipeline. PhaseEvo optimizes instructions and exemplar sets simultaneously by alternating refinement processes between textual instructions and selected example subsets, thus integrating the optimization of both prompt elements. Complementing these methods, Chen et al. [36] introduced Prompt Optimization in Multi-Step Tasks (PROMST), designed specifically for optimizing prompts in multi-step tasks. PROMST uniquely incorporates human-in-the-loop interactions and heuristic models, combining evolutionary sampling methods with direct user feedback to incrementally enhance textual prompts, thereby demonstrating an effective collaborative evolutionary optimization framework. Similarly, Baumann and Kramer [37] introduced EMO-Prompts, an evolutionary multi-objective optimization method tailored explicitly for nuanced tasks such as sentiment analysis.
Their approach evolves prompts that enable language models to simultaneously express conflicting emotions, demonstrating the advanced capabilities of evolutionary optimization in achieving complex linguistic objectives. Feng et al. [38] introduced Genetic Auto Prompt (GenAP), which leverages a genetic algorithm (GA) for optimizing discrete, human-readable textual prompts (hard prompts) without relying on gradient information. GenAP automatically designs discrete prompts by evolving their wording and structure using tailored genetic operators (crossover and mutation) to enhance performance across various code intelligence tasks. Similarly, Wong et al. [39] presented a framework called Prompt Evolution Design Optimization (PEDO), integrating EC with prompt engineering. The framework iteratively generates and evolves text-based prompts containing user specifications for aerodynamic performance and visual attributes of 3D car designs. Each evolved prompt guides the generation of designs assessed through computational fluid dynamics simulations and evaluated using a vision-language model, which penalizes impractical designs. This combined optimization strategy ensures that user preferences regarding aesthetics and aerodynamic performance are effectively balanced, leading to optimized and practical car designs.

2.3. EA in Soft Prompting

Automated soft prompt engineering, commonly referred to as prompt tuning, focuses on optimizing continuous embeddings, known as soft prompts, to effectively guide LLMs. Unlike traditional hard prompt engineering, which depends on discrete, explicit textual instructions, soft prompt engineering involves adjusting learned vector representations integrated directly into the model's embedding space. This approach provides flexibility by fine-tuning continuous prompt parameters rather than fixed textual instructions.
Most automated soft prompt engineering methods predominantly leverage gradient-based techniques [40], reinforcement learning strategies [41], or sequential optimal learning [42]. Gradient-based optimization directly tunes
prompt embeddings through backpropagation, continuously refining embeddings to enhance model responses. Reinforcement learning approaches treat soft prompt optimization as sequential decision-making, iteratively adjusting prompt embeddings based on model performance and feedback. Additionally, sequential optimal learning strategies [42] employ Bayesian regression and the Knowledge-Gradient policy to systematically explore the continuous prompt embedding space and efficiently identify optimal solutions.

By contrast, EC has been widely applied to hard-prompt optimization but rarely to soft prompts. A plausible reason is the practical difficulty of mapping EC operators to a high-dimensional, continuous embedding space. Defining meaningful mutation and crossover in vectors with thousands of dimensions without producing degenerate or adversarial embeddings remains non-trivial, and the search space is vast and unstructured compared with discrete token edits. Moreover, each candidate embedding must be evaluated with a forward pass through a large model, so a naïve EC loop can become prohibitively expensive; designing a fitness function that reliably captures subtle quality differences in continuous prompts adds further complexity. These hurdles, spanning search-space definition, operator design, fitness evaluation cost, and interpretability, help explain why EC has so far been under-utilized for soft-prompt tuning. Nevertheless, EC's gradient-free, population-based search is well suited to non-convex landscapes and could, in principle, evolve soft-prompt embeddings through selection, crossover, and mutation. Bridging these practical gaps, for example via surrogate fitness models, dimensionality-reduction techniques, or hybrid gradient-evolution schemes, represents a promising direction for future automated soft-prompt engineering research.
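In principle, such a loop could look like the following (mu + lambda) evolution-strategy sketch, in which a small matrix of virtual-token embeddings is evolved under Gaussian mutation. The quadratic surrogate fitness is an assumption made only so the sketch runs offline; a real system would score each candidate by a forward pass of the LLM with the embeddings prepended to the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy soft prompt: 4 virtual tokens in an 8-dimensional embedding space
# (real models use hundreds to thousands of dimensions).
N_TOKENS, DIM = 4, 8
target = rng.normal(size=(N_TOKENS, DIM))  # stands in for an unknown optimum

def fitness(soft_prompt):
    """Surrogate fitness (an assumption for this sketch). In practice this
    would be a dev-set score, costing one LLM forward pass per candidate."""
    return -float(np.sum((soft_prompt - target) ** 2))

# (mu + lambda) evolution strategy with Gaussian mutation, no gradients.
MU, LAM, SIGMA = 4, 16, 0.3
population = [rng.normal(size=(N_TOKENS, DIM)) for _ in range(MU)]
for _ in range(200):
    offspring = [p + SIGMA * rng.normal(size=p.shape)
                 for p in population for _ in range(LAM // MU)]
    # elitist survivor selection: keep the best MU of parents + offspring
    population = sorted(population + offspring, key=fitness, reverse=True)[:MU]

print(round(fitness(population[0]), 2))
```

The evaluation-cost hurdle discussed above is visible here: each generation costs LAM fitness calls, which is exactly where surrogate models or smaller dev sets would be needed in practice.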
Additionally, systematically exploring diverse evolutionary strategies for continuous-embedding optimization could further overcome current limitations and accelerate progress in automated soft-prompt engineering.

2.4. EA Based Prompt Engineering Tools

This section focuses on three prominent techniques that exemplify the use of evolutionary principles for automatic prompt optimization: EvoPrompt [11], PhaseEvo [43], and GAAPO [44]. While sharing a common foundation in EC, these methods represent distinct philosophies and approaches: (i) EvoPrompt pioneers the concept of using LLMs as the direct implementers of evolutionary operators such as crossover and mutation within standard EA frameworks (Genetic Algorithms and Differential Evolution). (ii) PhaseEvo introduces a multi-phase evolutionary framework specifically designed for the unified optimization of both prompt instructions and in-context learning examples, employing LLMs within tailored operators for different search phases. (iii) GAAPO proposes a hybrid approach where a Genetic Algorithm acts as a high-level controller orchestrating a portfolio of diverse, specialized prompt generation strategies, many of which leverage LLMs. We summarize the core concepts, methodologies, application domains, performance characteristics, strengths, and limitations of these prompting tools, and close with a comparative analysis highlighting their similarities, differences, and the evolution of ideas across the three.

2.4.1. EvoPrompt

EvoPrompt, developed by Guo et al. [11], represents a novel framework for discrete prompt optimization that explicitly connects LLMs with EC. The central idea is to use the inherent language processing capabilities of LLMs to perform evolutionary operations, thereby automating the search for effective natural language prompts.
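This idea of letting an LLM implement the evolutionary operators can be sketched as a minimal loop. The `llm` stub and the toy scoring lambda below are illustrative assumptions of ours, not EvoPrompt's actual prompts or API: a real deployment would send the CROSSOVER/MUTATE instructions to a chat model and score candidates on a development set.

```python
import random

random.seed(1)

def llm(instruction):
    """Stub standing in for a chat-model call. Here crossover and mutation
    are faked with string operations so the sketch runs offline."""
    if instruction.startswith("CROSSOVER"):
        _, a, b = instruction.split("|")
        return a[: len(a) // 2] + b[len(b) // 2 :]  # splice the two parents
    _, p = instruction.split("|")
    return p + random.choice([" Be precise.", " Think step by step."])

def evolve(population, score, generations=5):
    for _ in range(generations):
        a, b = random.sample(population, 2)    # parent selection
        child = llm(f"CROSSOVER|{a}|{b}")      # LLM-implemented crossover
        child = llm(f"MUTATE|{child}")         # LLM-implemented mutation
        # survivor selection: keep the 4 best-scoring prompts
        population = sorted(population + [child], key=score, reverse=True)[:4]
    return population[0]

best = evolve(
    ["Summarize the text.", "Give the key points in one sentence."],
    score=lambda p: ("one sentence" in p) + ("step by step" in p) - 0.01 * len(p),
)
print(best)
```

Note that the loop never inspects model weights or gradients, which is what makes this family of methods applicable to black-box, API-only models.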
A key characteristic of EvoPrompt is its gradient-free nature; it operates without requiring access to the target LLM’s internal parameters or gradients, making it readily applicable
to proprietary, black-box models accessed via APIs.

Figure 3: A General Framework of EC for Prompt Engineering (a hard prompt population, optionally mapped to soft-prompt embeddings, is refined by evolutionary operators for mutation, crossover, and selection, with LLM-based fitness evaluation and a convergence check yielding the optimal prompt).

The motivation stems from the observation that EC exhibits good performance and fast convergence in optimization tasks, and combining it with LLMs creates a synergy between optimization efficiency and language manipulation capabilities. EvoPrompt aims to generate prompts that are not only effective but also coherent and human-readable. The EvoPrompt framework follows the general structure of an evolutionary algorithm. It begins with an initial population of candidate prompts. In each iteration, it generates new prompts by applying evolutionary operators, implemented by an LLM, to selected prompts from the current population. These new prompts are evaluated based on their performance, such as accuracy or ROUGE score, on a development dataset using the target LLM. The population is then updated based on these evaluation scores, typically retaining higher-performing prompts for the next generation. EvoPrompt was instantiated using two common EA types: (i) GA instantiation: This version employs canonical GA operators where parent prompts are selected from the population, often using a fitness-proportionate method like roulette wheel selection. An LLM then performs the core evolutionary operations. a) Crossover: The LLM is instructed to combine genetic material from two parent prompts to create a new offspring prompt.
For example, it might merge phrases or clauses related to the task description or output format from the parents. b) Mutation: The LLM is instructed to introduce random alterations to the generated offspring prompt, potentially modifying words, phrases, or structure. The population is updated by evaluating the offspring and applying a selection strategy (e.g., keeping the best individuals). (ii) DE instantiation: This version adapts DE principles for prompt optimization. For each prompt (the "base vector") in the population, the LLM performs a sequence of operations. a) It identifies the differences between two other randomly selected prompts from the population. b) It mutates these identified differences. c) It combines these mutated differences with the current best-performing prompt in the population. d) It performs a crossover operation between this combined prompt and the original base vector prompt. The update mechanism typically involves comparing the newly generated prompt with the original base prompt and retaining the one with the higher performance score. A crucial aspect across both instantiations is how the LLM performs these operations. It is not explicitly trained for crossover or mutation; rather, it interprets natural language instructions provided within the EvoPrompt framework (the Evo(·) function) that describe the desired operation.

2.4.2. PhaseEvo

To tackle the challenges of unified optimization, PhaseEvo employs an efficient automatic prompt optimization framework that combines the generative power of LLMs with the global search capabilities of evolutionary algorithms. Instead of the random operator selection often seen in traditional EC, PhaseEvo utilizes a
structured, quad-phased design that strategically alternates between global exploration and local exploitation. This phased approach aims to balance the need to broadly explore the vast search space with the need to efficiently converge towards high-performing solutions, minimizing LLM inference costs. The four phases are: (i) Phase 0, Global Initialization: The goal is to establish a diverse initial population of candidate prompts covering the joint instruction-example space. Two strategies are supported. (a) Reverse Engineering: An LLM agent uses a "Lamarckian Mutation" operator (OL) to infer potential prompt instructions from example input-output pairs in the training data. (b) Human Expert Input: Users can provide seed prompts, which are then diversified using a "Semantic Mutation" operator (OS) that paraphrases them while preserving meaning. (ii) Phase 1, Local Feedback Mutation: This phase focuses on rapid local convergence for each candidate prompt. It uses a "Feedback Mutation" operator (OF) where an LLM acts as an "Examiner" to identify weaknesses by analyzing failures on a development dataset. The Examiner generates feedback or "improvement guidance," conceptually similar to a gradient in continuous optimization. An LLM "Improver" then uses this guidance to edit the prompt, moving it away from the error direction and generating locally improved candidates. (iii) Phase 2, Global Evolution Mutation: Designed to help the search escape local optima, this phase employs LLM-based evolution operators for broader exploration. Key operators include: a) Estimation of Distribution Mutation (EDA/EDA+I): The OE operator generates a new prompt based on a subset of high-quality, diverse parent prompts selected from the population. EDA+I incorporates index information, potentially weighting later examples more heavily. b) Crossover (CR/CR+D): The OC operator combines two parent prompts linguistically.
The CR+D variant specifically pairs the best prompt with the most distinct prompt (based on the task-aware similarity metric) to foster diversity. (iv) Phase 3, Local Semantic Mutation: This final phase aims to accelerate convergence to the global optimum by performing fine-grained local exploitation. It re-employs the "Semantic Mutation" operator (OS), using an LLM to paraphrase the current best prompts, introducing subtle variations while preserving the core meaning and intent. The explicit structuring of the optimization process, using different LLM-driven operators tailored for distinct search phases (local feedback, global evolution, local refinement), contrasts sharply with approaches like EvoPrompt that rely on LLMs to perform generic EA operations throughout. This structured design suggests the hypothesis that targeted LLM operations within specific phases lead to more efficient and effective navigation of the complex prompt space.

2.4.3. GAAPO

GAAPO (Genetic Algorithm Applied to Prompt Optimization) introduces a distinct approach to automatic prompt optimization by employing a Genetic Algorithm (GA) not just to perform basic evolutionary operations, but to act as a high-level framework for integrating and managing a portfolio of diverse, specialized prompt generation strategies. Unlike traditional GAs that rely primarily on mutation and crossover, GAAPO leverages the strengths of multiple existing and novel prompt optimization techniques within its evolutionary cycle. The core idea is that different strategies may excel at different stages of optimization or for different types of prompts, and
a hybrid approach managed by a GA can dynamically leverage the most effective generators over time, leading to more robust performance. GAAPO also emphasizes maintaining a detailed record of the evolution of prompting strategies, enabling analysis of their relative effectiveness. GAAPO operates through successive generations, following the standard GA cycle of Selection, Generation, and Evaluation. (i) Genetic Algorithm Core: It maintains a population of prompt candidates. In each generation, a selection mechanism (typically choosing the top performers based on evaluation scores) identifies parent prompts. These parents are then used by the various generators to create new offspring prompts. The offspring are evaluated, and the cycle repeats. (ii) Integrated Generation Strategies: The key innovation lies in the Generation phase, which utilizes a diverse set of prompt generators. a) OPRO (Optimization by PROmpting): An LLM-based iterative refinement strategy using a trajectory of past high-performing prompts to guide the generation of new candidates. b) APO (Automatic Prompt Optimizer) / ProTeGi (Prompt Optimization with Textual Gradients): An iterative method that identifies errors made by existing prompts, generates "textual gradients" based on these errors, and uses them to create improved prompts. c) Random Mutator: Introduces controlled random modifications using eight distinct mutation types targeting different aspects of prompt structure and content (e.g., instruction expansion, expert persona injection, structural variation, constraint addition, creative backstory, task decomposition, concise optimization, role assignment). d) Crossover: A standard GA operator adapted for prompts, combining segments (e.g., the first half of one parent with the second half of another) from two parent prompts to create offspring, aiming to merge beneficial instruction blocks or strategic elements.
e) Fewshot: Uses ICL by augmenting existing prompts with a small number (1-3) of labeled examples randomly selected from the training data.

(iii) Evaluation Methods: Recognizing the computational cost of evaluating prompts (each requires LLM inference), GAAPO incorporates flexibility in its Evaluation phase, offering several strategies.

a) Complete Evaluation: Evaluates every generated prompt on the entire validation dataset. Provides the most accurate ranking but incurs the highest computational cost.

b) Successive Halving (SH): An efficiency-focused method that iteratively evaluates prompts on increasingly larger subsets of the validation data, discarding the worst-performing half in each round. Reduces LLM calls significantly but risks eliminating promising candidates early.

c) Bandit Selection Algorithm: Employs a multi-armed bandit approach (specifically UCB-E) to efficiently allocate the evaluation budget, balancing exploration of new prompts with exploitation of currently promising ones.

The framework itself is implemented in Python, named HOPR (Hint Optimization and Prompt Refinement) [45], featuring modular components for optimizers, metrics, and managing the evolution process. Table 3 presents a feature-wise comparison of EvoPrompt, PhaseEvo, and GAAPO, and Table 4 presents their strengths, limitations, and application domains found in the literature. Progress in this field seems contingent on moving towards a more principled integration of LLMs and EC. Indeed, it is an active and entirely separate domain of research, and we keep ourselves focused on the theme of the article.

Table 3: Feature Comparison of EvoPrompt, PhaseEvo, and GAAPO.
- Primary Goal. EvoPrompt: discrete prompt optimization. PhaseEvo: unified instruction & example optimization. GAAPO: hybrid prompt optimization via strategy integration.
- Core Algorithm. EvoPrompt: Genetic Algorithm (GA) / Differential Evolution (DE). PhaseEvo: custom multi-phase evolutionary algorithm. GAAPO: Genetic Algorithm (GA) as controller.
- LLM Role. EvoPrompt: implements generic EA operators (crossover, mutation). PhaseEvo: implements phased operators (feedback, EDA, crossover, semantic mutation). GAAPO: component within diverse generation strategies (OPRO-like, APO-like, etc.).
- Optimization Target. EvoPrompt: primarily instruction. PhaseEvo: joint instruction & examples. GAAPO: primarily instruction (with a few-shot generator for examples).
- Key Operators/Strategies. EvoPrompt: LLM-based crossover, LLM-based mutation. PhaseEvo: feedback mutation, EDA mutation, crossover mutation, semantic mutation. GAAPO: OPRO-like, APO-like, random mutator (8 types), crossover, few-shot.
- Evaluation. EvoPrompt: dev set score. PhaseEvo: dev set score. GAAPO: flexible (complete, successive halving, bandit selection).
- Notable Innovations. EvoPrompt: LLM as direct EA operator; gradient-free black-box optimization. PhaseEvo: unified ICL optimization; multi-phase structure; task-aware similarity metric. GAAPO: hybrid integration of multiple APO strategies; flexible evaluation; trade-off analysis.

Table 4: Comparison of Features, Strengths, Limitations, and Application Domains for EvoPrompt, PhaseEvo, and GAAPO.

Strengths.
EvoPrompt: Pioneered the framework connecting LLMs and EC for discrete prompt optimization. Applicable to black-box LLMs without needing internal access. Generates human-readable and interpretable natural language prompts. Demonstrated significant improvements over manual prompts and earlier APO methods across diverse tasks in initial studies. Motivated by the potential for good performance and fast convergence associated with EC.
PhaseEvo: Addresses the interplay between instructions and examples for potentially better performance. Balances global exploration and local exploitation effectively. Innovative metric promotes functional diversity based on performance. Demonstrated significant improvements over strong baselines across diverse benchmarks. Can generate zero-shot or few-shot prompts and adapt the prompt length. Claimed to maintain good computational efficiency compared to some evolutionary strategies, despite the complexity.
GAAPO: Uses the strengths of multiple diverse prompt optimization strategies within a single framework. Implemented as a flexible framework (HOPR) with adaptable components, particularly in evaluation methods. The portfolio approach may lead to more robust performance and better generalization compared to single-strategy methods. Provides a testbed for analyzing key optimization trade-offs (population size vs. generations, evaluation cost vs. accuracy). Enables analysis of the relative effectiveness and evolution of different generation strategies over the optimization process.

Limitations.
EvoPrompt: Performance trajectory can be unstable, sometimes failing to improve or even degrading performance compared to initial prompts or other methods like OPRO. Comparative studies show that EvoPrompt is outperformed by newer techniques like PhaseEvo, StraGo, and GReaTer on various benchmarks. Relies on the LLM's ability to interpret and execute abstract evolutionary operations ("crossover", "mutate") based on fixed natural language instructions. The effectiveness likely depends significantly on the capability of the LLM used to perform the evolutionary operations. Susceptible to prompt drift, where optimizing for some cases negatively impacts others. Concerns were raised about the lack of concrete details on the evaluation process, especially for complex prompts.
PhaseEvo: While potentially more efficient than some EC, it still requires a considerable number of LLM API calls (around 4000 mentioned for 12 iterations). Performance can vary depending on the chosen initialization strategy (reverse engineering vs. expert prompt). Effectiveness hinges on the capabilities of the underlying LLM used for the various mutation and evaluation steps. The multi-phase design with specialized operators is more complex to implement and understand than simpler methods. Like other APO methods, it may still be susceptible to prompt drift.
GAAPO: Managing multiple distinct generation strategies within a GA framework increases implementation complexity compared to simpler methods. Performance gains are highly dependent on the specific task and the potential for improvement over baseline prompts. Susceptible to overfitting, particularly with larger population sizes. While offering efficient evaluation options like bandit selection, the overall process can still be computationally intensive due to repeated LLM calls. The overall effectiveness is bounded by the quality and complementarity of the integrated generator strategies (OPRO, APO, etc.) and the capability of the LLM used.

Application Domains.
EvoPrompt: Language understanding: sentiment classification (SST-2, CR, MR, SST-5), topic classification (AG's News, TREC), and subjectivity classification (Subj). Language generation: text summarization (SAMSum) and text simplification (ASSET). Complex reasoning: the challenging BIG-Bench Hard (BBH) suite, comprising 23 tasks requiring multi-step reasoning.
PhaseEvo: BIG-Bench Hard (BBH): 8 representative tasks requiring complex reasoning. NLP detection tasks: Ethos (offensive language), Liar (fake news), and Sarcasm detection. Instruction induction: 24 tasks focused on inferring task instructions from examples.
GAAPO: ETHOS: a dataset for hate speech and offensive language detection (multilabel classification). MMLU-Pro: subsets (Engineering, Business) of a challenging benchmark designed to test professional-level multitask understanding. GPQA: a dataset featuring graduate-level physics question answering.

2.5. Evolutionary Hyperparameter Tuning for LLMs

Hyperparameter optimization is a critical step in developing high-performing machine learning models, including LLMs, as the choice of hyperparameters significantly influences the training dynamics and final model quality. Manually tuning these parameters is often a laborious, intuition-driven process. EC offers a compelling alternative (see Table 5 for some examples) for automating this process. Their gradient-free nature is a distinct advantage, particularly for LLMs accessed as black-box APIs where internal gradients are inaccessible. EC relies solely on evaluating the performance (fitness) of different hyperparameter configurations, making them applicable even without visibility into the model's internal workings. Furthermore, EC's population-based search can effectively explore complex, high-dimensional hyperparameter spaces, potentially uncovering non-obvious interactions between parameters and escaping local optima that might trap simpler search methods. Table 6 presents an overview of applications of evolutionary algorithms in LLM hyperparameter optimization.

Table 5: Summary of EA applications in LLM hyperparameter optimization across studies.
- AutoTinyBERT [46]. EA used: custom EA ("Evolver"). Targeted hyperparameters: architectural dims, layers (l_t, d_m, etc.). LLM/task context: BERT / efficiency (latency). Key finding: automated architectural HPO using EA and a SuperPLM proxy for efficient PLMs.
- Custode et al. [47]. EA used: ES + LLM advisor. Targeted hyperparameters: ES step-size. LLM/task context: (1+1)-ES optimization. Key finding: LLMs can analyze logs and provide real-time EA hyperparameter (step-size) recommendations.
- Evolutionary Merging [8]. EA used: CMA-ES. Targeted hyperparameters: model merging recipe (TIES/DARE). LLM/task context: foundation model merging. Key finding: EA automates the discovery of optimal merging hyperparameters, surpassing manual intuition.
- LMEA [9]. EA used: LLM-driven EA. Targeted hyperparameters: LLM temperature (self-adaptation). LLM/task context: Combinatorial Opt.
(TSP) via LLM+EA. Key finding: LLM integrated into the EA loop with self-adapting temperature for exploration/exploitation.
- Tani et al. [48]. EA used: GA, PSO. Targeted hyperparameters: general ML hyperparameters. Task context: ML for High Energy Physics. Key finding: explored GA/PSO for autonomous HPO in a specific scientific domain.

2.5.1. Evolutionary Architecture Optimization for LLMs

Designing optimal neural network architectures, particularly for complex models like LLMs, is a significant challenge [49]. Manual design is often resource-intensive, relies heavily on expert intuition, and may struggle to navigate the vast combinatorial space of possible architectural configurations. Neural Architecture Search (NAS) aims to automate this process by formulating architecture design as an optimization problem: finding the architecture that maximizes a given objective, such as accuracy, under certain constraints (parameter budget, latency). Table 7 shows some examples of the architectural aspects of evolutionary NAS for LLMs. EC has proven to be a powerful tool for NAS [10]. Their population-based approach allows for parallel exploration of the architecture space, and their gradient-free nature makes them suitable for handling discrete architectural choices or complex search spaces where gradients are ill-defined or unavailable [8]. EC can effectively search the space of neural architectures by representing architectures as individuals, evaluating their performance (fitness), and applying evolutionary operators (selection, mutation, crossover) to generate and refine new candidate architectures iteratively. Table 8 presents a summary of evolutionary approaches for neural architecture search.

2.6. Current Challenges and Future Directions

EC faces significant challenges when applied to LLM optimization, primarily due to the enormous computational costs involved. Evaluating each candidate solution requires partial or full LLM training, making the process prohibitively expensive and limiting feasible population sizes. The vast search spaces of modern

Table 6: Comparative overview of EA applications in LLM hyperparameter optimization.
Each row lists the targeted hyperparameters, the evolutionary techniques employed, and case studies and examples.
- Architectural hyperparameters: number of Transformer layers, hidden state dimension, number of attention heads, feed-forward network intermediate size (methods: AutoTinyBERT as HPO, SuperShaper, LiteTransformerSearch as NAS). Techniques: custom EC, e.g., AutoTinyBERT's "Evolver" (selection via performance ranking, mutation of hyperparameters, architecture exploration). Case study: AutoTinyBERT optimizes the BERT architecture (layers, dimensions) using Evolver + Evaluator components, leverages a "SuperPLM" proxy model, and incorporates a latency predictor.
- Model merging hyperparameters: parameters for TIES-Merging + DARE and weight combination strategies, optimized via CMA-ES. Techniques: Evolution Strategies (ES), e.g., (1+1)-ES with LLM-guided adaptation. Case study: LLM-guided step-size adaptation uses Llama2-70b and Mixtral; the LLM analyzes (1+1)-ES logs and provides real-time step-size recommendations.
- EA-specific hyperparameters: step-size in Evolution Strategies, temperature parameter in LMEA, self-adaptation mechanisms. Techniques: Covariance Matrix Adaptation ES (CMA-ES), used for continuous optimization and applied to model merging parameters. Case study: evolutionary model merging, where CMA-ES optimizes TIES-Merging + DARE to create superior merged models targeting Open LLM Leaderboard performance.
- General ML hyperparameters: learning rate, batch size, dropout; potential applications via GA/PSO. Techniques: genetic algorithms and PSO, general HPO methods applicable to LLM training.

Table 7: Evolutionary NAS for LLMs: Architectural Aspects and Methods.
Optimized Architectural Aspects:
- Overall structure/backbone: fundamental discoveries, including novel attention mechanisms or evolving entire LLM backbones from basic building blocks. Examples: AutoBERT-Zero [50].
- Macro-level hyperparameters: optimizes high-level structural parameters (encoder/decoder blocks, hidden dimensions, attention heads, FFN sizes). Examples: SuperShaper [51], AutoTinyBERT [46].
- Component choices: discrete choices within components (activation functions, sub-module layer counts). Examples: (none listed).
- Layer configuration: layer-specific hyperparameters or novel layer connectivity patterns. Examples: LiteTransformerSearch [52].
- Code-level modifications: LLM-guided direct source code manipulation for flexible architectural variations. Examples: LLMatic [17], EvoPrompting [34].

Evolutionary NAS Methods and Techniques:
- Standard EC (GA/NSGA-II): established genetic algorithms adapted for architecture search, including multi-objective variants. Examples: AutoBERT-Zero, DistilBERT (NSGA-II) [53].
- Multi-objective EC (MOEAs): optimizes trade-offs between performance and computational cost (latency, memory, parameters). Examples: LiteTransformerSearch.
- Quality-Diversity (QD): seeks diverse high-performing solutions rather than a single optimum (e.g., via MAP-Elites). Examples: LLMatic (dual-archive system).
- LLM-driven evolution: uses LLMs as intelligent variation operators for code-level mutations and crossovers. Examples: EvoPrompting.

Table 8: Summary of Evolutionary Approaches for Neural Architecture Search.
- AutoBERT-Zero. Technique: EA (custom). Optimized aspects: BERT backbone structure. Objectives: performance (implied). Key contribution: evolved universal LLM backbones from scratch using EA-based NAS.
- DistilBERT NAS. Technique: NSGA-II (MOEA). Optimized aspects: attention heads, FFN (activation/layers/size), encoder blocks. Objectives: QA performance (F1/EM) vs. model size. Key contribution: applied MOEA for NAS on DistilBERT under a budget, showing efficient exploration.
- LLMatic. Technique: QD (MAP-Elites) + LLM. Optimized aspects: code-level (CNNs initially). Objectives: accuracy + diversity (width/depth, FLOPS). Key contribution: novel dual-archive QD approach using an LLM for code-level variation, seeking diverse networks.
- EvoPrompting. Technique: LM operator + evolutionary prompting. Optimized aspects: code-level (GNNs). Objectives: performance + diversity (implied). Key contribution: LM as adaptive operator; evolutionary prompt engineering + tuning finds superior GNNs.
- LiteTransformerSearch. Technique: MOEA. Optimized aspects: decoder layer hyperparameters. Objectives: perplexity vs. latency vs. memory. Key contribution: training-free MOEA-NAS for efficient GPT-2 style models.
- SuperShaper [51]. Technique: EA (implied). Optimized aspects: hidden dimensions. Objectives: task-agnostic pre-training. Key contribution: searched hidden dimensions for BERT using EC.
- Klein et al. [54]. Technique: MOEA (implied). Optimized aspects: subnetwork structures (pruning). Objectives: performance vs. model size. Key contribution: used MOEA for multi-objective structural pruning of LLMs via NAS.
- Choong et al. [55]. Technique: MO-MFEA. Optimized aspects: model configurations. Objectives: multi-task performance vs. size. Key contribution: used a multi-objective multi-task EA to find specialized smaller models from foundation models.
- GPT-NAS [56]. Technique: EA + LLM. Optimized aspects: network architecture. Objectives: performance (implied). Key contribution: used the LLM's generative capability within an EA framework for NAS.
- Guided Evolution [57]. Technique: EA + LLM. Optimized aspects: code-level models. Objectives: performance (implied). Key contribution: used LLM guidance within an evolutionary framework for model improvement.

LLMs with billions of parameters push current computational limits, restricting most applications to optimizing subsets of architectures rather than full models. Effective representation of LLM architectures in evolvable formats remains difficult, particularly when using code-based approaches. The exploration-exploitation balance becomes especially challenging when using LLMs as evolutionary operators, as they tend to bias toward known solutions. Fitness evaluation presents a bottleneck, often requiring noisy approximations through proxy tasks or surrogate models. Handling constraints like architectural validity or latency requirements adds further complexity. When LLMs are integrated into the evolutionary loop, they introduce additional challenges, including calibration issues, generator collapse (reduced diversity), limitations in complex reasoning, dependence on clear reward signals, and high sensitivity to input prompts. These combined factors make EA-based LLM optimization both computationally demanding and methodologically complex, requiring advances in efficient evaluation techniques and hybrid approaches to become more practical.
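Several of the mitigation ideas above (proxy tasks, surrogate models, budgeted evaluation) share one pattern: rank candidates cheaply, then spend the expensive LLM evaluations only on the most promising ones. The following is a hedged, self-contained sketch of that pattern using a toy 1-nearest-neighbour surrogate; it is our illustration, not code from any of the surveyed systems, which typically use learned regressors.

```python
import math

# Surrogate-assisted pre-screening (illustrative sketch): a cheap surrogate,
# built from already-evaluated candidates, ranks new candidates so that only
# the top k receive the expensive "true" evaluation (e.g., a full LLM run).

def surrogate_predict(candidate, archive):
    """Predict fitness as that of the nearest previously evaluated candidate."""
    nearest = min(archive, key=lambda rec: math.dist(rec[0], candidate))
    return nearest[1]

def prescreen(candidates, archive, true_fitness, k):
    """Rank by surrogate score, then truly evaluate only the top k candidates."""
    ranked = sorted(candidates, key=lambda c: surrogate_predict(c, archive),
                    reverse=True)
    results = []
    for cand in ranked[:k]:
        score = true_fitness(cand)     # the expensive, budgeted call
        archive.append((cand, score))  # the surrogate sharpens over time
        results.append((cand, score))
    return results
```

Because every true evaluation is appended to the archive, the surrogate's ranking improves as the search proceeds, which is what allows larger effective population sizes under a fixed evaluation budget.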
Future research has numerous potential directions to address current limitations and unlock further possibilities, including efficiency improvements through more accurate, cheaper, and scalable surrogate models or training-free fitness evaluation techniques, as well as reducing the computational overhead of integrating LLMs into
optimization. Scalability enhancements are needed to design EC and representations capable of handling larger search spaces from future LLM generations. Improved representations should explore sophisticated encodings for complex LLM architectures and hyperparameters to enhance evolutionary search. Advanced hybrid algorithms could integrate LLMs more deeply for reasoning, planning, or strategy generation, enabling dynamic adaptation of EA operators based on LLM insights. A stronger theoretical foundation is required to understand the convergence properties and limitations of hybrid EA-LLM systems. Robustness and generalization must be ensured so that optimized solutions perform well on unseen data and avoid catastrophic forgetting in continuous fine-tuning. Automated algorithm design (AutoML/AutoAD) could extend EC and LLMs to self-improving optimization systems. Security and safety research is crucial as LLMs gain autonomy through evolutionary optimization, necessitating risk mitigation. Finally, multimodal LLM optimization requires adapting EA techniques to handle non-textual data like images and audio.

EC can, in principle, complement gradient methods for large-language model optimization, yet scaling EC beyond toy settings exposes several distinctive hurdles. The first is cost: every candidate in an evolutionary population must be scored with at least one forward (and occasionally training) pass, so large populations become prohibitively slow and expensive. High-dimensional soft-prompt vectors compound that cost: mutating and recombining dense embeddings without collapsing them into noise or adversarial artefacts is non-trivial, and the resulting fitness landscape lacks the smooth gradients that guide back-propagation.
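One way to mutate dense soft-prompt embeddings without collapsing them into noise is to constrain the mutation geometrically, for instance by restoring each token vector's original norm after a Gaussian perturbation. The sketch below is our illustration of that idea, not an operator taken from the surveyed papers:

```python
import numpy as np

# Norm-preserving ("geometry-aware") soft-prompt mutation, illustrative only:
# perturb each token embedding with Gaussian noise, then rescale it back to
# its original norm so the mutant stays on the same shell of embedding space
# rather than drifting toward degenerate or adversarial vectors.

def mutate_soft_prompt(prompt_emb, sigma, rng):
    """prompt_emb: (n_tokens, dim) array of soft-prompt vectors."""
    mutant = prompt_emb + rng.normal(0.0, sigma, size=prompt_emb.shape)
    orig_norms = np.linalg.norm(prompt_emb, axis=1, keepdims=True)
    new_norms = np.linalg.norm(mutant, axis=1, keepdims=True)
    return mutant * (orig_norms / np.maximum(new_norms, 1e-12))
```

The same rescaling trick applies to recombination: averaging two parent embeddings shrinks their norms, and projecting the child back onto the parents' norm scale keeps offspring in the region the model was trained to interpret.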
Representing entire transformer architectures in an evolvable form is equally tricky—crossover must preserve weight sharing and block legality—while evaluation remains a bottleneck because full-task metrics are noisy and expensive; proxy tasks or surrogate regressors help but can mislead selection. Additional complications arise when the LLM itself acts as a mutation operator, because its inductive bias gravitates toward familiar phrasing and reduces diversity. Practical constraints such as latency, memory limits, or safety filters must also be respected during search. Finally, the theory lags behind practice: little is known about sample complexity, convergence guarantees, or how an LLM's "learning strategy" co-evolves with an EA's "search strategy." (Broader hybrid-system issues—distributed search, interpretability, catastrophic forgetting—are treated later in Section 4.4.) Addressing these EC-specific obstacles will require a mix of engineering and theory. Promising directions include fast, training-free fitness surrogates that widen feasible population sizes; geometry-aware mutation and crossover operators for continuous embeddings; legality-preserving encodings and grammar-guided search for ultra-large transformer variants; and hybrid schemes in which back-propagation performs local refinement while EC supplies global exploration. A firmer theoretical footing—for example, sample-efficiency bounds or criteria that predict when EC + LLM synergy outperforms either component alone—would guide algorithm design and resource allocation. Progress along these lines could make EC a practical, scalable tool for prompt and architecture optimisation in the next generation of LLMs.

3. LLMs for EC Improvement

3.1. LLM-Driven Automated Metaheuristic Design

3.1.1. LLM-Powered Generation of Metaheuristics

LLMs, such as GPT-4, represent a breakthrough in artificial intelligence, known for generating coherent and contextually meaningful text [58].
Trained on vast corpora of textual data, LLMs like GPT-4 excel in various tasks including text generation, summarization, translation, and question answering. GPT-4,
developed by OpenAI and based on the transformer architecture [59], has demonstrated state-of-the-art performance in natural language processing (NLP), making it a powerful tool not only for language tasks but also for supporting broader applications such as optimization and algorithm design. Recognizing these capabilities, Pluhacek et al. [60] leveraged GPT-4 to design a novel mutation strategy for Differential Evolution (DE) [61], aiming to enhance the adaptability and performance of DE in solving complex optimization problems. The authors initiated the design process by prompting GPT-4 with a carefully crafted request:

Prompt: Provide a novel and innovative mutation strategy for DE with superior performance to DE/rand/1/bin on the proposed benchmark set.

In response, GPT-4 proposed a mutation strategy named DE/dynamic-switch/1/bin. This approach introduces a dynamic switching mechanism, where individuals are selected for mutation based on a probabilistic model. Specifically, two probabilities, p_i and p_j, determine whether the i-th and j-th individuals in the population are replaced by the current best-performing individual. By incorporating these probabilities, the mutation process gains an adaptive quality, enabling the algorithm to balance exploration and exploitation more effectively as it navigates the search space.

Further extending this line of inquiry, Pluhacek et al. [62] explored the generation of novel hybrid swarm intelligence algorithms using GPT-4. They focused on six prominent swarm-based algorithms: Particle Swarm Optimization (PSO), Cuckoo Search (CS), Artificial Bee Colony (ABC), Grey Wolf Optimizer (GWO), Self-Organizing Migrating Algorithm (SOMA), and Whale Optimization Algorithm (WOA), and tasked GPT-4 with constructing hybrid frameworks that integrate the strengths of these methods.
To facilitate this, the researchers developed five structured tasks and fifteen tailored prompts, guiding GPT-4 through selecting algorithms, identifying key algorithmic components, and generating novel strategies to enhance diversity and maintain an effective balance between exploration and exploitation. The Enhanced Swarm Exploration and Exploitation Optimizer (ESEEO) was among the outcomes, complete with algorithmic description, pseudocode, and Python implementation. Additionally, GPT-4 was prompted to design a metaheuristic optimized for expensive problems with limited function evaluations. This resulted in the Limited Evaluation of Swarm Optimizer (LESO), which was designed with practical efficiency. Full prompt details and implementation steps are provided in the Supplementary File, while the experimental flow is visually depicted in Fig. 4.

Building on these findings, Pluhacek et al. introduced a subsequent extension in [6], where GPT-4 was used to enhance the Self-Organizing Migrating Algorithm (SOMA) [63, 64]. They developed a Python template incorporating SOMA's All-To-All variant (SOMA-ATA) as the baseline algorithm due to its relatively lower representation in LLM training datasets compared to algorithms like Differential Evolution (DE) or PSO. It mimics self-organization and cooperative behavior, with the SOMA-ATA strategy guiding each individual to migrate toward all others in the population. This approach offers a fresh perspective on autonomously generating metaheuristic algorithms, potentially leading to novel and unbiased enhancements. The study assessed whether iterative prompting without feedback could continuously refine performance by leveraging the model's extensive context size. In other words, the SOMA-ATA variant was selected as the baseline due to its comparatively limited representation in GPT-4's training data, potentially offering less biased outcomes.
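For concreteness, the All-To-All migration that SOMA-ATA is built on can be sketched as follows. This is a deliberately simplified illustration, not the code used in the study: it omits SOMA's PRT perturbation vector, and the step and path-length values are arbitrary defaults.

```python
import numpy as np

# Simplified SOMA All-To-All migration round: each individual takes discrete
# steps along the line toward every other individual and keeps the best
# position it finds (minimization). Real SOMA additionally perturbs the
# direction with a random PRT vector each step.

def soma_ata_step(pop, fitness, step=0.3, path_length=1.5):
    new_pop = pop.copy()
    n_steps = round(path_length / step)
    for i in range(len(pop)):
        best_x, best_f = pop[i], fitness(pop[i])
        for j in range(len(pop)):
            if i == j:
                continue
            for k in range(1, n_steps + 1):
                cand = pop[i] + k * step * (pop[j] - pop[i])
                f = fitness(cand)
                if f < best_f:
                    best_x, best_f = cand, f
        new_pop[i] = best_x
    return new_pop
```

Note that a path length greater than 1 lets individuals overshoot their target and explore beyond it, which is part of SOMA's exploratory character.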
Figure 4: Experimental workflow. (The figure charts Tasks A-E: Task A, selection of algorithms; Task B, identification of components; Task C, hybridization yielding the ESEEO and LESO designs; Task D, pseudocodes and Python implementations; Task E, reasoning; driven by Prompts 1-15.)

SOMA-ATA simulates cooperative migration behaviors by guiding individuals to move toward all other individuals in the population, making it an ideal testbed for autonomous enhancement using LLMs. In this study, the researchers employed a Python-based SOMA-ATA implementation as the starting prompt.

They adopted a "Repetitive Prompt" strategy, wherein GPT-4 was iteratively prompted using the latest version of the code it had just generated. This cycle was repeated twenty times, with each iteration representing an opportunity for the model to autonomously refine and improve the algorithm. This method demonstrated GPT-4's capacity to act as a self-improving system for metaheuristic algorithm development, offering novel insights into automated algorithmic innovation.
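The Repetitive Prompt loop itself is easy to sketch; `call_llm` below is a hypothetical stand-in for the GPT-4 API call used in the study, and the prompt wording is our own placeholder:

```python
# "Repetitive Prompt" strategy (illustrative sketch): no benchmark feedback
# is fed back; each round the LLM only sees the latest code it produced.

def repetitive_prompt(initial_code, call_llm, iterations=20):
    """Iteratively ask the LLM to improve the most recent code version."""
    versions = [initial_code]
    for _ in range(iterations):
        prompt = "Improve the following SOMA-ATA implementation:\n" + versions[-1]
        versions.append(call_llm(prompt))
    return versions
```

Keeping the full version history (rather than only the last output) is what enables the study's post-hoc analysis of whether quality actually improves monotonically across the twenty rounds.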
Compared to metaheuristics, which are full-fledged algorithmic solvers, hyper-heuristics typically operate at a higher level by generating or selecting heuristics that guide problem-solving. Recent advances have demonstrated that LLMs can be effectively integrated with evolutionary frameworks to autonomously generate, evaluate, and refine code for optimization tasks. This paradigm enables the automated synthesis of algorithms without requiring human-crafted rules or manually trained models, fostering a new generation of metaheuristic design systems.

A notable milestone in this direction is FunSearch [65], developed by Google DeepMind, which demonstrated how LLMs can be integrated with evolutionary algorithms to generate and evolve functional code for solving mathematical and algorithmic problems. FunSearch introduced the paradigm of iteratively generating code snippets via LLMs, evaluating them through task-specific reward functions, and feeding back successful candidates to guide further generation. This paradigm strongly influenced subsequent efforts, including AoL (Algorithm of Language), FunBO, and Evolution of Heuristics (EoH), by establishing a foundation for LLM-driven program synthesis in an evolutionary loop. Building on this idea, the FunBO framework [66] extended FunSearch to the domain of Bayesian optimization by evolving new acquisition functions (AFs) through LLM-guided code generation. FunBO leverages a limited number of evaluations over a set of objective functions to discover AFs that generalize well both within and beyond the training distribution. It demonstrates competitive or superior performance compared to hand-crafted and transfer-learned AFs, highlighting the potential of LLMs to design data-efficient search strategies across optimization landscapes.

In another promising direction of LLM integration, Liu et al. [67] introduced the Evolution of Heuristics (EoH), a pioneering framework
that integrates LLMs with EC to autonomously generate, evaluate, and refine heuristics. The goal of EoH is to fully automate the heuristic design process, eliminating the need for human-crafted rules or dedicated models to be trained. EoH uses the generative power of LLMs to propose new heuristics and iteratively improves them through evolutionary refinement, creating a closed-loop system for optimization algorithm development. A distinctive feature of EoH is its dual representation of heuristics, both in natural language (referred to as "thought") and executable code. In each iteration, the LLM generates a conceptual explanation of a heuristic and then translates this concept into a working implementation. This mimics the heuristic development process of a human expert, capable of articulating ideas and immediately implementing them. EoH uses a series of prompting strategies to navigate the heuristic space effectively, encouraging the LLM to reason over previously generated heuristics and their performance. These strategies enhance the model's ability to reuse and modify prior knowledge, improving exploration across the search space. The evolutionary loop is driven by typical genetic operations such as crossover and mutation, applied in this case by the LLM itself, and guided by a selection mechanism that retains only high-performing heuristics for future iterations.

1 https://github.com/google-deepmind/funsearch

Expanding upon this concept, Yao et al. [68] proposed a Multi-objective Evolution of Heuristics (MEoH) framework, extending the original EoH to support multi-objective optimization tasks. MEoH integrates LLMs with Multi-objective Evolutionary Algorithms (MOEAs) to produce heuristics that satisfy multiple design objectives simultaneously, such as computational efficiency, scalability, and solution quality, rather than optimizing a single performance metric.
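Multi-objective selection in a framework like MEoH ultimately rests on Pareto dominance over objective vectors such as (solution quality, running time). A minimal sketch of the dominance test and non-dominated filtering follows; this is our illustration, and MEoH's dominance-dissimilarity mechanism additionally scores dissimilarity between heuristics:

```python
# Minimal Pareto machinery (illustrative sketch). Objective vectors are
# minimized, e.g., (1 - quality, running_time).

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(objs):
    """Return the Pareto front of a list of distinct objective vectors."""
    return [a for a in objs if not any(dominates(b, a) for b in objs if b is not a)]
```

Keeping the whole front, rather than a single best heuristic, is what lets a multi-objective framework hand back a menu of trade-offs (fast-but-rough vs. slow-but-accurate heuristics) after one search run.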
A central innovation in MEoH is introducing a dominance-dissimilarity mechanism that enhances diversity in the objective and heuristic spaces. This mechanism manages population diversity by evaluating dominance relationships among solutions in the objective space and dissimilarity among heuristics in the solution space. MEoH also inherits five LLM-driven operators from EoH [67] (E1, E2, M1, M2, and M3) that enable exploration and exploitation through heuristic generation and modification. The framework is validated on classic combinatorial problems, including the online Bin Packing Problem (BPP) and the Travelling Salesman Problem (TSP), demonstrating its versatility and efficacy.

Furthering the idea of LLM-guided hyper-heuristics, Ye et al. [69] introduced ReEvo, a novel framework that integrates evolutionary search with LLM reflections to enhance language hyper-heuristics (LHHs) for combinatorial optimization problems (COPs). ReEvo leverages LLMs to generate heuristics while employing Genetic Programming (GP) to explore the heuristic space efficiently. By combining evolutionary search with LLM-based self-reflections, ReEvo enhances the reasoning capabilities of LLMs. It mimics human experts by analyzing heuristic performance across iterations, providing a "verbal gradient" within search spaces. ReEvo incorporates both short-term and long-term reflections to refine heuristic design:

(i) Short-term reflections: The generator LLM creates offspring heuristics based on task specifications, parent heuristics, relative performance, and generation instructions.

(ii) Long-term reflections: Expertise accumulates by summarizing previous reflections and generating hints for improved heuristic design.

Within an evolutionary framework, ReEvo represents heuristics as code snippets and follows a structured process including population initialization, selection, short-term reflection, crossover, long-term reflection,
https://arxiv.org/abs/2505.15741v1
and elitist mutation. By incorporating both local adaptation and global reasoning, ReEvo brings human-like adaptability to the automated discovery of optimization strategies. (ReEvo code: https://ai4co.github.io/reevo. For a COP with solution space S and objective function f : S → R, a hyper-heuristic (HH) searches for the optimal heuristic h* in a heuristic space H such that a meta-objective function F : H → R is minimized, i.e., h* = argmin_{h ∈ H} F(h); an LHH is an HH variant where the heuristics in H are generated by LLMs.)

Stein et al. [12] developed the LLaMEA framework (https://zenodo.org/records/13268663), which integrates GPT-4 with EC to iteratively generate and refine optimization strategies. LLaMEA follows an EA-like loop: algorithms are generated, mutated, and selected based on performance evaluations. This enables the dynamic evolution of optimization code without requiring extensive prior expertise or manual coding. To evaluate the generated algorithms, LLaMEA incorporates the IOHprofiler suite [70], which includes IOHexperimenter [71] for benchmark execution and IOHanalyzer [72] for statistical performance analysis. The framework uses in-context learning, error handling, and selection strategies to iteratively improve algorithm quality. Its selection strategy determines whether a refined algorithm is accepted only when it improves performance, or whether newly generated algorithms are always accepted. The mutation and selection steps involve constructing a feedback prompt for the LLM, guiding it to either refine an existing algorithm or generate a new one. The LLaMEA framework relies on two key prompts that define the optimization process: (i) Task prompt (S): Your task is to design novel metaheuristic algorithms to solve black-box optimization problems. The optimization algorithm should handle many tasks and be evaluated on a large test suite of noiseless functions. Your task is to write the optimization algorithm in Python code.
The code should contain one function: def __call__(self, f), which should optimize the black-box function f using budget function evaluations. The function f() can only be called as many times as the budget allows. An example of such code is as follows: <initial example code>. Give a novel heuristic algorithm to solve this task. Give the response in the format:

# Name: <name of the algorithm>
# Code: <code>

(ii) Task-feedback prompt: a list of previously generated algorithm names with their mean AOCC score; the selected algorithm to refine (full code), along with its mean and standard deviation (AOCC) scores; and an instruction to either refine or redesign the algorithm to improve its performance. These iterative prompts form the core of LLaMEA's optimization loop, enabling GPT-4 to participate in evolutionary algorithm design as both a solution generator and a performance-aware optimizer. By continuously refining algorithmic components using benchmark feedback, LLaMEA demonstrates how LLMs can facilitate the automated generation of high-quality, adaptive metaheuristic algorithms. As a follow-up to LLaMEA, LLaMEA-HPO [12] (https://zenodo.org/records/13834123) extends this framework to offload hyper-parameter optimization (HPO) to an external Bayesian Optimization tool specialized in HPO. This way, the LLM can focus on the structural parts of algorithm discovery while the tuning of the generated algorithm happens inside the loop via HPO tooling. It introduces a hybrid optimization scheme that integrates LLM-generated suggestions with surrogate-assisted tuning, making it suitable for data-efficient scenarios. Like EoH [67], LLaMEA-HPO uses evolutionary principles
and language model reasoning to improve optimization performance. Both frameworks aim to fully automate the optimization process, but they target complementary aspects: EoH focuses on heuristic discovery for combinatorial optimization, while LLaMEA and LLaMEA-HPO focus on the efficient discovery of complete code bases for continuous black-box optimization.

3.2. LLM-Assisted EC Tuning
LLMs have recently emerged as powerful tools for enhancing the performance of EC by enabling more intelligent and adaptive control of algorithmic behavior. In particular, their capacity for inference and contextual reasoning allows them to support EA components such as surrogate modeling and operator tuning. This section explores two major contributions in this direction: using LLMs as surrogate models for approximating fitness evaluations, and their role in adaptive operator selection based on performance feedback. These advancements show how LLMs can be integrated into EC not only as passive generators but also as active agents guiding search dynamics through learned inference, adaptation, and reflective reasoning.

3.2.1. Surrogate Modeling for EA Optimization
Hao et al. [73] introduced a novel surrogate modeling approach that leverages the inference capabilities of LLMs to enhance selection mechanisms in EC [74]. Their proposed method transforms model-assisted selection into an inference task, where LLMs evaluate the quality of candidate solutions using historical evaluation data. This is achieved through tailored prompt engineering that allows LLMs to classify or regress fitness estimates based on learned patterns from previous generations. The resulting framework, LLM-assisted EA (LAEA), integrates LLMs as surrogate models to support evolutionary search. The integration process consists of four core steps: preprocessing, prompt generation, inference, and post-processing, as detailed in Algorithm 1.
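The four-step loop (preprocessing, prompt generation, inference, post-processing) can be sketched as follows. This is an illustrative reconstruction of Algorithm 1, not the authors' code; the stub LLM and the exact prompt wording are assumptions.

```python
import json

def generate_prompt(X, Y, u, opt):
    """Build a regression/classification prompt from historical pairs (X, Y)
    and one unevaluated candidate u (structure follows the paper's five
    components in spirit; wording is illustrative)."""
    history = "\n".join(f"Features: {x}; Value: {y}" for x, y in zip(X, Y))
    task = ("predict the numerical value" if opt == "regression"
            else "classify as 'better' or 'worse'")
    return (f"Your task is to {task} of each object based on its attributes.\n"
            f"{history}\nFeatures: {u}\n"
            'Respond in JSON with the format {"Value": ...} only.')

def inference(llm, prompt):
    """Query the (stubbed) LLM with the assembled prompt."""
    return llm(prompt)

def post_processing(response, opt):
    """Parse the JSON-formatted response into a usable prediction."""
    value = json.loads(response)["Value"]
    return float(value) if opt == "regression" else value

def laea_surrogate(X, Y, U, llm, opt="regression"):
    """Algorithm 1 in miniature: loop over unevaluated candidates and
    collect LLM-predicted outputs."""
    predictions = []
    for u in U:
        prompt = generate_prompt(X, Y, u, opt)
        response = inference(llm, prompt)
        predictions.append(post_processing(response, opt))
    return predictions

# Stub LLM: always predicts the mean of the historical values.
X = [[0.1, 0.2], [0.8, 0.9]]
Y = [0.3, 0.7]
stub_llm = lambda prompt: json.dumps({"Value": sum(Y) / len(Y)})
preds = laea_surrogate(X, Y, U=[[0.5, 0.5]], llm=stub_llm)
# preds == [0.5]
```

In the real LAEA loop these predictions replace expensive fitness evaluations during selection; here a constant-mean stub merely exercises the plumbing.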
Let X denote the set of evaluated solutions, Y their corresponding values or labels, U the unevaluated candidate solutions, Ỹ the predicted outputs, and Opt the task type, either regression or classification. In the case of regression tasks, the algorithm utilizes the historical input-output pairs (X, Y) to predict values for the new candidates U. Prompt generation, illustrated in Fig. 5, includes five structured components: a task description, a process description, a dataset summary containing historical records, feature vectors of new candidate solutions (u), and an output specification requiring JSON-formatted responses. For classification tasks, a similar methodology is applied. Here, the objective is to assign binary labels (e.g., 1 or 0) to the unevaluated candidates U, based on patterns inferred from the previously labeled set (X, Y). The generation of classification-specific prompts is shown in Fig. 6, where the label set Y is typically derived from upstream decision tasks or heuristics. By embedding these inference capabilities within the evolutionary loop, LAEA enables LLMs to serve as powerful surrogates. Instead of relying on traditional machine learning models, the LLM provides probabilistic predictions or binary decisions that guide the selection process. This approach combines linguistic and statistical reasoning, offering a flexible and generalizable alternative for surrogate-assisted evolutionary algorithms (SAEAs). A complementary and noteworthy contribution is LLAMBO (Large Language Models to Enhance Bayesian Optimization) [75], which brings LLMs into the surrogate modeling loop for black-box optimization tasks. LLAMBO addresses the cold-start problem in Bayesian Optimization (BO) by using LLMs for zero-shot warm-starting, predicting promising initial configurations without
requiring prior evaluations. This is particularly valuable in scenarios with limited data or expensive fitness evaluations. LLAMBO integrates LLM-generated prompts into the BO workflow by encoding prior configuration-performance pairs as text. The LLM then provides predictions that enhance three critical BO components: (1) it initializes the surrogate model via LLM-generated predictions, (2) proposes candidate solutions using LLM-inferred priors, and (3) incorporates LLM-based sampling strategies that are informed by the trajectory of the optimization process. The framework features a modular, interpretable architecture that seamlessly integrates into existing BO pipelines. Across various synthetic and real-world benchmarks, LLAMBO demonstrates superior performance, underscoring the utility of LLMs not only as repositories of general knowledge but also as active agents in guiding and accelerating optimization. Its plug-and-play design makes it especially appealing for EC-based hyperparameter tuning and surrogate modeling.

3.2.2. Adaptive Operator Selection via LLMs
Martinek et al. [76] explored using LLMs for tuning parameters and operators in metaheuristic algorithms, including GA, ACO, PSO, and SA. Their study focused on solving two classical combinatorial problems: the TSP and the Graph Coloring Problem (GCP). The goal was to determine whether LLMs could effectively suggest adaptive parameter configurations for these algorithms and refine them based on iterative feedback. In this framework, the LLMs are first provided with detailed problem specifications, an initial algorithmic setup, and performance statistics from early runs. Based on this information, the LLM suggests a set of parameter values for the algorithm in question. These values are then evaluated through controlled experiments, and the resulting performance, particularly the average solution quality and population variance, is fed back into the model for refinement.
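The suggest-evaluate-feed-back cycle just described can be sketched as below. Both `run_experiment` and `llm_suggest` are stubs standing in for real metaheuristic runs and LLM calls, and a budget-parity constraint of the kind used in the study (population size times epochs held constant) is encoded in the stub.

```python
def tune_with_llm(llm_suggest, run_experiment, init_params, iterations=3):
    """Iterative LLM-driven parameter tuning: suggest -> evaluate -> feed back.
    llm_suggest(params, stats) returns a new parameter dict (stubbed here);
    run_experiment returns summary statistics, with avg_error to minimize."""
    params, best = dict(init_params), None
    for _ in range(iterations):
        stats = run_experiment(params)            # e.g. avg error, variance
        if best is None or stats["avg_error"] < best["avg_error"]:
            best = {**stats, "params": dict(params)}
        params = llm_suggest(params, stats)       # refined suggestion
    return best

# Stub components keeping epoch * pop_size constant (budget parity).
BUDGET = 2000
run = lambda p: {"avg_error": 1.0 / p["pop_size"], "variance": 0.1}
suggest = lambda p, s: {"pop_size": min(p["pop_size"] * 2, BUDGET),
                        "epochs": BUDGET // min(p["pop_size"] * 2, BUDGET)}
best = tune_with_llm(suggest, run, {"pop_size": 50, "epochs": BUDGET // 50})
```

The stub always doubles the population while halving the epochs, so the evaluation budget stays fixed across configurations, mirroring the fairness constraint the authors imposed.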
This feedback loop enables the LLM to iteratively improve parameter suggestions, often leading to better performance than the initial configurations. To ensure fair benchmarking across algorithms and configurations, the authors constrained the total computational effort by fixing the product of population size and the number of epochs. (LLAMBO code: https://github.com/tennisonliu/LLAMBO, https://github.com/vanderschaarlab/LLAMBO.)

The experiments incorporated multiple LLMs, including two versions of ChatGPT (OpenAI), Gemini (Google), and Le Chat (Mistral AI). The prompts used in each iteration, summarized in Table 9, contained comprehensive information including the optimization problem, current parameter settings, observed population variance, approximated global optima, and performance at the most recent epoch. The study demonstrated that LLMs possess a strong capability for adaptive reasoning. They could meaningfully update parameter values in response to observed data, suggesting that LLMs are suitable for both static configuration and online adaptive operator control in metaheuristic optimization.

Figure 5: Prompt and procedure for regression task.
Prompt for LLM: "Your task is to predict the numerical value of each object based on its attributes. These attributes and their corresponding values are outcomes of a black box function's operation within its decision space. The target value for each object is determined by a specific mapping from these attributes through the black box function. Your objective is to infer the underlying relationships and patterns within the black box function using the provided historical data. This task goes beyond simple statistical analyses, such as calculating means or variances, and requires understanding the complex interactions between the attributes. Please do not attempt to fit the function using code similar to Python; instead, directly learn and infer the numerical values. 1. Analyze the historical data to uncover how attributes relate to the numerical values. 2. Use these insights to predict the numerical value of new objects based on their attributes. 3. Respond using JSON format, e.g. 'Value': 'approximation result'."
Historical examples:
Features: <0.338, 0.531, ..., 0.363>; Value: 0.41148
Features: <0.207, 0.598, ..., 0.285>; Value: 0.35745
...
Features: <0.629, 0.029, ..., 0.279>; Value: 0.67179
New evaluation: Features: <0.189, 0.917, ..., 0.443>
Note: Respond in JSON with the format 'Value': 'approximation result' only.

Figure 6: Prompt and procedure for classification task.
Prompt for LLM: "You are tasked with evaluating each object based on its numerical attributes to determine its category as 'better' or 'worse'. These attributes derive from a black box function's decision space, with the assessment of the label based on the post-mapping function values. Your role involves discerning the internal variable relationships of the black box function from provided historical data, moving beyond mere statistical analyses like calculating means and variances. 1. Identify patterns in how attributes are categorized. 2. Apply these patterns to assess new objects, determining whether their category is better or worse. 3. Respond using JSON format, e.g. 'Class': 'result'."
Historical examples:
Features: <0.555, 0.881, ..., 0.491>; Class: better
Features: <0.593, 0.515, ..., 0.456>; Class: worse
...
Features: <0.253, 0.747, ..., 0.475>; Class: better
New evaluation: Features: <0.189, 0.917, ..., 0.443>; better or worse?
Note: Respond in JSON with the format 'Class': 'result' only.

3.2.3.
Pattern-Guided Evolution via LLMs
OptiPattern [77] is a novel hybrid framework that enhances metaheuristic (MH) optimization by leveraging LLMs for pattern recognition within problem-instance metrics. Rather than relying on LLMs to act as direct optimizers, which often restricts them to small problems due to limitations in reproducibility and scale, OptiPattern capitalizes on LLMs' semantic understanding and generalization capabilities to extract meaningful patterns from input data that can guide evolutionary search.

Algorithm 1: LLM as Surrogate Model
Require: X, Y, U, LLM, Opt
Ensure: Ỹ
1: Ỹ ← ∅
2: X, U ← Preprocessing(X, U)
3: for u ∈ U do
4:   prompt ← GeneratePrompt(X, Y, u, Opt)
5:   response ← Inference(LLM, prompt)
6:   y ← PostProcessing(response, Opt)
7:   Ỹ ← Ỹ ∪ y
8: end for

Table 9: Three types of prompts used in Ref. [76].
(i) Which metaheuristic algorithms would you use to solve a TSP?
(ii) I want to solve TSP (defined on 15 cities) with GA. Its mealpy implementation takes the following parameters: parameters. Advise on parameter values.
(iii) For the TSP task with 15 cities, you previously suggested parameter values for the Genetic Algorithm metaheuristic: parameters and values. I ran the GA algorithm 100 times and got the average global optimum of 0.375 with a standard deviation of 0.03. I also measured the variance of the population at the beginning and last epoch: 9.598 and 5.675 correspondingly, with std of 0.09 and 0.58. The solution at the last epoch had an average fitness of 0.45 with a standard deviation of 0.37. What changes to the parameters would you suggest to improve performance? Keep the epoch*pop size constant.

The approach is validated on the Multi-Hop Influence Maximization in Social Networks (MHIM) problem [78,
79], a complex combinatorial optimization task involving graph structures. OptiPattern performs competently, outperforming conventional hybrid metaheuristics that combine MHs with deep learning models. The implementation is publicly available at https://github.com/camilochs/optipattern. At the core of OptiPattern lies a Biased Random Key Genetic Algorithm (BRKGA), whose decoder is augmented with node selection probabilities predicted by the LLM. These probabilities guide the mapping of random keys to valid solutions, embedding LLM-inferred structural knowledge directly into the search process. This fusion allows the MH to benefit from LLM-generated priors without compromising its exploratory and adaptive nature. The framework operates in three key phases: (i) LLM Prompt Generation and Execution: automatically structured prompts are generated based on the problem instance, including graphs and rule descriptions. (ii) Pattern Extraction via Probabilistic Encoding: the LLM outputs ten parameters (five α and five β values), which are used to compute the node-level probabilities in the evaluation graph using a predefined analytical formula. (iii) Probability-Guided Decoding in the MH: these probabilities are embedded into the BRKGA decoding process, influencing solution construction by prioritizing nodes more likely to contribute to optimal outcomes. The prompt design is instrumental in determining the quality of LLM outputs. Each prompt consists of four structured tags: P := prompt(Tag1, Tag2, Tag3, Tag4), where: Tag1 = [PROBLEM], a formal textual description of the optimization task; Tag2 = [EXAMPLE GRAPH], an illustrative graph used to teach structure; Tag3 = [EVALUATION GRAPH], the real instance on which optimization is performed; Tag4 = [RULES ANSWERING], instruction constraints to enforce format and correctness.
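The prompt assembly and probability extraction can be sketched as follows, under stated assumptions: the paper's predefined analytical formula for node probabilities is not reproduced here, so a simple linear score with normalization stands in for it, and the tag contents are caller-supplied strings.

```python
def build_prompt(problem, example_graph, evaluation_graph, rules):
    """Assemble an OptiPattern-style prompt P := prompt(Tag1..Tag4)."""
    return (f"[PROBLEM]\n{problem}\n"
            f"[EXAMPLE GRAPH]\n{example_graph}\n"
            f"[EVALUATION GRAPH]\n{evaluation_graph}\n"
            f"[RULES ANSWERING]\n{rules}\n")

def node_probabilities(alphas, betas, node_features):
    """Turn the ten LLM-returned parameters (five alphas, five betas) into
    node-level probabilities. The paper uses a predefined analytical
    formula; a linear score plus normalization stands in for it here."""
    scores = []
    for feats in node_features:  # one feature vector per node, length 5
        s = sum(a * x + b for a, b, x in zip(alphas, betas, feats))
        scores.append(max(s, 1e-9))  # keep scores positive before normalizing
    total = sum(scores)
    return [s / total for s in scores]

probs = node_probabilities(
    alphas=[0.2, 0.1, 0.4, 0.1, 0.2],
    betas=[0.0, 0.1, 0.0, 0.2, 0.0],
    node_features=[[1, 0, 1, 0, 1], [0, 1, 0, 1, 0], [1, 1, 1, 1, 1]],
)
# probs sums to 1 and can bias BRKGA's random-key decoding toward
# nodes the LLM considers more promising
```

In the actual system, these probabilities would weight the BRKGA decoder's mapping from random keys to selected nodes; the analytical formula itself is defined in [77].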
This design allows the LLM to abstract useful latent structures and patterns from the instance-specific context and translate them into useful probabilistic priors for the MH. In contrast to previous works where LLMs function as black-box optimizers, OptiPattern provides a middle ground, enabling LLM-informed evolutionary optimization. It retains the scalability and robustness of MHs while enriching them with semantic and structural insights from LLMs, establishing a powerful blueprint for hybrid, pattern-driven optimization workflows.

3.3. LLM-Generated Metaheuristics
This subsection covers approaches where LLMs are directly used to synthesize novel metaheuristic algorithms and self-contained optimization strategies designed to solve black-box problems. Unlike hyper-heuristics, which guide low-level solvers, metaheuristics define the algorithmic search behavior themselves. Here, LLMs act as autonomous designers of complete optimization algorithms, often modeled after evolutionary or swarm-based strategies. Recent advancements in LLMs have enabled the automatic generation of novel metaheuristic (MH) algorithms. By leveraging the powerful reasoning and language capabilities of models like ChatGPT-3.5 and GPT-4, researchers have begun to explore using LLMs as autonomous agents for optimization algorithm design. This subsection reviews emerging approaches that employ LLMs to generate, execute, and iteratively refine metaheuristics, highlighting the potential of natural language as a new interface for metaheuristic innovation. Zhong et al. [80] proposed Zoological Search Optimization (ZSO), an MH inspired by collective animal behaviors and generated entirely through ChatGPT-3.5. The authors introduced the CRISPE framework (Capacity and Role, Insight, Statement, Personality, and Experiment) to guide the LLM through a structured prompt-engineering process. In the insight phase, the LLM is
instructed to generate an animal-inspired MH suitable for black-box optimization problems. The statement phase requests a detailed algorithm design, including inspiration, mathematical equations, parameter settings, and a flowchart. The personality component encourages novelty by ensuring the output differs significantly from existing methods such as GA, DE, ES, and PSO. Finally, in the experiment phase, the LLM is constrained to output only one unique algorithm per prompt. This structured approach allowed ZSO to demonstrate how LLMs can autonomously generate innovative algorithms without human intervention, showcasing the potential of prompt-driven MH design. Liu et al. [9] introduced LLM-driven EA (LMEA), a zero-shot approach that uses LLMs as combinatorial optimizers [81]. LMEA eliminates the need for domain-specific knowledge, additional model training, or hand-coded operators. Instead, the LLM performs evolutionary operations, such as parent selection, crossover, and mutation, to generate offspring in a GA setting. These offspring are then evaluated and incorporated into the population for the next generation. A key feature of LMEA is its self-adaptation mechanism, which dynamically adjusts the LLM's temperature to balance exploration and exploitation and to avoid premature convergence to local optima, following the framework presented in Algorithm 2. LMEA operates based on a structured prompt consisting of three main components: (i) a problem description and solution properties, which define the optimization task and valid solution characteristics; (ii) in-context examples, which provide previous solutions and their fitness values; and (iii) task instructions that guide the LLM to perform parent selection, crossover, and mutation. For example, in solving the traveling salesman problem (TSP), the prompt specifies city coordinates, solution constraints (e.g., visiting each city exactly once), and fitness (total travel distance).
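The three-part prompt structure just described can be sketched as follows; the wording paraphrases the TSP example and is not the verbatim LMEA prompt.

```python
def build_lmea_prompt(points, population, n_offspring):
    """Assemble LMEA's three-part prompt (problem description and solution
    properties, in-context examples, task instructions) for the TSP."""
    # (i) Problem description and solution properties.
    desc = (f"You are given a list of points with coordinates: {points}. "
            "Your task is to find a trace, with the shortest possible length, "
            "that traverses each point exactly once.")
    # (ii) In-context examples: previous solutions ordered by fitness.
    examples = "\n".join(
        f"trace: {trace}; length: {length:.2f}"
        for trace, length in sorted(population, key=lambda p: p[1]))
    # (iii) Task instructions: the evolutionary operations to perform.
    instructions = (
        "1. Select two traces from the above.\n"
        "2. Crossover the two traces from Step 1 and generate a new trace.\n"
        "3. Mutate the trace from Step 2 to generate a new trace.\n"
        f"4. Repeat until {n_offspring} traces are generated.\n"
        "Provide traces in XML-like format using <selection> and <res> tags.")
    return (f"{desc}\n\nPrevious traces, ordered by length:\n"
            f"{examples}\n\n{instructions}")

prompt = build_lmea_prompt(
    points=[(0, 0), (1, 0), (1, 1)],
    population=[([0, 1, 2], 3.41), ([0, 2, 1], 3.65)],
    n_offspring=2,
)
```

Ordering the in-context examples by length gives the LLM an implicit fitness gradient to imitate, which is part of what makes the zero-shot setting workable.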
The LLM then uses evolutionary principles to generate new solutions. To ensure consistency, outputs are enclosed in standardized tags, such as <selection> for parents and <res> for solutions. Unlike traditional EC, which relies on manually programmed operators, LMEA delegates these responsibilities to the LLM, enabling flexible and scalable optimization with minimal expert intervention. Fig. 7 illustrates an example of a prompt used for solving the TSP with LMEA. The problem description includes the coordinates of the cities, while the solution properties outline constraints such as visiting each city exactly once and minimizing the total travel distance. The in-context examples contain previously generated TSP solutions and their corresponding fitness (path lengths). The task instructions direct the LLM to generate new solutions based on evolutionary principles. While both Sections 3.1.2 and 3.3 explore the use of LLMs in optimization algorithm design, they differ in abstraction level: the former focuses on heuristic reasoning and selection (hyper-heuristics), while the latter targets the automatic generation of executable metaheuristic solvers. This separation allows a clearer comparison of LLM utility across different levels of algorithm synthesis.

Algorithm 2: Pseudo-code of LMEA
Require: optimization problem T, maximum generations G, population size N
Ensure: best found solution s*
1: Initialize population P with N random solutions for T
2: Set generation counter g ← 1
3: while g ≤ G do
4:   Construct a prompt based on T and P
5:   Generate N offspring solutions P′ using the LLM with the constructed prompt
6:   Select the top
N solutions from P ∪ P′
7:   Adjust LLM parameters (e.g., temperature) if necessary
8:   Increment generation counter g ← g + 1
9: end while
10: Select the best solution s* from P
11: return s*

3.4. Genetic Programming & LLM Synergy
This subsection examines the synergistic integration of Genetic Programming (GP), machine learning, and LLMs within EC. The first part focuses on GP-based generative hyper-heuristics, which evolve high-level decision-making rules across multiple tasks to enable adaptive and generalizable scheduling solutions. These approaches operate in the heuristic space and leverage multifactorial optimization and knowledge-sharing mechanisms to enhance multitasking performance. The second part explores learnable evolution models that embed inductive learning into the evolutionary process. In these frameworks, machine learning methods guide the generation of new individuals, enabling more informed and data-driven search strategies. One notable example is LEMABE, a hybrid model that alternates between machine learning and evolutionary operations to optimize key components, such as feature weighting in analogy-based estimation. Together, these strategies illustrate a shift toward more intelligent, adaptable, and automated metaheuristic generation by combining GP, data-driven learning, and the natural language reasoning and generation capabilities of LLMs.

3.4.1. GP-Based Generative Hyper-Heuristics
Zhang et al. [82] introduced a multitask GP-based generative hyper-heuristic framework for dynamic scheduling problems. Unlike most existing multitask hyper-heuristics, which primarily focus on heuristic selection, this approach emphasizes the generation of new heuristics. The framework leverages multifactorial evolutionary principles from evolutionary multitask learning (MFEA) to solve multiple scheduling tasks simultaneously, facilitating knowledge transfer across tasks and improving overall performance. Operating within the heuristic space, the method evolves high-level scheduling heuristics rather than optimizing solution instances directly. This makes the approach particularly suitable for dynamic environments requiring adaptive, real-time decision-making. GP is the core hyper-heuristic engine, utilizing its flexible tree-based representation to evolve scheduling rules without requiring predefined structures. The authors proposed an origin-based offspring reservation strategy to enhance the learning process further. This mechanism preserves essential characteristics from each task's subpopulation while allowing for cross-task knowledge exchange during crossover operations.

Figure 7: An overview of LMEA. The left half shows the genetic-algorithm loop (Initialization, LLM-driven Selection, Crossover, Mutation, Termination); the right half illustrates a prompt example of solving TSPs using LMEA.
Description of problem and solution properties: "You are given a list of points with coordinates: {points}. Your task is to find a trace, with the shortest possible length, that traverses each point exactly once."
In-context examples (population): "Below are some previous traces and their lengths, ordered by their lengths: {trace 1} {Length of trace 1} {trace 2} {Length of trace 2} {trace 3} {Length of trace 3} ... {trace N} {Length of trace N}"
Task instructions: "1. Select two traces from the above. 2. Crossover the two traces from Step 1 and generate a new trace. 3. Mutate the trace from Step 2 to generate a new trace. 4. Repeat the process until {N} traces are generated. Provide traces in XML-like format using <selection> and <res> tags."

Zhang et al. [83] extended this work by proposing a multitask multi-objective GP framework tailored for dynamic flexible job shop scheduling (DFJSS). In this variant,
tasks are partitioned into distinct populations, and inter-task knowledge sharing is facilitated through a task-aware crossover operator. A task-oriented knowledge-sharing strategy was introduced to ensure that individuals remain effective in their original task context while benefiting from cross-task genetic exchange. The framework automates the generation of flexible and adaptive scheduling heuristics in the heuristic space, targeting improvements in routing and sequencing decisions critical to DFJSS environments.

3.4.2. Learnable Evolution Models
Dashti et al. [84] proposed LEMABE (Learnable Evolution Model in Analogy-Based Estimation), a hybrid framework designed to enhance the accuracy of software cost estimation. The method builds on analogy-based estimation (ABE), a widely used technique that predicts the cost of new software projects by comparing them with similar historical cases. ABE involves constructing a historical project dataset, extracting relevant features, measuring similarity between projects (typically using Euclidean or Manhattan distances), and applying a solution function to generate estimates. LEMABE integrates the Learnable Evolution Model (LEM), a machine learning-guided evolutionary approach that alternates between inductive learning and Darwinian evolution. LEM generates new populations based on inductive hypotheses derived from high-quality individuals. This learning-guided evolution mechanism is employed to optimize the feature weights in the similarity function used by ABE. The LEMABE framework is composed of two phases: a training phase and a testing phase. During training, the evolutionary algorithm explores the weight space to minimize prediction error using predefined evaluation criteria. The optimized feature weights are recorded once convergence or termination conditions are met. These weights are applied to new instances in the testing phase to assess the model's estimation accuracy.
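The weighted similarity measurement at the heart of ABE can be sketched as follows. Feature values, costs, and weights are illustrative; in LEMABE, the weights would come from LEM's learning-guided evolution rather than being supplied by hand.

```python
import math

def weighted_distance(a, b, w):
    """Weighted Euclidean distance between two project feature vectors."""
    return math.sqrt(sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b)))

def abe_estimate(new_project, history, weights, k=2):
    """Analogy-based estimation: find the k most similar historical projects
    under the given feature weights and average their costs."""
    ranked = sorted(
        history,
        key=lambda rec: weighted_distance(new_project, rec["features"], weights))
    nearest = ranked[:k]
    return sum(rec["cost"] for rec in nearest) / k

# Toy historical dataset: features could be e.g. (size in KLOC, team size).
history = [
    {"features": [10, 3], "cost": 120.0},
    {"features": [12, 4], "cost": 150.0},
    {"features": [40, 9], "cost": 600.0},
]
est = abe_estimate([11, 3], history, weights=[1.0, 0.5], k=2)
# averages the two closest projects' costs: (120 + 150) / 2 = 135.0
```

What LEM optimizes is the `weights` vector: upweighting features that make genuinely analogous projects appear close reduces the estimation error that the training phase minimizes.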
By combining ABE with learnable evolutionary modeling, LEMABE enhances prediction robustness and adaptability in software cost estimation scenarios. A comprehensive summary is presented in Table 10, which serves as a quick reference guide to understand the landscape of LLM-assisted and LLM-generated advancements in EC.

4. Emerging Frameworks, Future Directions and Challenges
The convergence of EC and LLMs opens up new possibilities in artificial intelligence, allowing the development of novel frameworks that combine adaptive search with deep semantic understanding. As EC methods evolve to address increasing demands for robustness, generalizability, and interpretability, the integration with LLMs introduces new paradigms for representation learning, automated model design, and zero-shot generalization. This synergy paves the way for hybrid architectures that use evolutionary search strategies to optimize not just parameters, but also the structural and prompt-based configuration of LLMs themselves. However, alongside these promising directions emerge significant challenges, ranging from computational scalability and reproducibility to explainability and alignment with human intent. This section explores these emerging frameworks, outlines key research trajectories, and highlights the critical obstacles that must be addressed to fully realize the potential of EC-LLM integration.

4.1. Co-Evolution of LLMs and EC
The co-evolution of LLMs and EC presents a promising frontier for intelligent, automated systems. Drawing upon the synergetic benefits listed in Table 11, we observe that LLMs and EC can iteratively enhance each other across multiple dimensions. While EC offers robust global search strategies that thrive in non-differentiable and high-dimensional spaces, LLMs contribute their
contextual reasoning and language-generation prowess, allowing for semantically guided optimization and intelligent decision-making. Together, they form a dual feedback loop in which one guides, refines, and accelerates the evolution of the other. In this co-evolutionary paradigm, we have observed that EC can be used to optimize components of LLM workflows such as prompt structures, architecture configurations, or hyperparameters. Conversely, LLMs can generate and analyze intermediate EA outputs, propose candidate solutions, or even serve as intelligent mutation and crossover operators. The iterative refinement inherent to evolutionary processes complements the generative capabilities of LLMs, enabling the joint system to adapt to dynamic problem landscapes more efficiently and autonomously. This synergy between LLMs and EC creates a powerful automated framework in which both learning and evolution occur concurrently. Such frameworks not only enable the automation of highly complex tasks but also have the potential to discover novel solutions that might not be easily conceived by humans or found by using either method alone [7]. This co-evolutionary framework is finding applications across a diverse range of fields.

https://arxiv.org/abs/2505.15741v1

4.1.1. Co-evolutionary Framework Applications

Beyond the extensive use in prompt engineering and optimization, the combination of LLMs and EC is being explored in the realm of Automated Machine Learning [85].

Table 10: Summary of LLM-Enhanced EC Approaches.
SOMA / SOMA-ATA [6, 62]: The SOMA framework uses LLMs to manage and evolve search operators dynamically. It maintains a search operator pool and selects appropriate operators based on learned patterns. SOMA-ATA enhances SOMA by incorporating LLM-generated textual feedback as auxiliary signals, improving operator selection via soft prompts and meta-level guidance.
EoH / MEoH [67, 68]: EoH and MEoH employ LLMs to iteratively refine or evolve heuristic rules. EoH fine-tunes heuristics for specific tasks using LLM-driven variation and selection, while MEoH generalizes the process by evolving the heuristic evolution mechanisms themselves. These approaches introduce task embeddings and expert demonstrations to improve performance.
ReEvo [69]: ReEvo uses LLMs for reflective self-improvement in EA design. After running an optimization cycle, LLMs analyze their performance and generate new algorithm variants or tuning suggestions. ReEvo embodies meta-cognitive behavior, aiming to build adaptive evolutionary solvers through cycles of reflection, evaluation, and generation.
ZSO [80]: ZSO is a novel metaheuristic algorithm automatically generated using ChatGPT-3.5 under the CRISPE framework. This framework structures prompt design into five phases (Capacity and Role, Insight, Statement, Personality, and Experiment) to guide the LLM in creating a distinct, animal-inspired optimization algorithm with a full description, equations, and flowchart.
LMEA [9]: LMEA uses LLMs to perform parent selection, crossover, and mutation in GAs in a zero-shot setting. It introduces a self-adaptive mechanism to adjust the temperature parameter for exploration-exploitation balance. Structured prompts guide the evolutionary steps, and outputs follow defined tagging (e.g., <res>) to ensure format consistency.
LLaMEA [12]: LLaMEA integrates GPT-4 into an EA loop that iteratively generates, mutates, and selects optimization algorithms. It uses IOHprofiler, IOHexperimenter, and IOHanalyzer for systematic benchmarking. Task and task-feedback prompts facilitate continuous refinement or regeneration of algorithms based on their performance metrics.
Multitask GP [82]: This multitask GP-based generative hyperheuristic solves dynamic scheduling problems by evolving heuristics across multiple tasks using multifactorial optimization. It promotes knowledge transfer and uses a tree-based GP representation to generate flexible, real-time heuristics. A novel offspring reservation strategy improves quality and diversity.
Multiobjective GP [83]: An extension of the previous work, this approach introduces multiobjective optimization to enhance heuristic learning for DFJSS. It isolates populations per task and introduces task-oriented crossover for effective knowledge sharing while preserving task-specific quality.
LEMABE [84]: LEMABE combines LEM with ABE to improve software cost estimation. LEM uses inductive learning to guide evolutionary search and optimize feature weights in ABE's similarity function. It alternates between ML-driven and Darwinian evolutionary modes during training.
OptiPattern [77]: OptiPattern enhances BRKGA by using LLMs to analyze graph-based problem instances (e.g., MHIM) and produce node-wise probabilities that guide the metaheuristic search. The system follows three phases: LLM prompt execution, probability extraction via α and β parameters, and integration of these into decoding. Prompts are auto-generated using structured tags that describe the problem, example, evaluation graph, and rules.

EC, potentially guided or enhanced by LLMs, can automate the design and optimization of entire machine learning pipelines. This includes tasks such as feature selection (choosing the most relevant features for a model), model selection (choosing the best type of model for a given problem), and hyperparameter tuning (finding the optimal settings for the chosen model) [86].
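Several of the frameworks in Table 10 (LMEA in particular) use the LLM itself as the variation operator, with tagged outputs such as <res> keeping replies machine-parseable. The following is a minimal, hypothetical sketch of that pattern: the llm_variation stub stands in for a real model call, and the prompt wording, fitness function, and helper names are our own illustrations rather than LMEA's actual interface.

```python
import random
import re

def llm_variation(prompt: str) -> str:
    """Stand-in for a real LLM call. It fakes a reply that follows the
    <res>...</res> tagging convention by shuffling the first parent tour
    found in the prompt, so the loop runs offline."""
    tour = re.search(r"Parent 1: ([\d,]+)", prompt).group(1).split(",")
    random.shuffle(tour)
    return f"<res>{','.join(tour)}</res>"

def parse_offspring(reply: str) -> list:
    """Extract the offspring tour from the tagged reply."""
    body = re.search(r"<res>(.*?)</res>", reply).group(1)
    return [int(x) for x in body.split(",")]

def evolve_step(population, fitness):
    """One generation in which the LLM acts as the variation operator:
    classical selection and replacement, LLM-driven crossover/mutation."""
    parents = sorted(population, key=fitness)[:2]            # selection
    prompt = (
        "Combine these two tours into a better one.\n"
        f"Parent 1: {','.join(map(str, parents[0]))}\n"
        f"Parent 2: {','.join(map(str, parents[1]))}\n"
        "Answer inside <res></res> tags."
    )
    child = parse_offspring(llm_variation(prompt))
    worst = max(population, key=fitness)                     # replacement
    population[population.index(worst)] = child
    return population
```

In practice the stub would be replaced by an API call, with a retry whenever the reply violates the tag format; the self-adaptive temperature mechanism that LMEA describes is omitted here for brevity.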
LLMs can contribute to this process by leveraging their knowledge of machine learning concepts, historical data from previous experiments, and domain-specific insights to predict optimal configurations for hyperparameters, thereby enhancing model performance and reducing the reliance on exhaustive trial-and-error methods [85]. In the field of robotics and autonomous systems, the synergy between LLMs and EC offers exciting possibilities [86]. These integrated systems can enable robots to evolve new communication strategies and protocols, potentially enhancing collaboration between machines or between human-robot teams [86].

Table 11: Benefits of Combining LLMs and EC.
Enhanced exploration and exploitation: EC explores broadly, while LLMs guide the search toward meaningful solutions, balancing exploration and exploitation.
Improved adaptability and flexibility: The combination adapts optimization strategies based on the task or domain, leveraging LLM knowledge and EA adaptiveness.
Automation of complex processes: Automates tasks like prompt engineering, hyperparameter tuning, and neural architecture search, reducing manual effort.
Utilizing language understanding and generation: LLMs enable EC to work with natural language representations of problems and solutions.
Potential for discovering novel solutions: The synergy can lead to the discovery of solutions or strategies that might not be found by using either method alone.

Figure 8: Conceptual Overview of LLM–EA Co-Evolution Directions with Framework. [The figure groups the field into three directions: EA Enhancing LLMs (EA→LLM): prompt optimization, hyperparameter tuning, architecture search, multi-step prompt design (e.g., GPS (2022), GrIPS (2022), EvoPrompting (2023), PROMST (2024), GAAPO (2024)); LLM Enhancing EC (LLM→EA): heuristic generation, operator tuning, surrogate modeling, pattern recognition (e.g., GPT-4 Metaheuristics (2023), EoH (2023), MEoH (2023), ReEvo (2024), OptiPattern (2024)); Synergistic Co-Evolution (EA↔LLM): bidirectional feedback loops, self-improving systems, multi-phase evolution (e.g., PhaseEvo (2024), Promptbreeder (2023), EvoPrompt (2023)).]
Furthermore, LLMs combined with EC can help robots optimize complex decision-making processes for tasks such as navigation in dynamic environments, interaction with objects and humans, and coordination of multiple robotic agents [86]. The creative potential of this integration is also being explored in generative design and the creation of novel content [86]. EC can be used to evolve prompts or parameters that guide LLMs in generating unique forms of art, musical compositions, engaging stories, and even entire game worlds [86]. Projects like Artbreeder, which uses EC to combine and modify images, exemplify this trend and suggest the possibility of integrating language models to evolve visual narratives and storytelling through generative art [86]. The domains of synthetic biology and drug discovery are also witnessing the application of LLM-EA integration [86]. LLMs trained on vast amounts of biological data, such as protein structures and genetic sequences, can be combined with EC to evolve novel proteins or genes with desired properties for drug development or synthetic biology applications [86]. This includes the potential for evolving models that can accurately predict molecular interactions, a critical step in the drug discovery process [86]. In recommender systems, LLMs can enhance the accuracy and diversity of recommendations, particularly for addressing challenges related to long-tail items (items rarely interacted with) and long-tail users (users with limited interaction history) [87]. By providing semantic embeddings of items and users, LLMs can improve the understanding of user preferences and item characteristics. Evolutionary approaches can then be used to optimize the recommendation strategies based on these enhanced representations, potentially leading to more personalized and relevant recommendations [87]. Finally, in software engineering and code generation, the integration of LLMs and EC is showing promise [7].
EC, potentially guided by the code understanding capabilities of LLMs, can be used in neural architecture search to discover more effective neural network architectures for code generation tasks. Additionally, EC can be employed to optimize the strategies used by LLMs to generate high-quality code based on natural language descriptions or other forms of input [7]. A summary of the applications is presented in Table 12.

Table 12: Application Areas of LLM-EA Integration.
Prompt Engineering: Automating the design of effective prompts for LLMs. Examples: the EvoPrompt framework for various NLP tasks; evolutionary multi-objective prompt optimization.
Automated Machine Learning (AutoML): Automating the design and optimization of ML pipelines. Examples: feature selection, model selection, and hyperparameter tuning guided by LLMs and optimized by EC.
Robotics and Autonomous Systems: Evolving robot control and communication strategies. Examples: optimizing robot navigation, human-robot interaction, and multi-agent coordination.
Generative Design and Creative Content Generation: Generating novel art, music, and game content. Examples: evolving prompts for image and music generation; creating new game levels and characters.
Synthetic Biology and Drug Discovery: Evolving proteins and genes, predicting molecular interactions. Examples: designing novel proteins for specific functions; optimizing drug candidates based on predicted interactions.
Recommender Systems: Enhancing recommendation accuracy and diversity. Examples: optimizing recommendation prompts; improving recommendations for long-tail items and users using LLM embeddings and evolutionary strategies.

The breadth of these applications highlights the significant potential of combining LLMs and EC to
drive innovation and solve complex problems across a wide spectrum of domains, ranging from creative arts and entertainment to fundamental scientific research and advanced engineering. The ability of LLMs to generate and refine prompts for themselves or other tasks optimized by EC also suggests a form of meta-learning or self-improvement within AI systems [7]. This capability could pave the way for more autonomous and adaptive AI systems that can continuously optimize their performance without extensive human intervention. As we have observed, the co-evolutionary framework has the potential to drive scientific discovery, which could transform the future research outlook in metaheuristic algorithm design and therefore warrants closer examination to guide the future research agenda.

4.2. Novel Metaheuristic Design with LLMs

Integrating LLMs with EC opens up exciting opportunities for the development of advanced optimization algorithms tailored to tackle complex optimization tasks. Early studies have explored various frameworks for combining EC and LLMs to create novel metaheuristic algorithms, as summarized in Table 10. Their development is based on the interpretation of what we mean by novel algorithms. Therefore, it is crucial to establish a clear understanding of what constitutes a novel, new, or improved EA to effectively guide future advancements; this understanding defines the learning objective for novel algorithm discovery. A novel metaheuristic algorithm can be characterized by its ability to introduce fundamentally new mechanisms, components, or hybrid approaches that significantly enhance problem-solving capabilities. Further, a novel metaheuristic algorithm is not just an incremental modification but a fundamental improvement that surpasses existing state-of-the-art (SOTA) methods.
To qualify as a novel metaheuristic algorithm, it must:
(i) Introduce fundamentally new mechanisms in any core component of EA, such as the representation of solutions, initialization, variation operators, selection mechanism, replacement strategy, and adaptive control, or develop new heuristics beyond SOTA.
(ii) Achieve superior performance compared to SOTA metaheuristics on some of the aspects that measure performance, such as accuracy, convergence speed, scalability, or robustness.
(iii) Demonstrate generalizability across a class of problems or several classes of problem domains rather than excelling in a few specific cases.
Based on this understanding of what constitutes a novel algorithm, we envisage the following high-level framework (Fig. 9) for the future development of novel algorithms with the help of LLMs. The framework supports two complementary paths to designing novel algorithms: (a) improving existing algorithms (Section 4.2.1), and (b) generating entirely novel algorithms from scratch (Section 4.2.2). These two paths diverge at the very first stage based on the nature of the initial idea and objective, which is either to enhance existing algorithmic components or to synthesize fundamentally new ones. In both tracks, the process starts by transforming the novel idea for the optimization task into prompting, a set of instructions, to interact with LLMs. With its vast knowledge, the LLM then generates corresponding algorithmic structures and/or code. These candidates are then evaluated for novelty and performance against SOTA, a step that informs iterative refinement through feedback and creative thinking. This cyclic refinement enables continuous evolution
either through incremental component upgrades or via first-principle derivations of novel algorithmic designs. The framework thus serves as a unified pipeline for both strategies of innovation: evolutionary enhancement and de novo discovery.

Figure 9: Unified framework for the development of novel metaheuristic algorithms. [Stages: Optimization Task & Novel Idea (improving existing methods or creating from scratch) → Prompting → LLM generates novel algorithm structure & code → Evaluate novelty & performance against SOTA (for both improvement and scratch) → Creative thinking → Novel algorithm.]

The core of this framework revolves around the transition from "idea" to "prompting" and the subsequent evolution of that idea. Existing studies emphasize the need for expert-level prompting in algorithm design, as effective prompting requires not only knowledge of how to craft prompts but also a deep understanding of optimization algorithms to ensure meaningful outcomes. Additionally, the process of refining an idea, often driven by creative thinking, can be partially automated through iterative prompting based on predefined rules or expert knowledge. While some studies suggest that this refinement could be enhanced through another evolutionary process [57], such an approach introduces additional complexity and may risk overlooking the expertise embedded in human-driven design [88]. A recent study found that human-GenAI collaboration through differentiated search can lead to more impactful creative problem-solving compared to independent AI-driven or human-only approaches [89]. Therefore, we advocate for human-GenAI collaboration in the idea evolution stage, leveraging the strengths of both AI-driven automation and expert intuition. Furthermore, this stage can be significantly strengthened by integrating a robust diagnostic module in the SOTA verification phase, which would systematically highlight weaknesses in the current approach and guide further improvements.
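The prompt/generate/evaluate/refine cycle of Fig. 9 can be sketched as a short loop. This is a toy illustration under loud assumptions: ask_llm stands in for a real model call and merely nudges a single numeric design choice (a hypothetical mutation rate), and benchmark is a stand-in for a genuine comparison against SOTA.

```python
def ask_llm(prompt: str) -> float:
    """Stand-in for the LLM generation step. A real call would return
    algorithm structure or code; this stub just proposes a refined value
    of one design parameter so the loop is runnable offline."""
    if "rate=" in prompt:
        last = float(prompt.rsplit("rate=", 1)[1].split()[0])
    else:
        last = 0.5  # initial guess before any feedback exists
    return last * 0.8  # pretend the LLM refines the candidate

def benchmark(rate: float) -> float:
    """Toy evaluation against a SOTA reference: lower is better, with a
    fictitious optimum at rate = 0.1."""
    return abs(rate - 0.1)

def design_loop(rounds: int = 5):
    """Fig. 9 as code: prompt -> generate -> evaluate -> feedback."""
    prompt = "Design a mutation operator for this task."
    best_rate, best_score = None, float("inf")
    for _ in range(rounds):
        rate = ask_llm(prompt)                 # generation step
        score = benchmark(rate)                # evaluate vs. SOTA
        if score < best_score:
            best_rate, best_score = rate, score
        # textual feedback closes the refinement cycle
        prompt = f"Previous attempt rate={rate} scored {score}; improve it."
    return best_rate, best_score
```

In a full system the feedback string would also carry the diagnostic-module findings discussed above, and the evaluation would run the generated code on a benchmark suite rather than scoring a single number.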
We observe that while most existing studies in this direction align with aspects of this framework, they do not fully comply with it. Building on this foundation, we advocate for the future development of novel algorithms along two broader dimensions of designing novel metaheuristic algorithms, discussed in the following.

4.2.1. Improving Existing Algorithms

An important avenue toward a superior metaheuristic algorithm is the enhancement of existing EC through the integration of LLMs, which introduce novel mechanisms into the core components of existing algorithms.
(i) Core component evolution: LLMs could improve existing metaheuristics through high-level prompting that specifies the core components (e.g., mutation operators, selection mechanisms, or the base heuristic) the designer wants to change to improve performance, as in Pluhacek et al. [60] or the EoH framework. This is probably the simplest and least computationally intensive strategy for improving a metaheuristic's capability. It is also easy to interpret at the algorithmic level and to identify the novelty. Of course, its success depends entirely on the complexity of the problems at hand. In the future, it will be interesting to develop a modular framework for core metaheuristic component evolution, in which LLMs not only generate alternative designs for core components but also evaluate their synergistic effects across different combinations and problem types, leveraging frameworks like RoEvo.
(ii) Algorithmic structure or code-level refinement: LLMs can iteratively refine existing algorithms by dynamically changing
the baseline algorithm code with high-level prompting, as demonstrated in the SOMA/SOMA-ATA framework [6]. A future direction in this line could be building an LLM agent for improving the performance of a baseline algorithm on certain optimization tasks, where LLMs act as autonomous algorithm designers capable of iteratively rewriting and debugging EA code following the framework in Fig. 9. LLaMEA is one such agent, integrating GPT-4 into an EA loop to mutate and select metaheuristic algorithm code based on IOHprofiler benchmarks. One typical issue with this framework is the interpretability of the generated algorithms and the identification of their novelty.
(iii) Hybridization of existing metaheuristics: Another promising direction is the hybridization of existing metaheuristics, where different evolutionary paradigms are combined to capitalize on their respective strengths. Traditionally, hybrid metaheuristics have been developed by manually integrating multiple optimization strategies. LLMs can play a crucial role in automating the design of hybrid metaheuristics by identifying optimal combinations of algorithms [62], generating hybrid structures, and evaluating their performance against SOTA by following the LLaMEA framework. The hybridization strategies could be made more interpretable by adopting the AutoOpt [88] searching framework, in which the design space of the novel hybrid algorithm includes all core components and strategies of the original algorithms, and the LLM, with its knowledge base and feedback on the current version, decides which components to invoke to improve performance. Furthermore, hybridization strategies could be instrumental in addressing complex optimization tasks with the support of LLMs. One promising direction is the dynamic adaptation of hybrid EA algorithms during the optimization process, as in RoEvo, where LLMs enable the algorithm to evolve its structure and strategy in real time based on intermediate performance feedback.
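The dynamic-adaptation idea, in which a model decides mid-run which operator or component of a hybrid to invoke next from intermediate feedback, can be illustrated with a deterministic stand-in for the LLM decision. The operator pool, objective function, and selection rule below are our own toy assumptions, not RoEvo's actual design.

```python
import random

# Toy operator pool; a hybrid EA would expose its real components here.
OPERATORS = {
    "gaussian": lambda x: x + random.gauss(0.0, 0.1),  # local move
    "reset":    lambda x: random.uniform(-1.0, 1.0),   # global restart
}

def choose_operator(history):
    """Stand-in for the LLM decision: a real system would show `history`
    to the model and ask which operator to invoke next; the stub simply
    picks the operator with the best average gain so far."""
    best_name, best_avg = None, None
    for name, gains in history.items():
        avg = sum(gains) / len(gains) if gains else 0.0
        if best_avg is None or avg > best_avg:
            best_name, best_avg = name, avg
    return best_name

def adaptive_search(steps: int = 200, seed: int = 0):
    """Maximize a toy objective while adapting operator choice online."""
    random.seed(seed)

    def f(v):
        return -(v * v)  # optimum at v = 0

    x = 0.9
    history = {name: [] for name in OPERATORS}
    for _ in range(steps):
        op = choose_operator(history)
        cand = OPERATORS[op](x)
        gain = f(cand) - f(x)
        history[op].append(gain)      # intermediate performance feedback
        if gain > 0:                  # greedy acceptance
            x = cand
    return x, f(x)
```

Replacing choose_operator with an actual LLM query turns the per-operator gain history into the "feedback on the current version" that drives which component is invoked next.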
This adaptive capability has the potential to significantly enhance the efficiency, scalability, and robustness of solving complex optimization problems.

4.2.2. Generating New Algorithms from Scratch

Beyond conventional hybridization and improvement of existing algorithms, a new frontier in LLM-assisted EA development is the construction of an autonomous metaheuristic discovery system, where LLMs learn a taxonomy of algorithmic components (e.g., encodings, operators, fitness adaptation rules) and synthesise entirely new EC.
(i) First-principle discovery: LLMs can derive algorithms from fundamental optimization principles, bypassing existing metaheuristics. ZSO [80] is one such semi-autonomous framework, in which an LLM generates a novel animal-inspired EA architecture under the standard CRISPE prompt-engineering framework, which is then tested on benchmarks. But it is difficult to verify, beyond the analogy, whether the produced algorithms differ from hybridizations of existing metaheuristics.
(ii) Autonomous metaheuristic discovery: LLMs can self-improve through iterative prompting and benchmarking. LLaMEA [12] provides one such framework for the autonomous development of novel EC, where evolutionary strategies evolve the list of algorithms generated by LLMs, with performance feedback guiding the search for novel algorithms against performance benchmarks. Although LLaMEA is capable of generating algorithms with superior performance, it has been found that in most cases it generates hybridized EC. Further, explaining and interpreting such generated algorithms is also very difficult.
These approaches could complement each other, and their integration may lead
to a more powerful framework for discovering novel EC. However, we argue that future frameworks for developing novel EC should prioritize first-principles discovery, that is, the creation of entirely new algorithmic structures from the ground up, based on fundamental principles of optimization or the basic laws governing the search for optimal solutions, rather than relying on established paradigms such as GA, PSO, or DE. In this paradigm, LLMs would autonomously derive fundamental search heuristics by exploring a meta-space of algorithmic abstractions, rather than starting from known metaheuristic building blocks.

4.3. Enhancing Decision Support Systems

Although a promising set of mature EC methods with some decision support has been developed over the past decades, their application to practical problems remains quite challenging for users without specialized knowledge in this area. This often limits their adoption beyond the research community. LLMs, with their ability to interact through natural language, have the potential to drastically transform decision support in the application of EC by making it not only more effective and efficient but also more transparent and better aligned with human values. Two core aspects are shaping this transformation: first, the role of explainability in increasing transparency and trust in the evolutionary optimization process; and second, the emergence of human-in-the-loop optimization frameworks, where humans are embedded within an LLM-guided EA pipeline.

4.3.1. Explainable AI for EC with LLMs

Explainable Artificial Intelligence (XAI) has become a central concern in AI research, aiming to make machine learning systems more transparent, understandable, and trustworthy [90]. In the context of EC, explainability is equally critical, as EC often operates as black-box optimizers whose decision-making processes are opaque to users [91].
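A first step toward such explanations is to render raw run statistics as text before any LLM is involved. The sketch below is our own illustration, not taken from any cited framework: it turns a per-generation best-fitness log into a deterministic narrative that an LLM could then elaborate conversationally.

```python
def narrate_run(best_fitness_log):
    """Turn per-generation best-fitness values (lower is better) into a
    plain-text narrative. An explainable-EC system could pass this
    deterministic summary to an LLM for richer commentary."""
    lines = []
    pairs = zip(best_fitness_log, best_fitness_log[1:])
    for gen, (prev, cur) in enumerate(pairs, start=1):
        if cur < prev:
            lines.append(
                f"Generation {gen}: best fitness improved {prev:.3f} -> {cur:.3f}."
            )
        else:
            lines.append(f"Generation {gen}: search stagnated at {cur:.3f}.")
    return "\n".join(lines)
```

For example, narrate_run([1.0, 0.8, 0.8, 0.5]) yields one line per generation, flagging improvement or stagnation, which is exactly the kind of self-documenting trace the frameworks discussed below begin to provide.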
LLMs, with their exceptional capacity to process, summarize, and reason about complex patterns, offer a promising avenue to enhance the explainability of EC processes. Leveraging LLMs to interpret, describe, and communicate the inner workings of EC could bridge the gap between sophisticated optimization strategies and human understanding, enabling better trust, control, and adoption of EC methods in sensitive or high-stakes domains. Although the intersection of XAI, EC, and LLMs is still emerging, related initiatives are visible across several LLM-enhanced EA improvement frameworks presented in Table 10. For example,
(i) In the OptiPattern framework, LLMs were already used to abstract latent structures from data and guide evolutionary search, hinting at their capacity for semantic interpretation.
(ii) In evolutionary hyper-heuristic generation frameworks (e.g., ReEvo, EoH), LLMs generate natural-language thoughts alongside executable heuristics, naturally embedding a form of self-explanation during the evolutionary process.
(iii) Surrogate modeling with LLMs offers another indirect pathway to explainability by turning model-assisted evaluation into interpretable, textual inferences, providing insight into how decisions are made without resorting to opaque numerical surrogates.
While none of these frameworks have been explicitly framed as explainable EC systems, they demonstrate an important trend: using LLMs' linguistic reasoning abilities to render evolutionary processes more interpretable and self-documenting. Early experimental results suggest that LLMs can capture the rationale behind heuristic design decisions, operator tuning, and search-space exploration strategies, which are all critical components for creating genuinely
explainable EC frameworks. Looking ahead, explainable AI for EC with LLMs could evolve along several exciting directions:
(i) Narrative-Driven Evolution: Instead of only logging metrics, future EC frameworks could generate dynamic evolutionary narratives, where LLMs describe each generation's progress, highlight why specific mutations or selections occurred, and speculate about search trajectories.
(ii) Post-Hoc Analysis and Visualization: LLMs could be combined with visualization tools to generate human-readable post-hoc reports, offering evolutionary histories, justification for key evolutionary decisions, and analysis of population dynamics.
(iii) Interactive Explanations: Embedding LLMs into EC platforms could allow users to ask questions about ongoing or completed evolutionary runs ("Why was this solution preferred?", "What mutation operator worked best?") and receive meaningful, context-aware responses.

4.3.2. Human-in-the-Loop Optimization Frameworks

With advancements in the algorithmic development of EC, LLMs can guide users throughout the entire evolutionary optimization pipeline, from problem formulation to algorithm selection and hyperparameter tuning, ultimately improving decision support for complex decisions involving optimization tasks.
(i) LLMs for optimization problem formulation: One of the fundamental challenges for non-experts is modelling a real-world problem as an appropriate optimization problem and recognising specific structures in the problem to exploit them to produce accurate and tractable optimization formulations [92]. LLMs could assist non-expert users in this direction in the following ways:
a) Understanding and Structuring Optimization Problems: Users can describe their real-world optimization challenges in natural language, and LLMs can convert them into well-defined evolutionary optimization tasks, specifying objective functions, constraints, and decision variables.
There are some initial attempts at developing machine learning models that turn natural language into optimization formulations [93] or ChatGPT prompts that formulate the optimization model [92]. It will be interesting in the future to explore the capability of LLMs in formulating an appropriate optimization problem from a natural language description.
b) Identifying Problem Categories: LLMs could classify the problem as single-objective, multi-objective, constrained, combinatorial, or dynamic optimization, ensuring the selection of an appropriate EA approach. The capability of LLMs in this direction has not been explored yet.
c) Suggesting Problem Transformations: For highly complex or non-convex problems, LLMs could recommend problem reformulations, such as encoding transformations or decomposition strategies, to improve the efficiency of evolutionary search. This direction is also yet to be investigated. Future research could develop an LLM agent for problem reframing, wherein LLMs autonomously analyze problem structures and suggest strategic reformulations.
(ii) Algorithm Recommendation: Selecting the most suitable EA variant for a given problem is critical for achieving high performance. LLMs could act as intelligent EA advisors by:
a) Matching Problem Characteristics to Suitable EA Variants: Based on the problem characteristics, LLMs can recommend appropriate EA paradigms.
b) Justifying Algorithm Selection: LLMs can provide explanations of why a specific EA framework is best suited for a given problem, considering exploration-exploitation balance, scalability, and robustness.
c) Generating EA Variants Dynamically: LLMs can propose novel EC by analyzing the problem characteristics and invoking one of the novel algorithm design approaches (Section 4.2) to enhance performance.
(iii) Hyperparameter Tuning for EC: Fine-tuning hyperparameters is crucial
for achieving optimal search performance after selecting an appropriate EA. LLMs could help choose appropriate parameters for the EA. A recent study by Martinek et al. [76] suggested a framework to tune the hyperparameters of an EA from a high-level prompt. A promising direction for future study lies in developing a self-reflective hyperparameter tuning framework, where LLMs not only suggest optimal parameters but also analyze the EA's convergence behavior and performance dynamics in real time. This would allow the LLM to iteratively refine its parameter recommendations during optimization phases (exploration vs. exploitation), moving beyond static tuning. Such a framework could leverage a feedback loop between LLM reasoning, surrogate performance models, and meta-learning strategies, making the tuning process adaptive, context-aware, and robust across problem domains.

4.4. Challenges and Open Research Questions

Despite the numerous benefits, the integration of LLMs and EC also presents several challenges, limitations, and open questions that need to be considered in future development. Fig. 10 presents a word cloud of challenges in the synergetic environment of LLM and EC.

Figure 10: Challenges in the synergetic environment of LLM and EC.

4.4.1. Computational Complexity and Scalability

As discussed in Section 2.6, training and operating large LLMs is computationally intensive, requiring substantial memory and processing power [94]. These demands pose significant challenges even before integration with evolutionary frameworks. When combined with EC, which involves evaluating large populations over multiple generations, the computational burden of the hybrid EC–LLM system increases substantially. This computational overhead can severely limit feasibility, especially for smaller research groups or applications in resource-constrained environments.
Moreover, the search space can become prohibitively large when prompts, architectures, or model parameters are co-evolved. Addressing this challenge will require the use of efficient surrogate models, scalable evaluation strategies, and selective sampling to support a sustainable and accessible hybrid system.

4.4.2. Theoretical and Algorithmic Foundations

The complexity of implementing and integrating two sophisticated paradigms like LLMs and EC can also be a significant challenge. It requires a deep understanding of both fields and careful design to ensure that they work together effectively. A fundamental challenge is defining models that describe how the LLM's "learning strategy" and the EA's "search strategy" co-evolve. LLMs have complementary strengths (broad knowledge, powerful pattern extraction), while EAs provide flexible, iterative search. Developing theoretical frameworks or metrics to quantify when and why this synergy yields gains (versus using either alone) would guide the design of autonomous hybrid systems [7]. Hybrid systems combine symbolic (language-based) and numeric representations, resulting in vast and poorly understood search spaces. Notably, even defining such a search space for LLM-driven evolution remains unclear, and characterizing its complexity is still an open problem [95]. Formalizing these representations and their properties, potentially through approaches such as algorithmic information theory [96], could help guide the development of more efficient search strategies. Understanding algorithmic guarantees for hybrid systems is largely open. Most existing works are empirical [7]. It is necessary to analyze convergence and complexity for hybrid algorithms (e.g., when LLMs are used as mutation operators or fitness evaluators) [95]. To better understand
how such hybrid algorithms work, we need to ask several key questions, including: When does an LLM-guided EA converge to a global or local optimum? How does the use of LLM queries affect time and space complexity? These analyses would also expose trade-offs between sample/resource cost and solution quality.

4.4.3. Benchmarking and Evaluation Protocols

Establishing robust benchmarking protocols for evaluating hybrid EC–LLM systems remains a critical challenge. As these systems blend discrete optimization with language-based reasoning, traditional EC benchmarks may not fully capture their capabilities or limitations. There is a growing need for comprehensive, transparent, and diverse benchmarking suites that reflect the unique dynamics of a hybrid EC–LLM system. Recent efforts such as LLM4AD (LLMs for Automated Design) [97] and BLADE (Benchmarking Language model Agents for Data-driven Science) [98] represent promising steps toward addressing this gap by providing structured tasks and evaluation protocols tailored to such hybrid systems. However, community-wide adoption and the development of more challenging, multi-modal, and compositional tasks are needed to robustly assess generalization, creativity, and optimization efficiency. Establishing such benchmarks is critical to ensure fair comparison, reproducibility, and meaningful progress in this emerging research area. Further, the evaluation of LLM-generated heuristics also suffers from specific limitations. Common issues include the use of overly simplistic or narrow benchmark problems, which can lead to overfitting and inflated perceptions of performance [99, 100]. The Dagstuhl Seminar [101] on Challenges in Benchmarking Optimization Heuristics emphasized the need for more rigorous and standardized benchmarking protocols. Key concerns raised include:
• Cherry-Picked Benchmarks: Selecting benchmark problems that favor the proposed algorithm, thereby skewing performance comparisons [102].
• Lack of Diversity: Using benchmark sets that do not capture the variability and complexity of real-world problems, which limits generalizability.

• Inadequate Performance Metrics: Relying on a single metric without considering other aspects such as robustness, scalability, and computational efficiency.

To address these issues, the seminar advocates the development and adoption of community-wide benchmarking standards [103], including:

• Comprehensive Benchmark Suites: Using diverse and representative problem sets that span various domains and difficulty levels [104].

• Transparent Reporting: Providing detailed descriptions of experimental setups, parameter settings, and evaluation criteria to ensure reproducibility.

• Collaborative Efforts: Encouraging collaboration among researchers to establish and maintain benchmarking repositories and protocols.

Implementing these practices is crucial for the credible assessment of LLM-generated heuristics and for fostering meaningful advancements in the field.

https://arxiv.org/abs/2505.15741v1

4.4.4. Memorization and Long-Term Knowledge Retention Risks

An important issue in LLM-driven EC is the uncertainty surrounding training/testing overlap, which raises concerns about whether LLM outputs reflect genuine generalization or mere memorization of patterns seen during pretraining [105]. Since the training data of most LLMs is not publicly disclosed, it is difficult to determine whether high performance in hybrid EC–LLM systems stems from effective problem-solving or from the recall of previously encountered solutions. To ensure scientific rigor, future research should focus on creating novel evaluation frameworks, using attribution methods to detect memorization, and designing experiments that reduce the risk of training-data leakage. Developing such frameworks is crucial for validating the originality and generalizability of hybrid systems. In addition, continuously adapting LLM-based components (e.g., via prompt evolution or online fine-tuning) risks forgetting prior knowledge, and managing resources and memory over long-term evolution is an open problem. For instance, resource limitations and catastrophic forgetting have been identified as critical issues in evolving LLM agents [106]. To better understand and mitigate such behavior, a key question arises: How can a hybrid system retain valuable prior solutions or knowledge while iteratively refining its prompts, parameters, or populations? Addressing this will require deeper investigation into memory mechanisms and knowledge-distillation strategies in hybrid EA–LLM agents.

4.4.5. Fitness Design and Semantic Evaluation

Designing appropriate fitness functions and evaluation metrics is crucial for guiding the evolutionary process, whether the goal is optimizing LLM parameters or searching for effective prompts [107]. However, defining these functions, especially for the nuanced outputs of LLMs, can be non-trivial. Similarly, evaluating the quality of LLM-generated content within the evolutionary loop can be subjective and may require sophisticated metrics. Beyond standard fitness measures, hybrid EA–LLM systems may need new metrics such as semantic diversity or factual accuracy [108] to assess candidate solutions. A central open question is: How should we define and measure “improvement” when LLMs modify or generate candidates? Related challenges include determining whether LLMs can effectively estimate the novelty or quality-diversity of evolving populations, and how to benchmark hybrid systems consistently across domains. Addressing these questions through standardized evaluation frameworks will be essential for the progress of the field.

4.4.6. Distributed and Federated Hybrid Systems

The integration of LLMs into federated or distributed EAs [109] represents an emerging and largely unexplored research area.
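As a minimal sketch of the setting such distributed hybrids would operate in, the following toy island model alternates local evolution with a migration step mediated by a summarization hook. The `summarize` function is a stub standing in for an LLM that would condense each sub-population's state into a shared message; the objective, population sizes, and all names here are illustrative assumptions, not taken from any surveyed system.

```python
import random

random.seed(0)

def fitness(x):
    # Toy objective: maximize the number of 1-bits in a bitstring.
    return sum(x)

def evolve(pop, steps=20):
    # Minimal local evolution on one island: mutate the current best,
    # and let the child replace the worst individual if it improves on it.
    for _ in range(steps):
        parent = max(pop, key=fitness)
        child = [b ^ (random.random() < 0.1) for b in parent]
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return pop

def summarize(islands):
    # Stand-in for an LLM "mediator": condense each island's state into a
    # compact message (here, simply the best individual per island).
    return [max(pop, key=fitness) for pop in islands]

def migrate(islands):
    # Each island receives the shared summary and adopts the global elite,
    # replacing its own worst individual.
    elite = max(summarize(islands), key=fitness)
    for pop in islands:
        worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
        pop[worst] = list(elite)
    return islands

# Three islands of four 16-bit individuals, alternating local evolution
# and summary-mediated migration for five rounds.
islands = [[[random.randint(0, 1) for _ in range(16)] for _ in range(4)]
           for _ in range(3)]
for _ in range(5):
    islands = [evolve(pop) for pop in islands]
    islands = migrate(islands)

best = max((ind for pop in islands for ind in pop), key=fitness)
print(fitness(best))
```

In a real hybrid, `summarize` would be the interesting component: the open question below is precisely what such a message should contain and how it should be produced.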
A central challenge is determining how LLMs can effectively coordinate evolutionary search across multiple nodes or agents in large-scale or multi-agent environments. For instance, LLMs may be capable of summarizing and synthesizing information from several evolving sub-populations, thereby facilitating knowledge transfer and improving global search efficiency. Designing robust communication protocols in which LLMs act as “communicators” or “mediators” within distributed evolutionary frameworks is a key open question. Addressing this challenge could lead to more scalable, intelligent, and collaborative evolutionary systems.

4.4.7. Interpretability and Explainability

Interpretability and explainability pose a significant challenge in the integration of LLMs with EC [110]. LLMs are inherently black-box models, making it difficult to understand the internal reasoning behind their outputs. When combined with EC, which rests on stochastic, non-deterministic search paradigms, this opacity is compounded, making it even harder to understand how and why certain solutions are found. This lack of transparency hinders our ability to interpret the decision-making process within the hybrid system, which is crucial for building user trust, ensuring reproducibility, and enabling further improvement. Without clear explanations for why certain integration strategies or evolved solutions succeed, progress remains largely empirical.

5. Conclusion

This survey demonstrates the transformative synergy between Evolutionary Computation (EC) and Large Language Models (LLMs), revealing how their integration drives innovation in AI optimization and automation. Key findings reinforce that EC techniques significantly enhance LLM performance through automated prompt engineering, hyperparameter tuning, and architecture optimization, reducing reliance on manual intervention. Conversely, LLMs advance EC by automating metaheuristic design, refining evolutionary algorithms, and generating adaptive heuristics, leading to more efficient and scalable solutions. Emerging co-evolutionary frameworks highlight the potential for mutual improvement, with applications spanning robotics, generative design, and scientific discovery. However, challenges such as computational cost, interpretability, and convergence stability remain critical barriers. Future research must address these limitations while leveraging hybrid approaches to unlock the full potential of EC–LLM collaboration. By bridging evolutionary search with linguistic intelligence, this synergy paves the way for more autonomous, adaptive, and intelligent AI systems capable of solving complex real-world problems.

Acknowledgment

This work was supported by Dr B. R. Ambedkar National Institute of Technology Jalandhar; the Anusandhan National Research Foundation, Government of India (Award No. MTR/2021/000503); the Australian Researcher Cooperation Hub through the Australia-India Women Researchers’ Exchange Program; and the Spanish Ministry of Economy and Competitiveness through the Ramón y Cajal Research Grant (Award No. RYC2023-045020-I).

Compliance with ethical standards

Conflict of interest: All the authors declare that they have no conflict of interest.

Ethical approval: This article does not contain any studies with human participants or animals performed by any of the authors.

References

[1] Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, and Xuanjing Huang. Do large language models know what they don’t know? arXiv preprint arXiv:2305.18153, 2023.
[2] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 1(2), 2023.
[3] Thomas Bartz-Beielstein, Jürgen Branke, Jörn Mehnen, and Olaf Mersmann.
Evolutionary algorithms. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 4(3):178–195, 2014.
[4] Dikshit Chauhan and Anupam Yadav. A comprehensive survey on artificial electric field algorithm: theories and applications. Archives of Computational Methods in Engineering, 31(5):2663–2715, 2024.
[5] He Yu and Jing Liu. Deep insights into automated optimization with large language models and evolutionary algorithms. arXiv preprint arXiv:2410.20848, 2024.
[6] Michal Pluhacek, Jozef Kovac, Adam Viktorin, Peter Janku, Tomas Kadavy, and Roman Senkerik. Using LLM for automatic evolvement of metaheuristics from swarm algorithm SOMA. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 2018–2022, 2024.
[7] Xingyu Wu, Sheng-hao Wu, Jibin Wu, Liang Feng, and Kay Chen Tan. Evolutionary computation in the era of large language model: Survey and roadmap. IEEE Transactions on Evolutionary Computation, 2024.
[8] Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, and David Ha. Evolutionary optimization of model merging recipes. Nature Machine Intelligence, pages 1–10, 2025.
[9] Shengcai Liu, Caishun Chen, Xinghua Qu, Ke Tang, and Yew-Soon Ong. Large language models as evolutionary optimizers. In 2024 IEEE Congress on Evolutionary Computation (CEC), pages 1–8. IEEE, 2024.
[10] Banghao Chen, Zhaofeng Zhang, Nicolas Langrené, and Shengxin Zhu. Unleashing the potential of prompt engineering in large language models: a comprehensive review. arXiv preprint arXiv:2310.14735, 2023.
[11] Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, and Yujiu Yang. Connecting large language
models with evolutionary algorithms yields powerful prompt optimizers, 2024.
[12] Niki van Stein and Thomas Bäck. LLaMEA: A large language model evolutionary algorithm for automatically generating metaheuristics. IEEE Transactions on Evolutionary Computation, 2024.
[13] Fei Liu, Yiming Yao, Ping Guo, Zhiyuan Yang, Zhe Zhao, Xi Lin, Xialiang Tong, Mingxuan Yuan, Zhichao Lu, Zhenkun Wang, et al. A systematic survey on large language models for algorithm design. arXiv preprint arXiv:2410.14716, 2024.
[14] Abid Haleem, Mohd Javaid, and Ravi Pratap Singh. An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 2(4):100089, 2022.
[15] Jinyu Cai, Jinglue Xu, Jialong Li, Takuto Yamauchi, Hitoshi Iba, and Kenji Tei. Exploring the improvement of evolutionary computation via large language models. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 83–84, 2024.
[16] Sen Huang, Kaixiang Yang, Sheng Qi, and Rui Wang. When large language model meets optimization. Swarm and Evolutionary Computation, 90:101663, 2024.
[17] Muhammad Umair Nasir, Sam Earle, Julian Togelius, Steven James, and Christopher Cleghorn. LLMatic: neural architecture search via large language models and quality diversity optimization. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 1110–1118, 2024.
[18] Zeeshan Memon, Muhammad Arham, Adnan Ul-Hasan, and Faisal Shafait. LLM-informed discrete prompt optimization. In ICML 2024 Workshop on LLMs and Cognition, 2024.
[19] Robert Clarisó and Jordi Cabot. Model-driven prompt engineering. In 2023 ACM/IEEE 26th International Conference on Model Driven Engineering Languages and Systems (MODELS), pages 47–54. IEEE, 2023.
[20] Qinyuan Ye, Maxamed Axmed, Reid Pryzant, and Fereshte Khani. Prompt engineering a prompt engineer. arXiv preprint arXiv:2311.05661, 2023.
[21] Bingsheng Yao, Guiming Chen, Ruishi Zou, Yuxuan Lu, Jiachen Li, Shao Zhang, Yisi Sang, Sijia Liu, James Hendler, and Dakuo Wang. More samples or more prompts? Exploring effective in-context sampling for LLM few-shot prompt engineering. arXiv preprint arXiv:2311.09782, 2023.
[22] Shizhe Diao, Pengcheng Wang, Yong Lin, Rui Pan, Xiang Liu, and Tong Zhang. Active prompting with chain-of-thought for large language models. arXiv preprint arXiv:2302.12246, 2023.
[23] Siyuan Wang, Jianming Zheng, Fei Cai, Chengyu Song, and Xueshan Luo. MSPrompt: Multi-step prompt learning for debiasing few-shot event detection. Information Processing & Management, 60(6):103509, 2023.
[24] Cathy Winter, Joanna Crofts, Timothy Draycott, and Neil Muchatuta. PROMPT Course Manual. Cambridge University Press, 2017.
[25] Jonas Oppenlaender, Rhema Linder, and Johanna Silvennoinen. Prompting AI art: An investigation into the creative skill of prompt engineering. International Journal of Human–Computer Interaction, pages 1–23, 2024.
[26] Adam Slowik and Halina Kwasnicka. Evolutionary algorithms and their applications to engineering problems. Neural Computing and Applications, 32:12363–12379, 2020.
[27] Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. Advances in Neural Information Processing Systems, 36:51008–51025, 2023.
[28] Guanghui Qin and Jason Eisner. Learning how to ask: Querying LMs with mixtures of soft prompts. arXiv preprint arXiv:2104.06599, 2021.
[29] Hanwei Xu, Yujun Chen, Yulun Du,
Nan Shao, Yanggang Wang, Haiyu Li, and Zhilin Yang. GPS: Genetic prompt search for efficient few-shot learning. arXiv preprint arXiv:2210.17041, 2022.
[30] Archiki Prasad, Peter Hase, Xiang Zhou, and Mohit Bansal. GrIPS: Gradient-free, edit-based instruction search for prompting large language models. arXiv preprint arXiv:2203.07281, 2022.
[31] Cho-Jui Hsieh, Si Si, Felix X Yu, and Inderjit S Dhillon. Automatic engineering of long prompts. arXiv preprint arXiv:2311.10117, 2023.
[32] Q Guo, R Wang, J Guo, B Li, K Song, X Tan, G Liu, J Bian, and Y Yang. Connecting large language models with evolutionary algorithms yields powerful prompt optimizers. arXiv preprint arXiv:2309.08532, 2023.
[33] Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797, 2023.
[34] Angelica Chen, David Dohan, and David So. EvoPrompting: Language models for code-level neural architecture search. Advances in Neural Information Processing Systems, 36:7787–7817, 2023.
[35] Wendi Cui, Jiaxin Zhang, Zhuohang Li, Hao Sun, Damien Lopez, Kamalika Das, Bradley Malin, and Sricharan Kumar. PhaseEvo: Towards unified in-context prompt optimization for large language models. arXiv preprint arXiv:2402.11347, 2024.
[36] Yongchao Chen, Jacob Arkin, Yilun Hao, Yang Zhang, Nicholas Roy, and Chuchu Fan. Prompt optimization in multi-step tasks (PROMST): Integrating human feedback and heuristic-based sampling. arXiv preprint arXiv:2402.08702, 2024.
[37] Jill Baumann and Oliver Kramer. Evolutionary multi-objective optimization of large language model prompts for balancing sentiments. In International Conference on the Applications of Evolutionary Computation (Part of EvoStar), pages 212–224. Springer, 2024.
[38] Chengzhe Feng, Yanan Sun, Ke Li, Pan Zhou, Jiancheng Lv, and Aojun Lu.
Genetic auto-prompt learning for pre-trained code intelligence language models. arXiv preprint arXiv:2403.13588, 2024.
[39] Melvin Wong, Thiago Rios, Stefan Menzel, and Yew Soon Ong. Generative AI-based prompt evolution engineering design optimization with vision-language model. arXiv preprint arXiv:2406.09143, 2024.
[40] Beier Zhu, Yulei Niu, Yucheng Han, Yue Wu, and Hanwang Zhang. Prompt-aligned gradient for prompt tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15659–15669, 2023.
[41] Tianjun Zhang, Xuezhi Wang, Denny Zhou, Dale Schuurmans, and Joseph E Gonzalez. TEMPERA: Test-time prompting via reinforcement learning. arXiv preprint arXiv:2211.11890, 2022.
[42] Shuyang Wang, Somayeh Moazeni, and Diego Klabjan. A sequential optimal learning approach to automated prompt engineering in large language models. arXiv preprint arXiv:2501.03508, 2025.
[43] Wendi Cui, Jiaxin Zhang, Zhuohang Li, Hao Sun, Damien Lopez, Kamalika Das, Bradley Malin, and Sricharan Kumar. PhaseEvo: Towards unified in-context prompt optimization for large language models, 2024.
[44] Xavier Sécheresse, Jacques-Yves Guilbert-Ly, and Antoine Villedieu de Torcy. GAAPO: Genetic algorithmic applied to prompt optimization, 2025.
[45] Hong Sun, Xue Li, Yinchuan Xu, Youkow Homma, Qi Cao, Min Wu, Jian Jiao, and Denis Charles. AutoHint: Automatic prompt optimization with hint generation, 2023.
[46] Yichun Yin, Cheng Chen, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. AutoTinyBERT: Automatic hyperparameter optimization for efficient pre-trained language models, 2021.
[47] Leonardo Lucio Custode, Fabio Caraffini, Anil Yaman, and Giovanni Iacca. An investigation on the use of large language models for hyperparameter tuning in
evolutionary algorithms. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 1838–1845, 2024.
[48] Laurits Tani, Diana Rand, Christian Veelken, and Mario Kadastik. Evolutionary algorithms for hyperparameter optimization in machine learning for application in high energy physics. The European Physical Journal C, 81:1–9, 2021.
[49] Sifan Long, Jingjing Tan, Bomin Mao, Fengxiao Tang, Yangfan Li, Ming Zhao, and Nei Kato. A survey on intelligent network operations and performance optimization based on large language models. IEEE Communications Surveys & Tutorials, 2025.
[50] Jiahui Gao, Hang Xu, Han Shi, Xiaozhe Ren, Philip LH Yu, Xiaodan Liang, Xin Jiang, and Zhenguo Li. AutoBERT-Zero: Evolving BERT backbone from scratch. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10663–10671, 2022.
[51] Vinod Ganesan, Gowtham Ramesh, and Pratyush Kumar. SuperShaper: Task-agnostic super pre-training of BERT models with variable hidden dimensions. arXiv preprint arXiv:2110.04711, 2021.
[52] Mojan Javaheripi, Gustavo H. de Rosa, Subhabrata Mukherjee, Shital Shah, Tomasz L. Religa, Caio C. T. Mendes, Sebastien Bubeck, Farinaz Koushanfar, and Debadeepta Dey. LiteTransformerSearch: Training-free neural architecture search for efficient language models, 2022.
[53] Andreas Paraskeva, Joao Pedro Reis, Suzan Verberne, and Jan N van Rijn. Resource-constrained neural architecture search on language models: A case study. In 2nd Workshop on Advancing Neural Network Training: Computational Efficiency, Scalability, and Resource Optimization (WANT@ICML 2024).
[54] Aaron Klein, Jacek Golebiowski, Xingchen Ma, Valerio Perrone, and Cedric Archambeau. Structural pruning of pre-trained language models via neural architecture search. arXiv preprint arXiv:2405.02267, 2024.
[55] Han Xiang Choong, Yew-Soon Ong, Abhishek Gupta, Caishun Chen, and Ray Lim.
Jack and masters of all trades: one-pass learning sets of model sets from large pre-trained models. IEEE Computational Intelligence Magazine, 18(3):29–40, 2023.
[56] Caiyang Yu, Xianggen Liu, Yifan Wang, Yun Liu, Wentao Feng, Xiong Deng, Chenwei Tang, and Jiancheng Lv. GPT-NAS: Evolutionary neural architecture search with the generative pre-trained model. arXiv preprint arXiv:2305.05351, 2023.
[57] Clint Morris, Michael Jurado, and Jason Zutty. LLM guided evolution – the automation of models advancing models. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 377–384, 2024.
[58] Greg Van Houdt, Carlos Mosquera, and Gonzalo Nápoles. A review on the long short-term memory model. Artificial Intelligence Review, 53(8):5929–5955, 2020.
[59] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[60] Michal Pluhacek, Anezka Kazikova, Adam Viktorin, Tomas Kadavy, and Roman Senkerik. Investigating the potential of AI-driven innovations for enhancing differential evolution in optimization tasks. In 2023 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 1070–1075. IEEE, 2023.
[61] Kenneth Price, Rainer M Storn, and Jouni A Lampinen. Differential Evolution: A Practical Approach to Global Optimization. Springer Science & Business Media, 2006.
[62] Michal Pluhacek, Anezka Kazikova, Tomas Kadavy, Adam Viktorin, and Roman Senkerik. Leveraging large language models for the generation of novel metaheuristic optimization algorithms. In Proceedings of the Companion Conference on Genetic
and Evolutionary Computation, pages 1812–1820, 2023.
[63] Godfrey C Onwubolu, BV Babu, and Ivan Zelinka. SOMA—self-organizing migrating algorithm. New Optimization Techniques in Engineering, pages 167–217, 2004.
[64] Ivan Zelinka. SOMA—self-organizing migrating algorithm. In Self-Organizing Migrating Algorithm: Methodology and Implementation, pages 3–49. Springer, 2016.
[65] Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg, Pengming Wang, Omar Fawzi, et al. Mathematical discoveries from program search with large language models. Nature, 625(7995):468–475, 2024.
[66] Virginia Aglietti, Ira Ktena, Jessica Schrouff, Eleni Sgouritsa, Francisco JR Ruiz, Alan Malek, Alexis Bellot, and Silvia Chiappa. FunBO: Discovering acquisition functions for Bayesian optimization with FunSearch. arXiv preprint arXiv:2406.04824, 2024.
[67] Fei Liu, Xialiang Tong, Mingxuan Yuan, Xi Lin, Fu Luo, Zhenkun Wang, Zhichao Lu, and Qingfu Zhang. Evolution of heuristics: Towards efficient automatic algorithm design using large language model. arXiv preprint arXiv:2401.02051, 2024.
[68] Shunyu Yao, Fei Liu, Xi Lin, Zhichao Lu, Zhenkun Wang, and Qingfu Zhang. Multi-objective evolution of heuristic using large language model. arXiv preprint arXiv:2409.16867, 2024.
[69] Haoran Ye, Jiarui Wang, Zhiguang Cao, Federico Berto, Chuanbo Hua, Haeyeon Kim, Jinkyoo Park, and Guojie Song. ReEvo: Large language models as hyper-heuristics with reflective evolution. arXiv preprint arXiv:2402.01145, 2024.
[70] Carola Doerr, Hao Wang, Furong Ye, Sander Van Rijn, and Thomas Bäck. IOHprofiler: A benchmarking and profiling tool for iterative optimization heuristics. arXiv preprint arXiv:1810.05281, 2018.
[71] Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. IOHexperimenter: Benchmarking platform for iterative optimization heuristics.
Evolutionary Computation, 32(3):205–210, 2024.
[72] Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, and Thomas Bäck. IOHanalyzer: Detailed performance analyses for iterative optimization heuristics. ACM Transactions on Evolutionary Learning and Optimization, 2(1):1–29, 2022.
[73] Hao Hao, Xiaoqun Zhang, and Aimin Zhou. Large language models as surrogate models in evolutionary algorithms: A preliminary study. Swarm and Evolutionary Computation, 91:101741, 2024.
[74] Hao Hao, Xiaoqun Zhang, and Aimin Zhou. Model uncertainty in evolutionary optimization and Bayesian optimization: A comparative analysis. In 2024 IEEE Congress on Evolutionary Computation (CEC), pages 1–9. IEEE, 2024.
[75] Tennison Liu, Nicolás Astorga, Nabeel Seedat, and Mihaela van der Schaar. Large language models to enhance Bayesian optimization. arXiv preprint arXiv:2402.03921, 2024.
[76] Alicja Martinek, Szymon Lukasik, and Amir H Gandomi. Large language models as tuning agents of metaheuristics.
[77] Camilo Chacón Sartori, Christian Blum, Filippo Bistaffa, and Guillem Rodríguez Corominas. Metaheuristics and large language models join forces: Towards an integrated optimization approach. IEEE Access, 2024.
[78] Runbo Ni, Xueyan Li, Fangqi Li, Xiaofeng Gao, and Guihai Chen. FastCover: An unsupervised learning framework for multi-hop influence maximization in social networks. arXiv preprint arXiv:2111.00463, 2021.
[79] Partha Basuchowdhuri and Subhashis Majumder. Finding influential nodes in social networks using minimum k-hop dominating set. In Applied Algorithms: First International Conference, ICAA 2014, Kolkata, India, January 13-15, 2014. Proceedings 1, pages 137–151. Springer, 2014.
[80] Rui Zhong, Yuefeng Xu, Chao Zhang, and Jun Yu. Leveraging large language